From YouTube: Meshery Development Meeting (Feb 17th, 2021)
D
You don't wanna see what two days without a shower looks like. So that's not...

D
It's, you know... well, it's embarrassingly tolerable. It's just that it's embarrassing compared to anything that you'd find in Norway or in the northern... like the northern United States. It's just that this area was not built for cold weather. So all the air conditioners are outside, the swimming pool pumps are outside... you know, all the wrong things that will freeze and not work are outside.

D
Thanks, thanks. Yeah, no, I think I make it sound worse than it is, maybe. We were melting down snow yesterday because the roads are blocked, so you can't get out to get water, and so it's a little bit of a different taste. You just stay away from the yellow stuff, I think, is the trick.
D
Sounds good. Anirudh or... jeez, Abhishek, as you're lining up the various topics...

D
I wonder... one that I don't know that Hussaina necessarily has a lot of notes on just yet, but seeing her name... Hussaina, if it's all right with you, I'm gonna put down a topic in case we get to it, about...

D
Oh nice, okay, good. I've been on the keyboard; I have not. That'd be great if you... if we get to this topic, if you might walk us through that, that would be nice.

D
And just a quick confirmation while we're waiting: we did secure the Kyverno presentation for Friday. So that'll probably be the largest topic on Friday, but there's room for more, if people have them.

C
All right, I think we've got 12 people, so let's just start the meeting. Before we start with the agendas, do make sure you put your names in the attendees list in the meeting minutes, and... yeah, that's it. So, Vishal, do you want to start with the agenda?
A
Yeah, so today I am here to present Argo Workflows, which is a cloud-native workflow engine. Can I share the screen? Sure, go ahead.

A
So this is from the official Argo Project GitHub repo. Firstly, Argo Workflows is one of the components of Argo; there are other components like Argo CD, and they have an Events component, but we are more interested in Argo Workflows. So this is a means by which we can run parallel jobs on Kubernetes, which can be done in a step-by-step manner in containers, and a multi-step workflow can be implemented using a DAG... one second... using a DAG, that is, a directed acyclic graph.

A
We can use that, and we can even run compute-intensive jobs like machine learning or data processing in Argo. It is only for Kubernetes and containers, not for EC2 servers and other things, and we can run CI/CD pipelines using Argo Workflows in Kubernetes. So they have provided us more than one example; there are multiple examples. So first I will set up the CLI and Kubernetes via the Helm chart.
A
So this is a sample example which is provided by Argo, by Argo Workflows. In this they are using a whalesay template. But before that, let me show that I have imported some templates. Let me show you: these are the different templates which can be used for creating the pipeline.

A
So in this template they are just passing the parameter. So this is the name, and in the parameter we are just passing the message, and this is given as an argument. So let me show... there are some simple commands for this. The `argo` command comes after installing the CLI; I've installed the Argo CLI.

A
So I installed the binary of Argo, and now I have submitted a job. This job I have submitted using this command: I passed the parameter message equal to "goodbye world", and when I see, in the Argo list, the logs of this submitted job, I can see that this job is completed with "goodbye world" and this diagram.
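The demo described above corresponds to Argo's documented parameters example; a sketch of such a workflow (the whalesay image and message are from the upstream example, not anything Meshery-specific):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-parameters-
spec:
  entrypoint: whalesay
  arguments:
    parameters:
      - name: message
        value: hello world
  templates:
    - name: whalesay
      inputs:
        parameters:
          - name: message
      container:
        image: docker/whalesay
        command: [cowsay]
        args: ["{{inputs.parameters.message}}"]
```

Submitted as in the demo with `argo submit workflow.yaml -p message="goodbye world"`, then inspected with `argo list` and `argo logs`.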
A
So, I mean, this was a small PoC which I have done. Rcoberforum has given me a project where we need to automate their pipeline in this workflow engine, so I will start working on it.
D
Got you, okay. That sounds like some very sad maintainers over there. But so, did you happen to get a chance to familiarize a little bit with Argo's scheduling capability? Like, to the extent that you're able to invoke a workflow ad hoc, you're probably able to trigger a workflow based on various events happening, and I assume you're able to schedule as well.

D
Okay, okay, yeah. That's an important component for us, probably not nearly as used as the ability to trigger various flows, either based on... I guess, you know, both the ability to ad hoc... like, what's the different word other than "ad hoc"?

D
Yeah, for the scheduling, cron is important, yeah. That type of scheduling is important for us and will be used some, although, I guess, to be more forthright, it probably wouldn't be used nearly as much as an on-demand invocation of a given workflow: to have a workflow invoked from Meshery as and when Meshery deems that it's appropriate to kick off a flow. And whether that's Meshery or a Meshery user determining that, we'll probably lean into those a bit.
D
So, as you learn more about the various triggers that Argo supports, what can trigger a workflow, that'll be interesting.
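The cron-style scheduling discussed here is what Argo's CronWorkflow resource provides; a minimal sketch (the name and schedule are hypothetical):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: CronWorkflow
metadata:
  name: nightly-mesh-check      # hypothetical name
spec:
  schedule: "0 0 * * *"         # every day at midnight
  concurrencyPolicy: Replace    # what to do if the previous run is still going
  workflowSpec:
    entrypoint: check
    templates:
      - name: check
        container:
          image: alpine:3
          command: [echo, "running scheduled check"]
```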
D
What else... So, one of the things I think you were educating me on the other day was that a lot of times it'll just roll off my tongue: "hey, let's go take a look at Argo CD," and I will just say Argo CD, Argo CD. Can you help clarify part of the difference between the components of Argo?
A
So these are the main... are you able to see my screen?

A
So these are the main Argo projects; these are incubating cloud-native projects: Argo CD, Argo Events, Argo Workflows, and Argo CI. These are the major projects, and we are interested in this one.
B
Yeah, I wonder a bit... something to remember, sort of like our use case and where we would like to interact with it. Thanks for a great presentation, by the way, Vishal.
D
Yeah, yeah, Vishal, although those aren't easy... that was nice. To expand on that a little bit: Karcher, are you here on the call?
G
Demo, the pattern file thing: we were internally creating a DAG ourselves and then we were triggering a lot of things, but the implementation was very basic. So, for example, we were saying that I want to basically provision an Istio service mesh, and then I want to maybe provision a Grafana add-on and a Kiali add-on, and then do this and that. So basically that was a workflow, and we were creating a DAG internally, and then we were doing some of the stuff parallelly and some of the stuff sequentially.

G
But a lot of things were still missing from that. So basically this is one of the things where the Argo engine would help us. So, instead of doing all of those things manually, we can just leverage the Argo engine, which would take all of those things. It would basically take the DAG that we've generated; just like, I think, Vishal showed that it can take a workflow.

G
It can take a DAG. So it will take the DAG from us, and then it will basically execute that workflow far more reliably than what we have right now, because it can obviously do retries and those kinds of stuff. So that's one of the use cases we have for Argo right now.
B
Okay, thank you. Now I managed to connect the dots... the dots, the docs as well, hopefully the dots... and yeah, this is brilliant. So you would, for instance, sort of create the DAG based on the specifications, and then you create, on the fly, the Argo pipeline specification, or whatever it's called, and submit it to Argo, and then it would run it.
D
Yeah, right, yeah. I think part of the thinking is predicated on the premise that people are building out content, if you will, or workflows, in different engines for various purposes. And the second example was something like: oh, maybe, if you're running a service mesh, you'd like to know and be notified in Slack if a sidecar goes missing or if a sidecar is not running.
D
...the latest config that the control plane has, and there's been a delta of five minutes, or, you know, some certain time frame by which it should have been updated and it isn't. And so, you know, your mesh is out of sync and your data plane and control plane are not doing well, and so, jeez, if you're running a service mesh manager, maybe it would be sort of... it's the manager of managers; it's the MOM.
D
If you will, it's watching... and by MOM, I guess, sort of proverbially, it's a mother to service meshes, but it's also a manager of managers, and so maybe it should notify someone when, you know, your service mesh is running wonky. Okay, well, so in Meshery we could build out an integration to Slack; we can build out one to email, and to this chat and that chat, and... all of which can make sense.
D
There can be some native, quote-unquote "native", integrations between Meshery and some different systems, but it also could be, you know, quite helpful to the extent that, like, one of those native integrations, if you will, is to a workflow engine that maybe has a variety of integrations itself. And so, sort of through that workflow engine, Meshery becomes capable, you know, relatively quickly, of sending an SMS, sending an email, sending a... you know.
D
So there's just a few different use cases kind of within there. It might be that someone sets a policy that says, you know, Abhishek is disallowed from trying to make a configuration change to this set of workloads on this service mesh over here, and when he attempts to do it, the system, you know, keeps an audit trail of all activities.
D
You know, made or attempted... because he's done it like five times in a row now, and it looks malicious, not accidental. And so, well, if that policy... you know, so, a little bit on the side of the workflow engine: maybe there's a policy engine like Kyverno or OPA sitting there, watching those audit logs, and just running and doing an evaluation.
D
You know, looking over the history of those audit logs and performing evaluations in accordance with whatever the policies that have been defined are. Well, and then once it flags that there's a policy issue... okay, so then, now, what to do about it? Well, you might want to offer to the user: well, hey, did you want to get a phone call? Did you want to get a text message? Did you want to get... there's a bunch of alerting use cases.
D
Yeah, you know, that's part of the thinking. The other part of the thinking is, like, okay... and I hope everybody's sort of mentally following along today; if you're not... and, by the way, some of the things that I just said, we should, to Michael's point about docs, we should probably connect the dots to the docs. I'll share it real briefly, because all of you are encouraged and welcome to add some notes in here.
D
So if you think about the scheduling capability that we were just alluding to: to the extent that Meshery has an embedded workflow engine that deals with all the logic around cron, and what happens when a task initiates but then stops halfway, and what happens if you have 20 tasks that all initiate at midnight, and how do you deal with those in parallel, how do you have a... you know?
D
An evaluation, like, you know, the runtime evaluation that we have for Istio right now: as a user, you have to go over and manually invoke that check, and just check up on whether or not you're, you know, running Istio in accordance with best practices, in accordance with patterns, maybe.
D
So, very good, that's great, yeah. Any other questions for Vishal, or other thoughts... Abhishek? I guess, you know, as you learn about Argo Events, that would round out...
D
Good deal. Last question for me... you don't recall... I know this wasn't a point of investigation, but the ability to visually diagram a workflow, or to... usually, yeah. What was that deck?
H
With that said: I am currently a front-end developer residing in India. I've been working on JavaScript and React frameworks for about four to five months now, though I'm looking for more opportunities in terms of technical writing. I also saw Meshery's name on the Google Season of Docs... I don't remember the exact thing, but it was supposed to be a sort of joint program for technical writers and for people of the open source community to mentor them. So I saw these names there as well, and that is where I got the idea.
D
Oh, awesome, nice. I was... yeah, I was going to make it... that's fantastic. Oh boy, can you say your first name one more time?
D
Yeah, good, that's fantastic; very nice to have you. Obviously, welcoming new folks is really important to us, because we've set everything else aside to welcome you, and to try to make you either... I don't know... either as comfortable or as uncomfortable on the call as possible. Hopefully it's "as comfortable". So, you can tell, a lot of us here are learning every time we're on the call together.
D
So that's part of the theme for us, and we go from dots to docs here as well. And so, okay, no, that's great; very timely of you to come in and talk about docs a little bit. Well, there are two new docs contributors as of this last week, a couple of people that were writing some docs about mesheryctl, which is the command-line client for Meshery. And one of those individuals was saying that not only did they want to contribute to docs, but, in part, they were choosing to contribute to that area of the documentation because that was an area that they thought they wanted to advance and write the code for in the future.

D
So, like, what a great plan, what a good way to get in and advance their technical skills: by being able to write about it, but then also the ability to understand what the system is doing, so that then, later, they can come over and write the code behind it. So...

D
Yeah, nice, okay. Welcome; very, very nice to have you.
C
All right, so let's move on to the next item. Dhruv, you're up.
I
Oh yeah, sure. So actually this topic was part of yesterday's meet which I had, in which Kush was also present; like, he also started working on this particular thing before. So, for people who don't know: we are trying to document the whole API structure of Meshery itself, to know what each endpoint is doing.
D
Nice. Dhruv, does this mean... I would just ask you to speak more to that automation, if you would, in the creation of the API documentation. Would this augment the Meshery docs? Would it be something that we would host or publish each release, or...?
I
Oh yeah. So, about the automation of using go-swagger itself, I need to look into it, but Kush knows more about it. But when we get the overall YAML of the APIs itself, we are able to add it to the documentation itself and see stuff like... let's say I want to see the API config sync, right? So in this YAML I provide each and every datum, to get a detailed version of what that particular API does.
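A Swagger 2.0 entry of the kind being described might look like the following sketch; the endpoint path, summary, and response fields are illustrative, not Meshery's actual API:

```yaml
swagger: "2.0"
info:
  title: Meshery REST API (sample fragment)
  version: "0.1"
paths:
  /api/config/sync:            # hypothetical endpoint
    get:
      summary: Returns the current configuration sync status
      produces:
        - application/json
      responses:
        200:
          description: Sync status retrieved successfully
```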
D
That makes sense. Did this... this interface here... are we able to self-host this?

I
Yeah, we will; like, there are certain open source projects which we could use to add this to our docs.
I
This version, currently, is not the one which we'll use; what I'm currently using is to showcase what the YAML looks like, to get a better UI view. But yeah, we probably can use this in the docs to give a more detailed view about the API endpoints, like something we are seeing on the right: that, yeah, if we get status code 200, if it's passing, then what kind of result we will get, a sample result and stuff like that, and what kind...
G
...of errors it can give you. Regarding the hosting thing, actually: once we have this auto-generated Swagger YAML, we can actually... I think our documentation is Jekyll-based, so, as far as I've searched, we have some Ruby gems which are capable of parsing this YAML and then generating this.

G
So we can kind of integrate this Swagger YAML with the documentation that we already have. Yeah, basically, there is a plugin which is capable of parsing this thing into a UI, and there are quite a few of them as well if you're using Jekyll docs, so we can still host it alongside the docs.
D
Nice, that sounds very promising, and actually it makes a lot of sense now why Jagruti has joined today; sounds like there's some documentation work to do, sounds like a whole new area. That's perfect.
B
I have a question. This is really good. Is this... where is this? Is this in the code? Is it part of the code, or where does this reside, then, in the end, eventually? Do you, like, add stuff to the code and then extract it from the code?
D
One of them is a set of REST APIs, a set of endpoints, and over time any number of different contributors have come to help expose new capabilities through REST, and we've yet to have a... there's no maintainer sort of ruling the set of endpoints and their structure with an iron fist, or no one mandating that they all, as a whole, make sense. So if, as we go to look back over those, maybe look to restructure them, then also maybe we should, you know, document their pur...

D
You know, the purpose of their structure, how it is, which ones support what operations, what people should expect from those operations, how do you... is there a common way in which you paginate, or a common way in which you can filter, as a consumer of the API? And Swagger... so, so it's a bit confusing, because there's a company, SmartBear, and there's an open source specification called Swagger: that's a sort of common way of capturing the definition of your APIs, generally REST APIs.
D
This is the same company that makes... what is that, SOAP? Is it SoapUI? Yeah, yeah, they make SoapUI. And so then it sounds like... please correct me, Kush and Dhruv, if I'm wrong, but it sounds like what we would be able to do is we would go over and describe their purpose in a Swagger format, in a Swagger YAML format, I think, which is what you were showing on the left-hand side of your screen, to characterize and add metadata around each of the endpoints. And since that's a specification, Swagger, there's tooling that's been wrapped around that for different languages: so go-swagger for Go, to be able to ingest that handwritten YAML to then auto-generate...
I
Yeah, the only thing I want to add is, like, the specification is the OpenAPI specification, and currently the one we are using is version 2.0, formerly known as Swagger.
B
Yeah, I mean, I was just sort of, like... for some frameworks and languages, like, I think, JAX-RS (it's a long time since I've worked with that), you actually have the OpenAPI specification as part of the REST endpoints in the code. And so you can go either way: you augment your code with the documentation, then you extract the Swagger, or the OpenAPI specification, the OpenAPI document, from that; or you can also go the other way, where you write the spec first.
G
Actually, to answer the question: we've been using go-swagger for most of it. So, in that case, right now, how I was doing it: I was creating a doc.go file, and in that doc.go file I was annotating the structs that we send as responses. So that is entirely Go, and the annotations are actually Go comments, and those annotations are just like any Go...

G
...generator's annotations. So you write a `swagger:` annotation, and something in go-swagger would actually read those things and will automatically generate the YAML file that you were showing on the left side of the screen. So that's why the file is auto-generated. The right side, the UI, that was also auto-generated: it could be generated using go-swagger, if you are using Go for hosting it, or it could be generated using gems...

G
...that is, Ruby, if you are hosting it alongside the docs on Jekyll. So both of them are agnostic with respect to closeness with the code: there would be a Go file; it could be a, like, dedicated file, like I was doing, where I was creating the structs and annotating them.

G
Or, all we can do is, while we are writing the actual structs for responses, we can just do the annotations there. The whole reason that I was creating another file in the same package was because I wanted to avoid merge conflicts. There are no such restrictions that go-swagger, or all of these auto-generation tools, put on us, so, yeah, with respect to closeness, we can do it as we please, actually.
H
So, is this YAML format, like, given to you on that screen, or is it something that you created for yourself?
I
So this is something which either you can generate using go-swagger, or you can write it manually yourself. It is dependent on the OpenAPI specification, right? You can look into it; then you will get to know what kind of format they need, and what kind of file structure they have, the schema which you are using. So if you want to check it out: the current version which we are using is, again, the 2.0 OpenAPI specification. So I think that should answer the question.
C
I've got a question here. Basically, it's around the plan of action: so, why exactly are we targeting hosting it as docs, hosting it as a central API endpoint, instead of having it alongside the Meshery server?
G
I think the idea was that, just like we have Meshery docs in general, you would also have the API documentation alongside them. I think that was the idea; I'm not quite sure. I think it's been a while since I discussed it.
C
Yeah, yeah, like, basically: if we have it alongside even the Meshery server, that should... So we don't have anything functional in docs, right? By "functional" I mean: even if we host docs.meshery.io with this Swagger functionality, except for the list of endpoints, we won't know anything else or we won't do anything else about that, right?
D
Yeah, good question here. I'm not going to presume to say what the answer is, necessarily, but sort of talk us through it... which, so, by the way, I missed part of what Michael's question was earlier, so I pontificated on the wrong thing. In the docs today, docs.meshery.io, the vast majority of the documentation is non-interactive, or is, you know, fairly static in nature.

D
If you were to browse to... I think it's under "functionality", or if you browse to "adapters", I think we'll end up seeing a list of supported service meshes, and if you were to choose Open Service Mesh... yeah, we... I believe we'll end up seeing... and, nope. If you go back and maybe choose Istio or Consul, we would end up seeing an interactive Katacoda lab. So: "let's try out the mesh adapter for Consul, and you can do it using this lab". Nice.
D
So I point this out kind of in the context of: if we're auto-generating... if, through a Jekyll plugin, we're generating some amount of API documentation that people can interact with, even though it's static data, even though it'd be hard-coded static responses, that might be... that might be nice to have. And, Abhishek, was part of your thinking, like, hey, as a runtime...
C
Yeah, like, basically, the reason I brought it up is: the first thing is that the effort that you put in here to write stuff will be relatively less if it is hosted alongside, in the Meshery server itself; then you'll just have to work on the annotations for every new API endpoint. And the second thing is that, even if you wanna... let's say, in the future, you're adding more API endpoints: you can easily test it alongside. You don't have... you don't have another external dependency outside.
D
A solution... but it's... which is the same thing that these guys were presenting, like, yep, those are one and the same. Yeah, agreed: you would, as you're writing the Golang, as you're writing out that endpoint, you go ahead and describe it a little bit further. And, in part, to Michael's specific question about, like, hey, how much of that description is embedded in the Go, how much of that is sort of on the side and, you know, auto-generated... I'm still a little confused on that point. I think Kush was saying there's a couple of ways to go, and he had a perspective about that. But, for... I think, I'm pretty sure, your first point, Abhishek, is, like, really well received and, in fact, is like: there is not intended to be...
D
...you know, much of a difference between... like, it's not a separate system in which, okay, you're writing the Go, and then you're checking that in, and then, separately, as a separate second activity, you're going over to a different doc system and then updating and annotating a separate copy of the endpoints. Like, that's not the goal, or that's not the case.
D
Okay, nice, nice, nice. The second bit, which... I might have put some words in your mouth, I'm not sure... the second part, in my mind, is: okay, well, so, great. So if this type of documentation is being, you know, auto-generated based on the description, the YAML, the Open... you know, based on this...

D
Well, that can be helpful for the contributor, the developer, the user that comes over and goes to docs.meshery.io, and there's a certain amount of quote-unquote "interactivity" that they can have when they click the "Try it out" button, if someone has provided a small list of hard-coded responses that that API, that endpoint, might present. Okay, that can be helpful. Was the second part of what you were saying, like, hey...
G
Actually, OpenAPI has the concept of environments, so you can have any number of environments in there you can point to. Like, right now it's pointing to, I think, a single environment, but you can have a drop-down list, you know, where you can specify any number of environments that you want to. So one of them could be localhost; one of them could be somewhere with, like, Meshery running remotely and you're trying to access it. So OpenAPI supports it out of the box.

G
I don't know if you would have known that you want to test your Meshery through the docs, through this UI itself; so that's what I was actually pointing at, so you can see. So I was pointing out that you can have a number of environments in this doc itself, and you can just select that. I'm talking about localhost, what's called, 9081: then you go and click on "Try it out" and send a request, and it will go to localhost:9081... in case you have Meshery running somewhere else...
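In OpenAPI 3 this multiple-environments idea is the `servers` list (Swagger 2.0, which the project currently uses, only has a single `host`/`basePath`); a sketch with illustrative URLs:

```yaml
servers:
  - url: http://localhost:9081
    description: Local Meshery server
  - url: https://meshery.example.com   # illustrative remote deployment
    description: Remote Meshery server
```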
C
But, like, basically, the number of environments... so, like, let's say I added a new endpoint, like... okay, wait, I think we are running out of time. Probably we'll discuss this, or we'll move it to some other time, next time.
C
All right, next up is... I don't think Anirudh is here, is he? No? Is Hussaina here?
F
Oh, wait... thanks, yeah. So, basically, I pushed one review for the issue. Do you want me to talk over what I have done?
F
So I was adding the scripts to get the kubeconfig from the EKS environment. As part of this I had made some changes; looks like that is not working for Azure, so... but it wouldn't work for any other cloud service either.

F
Also, so, you know, this is... yeah. So, in scripts.go, for each type of cloud service, we have defined some functions where we generate the kubeconfig by using the CLIs provided by those particular cloud providers, and we generate the kubeconfig.yaml.
So the problem here is that we have assumed that the place where this command would be run will be a Linux-based or Unix-based operating system. So that's how we have hard-coded this path, as `$HOME/.meshery`.

But what happens is, this CLI could be run on a Windows-based OS also, so that is where it was failing to recognize this path, because in Windows there would be backslashes in the path... I mean, in the file paths. So what I have done is... yeah, in the config file this path is given, so for any type of provider we are going to use `$HOME/.meshery/kubeconfig.yaml`, so I am using the filepath module. So this actually takes care of OS-level dependencies.
F
So what I'm trying to do is: I am trying to create this directory up front, and, based on the arguments, we will be passing this kubeconfig.yaml path as input to these functions. So I couldn't find a way around this, so it is kind of adding a new parameter to the functions... so, I mean, I am doing this to avoid checking for the OS in the shell scripts.

F
So, ideally, what we should do is, instead of relying on the shell script to do all these tasks, we should do this in the Go program itself. This is something we have discussed some time back.
D
Wonderful, yeah. I think, on each of the fronts, like, hey: if we can... the more unified the logic inside of mesheryctl, so the more of it is written natively in Go, the better experience and the higher quality we'll be able to hopefully guarantee for the user of the CLI. Granted, we'll have, you know, a couple of external dependencies, probably, inevitably, and that's just sort of the way that it is.
D
Unless, you know... for the... there are, like, these three cloud providers; each of them has its own Kubernetes system, so AKS, EKS, GKE. Each of these cloud providers also has its own CLI that you use to interface with various cloud services, including their managed Kubernetes service.

D
...rather than, you know, needing the Azure CLI installed, or the gcloud CLI or the AWS CLI installed on their system, it is for us to potentially, you know, from mesheryctl, use their SDKs to embed that logic, you know, directly in Go. Hussaina, is that aligned with your ongoing thinking here, with your potential approach to being rid of these bash scripts?
F
Oh, actually, we can. Some things are very simple, so they can straightforwardly be made into Go code here, right? It's just invoking one simple command. But for GKE, I think there are many things that they are doing, so we need to evaluate how we can achieve all of this in the Go programming language.
F
So that's why I did not attempt it at this point of time. So, whatever is required is being passed as input for now, right? So there was a dependency on the kubeconfig.yaml path, where we are going to flatten out the config from, let's say, az or eks or gke, right. So the path is now, anyway, OS-agnostic, because we have handled it in the mesheryctl code itself... I meant to say, in the config.go itself.
D
That makes sense. So this PR makes sure that file path references are OS-agnostic, and you're leveraging one of the standard libraries from Go, the os package, and some of its... yeah.
D
Nice. Do we have... how do I ask this question... as the mesheryctl developer, do you have to explicitly perform a check to ascertain which operating system mesheryctl is running on, and then store that in a variable and pass that variable along to these packages, the os package and the filepath package? Or do they just... do they figure it out?
F
Yes, they do figure it out, because there is a difference between the path and filepath modules. So path would eventually use a forward slash itself, but filepath will take care of it. So, for example, this user.Current module, right, which has a home-directory kind of construct: here, this takes care of OS dependencies, so if it is Unix-based, it would return `$HOME`.

F
So these packages, the Go standard packages, are taking care of the OS dependencies.
F
Yeah, so I have tried out this core logic as a sample program in both a Windows-based VM as well as a Linux-based VM, so these things work as expected. The only thing is, I could not do this end to end. So if someone has a setup where they can take the patch and quickly try this... I think someone reported this issue, right, maybe with the az CLI; if that person can quickly verify this, it would be great, since I don't have a kind of setup as such.
D
Helpful, Hussaina, this is, from my perspective. Thank you; this is great. There's a couple of more comments that I have; I'm gonna put them into Slack, just so... I recognize we're a few minutes over, so...
C
All right, I think we are done with most of our agenda, and clearly we are running over time. So, all right, we'll end this call here. Thanks a lot, everybody, for joining in; it was a fruitful discussion here. We'll see you next on the next call, next week. Bye, y'all! See you, guys, bye! Bye, see you, bye!