From YouTube: Meshery Build & Release Meeting (Nov 25th, 2021)
Description
Meshery Build & Release Meeting - November 25th 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/meshery
Twitter: https://twitter.com/mesheryio
LinkedIn: https://www.linkedin.com/showcase/mes...
Docker Hub: https://hub.docker.com/u/layer5/
A
Okay, so Rudraksha won't be joining us today. Did Mario mention anything?
B
It might be a North American holiday that gets celebrated — Thanksgiving. I'm not sure. I think a lot of times Canada will celebrate some of the same events, and South and Central America might as well.
B
So yeah, but we should take a look. I'll grab the link to the UI build failure and we'll be able to figure it out.
A
All right, so let's officially start the meeting. Welcome, everyone, to the Meshery Build and Release meeting. Today is the 25th of November, and let's start off with the Meshery v0.6 release.
A
So, as per our last discussion, we are all green in terms of releasing a release candidate. And one PR that was in the pipeline was Rudraksha's PR that did some fixes — is that merged?
B
So, good question — you might know this better — on some of the adapters, and the adapter library, and MeshKit, and Meshery Operator, and MeshSync?
B
You know those can be released independently, but now is an opportune time to rev them. And so — have those been released? Have many of those been released recently?
B
Would you mind taking a look at the Operator, MeshSync, MeshKit, and the adapter library — those four — and I'll try to take a look at the adapters themselves? Okay, nice.
A
Okay, so after this meeting I will make an RC, and we can let it sit for a while and then gradually progress towards v0.6.
B
Is everybody able to build and run Meshery? I had a corruption in my database — or Meshery Server just wasn't able to write to the database — and I deleted it, rebuilt Meshery, redeployed Meshery, and it still wasn't able to. I don't know if there was some — yeah, right now I'm not able to build and run Meshery with the database.
B
I'm curious — has anyone on the call ever attempted to build Meshery or run it locally?
A
Okay, so the next topic is building and releasing WASM filters. Utkarsh, could you give some idea? Yeah.
C
Actually,
even
I
will
have
to
sort
of
ask
that
regarding
both
actually,
I
could
understand
what
the
issue
was
about,
but
what
I'm
supposed
to
add
on
here,
that's
something
I
even
I
had
to
ask
also
the
regarding
the
comment
on
the
top.
That's
also
something
that
I
had
to
ask
about
so
yeah.
Basically,
I'm
the
one
who'd
be
getting
to
know
what
what
this
is
about.
B
Yeah, his name isn't here. So as we go to ingest seed content, we will need these WASM filters to be pre-built. Right now we don't have a workflow that builds each filter and, upon release, makes those binaries release artifacts that can then be referenced as part of seed content ingestion. So yeah — we need to.
B
So
yeah
root
rocks
had
been,
I
think,
had
built
a
few
of
these
in
the
past.
Potentially
we
don't
have
standard
workflow
just
yet
for
built.
You
know
compiling
rust,
but
conceptually
if
we
automate
well,
hopefully
we
have
written
down
cargo
or
whatever
else
we're
using.
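As a rough illustration of what such a build step might automate — a minimal sketch only, assuming the filters are Rust crates compiled to WebAssembly with cargo (the target triple, crate layout, and artifact path are assumptions, not the repository's confirmed structure):

```shell
# Hypothetical CI step for building a Rust-based WASM filter.
# Target triple and output path are illustrative assumptions.
rustup target add wasm32-unknown-unknown
cargo build --release --target wasm32-unknown-unknown

# The resulting .wasm binary, found under
# target/wasm32-unknown-unknown/release/, would then be attached
# as a release artifact for seed-content ingestion to reference.
```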
B
And
then
so,
a
sheesh
just
an
item
for
you
to
mentally
track.
What
you
know
as
and
when
those
were
to
ever
be
published
or
that
that
workflow
ever
comes
through
yeah.
D
I
have
made
a
note
of
that
after
a
workflow
comes
for
that
and
we
have,
we
have
releases
for
it.
The
two
things
would
need
to
be
changed.
First.
Is
that
you
know
inside
of
the
inside
of
the
make
inside
that
make
not
a
make
file?
Actually
the
docker
file.
D
I
will
I
I'll
have
to
change
some
code
so
that
right
now,
the
only
release
that
is
available
for
the
filters
I'm
using
that
instead
of
the
latest
release
so
I'll
I'll
change,
that
to
fetch
the
for
the
latest
version
and
the
same
code
would
be
changed
in
the
runtime
seeding
of
the
content.
So
I'm
yeah
I'm
keeping
a
track
of
that.
B
Yeah,
just
an
understanding
of
what
the
ask
is
with
our
convention
of
listing
people's
names
is
like
hey.
If
it's
in
square
brackets,
that's
the
person,
that's
going
to
speak
to
it,
but
we
kind
of
don't
have
a
convention
for
yeah.
I
guess
my
name
should
have
gone
there,
so
I
could
speak
to
it
to
to
kind
of
hand
it
to
rude
car
short
to
rudraksha
that
doesn't
doesn't
really
matter
but
yeah
yep
yep.
That
was
just
that's
that
one.
A
All
right,
so
we
have
a
review
for
ashish's
workflow
on
mystery
adapters,
so
ashish
final
update.
D
Yeah, I've got it here. I have updated the current behavior inside of the doc, and then reflected the exact same thing inside of the PR that I've made.
D
So
so
yeah,
this
is
a
step
by
step.
This
is
for
the
prerequisite
of
that
workflow,
so
that
workflow
needs
another
job
to
run
before
that
our
workflow
actually
runs,
and
this
basically
goes
through
what
is
the
prerequisite
for
it,
one
of
which
is
to
actually
upload
the
pattern
file
dynamically
as
an
artifact,
which
would
be
then
downloaded
in
the
subsequent
workflow
that
is
referenced.
D
And
then
this
is
the
step
by
step
of
what
the
functionally,
what
this
workflow
that
is
referenced
in
all
of
the
adapters
would
do,
and
after
that,
I've
added
the
expected
inputs
and
what
what's
the
expected
output
behavior,
and
I
have
added
the
exact
same
thing
inside
of
the
inside
of
the.
If
you
go
to
the
preview.
D
So yeah, that's the update from my side. And this was just to confirm — okay, so about the token thing that was being discussed in that thread: the final update is that when we access secrets, the secrets that the workflow tries to access are not from the fork that is actually making the PR; it tries to access the secrets on, basically, the master of the repository against which the PR is being made.
D
So,
of
course,
there
is
no
way
we
can
access
it
unless
we
are
the
workflow
actually
executes
inside
of
the
master.
So
I
haven't
removed
the
logic
where
we
do
not
use
the
remote
provider
at
all.
So
if
the
secret
is
empty,
we
fo
back
to
the
local
provider
and
local
talks
into
kernel.
So
in
this
pr,
so
rudraksha
made
a
fork
and
we
tried
this.
So
everything
works.
Everything
works
fine
in
this
case,
so
because
there's
of
course,
no
need
for
any
tokens.
All
the
checks
passed.
B
So the rigor — the docs that you've written down — they generically describe the test harness that we have now: this set of workflows and their purpose. The particulars of what's being tested, it sounds like we don't have listed in here — maybe in part because the generic capability is to be able to receive a pattern file, which could do any number of things.
B
When are these being run? So these are being run on PR to master — prior to merge — for each adapter repo. Okay.
D
It depends. Where does it show the time? Does anyone know where it shows the time for the action to run? I think it might show it here: it took 13 minutes, and this is because I took a five-minute sleep in between, plus a one-minute sleep. That five-minute sleep was to make sure that all the pods — because, the way I see it, if it takes more than five minutes, I guess the pod—
D
There
is
some
issue,
so
I
have
said
that
I've
hard
coded
that
to
five
minutes
and
as
well
as
there
are
other
sleep
times
of
around
around
I
guess
60
seconds
or
so
at
max,
so
the
whole
sleep
time
would
be
around
six
minutes
and
the
next
or
the
entire
time.
The
bottleneck
is
basically
the
amount
of
time
it
takes
to
build
the
image.
D
That's
basically
what's
taking
most
of
the
time,
because
we
actually,
after
checking
out
the
and
we
check
out
the
code
of
the
of
the
of
the
branch
that
is
making
the
pr.
So
if
we
build
this
new
image-
and
this
takes
almost
around-
I
don't
know
like
five
to
six
minutes,
so
it
takes
to
actually
build
this
image.
So
the
total
comes
out
to
be
the
the
rest
of
the
things
take
around
you
know
in
seconds.
B
Okay — well, we're sleeping for five minutes. It would be nice if that were a five-minute timeout, but while the timeout is sleeping, in the meantime we're polling — checking on our first test assertion, right? That way you might exit the sleep early, because you don't necessarily need to wait for five minutes if it's deployed within a minute.
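The suggestion above — poll for readiness under a timeout instead of sleeping a fixed five minutes — can be sketched roughly like this. This is a minimal illustration, not the workflow's actual code; the function name and the `kubectl` check shown in the comment are assumptions:

```shell
# Poll a readiness check until it succeeds or a timeout elapses,
# instead of an unconditional five-minute sleep.
wait_for() {
  timeout=$1; interval=$2; shift 2
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if "$@"; then                 # run the check command given as remaining args
      echo "ready after ${elapsed}s"
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "timed out after ${timeout}s" >&2
  return 1
}

# In the real workflow the check might be something like:
#   wait_for 300 5 kubectl wait --for=condition=Ready pod -l app=<adapter> --timeout=0
# Here we use `true` as a stand-in check that succeeds immediately:
wait_for 300 5 true   # prints: ready after 0s
```

Because the loop returns as soon as the check passes, a deployment that is ready in one minute no longer costs the full five.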
B
Well then — I was going to move us on a little bit from this, to reflect on how these workflows could be used for Meshery perf, for another component, or for Meshery Operator, and then ultimately for Meshery itself. But we don't have to get into that right now. Utkarsh, if you had something more?
C
Yeah, so just two things. One: in the workflow, it is taking the pattern name — at least from what I could see — but pattern names are not unique. So in that case, how would it identify which pattern is being referred to? The second: I see a download-pattern-file stage. I think Yash was the one who implemented this particular command initially, and I think it already supports HTTP-based imports.
C
So, instead of a local file reference, you can directly mention a URL. It would actually pass it on — do the pulling, basically — pass it on to the endpoint, and it will take care of that. So, those are just the two things.
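If the command behaves as described, the URL-based form might look like the following — a sketch assuming `mesheryctl pattern apply -f` accepts an HTTP(S) reference as well as a local path, per the discussion; the URL and file names are hypothetical placeholders:

```shell
# Local file reference:
mesheryctl pattern apply -f ./circuit-breaker-pattern.yaml

# HTTP-based import, as described above — the pattern is fetched
# server-side rather than read from the local filesystem:
mesheryctl pattern apply -f https://example.com/patterns/circuit-breaker-pattern.yaml
```

If that holds, a workflow could skip the upload/download-artifact round trip for static patterns and reference them by URL directly.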
D
Okay — so, first of all, on the second thing: the entire point of doing this is to make that pattern file dynamic. This set-pattern-file job would be responsible for making changes to that pattern file.
D
So what I'll have to do is first create that pattern file, then upload it somewhere — not using the action that I'm using right now — and then download it from there. And this name, "pattern-file", is just a construct of this particular action; it's like a key-value pair. For example, when I upload — and I've also mentioned in the docs that the actions/upload-artifact action would be used to actually upload this.
D
It
has
to
be
uploaded
with
this
key,
because
this
is
the
key
that
I'm
using
inside
of
the
inside
of
the
workflow
to
catch
to
catch,
that
back
to
catch
that
particular
artifact
that
that's
uploaded.
So
this
is
not
uploaded
anywhere
else.
The
this
job
takes
care
of
it
and
I
think
github
has
its
own
concept
of
artifacts.
That
is
where
this
is
being
uploaded
so
and
yeah.
C
Yeah, I think you answered the question. So you upload it somewhere and then you download it back. In one job you actually create a pattern dynamically, and the name of that thing is "pattern-file"; then in the second job you pull it — you download it — and you pass it to mesheryctl pattern apply -f.
D
And this is where it is uploaded — these artifacts here.
D
Yeah — basically, we can have a template, or not have a template; it depends on the case. In this particular repository, what I'm doing is — let me show the template. I have a template of a particular pattern file where I've filled in most of the things, except for the version.
D
I
have
left
the
version
empty
and
wait
a
second,
so
yep,
sorry,
okay,
so
I
haven't
left
it
empty,
but
I
had
left
it
empty.
I
have
made
it
because,
because
of
some
other
reason,
what
happens
is
that
it
takes
this
pattern
file
and
using
white.
You
would,
you
know,
updates
this
particular
thing
to
reflect
upon
like
latest
version
or
whatever
change
we
can
make.
We
can
make
it
at
this
stage,
and
here
I'm
hardcoding
1.4.4,
I'm
I'm
not
going
to
do
that.
I've
just
this.
D
Basically,
this
the
when
I'll,
create
the
pr
in
adapters.
I
will
be
fetching
the
version
name
dynamically
and
then
setting
it
here
I
can,
we
can
set
basic.
We
can
set
more
more
than
just
a
version.
We
can
create
the
entire
pattern
file
at
this
stage.
We
can
change
the
name
of
you
know
we
can
change
the
name
of
the
services
or
we
can
add
more
fields
and
do
ever
anything.
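The templating step described above might look roughly like this — a sketch, not the actual workflow. The file name, the YAML keys, and the yq v4 syntax are assumptions; the only confirmed details from the discussion are the use of yq and the hard-coded 1.4.4 placeholder:

```shell
# Hypothetical pattern-file template with the version left blank,
# to be filled in dynamically by the set-pattern-file job.
cat > pattern-template.yaml <<'EOF'
name: test-pattern
services:
  mesh:
    type: IstioMesh
    version: ""
EOF

# In the workflow, VERSION would be fetched dynamically rather than
# hard-coded (1.4.4 stands in for the fetched value here):
VERSION="1.4.4"
yq e ".services.mesh.version = \"$VERSION\"" -i pattern-template.yaml
```

The same mechanism generalizes beyond the version field: any key in the template could be rewritten before the file is uploaded as the "pattern-file" artifact.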
B
We should identify next steps toward Meshery making use of some of these. And if there aren't open enhancement requests on mesheryctl, we should probably have those.
D
Like something to get the pod names, also.
B
I think so — I think that's desired. There's actually not a ton of intelligence there if you do mesheryctl system status. Well, there's an open issue right now: you might be running the Operator, MeshSync, and the Broker on a cluster—
B
It's
like
well
whoops,
that's
accurate,
measuring
server
isn't
running,
but
the
others
are,
and
so
I'm
giving
an
example
of
a
related
kind
of
enhancement,
request
to
be
able
to
say:
mesher
is
more
than
just
a
server,
and
it
is
valid
that
you
might
have
just
the
operator
on
clusters
versus
the
server
or
the
server
itself
might
have
crashed,
but
the
adapters
are
still
running,
and
so
so
yep,
I
think,
attacking
on
to
that,
like
the
ability
to
just
flag
provide
a
flag
as
a
specific
filter
that
you're
just
looking
for
this
pod
name
or
this
component
name,
it's
probably
okay,
it's
not
the
end
of
the
world.
B
Necessarily
that,
like
that's,
not
the
best
example
of
like
oh
hey,
it's
very
clear
that
measure
ctl
needs
to
be
updated
for
this
use
case.
It's
like
you
know
like
it,
doesn't
really
hurt
that
much
to
run
cube
ctl
to
figure
that
out
necessarily.
B
Good, yep. The pattern files — I don't know if you just said this — but those pattern files, as we identify... I'm sorry, the test cases that we have in the test plan spreadsheet.
B
Those
can
drive
the
creation
of
any
number
of
pattern
files
pattern,
files
that
cover
a
certain
percentage
of
those
test
cases,
and
it's
highly
likely
that
there'll
be.
You
know
one
de
facto
pattern
that
gets
run
whenever
someone
is
attempting
to
merge
a
pr
against
master
and
it
tests
probably
like
the
happy
path
so
to
speak,
like
probably
like
you
know,
doing
a
particular
thing,
but
then
separately
scheduled,
maybe
on
a
nightly
basis
or
on
a
weekly
basis
or
some
other
triggered
event.
B
Maybe
it's
time
based,
maybe
not,
is
a
a
more
complete
set
of
regression
tests
that
might
use
any
number
that
might
kick
off
any
number
of
times
and
run
multiple
pattern
files
that
that
is
a
good
next
step
like
how
do
we,
the
use
of
mini
cubes
kind
of
bothersome,
based
on
networking?
B
I
think
we've
overcome
generally,
mostly
almost
all
almost
entirely.
We've
overcome
like
the
certificate
issue.
We'd
face
the
lack
of
access
to
certificate
data
when
using
something
like
mini
cube.
B
I
think
that
that's
true,
I
think
what
we
haven't
entirely
overcome
is
caching
of,
and
this
might
be
fixed
in
the
multi-cluster
pr
that
wood
crush
has
but
there's
some
like
the
adapters
themselves.
They
they
they'll
cache
the.
C
Yeah
actually
multicultural
cpr
doesn't
address
this
issue
because
this
actually
comes
from
adapters
and
how
they
are
actually
handling
their
configuration.
That's
actually
separate,
okay,.
B
It
sounds
like
ashish
and
utkarsh
between,
where
you
guys
are
sounds
like
you,
guys,
might
be
working
toward
creating
a
few
patterns
and
that
some
of
those
patterns
are
good
candidates
for.
D
Because I had a question. I have not gone through all of the use cases of the pattern files — so, in what cases is there a pattern where, when it is deployed, we do not expect any pods to come up? Maybe it creates a service or does something else, but not a pod. We won't be able to test that kind of pattern file with this workflow, because the workflow has that expectation of pods built into it.
C
Actually — because using those YAMLs, those pattern YAMLs, we can provision anything: it could be just ConfigMaps, or just Secrets, or just ClusterRoles — anything. So for those things which do not create pods, I'm not sure. The partial answer is that we can provision things which will not lead to the creation of a pod, via patterns — via those YAMLs.
C
But
what
I'm
not
sure
of
is
that
what
all
scenarios
will
we
encounter
where
we
would
want
to
test?
If
actually,
we
may
encounter
so
something
like.
But
again,
this
is
a
scenario,
a
pattern
that
I
was
writing
was.
I
was
writing
circuit
breaking
in
linkedin,
so
it
will
be
actually
applying
a
configuration
to
linkedin
so
testing
that
out
so
testing
that
the
pods
have
come
up
won't
actually
test.
If
circuit
breaking
is
working
the
way
to
test
that
would
be
quite
different
but
yeah.
C
Basically,
what
I'm
coming
to
is
that
I'm
not
quite
sure
that
what
would
be
the
right
test
for
everything,
a
pod
is
definitely
one
of
the
things,
but
the
patterns
that
we
are
writing
like
circuit
breaking
is
one
of
them.
Retries
or
retry
is
something
that
I
I'm
going
to
write
so
yeah.
There
are
patterns
which
will
not
exactly
translate
creation
of
pods,
so.
C
So
actually
so,
if
we
just
want
to
check,
if
if
we
just
want
to
test,
if,
if
the
pattern
apply
command
has
worked
and
machine
server
has
completed
responsibility
in
that
case,
so
the
source
of
fruit
is
almost
messy
server.
Mesh
reservoir
actually
does
keep
track
of
the
things
that
it's
provisioning
wire.
C
It's
called
battery
resources.
I
guess
that's
what
we
are
calling
it
so,
basically
that's
sort
of
the
source
of
truth
and
it's
also
source
of
truth,
because
there
is
only
tom
stoning.
So
if
even
if
someone
deletes
a
resource,
it's
never
deleted
available
in
the
table,
because,
basically
to
keep
track
of
the
fact
that
it
has
come
up,
it
has
died
or
something
like
that,
so
that
is
so
almost
sort
of
source
of
truth.
But
again
that
would
that
will
not.
C
That
will
only
tell
that
this
was
something
attempted,
and
we
handed
over
to
kubernetes
kubernetes
also
has
successfully
did,
is,
and
did
it,
it
did
its
thing,
but
what
we
don't
get
to
know
is
that
whatever
thing
we
actually
did
was
that
successful?
Not
so,
basically,
actually
it
would
be
more
about
that
is
the
pattern
apply
and
pattern,
workflow
is
working
or
not
or
do
are
we
testing?
Is
the
pattern
working
or
not?
That
is,
is
circuit,
breaking
pattern
that
we
have
written.
D
For things like that — I don't think testing a circuit-breaking pattern would be very easy like this, because I'd have to first figure out the endpoint that I have to hit, and with what RPS, and then hit it; and as pattern files change, the considerations would change. So there cannot be a general solution for it, as far as I can think.
B
That's the goal, yeah. No — actually, I wasn't listening to the conversation, I apologize, that last part. Which is to say: what's interesting is that we actually already test traffic split, using SMI conformance. So we've done it there. That's not an architecture we're looking to invest into here, but technically we've already done that. With the architecture that we're looking to use here, I think the hard part might be discerning—
B
—what the service endpoint is of the sample app that's been provisioned. A lot of times it's going to be the same, but even then, that's actually something that Meshery — MeshSync, specifically — needs to be readily identifying.
B
Sometimes it's MeshKit that might need to identify it. But whether it's written into MeshKit or into MeshSync, it's some intelligence that — right now it's identifying all the service meshes that are there, and pulling back info about the workloads that are running on them, but it's not really qualifying them. It's not doing much more to say: these are the workloads on this mesh versus that mesh, and these are the endpoints of each of those workloads.
B
It
can
help
qualify
those
a
little
bit
technically.
The
data
is
already
there.
We
have
a
use
case
for
running
for
auto,
determining
the
appropriate
url
when
someone
runs
a
performance
profile
that
ideally
inside
the
pattern
or
inside
the
workflow
inside
the
workflow
or
inside
the
pattern,
either
way
we're
able
to
specify,
like
here's,
the
name
of
the
performance
profile
and
here's,
the
name
of
the
endpoint
and
actually
inside
the
performance
profile.
You
might
already
specify
the
endpoint
and
so
yeah
there's
it.
B
You
know
that's
a
little
that
that
fully
end-to-end
kind
of
verification
a
little
bit
deeper
than
where
we
are
in
github
workflows,
but
it's
a
path
that
we've
tread
before
and
it's
it's.
It's
definitely
where
we
want
to
get
to
it's
in
part
like
the
s
p,
benchmark,
github
actions
is
like,
are
they
they're
generating
load?
D
Okay, one final remark on this — actually, two final remarks. First: I think, for adapters, checking the pod is good enough. As for the extended functionality that we talked about — inside of the workflow I have currently set the expected pods, expected namespace, and all those things as required values. What I can do is remove that "required" and make it more flexible.
D
So in the future we can use that core workflow and reuse it in a way where we keep adding "expected this, expected that", and when we reuse it, we will expect different things depending on how we are using that particular workflow.
D
So if, in a pattern file, we are actually expecting some services, we can reuse that thing and make use of "expected services". That would make it somewhat more extensible than it is right now. For adapters, right now, I think it's good enough. So yeah, I'll make the final changes you mentioned in that PR.
D
Okay — so sometimes it gets really gray for me to understand what we can do, because users also use kubectl. Are we trying to completely replace kubectl with mesheryctl? There are things that we agree we should let kubectl do — that's not our job. So when it gets gray like this, I can't quite tell.
D
I
I
kind
of
failed
to
figure
out
whether
that
should
be
something
like
a
feature
in
machine
ctl
or
that
would
be
completely
redundant
because
it
will
just
be
a
shim
over
what
cubesatlay
is
already
doing.
So,
if
you
can
clarify
you
shed
some
light
on
that
yeah,
it's
not
always
easy
easy
to
figure
out.
If.
B
We want to focus on more opinionated things. With kubectl you can run any type of workload — some kind of job, you can be doing whatever. With mesheryctl, ideally you can ask it questions like: how many proxies are out there across all the service meshes being managed? That's nice.
B
So,
while
you
could
ask
you
could
do
that
with
cube
ctl,
it
would
take
a
few
commands
and
you
probably
have
to
do
some
go
templating
or
some
bash
or
some
whatever
you
know,
awk
and
said,
and
whatever
you're
going
to
use,
but
but
from
actually
ctl
you
should
be
able
to
just
call
one
command
like
show
me
all
proxies.
Show
me
status
of
the
proxies.
B
Tell
me
number
of
control
planes
like
it's
like
high
level,
curation
and
kind
of
opinionated.
So
when
we
look
at
something
like
mastery
cto
getting
the
status
of
mesherie's
pods
like
enhancing
the
system
status,
to
speak,
to
pot,
you
know
or
to
get
info
about
pods,
it's
like
you
could
use.
B
Cube
ctl
the
reason
that
we
might-
and
I
think
it's
an
okay
answer
for
now,
if
we
and
as
we
look
at
entertaining
something
like
that
in
the
future,
probably
the
justification
for
doing
so
is
is
that
we
desire
to
for
mastery
as
a
piece
of
software
to
be
able
to
help
the
user
manage
mesherie's
life
cycle
like
like
it's
nice.
B
If
measuring,
has
its
own
system
diagnostics
and
you
don't
have
to
when
someone's
having
trouble
with
measuring
you
say:
well,
okay,
get
out
cube,
ctl
and
go
get
the
logs
from
the
thing
based
on
it's
kind
of
nice.
If
you
just
say
mesh
for
ctl
system,
diagnostics
and
part
of
that
diagnostics
would
be
its
ability
to
look
at.
It
would
look
at
its
mesh
config
and
say
for
this
deployment
of
mesherie.
B
Oh?
Well,
that's
nice!
You
don't
get
that
anywhere
else,
so
hope
that
that
gives
you
some
kind
of
general.
It's
not
specific
guidance,
because
you
can't
really
give
it,
but
some
general
guidance
that
we
want
to
lean
into
things
that
you
can't
get
with
cube's
dtl,
something
that
we
don't
necessarily
want
to.
B
Do
it's
a
little
bit
embarrassing
if,
right
now
in
the
measure
ui,
if
you
go
into
the
ui
and
you
provision
istio
or
a
provision
a
mesh
and
if
we're
not
really
giving
people
the
right
feedback,
so
I'm
using
the
ui
client
as
an
example
of
the
same
type
of
a
thing
you
might
want
in
measuring
ctl.
If
you
do
measure
ctl
mesh,
deploy
and
you're
kind
of
sitting
there
and
there's
no
ability
to
like
verify,
did
that
deployment
go
well
and
you
have
to
turn
to
cube
ctl
to
do
it.
B
It
feels
even
worse
when
you're
doing
that
in
the
ui,
because
you
go
all
the
way
from
the
ui
client
over
to
cubectl
to
just
verify
that
meshri,
this
automation
did
what
it
was
supposed
to
do
and
at
that
point
you're
like
geez.
I
should
just
stay
here
on
cubectl
anyway.
There's
a
lot
of
pieces
of
software.
That
kind
of
lean
into
that
philosophy
and
acknowledge
it
and
just
say:
well
how
do
we
deploy
and
how
do
we
deliver
this
piece
of
software?
This?
B
This
kubernetes
centric
capability
or
we're
just
going
to
be
we're
just
going
to
live
behind
the
cube
ctl
we're
going
to
have
our
own
like
our
own
custom
resources
and
our
own
operator
and
controller
that
so
so
it's
just
kubernetes
native
and
kubernetes.
Only
that's
not
the
genesis
story
from
meshri
masri's
genesis
story
isn't
is
external
to
kubernetes.
It
wants
to
be
kubernetes
first
class,
but
it
also
wants
to
lean
into
use
cases
where
your
traffic
splitting
desire
or
your
performance
management
desire,
isn't
constrained
to
kubernetes.
B
Cool. Well, there's another issue that we've been facing — it's in the UI. And Mario is on; I think he might be able to — yeah, you might be able to help as we look at this. This thing has been causing red X's on our PRs for a week, week and a half-ish, and boy, I haven't really spent time looking at it.
E
That's — great, because the message... yeah, maybe try googling that error. Let me just take a look.
B
Oh yeah, it could have been that. Well, it's—
E
Yeah, I think it's some dependency.
B
It wasn't updated from — I don't know. I guess maybe this isn't Cypress-related.
E
No, this is during some post-install — maybe post-install steps on the Node side. Of course it is the front-end build, but it's not any test, because I see Husky install, and then it's installed, and then—
B
There were updates from Dependabot a little while ago, though. One of them was to move us from Next 11 to Next 12, and that—
C
It's in package-lock.json — it's a requirement of a requirement. Actually, I tried to reproduce this issue locally: I couldn't reproduce it on Linux, although it was reproducible. So that's strange.
B
If changes in there potentially rev a dependency, the sub-dependency may not be directly reflected, but a change to a higher-level dependency might be reflected here. So when we look at the history, the last commit — actually the latest change — was eight days ago.
C
Yeah — actually it's just the lock file that changed. Probably this is when that sub-dependency was added.
B
Does anybody think that's worth a try, to see what — I guess it's worth a PR, to see what happens in that PR.
E
You know what I think happened: there was a change in package.json, but in that commit — or in that pull request — the package-lock was not updated. So whatever change in the package.json would have caused this issue, it was overridden by the package-lock. That kind of dependency wasn't actually introduced until someone did update the package-lock. The package-lock is supposed to override, right — whatever the dependency resolutions are.
E
In that case, it might be more complex. For example, if we look at the one that is failing, some other change might have been — like, the culprit was added before. I'm not sure if I'm being clear.
B
Although — so I think in part what you're saying, Mario, is that it may not be the case that any one of these is in fact incompatible; it might be that the package-lock isn't necessarily accurately reflecting—
E
Yeah — because the committer needs to include it; the CI will not update it for us, so it could be missed. So whatever change broke the build wouldn't break it until the package-lock was updated. That's what I'm trying to say. Because I'm seeing that there's a commit eight days ago — I'm looking at the ui folder history.
E
So there are these commits on the 19th.
E
You need to click on "ui" there, then in the history for ui, look at November 18th — yeah, there's a lot; I already went over them. Sure, go to the 18th — November 18th, yeah. You see that one?
E
So here is where the UI build starts failing, right? At least, I couldn't find any earlier one.
E
When I go to that PR, I don't see any dependency changes. That's what makes me think that the failure was introduced in two steps: someone updated the package.json, and then someone else updated the package-lock.json based on that, along with whatever other changes he or she introduced. And then — let me see — there are 25 files in this, so—
B
Are you saying a potential fix here is for a contributor to locally delete their package-lock, rebuild — which will regenerate the package-lock — and then PR from there?
E
Maybe — find the package.json change that locally breaks the compilation (it needs to be a non-Windows system, as per the errors), and then, once we find that, we need to bump that down or patch that dependency in some way. Because I'm seeing that there are 25 files in this PR, #4464, with no dependency changes, yet it's the first time the build fails. So it really doesn't make sense, you know.
E
The only explanation, I think, is that the prior commit introduced this, but it wasn't detected until the package-lock was updated.
E
So let me try to see if I can find it. It should be the prior one — "SMI icon update"? No, no, wait — which one was it? Let me check.
B
It might be one that we can let slide — one that we don't necessarily have to figure out on the call.
C
Yeah, I also thought so. I tried that, and I pushed it too, and it is still broken. So probably that doesn't work — I did try that.
B
Okay, so that's not it. Other than that, it's probably worth spending time in that pattern-configurator PR — #4616. I think it has 540 files that were changed, but that's just because it resynced with master; there were only, you know, about 25 that were actually changed in there.
B
So, Utkarsh — given that you're taking the baton in Abhishek's absence—
B
It becomes — like, you know, we have test assertions and that kind of thing that you're basically going to build out a framework for, and I wonder—
B
It
comes
from
the
kudo
builder
project
we
forked
cuddle
a
while
ago,
because
it
couldn't
do
something
we
wanted
to
do
in
the
smi
conformance
tests,
but
basically
this
this
is
a
framework
that,
if
you
want
to
automate
the
creation
of
new
kubernetes
environment,
if
your
developer
wants
an
easy
way
to
test
operators
without
writing,
go
to
developer
qa
that
you
want
different
combinations
of
kubernetes
applications.
B
You
want
to
test
kubernetes
applications
over
multiple
versions
of
kubernetes,
like
kind
of
related
to
this,
just
provides
an
easy
way
to
define
your
assertions.
I
mean
not
not
this,
I'm
sorry,
that's
not
the
only
thing
that
it
does,
but
it
provides
a
stepper
through.
If
you
have
a
list
of
assertions,
a
test
that
you
want
to
verify
as
you
go
through,
then
it
will
test
those
in
sequence
or
in
parallel.
B
Anything else for today? So: Navendu will make a release, I'll make a PR, and we'll keep investigating the UI item. Think about this, all of you — hopefully everyone received the email about the plans for the 0.7 release. As part of the 0.7 release, in each area we're identifying code coverage goals, both in terms of unit testing and integration testing. And we're about to be at 0.6.
B
I think we might say we have 100% functional test coverage when there's automation for all of the test cases — assuming that all the test cases we have are documented in the spreadsheet, and assuming that's complete. If we have coverage for those functional test cases, then this is how we would track coverage there — functional coverage.
B
Nice. All right — hey, Mario, by the way, what's Thanksgiving like in your neck of the woods? Do you guys do Thanksgiving?
E
No, not in Mexico — but at Layer5 we work with folks in the U.S., so... well, they just don't.
E
Yeah, enjoy! I don't know — what was it, some kind of turkey?
B
Very nice to see all of you. Catch you tomorrow at the community call.