From YouTube: Meshery CI Meeting (June 10th, 2021)
Description
Meshery CI Meeting - June 10th, 2021
Join the community at https://layer5.io/community
Find Layer5 on:
GitHub: https://github.com/layer5io
Twitter: https://twitter.com/layer5
LinkedIn: https://www.linkedin.com/company/layer5
Docker Hub: https://hub.docker.com/u/layer5/
A
Welcome, everyone, to the Meshery CI meeting. We have this meeting every two weeks, on every second Thursday. All the attendees are here, and the first topic to start with is, I think, the most important one this week: not merging PRs if there are CI errors. We have been doing this for a while, and in my opinion it's a bad practice in a big project, or even in a small project, and most of all in an open source project.
A
For example, this week we had two pull requests that were merged with CI errors, and the problem with this is that if we merge PRs with CI errors, it's difficult, it's hard, to identify the bug later. For example, these were made by Dependabot, and we can see that the UI tests were failing, and it was because of a new version of Cypress.
A
There was another pull request from a new contributor, Anita, who has been working very hard over the last few days, and her pull requests were also merged with CI errors. For example, these network errors and these Netlify errors were carried on, and then all the next pull requests — if we look at the pull requests that are open right now, all of them are failing, like this one, this one, this one, and this one — and it's because of the Netlify errors.
A
So if anyone opens a new pull request, he or she will face these errors, and we don't know whether he or she is making those errors or whether those errors are part of the master branch. I have talked about this briefly before, like two times, and it's because the administrators can merge pull requests even if the tests are failing.
A
So one thing that I think we can do in the meantime, to eliminate or remove this bad practice, is to require the administrators to follow the same practice of merging pull requests only when CI passes. This option is not activated right now, and as Lee mentioned before, he doesn't want to be a blocker, or he doesn't want to...
A
...make it harder for the people contributing to the project. Because if, for example, Anita is contributing a lot, we cannot tell her that her change doesn't matter because the tests are failing. So I think we only need to tell her, or him, that their work is very important for the project, and that we need to wait a couple of hours or a couple of days before merging their changes, in order to fix the tests first.
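The setting being proposed here — making administrators follow the same CI rule — maps to GitHub's branch protection API, where `enforce_admins` is the flag that makes required status checks apply to administrators too. A hypothetical sketch of the payload (the check names and review count are placeholders, not Meshery's actual configuration), sent via `PUT /repos/{owner}/{repo}/branches/master/protection`:

```json
{
  "required_status_checks": {
    "strict": true,
    "contexts": ["Meshery UI and integration tests", "Lint"]
  },
  "enforce_admins": true,
  "required_pull_request_reviews": {
    "required_approving_review_count": 1
  },
  "restrictions": null
}
```

With `enforce_admins` set to true, the merge button stays disabled for everyone, administrators included, until the listed checks pass.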
B
Nice, yeah. What do you guys think, Vijay, Abhishek?
B
Sounds good, sounds good. Yeah, actually, Rodolfo, I didn't see that you'd written this or made this suggestion. This one is, well, it's tough. I agree — this is a great principle; we should do this.
B
I don't know how we're going to do it, though. So there are a few issues — actually, there are a few issues listed in the meeting minutes — about workflows that are consistently failing, and they don't have a resolution to them.
B
And so we can walk through these examples. These are the stronger examples of where we've had to override in the past, or where, if we hadn't overridden, we weren't going to be able to make progress. It's a fine line to walk, because on one side you run the risk of...
B
...bugs being included, or causing confusion for others when they see red X's on things — like, "oh, is that me, or what happened?" And so, Abhishek, you think we should turn this on and enforce it? Not do any releases of the adapters until MeshKit is totally fixed, until the BATS testing on the Consul adapter is totally addressed, until code...
B
I think those are the questions that we're facing. There's also how people feel about how long it might take to have something merged for them, but that can be managed: you can explain some stuff, you can thank them, and so on. That's probably the smallest one. The other one I would toss in there is: hey, when it becomes compelling to run that risk — like, Abhishek, you've delivered some...
C
Yeah, I think we would definitely be able to, because at the end of the day the checks are basically on static functionality and not the hacks that we do around it, so they will still pass.
B
Okay, yeah. I mean, that's not true. There are a number of times where we're trying to do a demo — like I was doing yesterday for the Service Mesh Interface group — where you're trying to fix MeshConfig or MeshKit to be able to show something, and if you've got a lint error, we would not be able to do the demo, because we would not be able to merge something because of a lint error.
B
I mean, if you're actively there working on it, you can hopefully fix it and get it done. Were you listening to the other examples I was giving, about the MeshKit dependency and the BATS tests?
B
The Meshery Istio lint failures that have been there for a long time. Yeah, okay. So what you're saying is that there should be no more releases of adapters until those are fixed.
B
I think I gave another example: if there's a failure on the Netlify docs build and the change is Go-only, it's not that you shouldn't merge — it's that no one could move forward with any other changes until something that was orthogonal to the change at hand was fixed. And so what you're saying is...
B
...we should hold up the works for everyone in these other situations. And it's not a "should": if we enable the check, no one can merge, right? So here's my issue, Rodolfo, with this. Dude, I totally agree; I so much want to go check that box; I've thought about it in the past. But, dude, we wouldn't have made demonstrations, we wouldn't have convinced certain people to come get engaged...
B
...had we not pushed forward in some cases. There are many, many — what are there? There are 211 contributors to just this repo, and when you take the other forty-something repos here, there are three or four hundred people. I can't tell you how many have come by and just left. We worked with them, invested the time, and then they either couldn't figure out DCO or they just didn't care. And there's one right now — there's a good one.
B
It's about how to connect to GitHub and import pattern files, from a PhD student at New York University. He's busy in an internship now, so his PR is just going to sit there until either someone fixes his stuff or we let it go. And it's like, well, gosh, I spent a number of hours with him — getting to know him, explaining this stuff, him working on it — and then to toss that away, or potentially toss that away, if he had had a lint error, if we were sitting there waiting on his linter and stuff. These are the things that balance.
B
I'm not saying, hey, we can't go do this and the world would fall apart. That's not what I'm saying. What I'm saying is: here's the backstory, here's the logic, the things that haven't been figured out yet to be able to turn on such a flag.
B
Sometimes it's just that the builds themselves are failing. I can't believe how many changes we make in CI — there's a lot. I should have spent this morning working on some CI automations for — well, actually, we should talk about it here; it's more for community management and automating. I'll put it as a topic and not digress. But okay.
A
Yeah, I understand you, but I mean, to hold that PR — in that particular case of the PhD student...
A
...his code will not disappear, right? The pull request will always be there. So maybe we cannot merge it quickly, but the code will not disappear, and we are not rejecting his code. We are only holding it until we can fix the tests.
A
So if we want to move Meshery to the level of a project like that, we need to follow best practices, no matter what. Because sometimes, Kubernetes needs to cut these alpha releases — they have three releases a year now, not four anymore, sorry — and all the tests should pass. For example, there is Testgrid...
A
...for each Kubernetes version, where we can see there are the release tests, and they should always be passing — always, always. So if we want to continue with this project, with Meshery, I think we should always follow best practices, no matter what.
B
Okay, yep. It'll be at the risk of the project dying, right? It'll be at the risk of people not contributing. There are a couple of big differences. Number one, Kubernetes at one point was the fastest-growing open source project in the world, with more people contributing to it than anything else, so it is not a good thing to compare against. Yeah, that would be amazing, it would be awesome, but it would take Meshery being a household name for that to be the case.
B
It wouldn't ever happen, in part because it's not aimed at all the things that Kubernetes is aimed at — but anyway, that's related to your point. The other thing is: what's the Kubernetes version number, 1.19 or something like that? It's intentionally trying to get boring; it's trying to be more ubiquitously adopted. Things like that subsequently and necessarily have to slow down, for them to be able to scale to that size and run the world's infrastructure...
B
...on that code; it has to go through this. We're at a 0.5, very intentionally trying to signal to people that there are interesting things here but that there are no guarantees. We created, and spent a ton of time creating, an edge channel to help try to separate these things.
B
I guess what I'm saying is: okay, in principle I'm very much for it, for my part, and very much aligned with the goal, but that's not a good example, not a good analogy to draw. It's a fantastic aspiration. And do you know how many people are behind the CI there? I've spoken to these guys many, many times — some of them have been contributors here — and there's a ton of them...
B
We couldn't — I mean, part of what should happen is what you're trying to get toward. There are only four of us on this week's call, which is kind of weird, because there are usually twice as many, but even at that, the other folks that come...
B
...I don't think I've seen a commit on CI from any of them — none of them. And yeah, I love them, and I'm not saying anything negative toward them. What I'm saying is there's just a small handful of folks who can actually go fix it, and what'll end up happening fast is they'll get bottlenecked on it. I mean, in the time since you pointed this out, I have been, for my part...
B
...trying to. One of the ones that was merged — I accidentally did it, and I realized after the fact. Like, okay, yeah, there's a lot to balance. I think I'm trying to explain why we are where we are, yeah.
B
Like, I mean, not even exponentially — we desire for it to get there, sure — but I mean, when you actually... I just saw this today, so, but yeah, when we talk about...
B
Absolutely, absolutely — it'll be at the cost of the other things. It'll absolutely help; it'll restrict things, right? And what I'm saying is, I don't know — and we should bring this up, we should bring this up in the community call or the Meshery call or something, and have all the rest of the maintainers weigh in as well. It eases most everyone's life who is participating, and it significantly hampers those who are relying on it. Like, I'm...
B
...relying on this being successful, and taking it more slowly is potential death for the project — that's what I'm trying to prevent. It's why I spend so much time trying to build the community, trying to teach others and help them find things and get ownership and do stuff. And this I'm wholeheartedly behind.
B
But until we answer those things, I — for my part, we would need proof; we would need to prove it. Like, hey, Abhishek, we need to get with Michael — when can Michael fix his BATS stuff? Abhishek, the breaking change for MeshKit — how can that go across? The Istio linting issues that have been there a long time — when can Asuco do those?
B
It would be... I'm terrible.
B
So yeah, guys, I think this is a fresh question for — well, I guess just for Abhishek in this case. Speak freely, I mean: should we go turn this on?
B
Is what? To go turn it on? No, no.
B
Also, let me step over to the same side that Rodolfo is on, because it pains me — Rodolfo, you're suggesting the right thing, and it pains me to be the one standing in the way of the principally right thing. The way I would counter part of what I'm saying is: well, hey, okay, if we merge some of this stuff, then (a) potentially people are going to get confused, like, "oh, was that my failure?"
B
...when I see the red X. And (b), well, the checks are there for good reason. Now, sometimes the checks themselves are just failing — I can't tell you how many times GitHub runners just fail halfway through, randomly, and okay, fine, we rerun them, and if they come out green, that's not that big of a thing; we do that, and we should definitely be checking and making sure in any case. But the negative aspects — the other negative aspects...
B
...are that, depending on what failed, it can cause an unstable release; depending on what's there, it could leak out a vulnerability — again, there are lots of checks and lots of potential around them. It could, yeah, allow a bug to escape: the end-to-end integration test for X wasn't passing, that was ignored, and then later on the user tries to use it and...
B
Maybe they don't run into it at all, because maybe it was a failed check because someone didn't move the tag over — like, if it was a Cypress thing, someone didn't move the tag over, and so actually the functionality works but the test fails. But then you have to come back to it, and, to Rodolfo's point, fixing a failed build check after the fact is much more bothersome than fixing it right there. And now it's an escaped bug — it's like, oh geez.
B
And I think it was maybe mostly that: that you might be releasing something that's broken, and how terrible that is for the user — sorry, you know how terrible that is. This is why I'm saying: let's go hammer the hell out of those three or four bullets, or whatever. Let's go get — I totally want to be... Until Rodolfo came, as a matter of fact...
B
...I was super scared to even make a release, because we just didn't have many tests. Thank goodness we've got some functional tests and some unit tests happening now, and at least one person dedicated to doing more of that. And how do we do that? Well, we bring in interns and pay them, and we can only do so...
B
...much of that. I don't know if you guys know this: Layer5 doesn't sell a thing, so there's zero income coming in for it. Which is also why I hate to even have this conversation like this with Rodolfo, because it's hard for me to express how appreciative I am of him — of all of you guys — being here and doing this. Well, Rodolfo, maybe let me propose this.
B
Can we try to get aggressive on getting the project to the point where we can go flip on those flags? Or maybe there's an intermediate step, which is: hey, it takes two administrators, or two maintainers, to override the thing — and then finally working up to...
B
Well, it doesn't matter how many override it — you can't; no one can. And the reason being: sometimes it will just be the case — part of the challenge, and this is Rodolfo's challenge as well — is like, okay, hey, Abhishek, do you know how to write a Cypress functional test? Or Vijay, do you?
B
I don't know if you guys are like me — I don't know — but thank goodness Rodolfo introduced it, and then we tried to support his initiative by training other people. But then, you know, I don't know if any of the people who actually heard the original thing are still here; moreover, did they actually learn it and take it away? Rodolfo has been providing the right stewardship every time we have...
B
...this call. He's like, hey, there are some starter issues, who wants to get up and going, here's a blog post. Rodolfo, you're doing all the things — all the things — to help get us to a point where you could say, look, Lee, we should turn on this flag, or we should have multiple...
B
It takes multiple maintainers to sanctify the fact that we would do it — like, this is such an emergency; that's the eject button that we only pull twice a year or something. And there's some natural path there: part of getting there is stronger maintainers — more Rodolfos, you know — which means more dissemination of these practices, like, hey, this is Cypress, do you know these things? The Kubernetes CI reminds me of the OpenStack CI, which is like...
B
...there are many hundreds of people who built those very intricate things, and — I don't mean that badly, but some of them are just ugly nightmares. Actually, the OpenStack stuff in general — you couldn't pay me; you'd have to pay me a million a year to touch that stuff; I'd kill myself. The Kubernetes stuff is a bit more impressive, and the compatibility matrix that they go through there, to all those points, is disgusting — I mean disgustingly impressive — like they deploy X number of Kubernetes clusters...
B
Anyway, I can talk about this stuff for a long time, but the point is: I feel awkward pushing back when I'm trying to do a judo move for him — like, yes, Rodolfo, we absolutely should; let me support it. And then we hit these things, and in my mind it's like, oh well, if we do that, well, we can't release this or that. Okay, well, let's go figure it out. So my suggestion is: let's go answer some of these questions.
B
One of them is: we've got BATS, great, and we've got other ways of testing. Are these duplicative? Should we consolidate? I don't know — if they are duplicative, that would be helpful to know, to potentially unlock things; or it's like, no, let's go do BATS and do it pervasively. I think, to achieve what you're saying, let's see if we can make some of these more prominent focuses. These aren't on the roadmap today — does it list anything about CI?
B
I think it actually — technically, I think it does; we should go look. But I guess the point of saying that is: well, hey, maybe we should be road-mapping CI. These things are significant enough that they sit equal, you know, beside the features, saying: in this time frame we're looking to have this level of stability, this level of... And actually, a lot of this is quantifiable as well...
B
Maybe it's a certain percentage of code coverage — like, what does a 1.0 mean to us? It means some certain percentage of code coverage passing. We can make it even tougher, potentially, and say things like: well, it's not just code coverage, because you can have great code coverage and still have bug after bug after bug. A stricter one is: prior to a release...
B
You could make a statement like this: prior to a release — especially if we get to a cadence like Istio has or Kubernetes has, and these projects are freaking huge, or like where Envoy is now — that for the two weeks prior, or somewhere in there, there has been some certain percentage or less of P0, P2, P4 issues, whatever. We've yet to really give those hard quantifications as well.
D
Actually, I was thinking — so the problem is that the stuff people are integrating is causing problems, and so there's uncertainty about the stability of the release? Is that the issue?
B
No, that's a new consideration. So yeah, the workflow — what the person, what the contributor did was the right thing, but the workflow, or the tests, need to account for that new functionality shift, or that's no longer even a valid test. And so it's a combo — it's a combo of...
D
So, but if there is a problem: what is the time between when the problem is introduced and when the problem is recognized?
D
So the point I'm trying to make — and these may be very simplistic thoughts, you know — is that there is no such thing as perfect code, right? You always anticipate that there will be errors. Nobody wants to make errors.
D
Everybody wants everything to be perfect, but we know that in general, no matter how well we design something, there's some kind of problem. So my thought is: you cannot stop; I still think you need to forge forward and plug the holes wherever they are. I mean, I'm just saying these things, you know.
D
Maybe these are very simplistic ideas, but these are just my thoughts, and I understand that I don't know the whole picture very well. So if we know that something is a problem — if we know that there is a test that is causing a problem — then we cannot use that test anymore, right? Because if you have a thermometer that's showing you the wrong temperature...
D
...yes, you have to get a new thermometer. So there has to be some process that you need to build up. But in the meantime — if somebody introduces an error now, isn't that the whole point of source control systems: that you integrate something and then you're always able to revert back to a prior version? No, I understand that these things are not done casually, but isn't that the idea behind all source control systems — that you integrate your errors and you test, and then you're able to step back? If that's not the case — well, you can't do that when you're about to have a demo in five minutes, right; you cannot do that. So the question comes down to how much time you have between when an error is introduced and when it is recognized, and what lead time you have to fix it, right?
A
Yeah — between the time the error is introduced and the error is recognized, it can be between one and three days, because in Meshery we create a lot of PRs, so the next day after introducing it, if that PR is failing, we notice that, right? And the time to solve the error depends on the context, but maybe one hour.
D
So if that is the case, can you take that particular PR out? See, I don't want to waste your time — if I'm talking too much, just stop me and I won't pursue this. So if somebody puts in a PR and it's causing a problem, okay, can you revoke that PR? Or can this be done on a PR-by-PR basis? Is it possible to test after a PR is introduced, or anyway?
B
I think, in part, to help answer that: one, issues with an unmerged PR are self-contained — and I think this is obvious as I go to say it, but I'll say it anyway — which is good. So if this one's blowing something up, it stays within there. The answer to your question is: well, it depends. Sometimes it is the case that you can say...
B
Oh, let's coordinate off that — let's either literally revert, or let's leave the code there in place but not invoke it at all, so it's just dead code; it just is not in the execution path. Those can be the cases. It can also be the case where it's like, oh no, there's no way that can be there, because otherwise the whole thing doesn't work, or there's an amount of refactoring that has to happen, and sometimes that's...
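As a concrete illustration of the "literally revert" option mentioned here — a minimal sketch on a throwaway repository, not Meshery's actual history — `git revert` adds a new commit that undoes the bad change while keeping the contributor's work in history:

```shell
set -e
# Throwaway repo so every command is runnable end to end.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "stable" > feature.txt
git add feature.txt
git commit -qm "Known-good state"

echo "broken" > feature.txt
git commit -qam "Change that turned out to fail CI after merge"

# Revert creates a new commit that undoes the change; the original work
# stays in history and can be reapplied once the tests are fixed.
git revert --no-edit HEAD

cat feature.txt
```

For a merged PR, the equivalent is `git revert -m 1 <merge-commit>` (keeping the mainline parent), which GitHub also exposes as a "Revert" button on the merged PR's page.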
B
...the answer. And sometimes the answer is: well, that's a vulnerability — and actually, for a user to adopt it, maybe they're saying they don't accept it with a vulnerability, so the refactoring does have to be done. In general, there's really...
B
...a really strong argument — a number of really strong arguments — for what Rodolfo is saying. If I put on just a user hat, if the project were past 1.0, if there were multiple organizations relying on the functionality of this thing and trusting that there are good stewards like Rodolfo behind it — what you would say, and it's kind of what I was talking about with the roadmap, is: well...
B
...passing builds, and no present vulnerabilities and stuff — those are equal in importance to features. How do you say it? It's kind of like most software: you go through, you build functionality, functionality, functionality, and eventually it's like, okay, we should harden some stuff; we should make sure it's scalable and reliable and highly available and all the -ilities...
B
And you do that after you have stuff to make robust. And at that same time, people are adopting and people are using, and those things generally remove grease from the wheels, if you will — meaning, by their nature, they slow things down a little bit in some respects. Well, I don't know that saying "slow down" is right — in some projects it is kind of intentional, because it's like, look, the way that our world works is it just needs to bake for a while; that stuff just needs to bake, or we just need to get feedback, or we don't know what the actual... we did all this performance testing with fake environments, but in the real world it's just gotta... What am I trying to say?
B
I was trying to say that if you look at this discussion from a user's perspective, there are different users. As long as the piece of software is doing mostly what they need, the vast, vast majority of them would say: oh yeah, I favor all greens — that's what I want to see all day long.
B
Show me that, and then I've got trust and confidence that this is what I should be using. And it's a point in time — it's both: it's a point in time, and there are different types of users. People that adopt super-duper early, people that adopt — I mean, it's not even adoption; it's just all around them; they didn't even do anything. But there are different appetites and different needs, like: no, no, I need that.
B
If you guys don't have that feature, then the project is worthless to me. So, yeah, I understand that other parts are broken — that's fine, I don't care, I'll work around that. But by and large, the more users that come in, the more the passing checks matter — the more stability is expected and desired. And Rodolfo was right.
B
I mean, we're also talking about this curve of maturity through which a project goes, and the CNCF in particular has kind of codified it: it has three gates, three echelons of project maturity, and...
B
It ends up being sandbox — so sandbox is the first one, incubation the second, and then "graduation" is the title of the third. I don't know that all those make sense; I think sandbox makes some intuitive sense, but incubation — yeah, I find "graduation" seems kind of weird, but: graduated projects. But irrespective of what those specific steps are, or even specifically what the CNCF looks for, again, it's just a true statement that software goes through a life cycle and it matures.
D
Is it possible to have two versions — like one is a faster-moving one? No, I see — I know that complicates things, adding another step; it may not even be a bright idea. But say you have this solid version that people can rely upon, and then here is this other version — it's a preview, a preview version that you can play around with, but with the disclosure that, if you are playing with this, it has all the new features...
B
So in some respects, a change like this is significant as well. It shouldn't be this way, but for this project, at this moment, it's a significant change to enable — to enforce that permission — to help get to a point where, yeah, of course we should...
B
We should have it on. As a matter of fact, it kind of doesn't even matter whether it's on or off, because we wouldn't need it anyway — that's where we want to be. We want to be at a point where it's not even a consideration; who cares if it's on or off, because — it should be on, but, I mean, who cares, because we don't do that anyway.
B
I'm really happy that we're having this conversation, by the way, because I couldn't have imagined having had it a long while ago. There are these edge and release channels — good; these are two constructs that took a long time to get right, and Rodolfo has been here helping get those right.
B
Yeah, shoot — they weren't intended for this, and unfortunately I don't know that the design of them helps a whole bunch in this regard, for the thing that Rodolfo is trying to avoid: if you make an edge release with a couple of failing builds — checks that people consider aren't important for an edge release...
D
I mean, people can test all they like, but you know, if you have, say, the whole world using something versus a tester sitting in a room testing something, there is a greater chance of things being uncovered. So there is some kind of stability that's being...
D
That is being, you know, increased by increased usage. But having said that, if we have a test that is not proper, that is indicating the wrong results, we shouldn't be using that at all. And if the test is correct and it says we are failing, I don't think we should put that out there at all. But having said that, if what seems to be a risk in introducing something can be put in an alternative release, I...
B
Yeah, it is safe for most users for us to release dysfunctional things in the edge release. But there's a difference between dysfunctional and failing checks, because the behavior of what was implemented could be not ideal, or it could look really ugly, or any number of things where a check will pass but the feature isn't robust. So we do want to, we do want to get...
B
Oh, it's like the whole point of this meeting in the first place, which is: let's automate the heck out of all the things that we do. The more efficient we can be, the more stable things are, the more confident, you know, all the things are. And actually, when you get there, you can...
B
It's so hard to turn around and imagine what it was like living in the nightmare before that. So I'd like to identify a couple of things, and since you're here, Abhishek, and can walk through the exercise with us, I'll ask about a specific one. I don't intend to suggest that it's higher priority or lower priority than the rest, but to use MeshKit, the latest version of MeshKit, as an example.
B
Obviously, I don't know, maybe it doesn't matter what the specifics are, but there's a breaking change, and so I don't know what I'm trying to ask. I don't really want to ask, like, "hey, how long will it take to fix that?" That's not really a helpful question; it's a point-in-time question. It's more like: I'll bet you anything we're going to have another breaking change at some point. I don't think we have any planned per se, but it's inevitable.
B
I'm trying to get to a question, and I feel like both of the two types of questions that I can ask are disappointing. One of them is: "well, hey, that's currently an issue; what is it going to take, and how can we go fix that issue?" It's like, okay, that's helpful, but it's somewhat uninteresting, because it's not the root cause of the problem; that's just a symptom. You know, okay, fine, let's talk about the root cause, which is: there needed to be a breaking change.
B
Okay, or how can we fix it so there never needed to be a breaking change? Well, I don't know, let's talk when we get to v2.0, or for maturation, post-1.0 or something like that. That's kind of the measure of whether or not we think we're there anyway. If I had to bet, we would have another breaking change sometime between now and 1.0.
E
Yeah, it's doing its job.
D
This is actually... I don't think it should be failing. A function being deprecated doesn't mean it is invalid. There is a difference between a function that is not correct and a function that is deprecated. A function that is deprecated is one that has been marked as being planned to be phased out, so it is functional at this particular time. So that should not be a failing test; it should be a warning.
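As a sketch of what D describes, in Go (the language MeshKit is written in): a `// Deprecated:` doc comment marks a function as planned for phase-out while leaving it fully functional, and linters such as staticcheck report uses of it as a warning rather than a build failure. The function names below are hypothetical, not MeshKit's actual API.

```go
package main

import "fmt"

// Deprecated: use GreetContext instead.
// The marker only signals a planned phase-out; callers still compile and run.
func Greet(name string) string {
	return "hello " + name
}

// GreetContext is the hypothetical replacement callers should migrate to.
func GreetContext(ctx, name string) string {
	return fmt.Sprintf("[%s] hello %s", ctx, name)
}

func main() {
	// Calling the deprecated function still works; a linter would only warn here.
	fmt.Println(Greet("meshery"))
	fmt.Println(GreetContext("ci", "meshery"))
}
```

This is the distinction D is making: the deprecated function passes tests today, and tooling surfaces the migration advice without failing the build.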
C
So basically, it's that we don't want the deprecated function; we will be deleting it, and we're just letting people know that it's good to migrate so that it won't break the functionality.
D
Right, but, and I don't mean to be argumentative or anything like that, but it shouldn't be a breaking point yet, because you still haven't given them the time to move, right? You have to give them that time, so until that time is complete... Yes, there is a point at which you have to say: that's it, this is it, okay. You can use...
C
So it's not deprecated in the current version; it's deprecated in the upcoming versions. So basically, if they want to move from MeshKit 0.2.11 to 0.2.12, then they'll have to make the changes, and they have all the time, basically, to decide on when to bump it, etc.
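In Go modules, the upgrade timing C describes is in the consumer's hands: a downstream project pins the MeshKit version in its go.mod and only takes the breaking change when it deliberately bumps that line. A minimal, hypothetical sketch (the module name of the consumer and the exact version numbers are illustrative, not taken from the meeting):

```
// go.mod of a hypothetical project consuming MeshKit.
module example.com/my-adapter

go 1.16

require (
    // Pinned release; nothing changes for this project until the line
    // below is deliberately bumped (e.g. to v0.2.12) and calls migrated.
    github.com/layer5io/meshkit v0.2.11
)
```

Until the require line is bumped, the deprecated API keeps working for that consumer, which is why the deprecation window D asks for exists naturally here.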
D
It is, it is. It should be flagged as an error for the future version, right.