From YouTube: CNCF Service Mesh Interface Project 2020-11-25
A: All right, very good. Oh, thank you, Kevin. All right! Welcome, everybody. It's November 25th, and this is the SMI community meeting. This is an extended version, so hopefully it'll be a little bit interactive. Today's meeting has a single topic: SMI conformance testing.
A: There's a link to it in the chat. We're going to briefly spend time in the meeting minutes, and then we're going to run over to this document linked here, which everyone should have access to. It should be wide open, and it's today's agenda.
A: The first item is to talk about what the heck SMI conformance testing is. Some of you have worked on it; for others of you,
A: you've heard us talk about it on this call a few times, and for some of you it's brand new. So let's start with that, and then let's also see if we can achieve these two things. I appreciate that there are representatives from a number of different implementers here.
A: It's important that you're here, so welcome, by the way. If we don't know each other, my name is Lee. I should probably put my name into the...
A: Maybe I should, actually. I'm one of a number of SMI maintainers, so I'm here to try to help advance the spec a little bit. It's been kind of a beautiful thing that we've seen as an industry, well, it depends on your perspective, but a beautiful thing from SMI's perspective, that there have been a couple or more implementations of SMI, and some new service meshes that have been announced recently.
A: We want to have a conformance program. So, to introduce you to the genesis of why we're talking about this and why we think it's important: you may be familiar with the fact that there are...
A: Some of you have heard me say this before, but the last time I counted, which was about two and a half years ago, I think there were 86 distributions of Kubernetes, and I'm sure some have gone away and more have sprouted up since. But there are a lot. So you may be familiar with Sonobuoy, on the Kubernetes project, more or less.
A: Some of those terms are: what it means to be conformant, which makes sense; what it means to be compliant, which sounds like a synonym for conformant; and then also what it means to be capable, for a mesh to be capable of a given spec. I think we can wrap some definitions around these, and that might be helpful, particularly to some of the implementations that don't cover all the specifications and perpetually intend not to cover some of them.
C: [inaudible question]
A: Yeah, it's a good question. From my perspective, I'll speak for myself: it should be the tooling that's been created.
A: There's work to do in the tooling to account for Rio's use case. When you take a step back from it, for any implementation like that, the direction or the vector from which the assertions are written and applied is: hey, when you touch an SMI spec like this, you configure it like this.
A: And I say this with some hesitancy, because technically Rio is flexing the spec, and Rio, or the user experience in Rio, would suffer if the service mesh that it's using doesn't behave in a conformant way.
A: So yeah, it would be nice, with some level of authority or some level of validation, for all the projects here to carry the SMI badge and say: hey, we've implemented these things.
A: I'm having all kinds of challenges with the tools today. Good, all right. When we take a look at the initiative, there are a couple of things to dig into. We don't have to read the whole spec together today. The design spec is essentially the mechanics by which the tests are executed.
A: I would consider it something like a final draft. It's essentially a request for comment from all of you, to comment on the approach and what's being done. To facilitate this testing, a service mesh management plane, Meshery, is being used to essentially run a gamut of integration tests and to automate things:
A: to automate the provisioning of each participating service mesh, to deploy a sample app on that mesh, and actually the same sample app consistently across each service mesh, and, where it's needed, to generate load, for example for traffic metrics. You might consider that more than a simple GET request might be quite helpful for verifying the accuracy of the metrics that are coming back, and whether or not that traffic is being accurately accounted for.
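To make that load-and-verify idea concrete, here is a minimal sketch in Go of what such a check could look like. The sample-app address, port, and metric name are assumptions for illustration only; the actual conformance tests are driven by Meshery and defined separately.

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Hypothetical addresses for the sample app and its metrics endpoint.
	appURL := "http://sample-app.default.svc.cluster.local:8080/"
	metricsURL := appURL + "metrics"
	const requests = 100

	// Generate load: send a fixed number of GET requests through the mesh.
	for i := 0; i < requests; i++ {
		resp, err := http.Get(appURL)
		if err != nil {
			fmt.Println("request failed:", err)
			return
		}
		resp.Body.Close()
	}

	// Scrape the app's metrics and print the (assumed) request counter, so it
	// can be compared against the number of requests actually sent.
	resp, err := http.Get(metricsURL)
	if err != nil {
		fmt.Println("metrics scrape failed:", err)
		return
	}
	defer resp.Body.Close()
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		if strings.HasPrefix(scanner.Text(), "http_requests_total") {
			fmt.Println("observed:", scanner.Text())
		}
	}
}
```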
A: Part of the goal here is for Meshery to be incorporated into the release process of each of the participating service meshes, so that, probably not on every release, but as and when there's a major release, or as and when each of the projects represented here determines that it's ready to qualify its compliance, it could do so conveniently as part of its CI process.
A: Lastly, consider the needs of a conformance tool. By the way, if anyone lived through OpenStack, and I'm hoping maybe some of you didn't have to go through that experience, OpenStack also has a conformance project. There are lots of distributions of OpenStack, so it's a similar challenge for a project like that: given a distribution, how do you know that it's actually OpenStack? Does it adhere to the OpenStack APIs? The same goes for Kubernetes, the same for SMI, the same for any spec.
A: There's another spec that's related: Service Mesh Performance. It's a spec that we talk about in the service mesh working group, and Meshery is a tool that helps with that spec as well. So Meshery was an ideal choice to the extent that it is an agnostic tool. As for the community of contributors who work on Meshery, some of whom are on the call today, I think their goal is to have everyone pass with flying colors.
A: Some projects will desire to implement that and some won't. So the question really is: if there's a service mesh that doesn't want to implement Traffic Access Control, or whatever, either an entire spec or a portion of a spec, because it just isn't applicable to them, then when there's a report that says these are the service meshes that are compatible, this is whether they're passing, this is their state of compliance with that version of SMI and with that version of the service mesh.
A: And then compliance with each of the specs and various aspects of the specs. You can imagine there's a matrix, the matrix that I'm looking at here.
A: Let's say that they pass three-fourths of the tests, but that other spec is one they're not going to do. Should they perpetually be at 75% passing and out of conformance? That's where these terms that I was referring to come in: conformance, capability, and compliance.
A: I think it's been suggested in this spec that it's not as black and white, not as red and green, as you might think. It's suggested here that if a mesh doesn't intend to have a capability, or doesn't currently have that capability, then failing those tests doesn't count negatively in terms of its overall compliance.
A: So, yeah, think about it. It's one of those things that...
A: Well, it's one of those things where, depending on which way that goes, it could make some implementations look good and some not. For my part, part of my goal is to make them look good, or to highlight the good, what implementations are doing well.
D: Yeah, my perspective on that is, I think it's fine. I know that right now Linkerd accomplishes partial to none on some of them and full on the others, and I think the transparency is what's important for any service mesh. I think it says a lot that you want to make all the service meshes look good, and at some point the onus is also on the service meshes themselves to fulfill these things, if they intend to.
A: Yeah, and it's probably also the case that not fulfilling one isn't necessarily a problem. Actually, I think if we do it like this, if we talk about compliance in terms of their capability, it's not necessarily a black eye, not necessarily a red mark, because in fact you're informing the users up front: hey, don't expect this. Yeah.
D: Anyway, no, I think it's helpful to the people who need it, right? So if there's somebody who's looking to use a service mesh for a very specific thing like Traffic Split, then they can go down the list of service meshes and see whether that capability is fulfilled. So it makes sense to me. I guess we need clarity around what partial means, and what happens when one service mesh partially implements something more than another service mesh does.
A: Good call-out, yeah. I'm just recollecting a similar thought, sort of that exact question, and my hope is that you all, and a couple of others that aren't able to come today, are more or less the ones defining that. Hopefully the effort undertaken here, and I would acknowledge this, is probably one of those things that's kind of the non-sexy part of the project.
A: It's sort of a burden to do, but very necessary and, I think, helpful to SMI and to people adopting it in general, to be able to say: oh, okay, if we use this interface, then we get the benefits of being agnostic and all the other benefits of SMI. That actually leads us, Charles, to another set of questions, intended to be thought on by yourself and the others, which is: okay.
A: So there are four specs, and any number of statements, assertions, that you could make of the form: if this is true, then this service mesh implementation is compliant. These are incomplete, and I don't know how incomplete the universe of them is; that's something I'm hoping you all will determine. Some of the way these tests go is just: when you deploy...
A: Some of these are very simple, black-and-white tests. If you deploy the service mesh, then under Traffic Access Control, for example: is this particular custom resource present? Okay, then that passes. And some of these tests are defined in a sequential way, which we're intending to indicate by saying, hey, the first set of tests has two assertions, and these are evaluated sequentially.
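As a rough illustration of that first kind of black-and-white assertion, a presence check could be written along these lines in Go, using Kubernetes API discovery. The group and version checked here (access.smi-spec.io/v1alpha2 and its TrafficTarget kind) are assumptions for the sketch; a real test would be parameterized by the SMI spec version under test.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig; in-cluster config would also work.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// Assertion: the Traffic Access Control resources are served by the cluster.
	// The group/version is an assumption; parameterize it by the spec version under test.
	resources, err := dc.ServerResourcesForGroupVersion("access.smi-spec.io/v1alpha2")
	if err != nil {
		fmt.Println("FAIL: access.smi-spec.io/v1alpha2 is not served:", err)
		return
	}
	for _, r := range resources.APIResources {
		if r.Kind == "TrafficTarget" {
			fmt.Println("PASS: TrafficTarget custom resource is present")
			return
		}
	}
	fmt.Println("FAIL: TrafficTarget not found in access.smi-spec.io/v1alpha2")
}
```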
A: The second set of tests then comes through and actually flexes that capability and looks for feedback. So really...
E: To ask you: do we have dedicated tests for each API version? Because, for example, with Traffic Split it's very important which version you support, or whether you support them all; there are breaking changes between versions. I don't know, but for Flagger users that's super important.
A: Yeah, totally, you're right about that, Stefan. So the answer is yes, the tooling is cognizant of that: it tracks what service mesh version, or what implementation version, and then what SMI spec version is being tested.
A: Actually, Stefan, I think you probably know this better than me; a point of clarification for you. I don't recall where we landed on the individual specs: the individual specs each carry their own version number, correct?
E: Yeah, we have separate API groups, and each spec is the subdomain of the group name.
A: Before this, we discussed this initiative and a couple of others in the CNCF SIG Network and the CNCF Service Mesh Working Group, and so this has been on demo, if you will, on display at KubeCons. It's caveated with a note for people not to read too much into passing or failing tests, because the initiative is mid-flight, but in those demos it does account for what you're saying: hey, what version of that particular Traffic Split is it?
E: Yeah, and I think the actual tests should be different, right? Because you are dealing with different structures and different data types and everything.
A: Yeah, that's a great point. Other people have probably considered that, whereas I hadn't: not only does the tooling need to track SMI spec versions and service mesh versions, but the tests themselves also have to be versioned.
E: Yep, we should mention here, and also publish in the final result: okay, Linkerd only supports, for example, Traffic Split v1alpha1; Open Service Mesh only supports v1alpha2.
A: Totally, yeah.
E: v1alpha1 and v1alpha2 are not backwards compatible, so each service mesh uses a different API version, which in fact has a different structure.
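As a sketch of how tooling could discover which Traffic Split versions a given cluster actually serves, again via Kubernetes API discovery in Go; this is illustrative only, not necessarily how Meshery implements it.

```go
package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}

	// Report which versions of the Traffic Split group this cluster serves,
	// e.g. v1alpha1 (Linkerd) vs. v1alpha2 (Open Service Mesh), per the
	// discussion above.
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		if g.Name == "split.smi-spec.io" {
			for _, v := range g.Versions {
				fmt.Println("served Traffic Split version:", v.Version)
			}
		}
	}
}
```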
A: I totally agree. The community of contributors has been around the track a couple of times on what that table looks like, what that matrix looks like. Dhruv, do you consider that to be in good enough shape that we should show what it looks like? I think the intention is for those reports to be captured and, ultimately, probably displayed on smi-spec.io.
A: Dhruv, is this... let me... this is kind of an early version. First, you know that inside the Meshery tool itself, it will have the ability to, actually, Dhruv, let me, okay: inside Meshery it'll have the ability to look over the tests and their results and look back in time, so that those that are implementing, or those that are just running tests, can look at those, which is great for their environment.
F: So, just to add to your point: yeah, we do have a sort of SMI table ready, which we will probably add.
F: It is currently over here in the landscape, but we will probably add it to the SMI spec site too. The main idea would be that in the Meshery cloud back-end there would be a GitHub app, which would be linked to the accounts that every mesh specifies, and whenever there is an update, they will run this particular test in their CI processes themselves.
F: And then we would store the data of that particular test in Meshery cloud, and later on someone can use that same JSON to populate the table, which would be shown on, let's say, the SMI spec site.
F: Yeah, currently we are just using one of the versions which we have defined. We are not yet able to give a choice of which particular version they want to run the test for. So we are currently running only one version, and that's why I'll call it that for now, but yeah, we would probably update the versions as and when they are updated in the SMI Traffic Split spec.
A: Other questions, comments, thoughts? So the OSM group has probably been the most hot to trot on having conformance tests run and becoming validated in that way. There have been a couple of service mesh teams that are desirous of participating, but it's just sort of a low priority, and so we'll be, I think, persistent in their ear about it.
A: Let me ask you this: does that make sense, trying to make it easy for service mesh implementers, the service mesh teams, to run these conformance tests as part of their CI process?
A: Do you feel like that's invasive to your processes? Is that the wrong place? Would you rather just run it ad hoc? Would you rather that it was centrally run for you, that the SMI project takes that on, or...
A: The thinking was that that wouldn't be the case; that each team is empowered with the tool, each team is helping define what the tests are, and that when a team does take the tool and wants to report its test results, it would build that into its CI process. It would identify a service account, a robot account, as the one that's allowed to send in test results.
A: Because really, anyone can download the tool and run tests, and anyone is capable of sending test results back to the project.
A: And then, Charles, you're the lucky winner: this is most squarely aimed your way, and at Stefan and DeShawn.
A: I'm hoping that there's utility and value in here; I'm hoping that Rio might be able to benefit in a similar way.
A: I think I need to give a little more thought to that myself. And then, Stefan, I think you wear a few different hats: a Flagger hat and an SMI maintainer hat. Stefan, if you think about Flagger for a moment...
E: Flagger only cares about Traffic Split, nothing else. It doesn't yet use the SMI Metrics API, because Flagger allows you to write custom metrics as well, and the two metrics that SMI offers are, let's say, the minimum, but people want to do a bunch of custom stuff. So from a Flagger perspective it's only Traffic Split, and for Traffic Split...
E: ...what exact version matters, because yes, Flagger, for example, doesn't implement anything else but the first version, the one that works with Linkerd. Right. So also for those that are not implementing Service Mesh Interface as a provider, but as some tool that automates stuff on top of the API, not the service mesh itself,
E: this kind of testing counts for a lot, right? I think it's very important. I find it very important to have such an insight, and when I look at the table I know: okay, Traffic Split is supported, this version by these providers, that version by those providers. That would be awesome information.
E: Yeah, there are many things. First of all, should I implement v1alpha3 with headers? Okay, the API is there, but if no one has actually implemented it, why should I implement it in Flagger? Because there is no such capability in the underlying infrastructure, and for Flagger that is the service mesh itself, right? So it also helps in deciding when to implement, and what specific version.
A: Yeah, where to invest. Fair enough. So, the calls to action today... DeShawn, hopefully part of what Stefan was just saying helps, in terms of your thinking about whether or not this set of tooling is valuable to Rio.
C: Yeah, so my thought was the same as Stefan's, because in Rio we also only use Traffic Split, and we are just looking forward to seeing if SMI supports routing, because we also have a routing portion. Right now, to do the routing, you really have to program against specific things like Istio or other service mesh CRDs.
C: So we'd like, if SMI supports that, to just program against the SMI spec and not have to program against different implementations. And if we see the testing, like the conformance tests, pass for those providers, then we can just program against the SMI spec and won't have to worry about different CRDs in different implementations. But right now we're just using Traffic Split, so we're the same, yeah.
A: Makes sense. Okay, so the two calls to action are: one, to weigh in...
H: Lee, I do have a question around the test cases and the assertions that we want to describe, and this question is probably more specific to Stefan because he's one of the maintainers. The question is that we define a lot of assertions for every spec, and these assertions define the best practices and a lot of validation cases that we apply to that particular spec.
E: Wait, it's even more complicated than that, because some features depend on two different APIs at specific versions.
E: Headers, matching rules, and so on; so for a test for routing based on headers, you need to match two APIs, each at its own version.
H: So we would need to account for all of the combinations, and not just the independent versions of each spec.
A: My guess is that the OSM team and the NGINX team, and Charles, I'm not saying Linkerd because you're standing right here, are probably the most ready and willing to engage, to define some tests and get it over the wall. I think Kuma will too; there are a few open source contributors of Kuma land that are willing to engage, and so I think Kuma will probably come along.
D: Yeah, I'd say for us: we want to implement these. We have the roadmap; they're further down the roadmap, well, they're not on the immediate part of the roadmap, I'd say. I know it's a desire of the team to get support for all of SMI.
D: It's really interesting to see what OSM and NGINX are doing with their implementations, so yeah, I could see that this conformance work would maybe encourage us, motivate us, to move things around on the roadmap. But again, I know that there are quite a few other items that are considered higher priority at the moment.
A: Makes sense to me. If I were in any of the other shoes, I think that would perpetually be the case; I would perpetually be focused on features and fun.
A: Up until the point that there are users who are actively consuming a spec and they're complaining, saying, hey, we're trying to use this spec and your mesh doesn't work with it; then it would bubble up. To your point, when a project like this shines a light on it, it helps in the priority ranking some. So, if it makes sense to those around the call... I mean, we've been working on this for the community.
A: The community has been working on this for a long time, and it's a lot more challenging than I personally had hoped it would be, so I'm eager to claim some small amount of victory, to be done and get it out of the way. Which means that, having heard Traffic Split a couple of times now, that strikes a chord with Flagger, strikes a chord with Linkerd.
A: My suggestion is that we don't need to make it any harder on everyone than it needs to be. The tests should be valid, but they can also take a while to run; the more tests you have, the longer it can take to execute. The tests themselves are defined in YAML, and the same sample app that's being used is lightweight and essentially custom written for this use case. I think it's just a small Go program that has an HTTP interface.
A: I think it's instrumented with Prometheus so that it can help with some reporting.
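For illustration, a sample app of the kind described, a small Go program with an HTTP interface instrumented with Prometheus, could look roughly like the sketch below. It is not the actual app used by the conformance suite; the port and metric name are assumptions.

```go
package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Counter of requests served, labeled by status code, so conformance tooling
// can cross-check traffic accounting against the mesh's own metrics.
var requests = promauto.NewCounterVec(
	prometheus.CounterOpts{
		Name: "http_requests_total",
		Help: "Requests served by the sample app.",
	},
	[]string{"code"},
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		requests.WithLabelValues("200").Inc()
		w.Write([]byte("ok\n"))
	})
	// Expose Prometheus metrics for scraping.
	http.Handle("/metrics", promhttp.Handler())
	http.ListenAndServe(":8080", nil)
}
```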
A: Okay, good. Well, I don't know, here's my suggestion.
E: Hey, I can help out, if you want, with writing Traffic Split tests. In Flagger there are end-to-end tests for every single implementation: SMI, Istio, App Mesh, Contour, NGINX, and so on.
A: Yeah, I'm about to come out of my seat, I'm so excited. Stefan, yes, please, that would be lovely. By the way, I don't know if this is really obvious to anyone, but I'd like to wash my hands of the project fairly soon. We've got so much invested in it; I want it to be successful and hopefully help people. I want those like Charles or Stefan or DeShawn, those that are participating, to, well...
A: I don't know, frankly. I guess I want it to be advantageous to them, that they put in the time and they can wear a badge on the project, and that it helps advance us all collectively. What I was trying to say, since I keep talking about myself, but I just want to make this clear, is that I have no personal investment in what any of the tests are; it's whatever you all think they should be.
A: Even if SMI were static, the tests might all look like this, but then we'd learn some things and should move to v2 of the conformance suite. And SMI isn't static, and the service meshes themselves are not static. Yeah, the compatibility matrix, the report, needs to be something of a pivot table in some respects.
E: Or we can have a table per API type: a Traffic Split table with all service meshes and all the versions, and you have, I don't know, an easier way to read it.
A: So, without other comment, hey, mission accomplished, for me anyway. Stefan, this is the source of truth, if you will; well, it's between this and the YAML representation of these. I think the repo is here; this sample app is the one that's being used. This repo contains both the sample app as well as the individual conformance tests.
A: The point of saying that is that these are written out in much the same way that you find in that doc, so it's kind of between that doc and this; this is the realization of those. Some things that I think are action items for those that are helping it advance: some examples of how you can use the REST API of Meshery, or mesheryctl, the CLI, to just invoke conformance tests for a given service mesh, would be helpful for Charles and others.
E: I also think it would be super useful to have something like, I don't know, a GitHub Action.
E: I'm not a service mesh developer, so it's just an idea.
A: Yeah, I'm gonna put that in. Charles, I assume that's a happy smile about it. I'm gonna put that suggestion in here, because for those that are using GitHub Actions, that would be convenient, right? Charles, do you use GitHub Actions? Yeah.
A: Okay, so the plan here is: this is a continued thread inside of the standing SMI community meetings. Those are only half an hour long, so we'll just give updates about the progress, and we'll try to make it as easy as possible,
A: you know, Charles, for you and others to pick it up and run it. And then we'll be asking, probably, I mean we're asking now, but I'll do it even more vocally, about the assertions and whether or not you think those are right. I'm still trying to round up specific contacts for all of the meshes that are participating.
A: Mr. Connors, curious for your feedback.
B: Oh, I'm just listening at the moment, trying to catch up with where things are on SMI. We've been exploring this of late; as you may know, we're heavily in the Istio camp at the moment, so we have a product based on that, but SMI is definitely something we're trying.
A: Nice. From that vantage point, do you consider this effort helpful, or just sort of an aside?
B: We have a fork of the Istio code base which tailors it for OpenShift and adds other capabilities that we think are important to ourselves, and obviously we would like to have some TCK that we could run against that, but it doesn't exist. So yeah, this is very much of interest. I've come through lots of the Java standards bodies and W3C standards bodies and the like, so I'm used to TCKs and things like that existing.
A: Any last comments? Happy Thanksgiving, and thanks for coming.