From YouTube: CNCF Service Mesh Interface Project 2020-10-28
A: Okay, welcome to the SMI community meeting for October 28th, 2020. I'm dropping a link to the notes so that you can take a look and follow along, and we'll get started. We have a number of items from Michelle, who will be joining us shortly, and we have Dhruv. I don't know how to say your name; can you pronounce that for me?
A: Do you need to... oh, it's set to allow screen sharing. But yeah, if you're going to show something, five minutes or less? This isn't a time for giant demos.
B: So I've got to wait a bit. That's okay.
A: Okay, all right, let's see. I'm just going to go to the first item that Michelle had on the list. She wants to talk about releasing a new version of the spec, with updates to TCPRoute and the addition of UDPRoute. There is a diff that she dropped in the notes, and I will put that in the Zoom chat as well.
A: We have our newest spec maintainers, Turin and Michael, on the call, by the way. Welcome!
A: Okay, that sounds reasonable; we'll revisit this if Michelle is able to join us. Meanwhile, there's an interesting question around this PR about TrafficTarget. It needs an LGTM, so that's important to look at, but do we want to make sure we don't have two core maintainers from the same company changing the spec?
A: Yeah, that just seems reasonable. We actually have that rule already for blog posts, so that, you know, two people from the same org can't just put their own blog post in without getting buy-in from the community.
A: So... I was not able to be at the SMI Metrics call last week, but I did see that there's a Google Doc asking for feedback on it. Did anyone who went to that meeting have something they want to tell us about it?
A: Oh, excellent, and we have a Michelle. To bring Michelle up to date: we all think that a new version of the spec sounds great, people should comment on it, and we like the idea of more than one org LGTMing changes to the spec.
D: Yeah, okay. So we have, I think, a two-LGTM policy. I don't know that it's really stated in our governance doc, but it's basically applied because of our PR gate, the branch protection rules that we have. So that's there. And it's great that we don't want people from the same organization to LGTM, but is two enough at this point?
D: We have several implementations, and I'm just wondering: not everybody has time to, you know, attend the meetings, and they should, but life happens. So should we do what some other specs do? Some other specs have chosen to have a person, or a set of people, who are core maintainers and who go around to the implementations and have the implementers review the spec change.
D: I don't know if we're quite there yet; I just want to make sure. I think we're at the point where people are implementing, and if we merge something, it could significantly break things. I mean, that's the whole point of versions and such, you could argue, but does that make sense?
A: That seems like a good idea, but then do we make a canonical list of the people who get to weigh in on specific spec changes? How do we implement that? Or do we have lazy consensus with a certain period of time, and just say: unless you object within two weeks, or within a month, this is going to happen? Maybe over two meetings, I don't know. I think that's definitely okay.
C: I support the lazy consensus. We just need to make sure it isn't just one channel. If we say only Slack, or whatever, then people who might only look at the mailing list, or people who might not look at it at all and only show up at the meeting or read the Google Docs meeting notes, might miss it.
C: So we need to make sure that everyone has a fair chance to actually react, especially given that we meet every two weeks. We can't say, you know, "until next Monday" or whatever; we need to make sure that we get that full cycle on all the different channels.
D: I don't think we have a mailing list specifically for implementations, for the people who have implemented. But I wonder if that is a better way of going: lazy consensus, plus maybe sending an announcement on such a mailing list, is a good way to announce, hey, there's a change and you should review it if you are implementing SMI, otherwise it's going to break.
D: Because the spec is structured in a way that, you know, there are working documents, and working documents can go back and forth. So it's okay, I think, for PRs to get merged, as long as, once it's merged and people are looking at it, they realize, hey...
E: Yep. So then the question is: should there be, not a two-week, but a two-meeting requirement on changes in a final spec? It sounds like we're congealing on that. And then the second question is: while that doesn't apply to working doc changes, does it potentially apply to...?
E: I guess, based on your facial expressions... There's merging a change into the master branch, so to speak, not in the working doc but in the spec itself. Is merging into the spec itself one and the same as making that versioned release? In other words, is the act of merging into the spec in fact the release of that spec? I was thinking, hey, there might be a difference if we're...
D: Yeah, let me maybe walk us through what this release process looks like, and then I think that would help us with this conversation. When we do this... okay.
F: One second, I want to mention one thing: we can use branches here, and I think it will do exactly what you want. If we have a working branch, or a preview branch, everything gets PR'd against that branch. At the end, that branch is validated by two people and merged into master, and from there we do the release. Would that work?
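The branch-based flow being proposed here could be sketched roughly as follows. This is a minimal illustration against a throwaway repository; the file name, commit messages, and the v0.6.0 version number are made up for the example, not taken from the actual SMI spec repo.

```shell
# Sketch of the proposed flow: spec changes land on a "working" branch,
# and a release is the reviewed merge of that branch into the main line,
# followed by a version tag. All names here are illustrative.
set -eu
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
main=$(git symbolic-ref --short HEAD)   # "master" today, "main" after a rename

echo "traffic-spec v1alpha1" > traffic-spec.md
git add . && git commit -qm "initial spec"

git checkout -qb working                # every spec PR targets this branch
echo "traffic-spec v1alpha2 draft" >> traffic-spec.md
git add . && git commit -qm "update TCPRoute, add UDPRoute"

git checkout -q "$main"                 # release: merge working into main...
git merge -q --no-ff working -m "release v0.6.0"
git tag v0.6.0                          # ...and tag the versioned release
git tag --list
```

With branch protection on the main branch, that final merge is also where a two-LGTM, multi-org review could be enforced.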
D: I think so. I think the only reason we weren't for branches was that people wanted historical context for all the different APIs, but those live in directories. So in this branch, the different versions of the APIs would still live in their own directories, right?
G: Okay, I'm good with that.
D: I really wish we could get rid of the directories. People aren't going to need v1alpha1 and v1alpha2 over time; I think it's just the fact that we're moving kind of quickly. But if we have branches for each release, then each branch could have the directories for the versions of the API supported in that branch.
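As a rough picture of what that would mean (the directory names and API versions below are hypothetical, for illustration only), each release branch would carry just the API version directories it supports:

```
release-v0.6 branch            # illustrative, not the actual repo layout
└── apis/
    ├── traffic-access/
    │   └── v1alpha2/          # older v1alpha1 stays on the older release branch
    ├── traffic-specs/
    │   └── v1alpha3/
    └── traffic-metrics/
        └── v1alpha2/
```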
F: Yeah, I also think the directories can be annoying at some point, with all the versioning in there. I proposed branches at the beginning, but yes, there were other arguments then.
D: So, okay, we will have a master... "master" seems so outdated... a main branch, which right now is called master. We should have that conversation.
D: Perfect, thank you. Let's just call it main. So we'll have this main branch, previously the master branch, and then we'll have a working branch. The working branch is where you merge your changes, and a release basically means that we merge that branch into the main branch. Did I capture that correctly?
F: So we need to define governance, right? In that perspective, we need...
D: So, who wants to write up the doc? That is, who wants to modify the governance doc, and then who wants to define our release process in a Markdown file?
C: If you want to team up on it... I mean, it makes more sense if someone like yourself does it, someone who has a lot of experience with it, because you have it pretty much in your head. But I'm more than happy to team up with you on that, and it's awesome if you're in the steering seat driving. Thank you.
E: And Michelle, in answer to the other question we were talking about ("active", I think, was the word describing it), my answer was that I want to engage there, and maybe I should, because it'll be helpful in a couple of other projects.
E: Oh, but yeah.
D: Yeah, I get that, I get that. Let's carry that conversation on Slack; that's cool.
D: Should I just take the next bullet? Okay, cool. So the next thing, real quick: we had a meeting last week specifically around SMI Metrics. To recap the conversation, we had John from the OSM team basically talk through how he implemented SMI Metrics for the project, and that was a good conversation, with good learnings.
D: If you want to see the recording, it's posted. One of the action items that came out of that meeting was that we wanted to get broader feedback on SMI Metrics, so I created a Google Doc. If you have implemented SMI Metrics, please leave feedback, and if you just have thoughts on that kind of stuff, please leave feedback too: if you tried to implement SMI Metrics and weren't able to, if you got confused, if you think it should be something different...
D: All of that raw feedback is welcome on the doc. I'm going to review the doc after it closes; it'll be open for two weeks. I'll review it, try to condense the information, and then present it back to the community on one of these calls. Going down the list, the next item: one piece of feedback we got from that meeting was that there are questions about the spec that get answered in the issue queue, and those answers don't make it back into the actual spec.
D: So we, as people in the community, just need to be aware of that. If you've asked a question and gotten an answer, your contribution to the spec, based on the clarity you got, would be more than welcome. Also, if you're answering such a question, please go ahead and create a pull request to make that clarification in the text as well, so people don't get confused a second or third time.
D: If somebody wants to do a review of that kind of stuff, that is, review the closed pull requests, or excuse me, the closed issues and the discussion items, I'm sure there are a lot of random things that would be great to add to the spec. So that's an open item, or task. I don't have the bandwidth this week to tackle it, but if anybody else has some time, those kinds of contributions would be most welcome.
H: So another question that I had here: should we do a release of SMI Metrics with the current state of the spec? Because the spec differs from the current latest release, right? I mean, the spec changed, but there wasn't a release. So should we do a release and then take feedback, or take the feedback and incorporate it first?
H: Yeah, so from what I see, this differs a bit from other projects, because the types are not needed. Essentially, it's the implementer that should return the API, whatever the defined API is. I'll try to see if the types are similar to those in the documentation; if not, I'll update the types and then cut a release, probably based on that, for the metrics.
D: Okay, would you update it to match the current version... that is, the latest released version of the metrics bit of the spec?
D: So right now we're at the v1alpha2 working doc of the SMI Metrics spec, and the latest released version of SMI Metrics, in v0.5 of the spec, is v1alpha1.
H: Yep. So my takeaway is that I'll update the types and then cut a release of the spec, essentially with the types; then the implementations will obviously have to follow. The other thing was that there is a slight difference in the versions, even in SMI Metrics: the releases in the repo are the implementation side of things, and the spec is different there. So I think we should have some documentation, probably, on how it differs.
D: Good. When you say "cut a release of the spec": we cut a release of the whole spec every time any part of the spec changes. So if you wanted to cut a release of v1alpha2 of the Traffic Metrics API, then you'd have to cut v0.6.0 of the actual whole spec.
F: Yeah, I think the synchronization of a new release should come from... so, we do a release on the spec itself, but that doesn't change anything, right? People should use our CRDs, and those CRDs are not yet released. We have to implement the SDK for the change and publish the new CRDs, and then, when we publish the SDK, that's the final release of our spec. Then we can update our own components, like the metrics provider and others.
I: Hey, so can I share my screen?
B
Okay,
I'll
sure
I
will
make
it
quick
so
leopard
is
working
to.
We
were
working
on
running
some
taste
which
satisfied
the
conformance
of
that
particular
mesh
right,
so
we
have
a
working
demo
of
it
and
machine
itself.
I
ran
it
test
while
the
meeting
was
on
to
see
and
how
it
tastes
and
runs,
and
currently
the
test
which
we
have
defined
are
over.
Here
we
have
defined
it
for
three
particular
specs.
These
are
like
infant
test
cases
which
we
have
defined.
B
We
need
to
work
on
that,
so
we
wanted
to
call
for
volunteers
who
could
give
their
input
and
tell
us
if
certain
kind
of
tests
they
want
to
invoke
for
this
report
that
you
have
created
on
the.
E: So, another component to that: I added this, and some of you have seen it and commented on the tests within here. We've made a call for participation in defining and refining these tests, and we're still looking for that. The demonstration that Dhruv was just showing is that of OSM running through a few different tests, and what the results of those are. I wouldn't read too much into the results, because there are only so many tests defined; we want to make sure that we're...
E
Two
is:
we've
talked
in
the
past
about
conformance
or
compliance
versus
capability,
acknowledging
that
not
all
implementations
intend
to
fully
implement
all
specs.
E
So
there's
an
open
question
about
the
philosophy
of
whether
or
not
a
given
mesh
should
be
considered
out
of
compliance
if
it
never
intends
to
be
fully
capable,
there's
a
good
discussion
for
that
doc
and
then
the
last
call
here
is
for
a
service
mesh
maintainer
to
well
to
to
engage
and
to
begin
refining
their
their
test
cases.
Because
we'll
we
want
to
work
toward
a
composite
report.
E
But
those
those
teams
that
have
some
time,
don't
everyone
jump
in
at
once,
because
we,
the
the
crew
here,
couldn't
take
it,
but
but
anyone
who'd
like
to
get
there
there
make
sure
that
the
their
the
mesh
that
they
represent
is
well.
You
know
well
represented
in
terms
of
conformance
how's,
your
how's,
your
chance.
D
Sounds
good.
This
is
all
really
cool
work.
I
know
we're
over,
but
is
there
a
dedicated
meeting
for
the
conformance
related
stuff
I
feel
like
it
always
gets
kind
of
thrown
in
as
a
stand-up
thing,
and
I
don't
know
if,
like
everybody's
gonna
go
and
like
you
know,
rush
to
write
test
cases,
so
not
that
that's
like
not
an
exciting
thing
to
do
is
just
you
know
when
you're
juggling
a
bunch
of
stuff
it's
hard
to
make
time
for
that,
but
it's
important.
So
we
should
so.
D
I
think
if
there
was
a
forcing
factor
of
like
getting
people
together
in
a
meeting
and
saying
you
need
to
have
read
the
stock
and
we're
going
to
go
through
this
doc
and
like
we're
going
to
answer
questions
together.
That
might
be
a
little
more
helpful
for
me,
but
maybe
I'm
alone
here
I
don't
know,
but
I
I
want
to
focus
on
it
I
want
to.
I
want
to
help.
I
want
to
you
know.
D
Cases
and
make
sure
that
osm
is
compliant
and
stuff
like
that,
so
we
definitely
have
incentive,
but
it
would
just
be
great
to
have
some
sort
of
forcing
function.
Yeah.
E
And
also
it's
a
little
bit
it's
a
little
bit
to
digest
like,
and
so
let
me
get
a
good
response
to
that,
because
there
is
a
response,
but
I
think
the
meeting
that
I
would
identify,
as
also
has
other
items
on
the
agenda
and
so
like
we
did
an
offshoot
of
traffic
metrics.
So.
A
Well,
I
think
technically
you're
moderating,
but
of
course
I
yeah
I
mean
that's
why
I
was
like
take
us
through
our
topics,
but
I
think
yes,
we
are
at
our
time
we're
over
time.
So
I
am
going
to
bump
those
last
couple
of
topics
to
the
next
meeting
and
I
will
hassle
people
on
slack,
for
you
know,
assignments
and
michelle
is
smiling,
but
honestly
michelle,
you
did
a
fantastic
job
because
you
were
taking
us
through
the
exact
topics.
We
need
to
talk
about
unintentional.