From YouTube: CNCF Service Mesh Interface 2021-01-06
A: Cool, all right. So welcome to 2021, and let's kick off this new year with an even better start.
A: The agenda is quite light here, but yeah, let's skip that. Thanks a lot for scribing, Bridget, and I see the first item is actually yours, so maybe someone else wants to support you with scribing while you walk us through the release.
B: The next SMI release. I have a few links and a few notes in here. We had a feature request some time ago and got the requester to add more detail; this is one of my colleagues, and one of Phil's colleagues, who was looking to use the v1alpha4 version of the Traffic Specs API. That kind of led to the discussion of, hey, should we be cutting a release? It's been a while. And then I know Stefan had written a gist documenting the release process, and yeah.
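[For context, a minimal v1alpha4 Traffic Specs resource looks roughly like this; the route name and path below are made up for illustration, but the apiVersion and kind follow the v1alpha4 spec:]

```yaml
# Hypothetical HTTPRouteGroup using the v1alpha4 Traffic Specs API
apiVersion: specs.smi-spec.io/v1alpha4
kind: HTTPRouteGroup
metadata:
  name: example-routes   # illustrative name
spec:
  matches:
  - name: metrics
    pathRegex: "/metrics"
    methods: ["GET"]
```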
C: A quick introduction: yeah, thank you, Michael and Bridget. My name is Amin. I recently joined AWS as an SDE; I work in the Kubernetes and serverless area. From time to time I work with the amazing Michael on the AWS Controllers for Kubernetes, and yeah, this is my first meeting with SMI, and I hope to learn with you all and do amazing things this year.
D: Hi folks, my name is [inaudible]. I work for Intel as one of the tech leads, and this is my first meeting today. I recently got started looking into service mesh, so I'm just learning the whole concept and figuring out how service mesh would work. Our primary focus is tuning service mesh towards telco application models; that's what we're looking into. So I'm just getting started with SMI and what a service mesh is, and I'm looking forward to learning more.
B: No, okay, that was good; I'm glad we did that. So anyway, I'm not assuming we're going to decide anything on this call right at this moment, especially because Stefan wasn't able to join us today, apparently, but I wanted to make sure we start thinking about what our next release looks like. Michael, I know you had some input into that in the past. What are your thoughts?
A: So, I keep muting myself. Yeah, I think we should really come up with a bit more structure and a cadence. Otherwise it's kind of this big, scary thing: once a year, a new version, or whatever. I'd rather have smaller batches and release more often, especially if we're not doing breaking changes, right?
A: If we get something done and there is no good reason not to cut a new version... and maybe that's a separate topic that we decide. I mean, given that our meeting cadence is two weeks, maybe once a month or once every two months we have fixed releases, right? Even if it's just one tiny new thing we get out.
A: Okay, that's fine, but at least that was my perception from last year, a little bit: this kind of took quite some time. It's not a criticism, just an observation that, as with software, it's probably easier to do smaller releases more often than these big-bang once-a-year releases. I don't know; at least that's my take.
B: Yeah, I mean, I guess because this is a spec, we don't want everyone who's trying to implement it to feel like it's a moving target that they have to re-implement every two weeks. We don't want to strike that kind of fear into everyone's hearts. But at the same time, if people have been waiting since October for something to be in the spec, we don't want to keep them waiting either. You know what I'm saying? We don't want to...
B: We don't want the spec to stand in the way of the implementers moving forward either. So I guess I'm asking the community; I'm interested in what everyone thinks the right balance is.
A: Right, and I guess that's my emphasis on non-breaking changes. If something is like, okay, we need to clarify this bit, or there's some contradiction, whatever it is that essentially helps people implement it better or faster, that's the push. Why should someone wait half a year? I do get the moving-target concern, though.
E: So, I mean, are we talking about the difference between clarifications and bug fixes? I think those should be going out all the time; since we're talking about documentation, you can always be refining the things we've already talked about. But then, new features: I can see new features being scary. But, you know, we're also sitting at alpha.
E: You know, I mean, this new spec is going to come out, and from the NGINX perspective we're not going to immediately uplift all the code to support it; we're going to have to take our time to consume it. If other releases come out, we can always leapfrog too. I'm a little more sensitive about beta, and then actual production-level releases, than I am about alpha.
A: Right, and I guess my main argument for a set release cadence, say once a month or whatever, is that it doesn't mean every release has to be a big thing. Maybe it really is a tiny thing, but we don't need to have the discussion of when the next release will be. Clearly it's going to be there, and maybe a certain feature or a certain fix makes it into that release, or maybe it misses the train and goes into the next one.
A: That's fine, but, you know, it's just clear: every last Monday, or whatever, we're doing a release, and that's it. Then we can still decide whether feature X, or whatever, is ready for the next release cut or not.
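[As a side note, a fixed train like "every last Monday of the month" is trivial to compute mechanically. A small stdlib sketch; the monthly schedule itself is only what was floated on the call, not a decision:]

```python
import calendar
import datetime

def last_monday(year: int, month: int) -> datetime.date:
    """Return the date of the last Monday in the given month."""
    # monthcalendar() yields the month's weeks as Monday-first lists,
    # padded with 0 for days that belong to adjacent months.
    for week in reversed(calendar.monthcalendar(year, month)):
        if week[calendar.MONDAY] != 0:
            return datetime.date(year, month, week[calendar.MONDAY])

# The first such release train of 2021 would have been:
print(last_monday(2021, 1))  # 2021-01-25
```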
F: Yeah, I'm fine with that. I know we try to... well, I guess: what's the cadence? Are we talking monthly, quarterly?
F: You know, that works. I mean, I guess if there's anything that's experimental, you can put it behind a feature flag, et cetera. But yeah, that works. I think that'll give the community a cadence for when they can expect things to be available, you know, with us putting that in the changelog, et cetera. So yeah, I'm on board with that; you've got my vote.
A: Right, and I didn't mean to resolve it today. What I meant is that we can say, okay, this is the proposal, this is what the people on the call are fine with, and next time we meet we could put it to a vote or whatever: okay, can we resolve that, and then use it as the first trial run.
A: Cool. The next thing I see here is the SMI Metrics discussion, because the other item also requires Stefan's input, relating to the release process. Yeah, it is the metrics discussion.
F: Yeah, I just threw that in there; I want to take the temperature. So obviously you all know that we've been building out OSM, and we've kind of hit this point of:
F: what do we do with the UI for OSM? I've had some deep conversations with Michelle, and we feel like, hey, what if we just make SMI Metrics robust enough as that abstraction layer? Because, you know, everyone likes Kiali. So we say, hey, what if we can make this abstraction layer so that we can bring Kiali in to look at any service mesh that is utilizing SMI, and kind of have that pluggable experience?
F: Obviously our cloud teams are looking at things like Azure Monitor; those are different APIs, et cetera. But we're trying to figure out, hey, what can we do to make this modular for the community? Because we know that even people who are using Istio like Kiali; there just seems to be a big community feeling around the look and feel of Kiali. I haven't dug that deep yet; I'm starting to look deeper into the Metrics API, but, you know, just looking through it...
E: Oh, okay. Yeah, from our perspective we think it's a little bit constrained too. And I've been thinking about whether there are ways to extend it, to add metrics into it. But then, the more I think about that... so, let's say your data plane provides a metric that's not necessarily represented in the latency metrics of the SMI Metrics spec.
E: Are we doing something that we're not really good at? I think service mesh is really good at networking and security, and it's good at providing observability, but should we be worried about the transport of observability? And then I start thinking about OpenTelemetry. So I keep going in circles; that's where I'm at. I haven't really concluded anything yet, but I think there's room for extensibility.
F: Yeah, you know... yeah, go for it, Michael! No, no, sorry, you were about to... no, no, I was saying... so yeah, we talk about OpenTelemetry, and I think it's the same use case, right? OpenTelemetry can still talk to SMI Metrics to pull stuff.
E: Yeah, I mean, I haven't thought that deeply on it; like I'm saying, I keep going around in circles. But I'll tell you that people internal to F5 who are trying to do metrics and visibility type stuff keep bypassing SMI Metrics, going straight to Prometheus, and then writing these exporters. One reason is that they understand the metrics that the data plane natively supports, so they're just sidestepping it for now.
F: Yeah, yeah. I mean, I wasn't thinking it would be very confined, right? If we look at, say, Google's SRE doc, the golden metrics: it's not meant to be exhaustive, but enough that, hey, I've got enough data to figure out what's going on in my mesh. I don't ever think we'll have parity with all the native stuff out there, and I don't think that would be the job of SMI Metrics. I think if it has...
F: ...the golden metrics, like the 80/20-rule type of stuff, then that helps us not worry so much about building specific UIs to surface this stuff. We can just hit these APIs and draw it up.
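[The "golden metrics" referenced here are Google SRE's four golden signals: latency, traffic, errors, and saturation. A minimal sketch of deriving them from raw counters; the field names and numbers below are illustrative, not from any SMI or mesh schema:]

```python
from dataclasses import dataclass

@dataclass
class GoldenSignals:
    """Google SRE's four golden signals for one service edge."""
    latency_p99_ms: float   # latency
    requests_per_s: float   # traffic
    error_rate: float       # errors, as a fraction of requests
    saturation: float       # e.g. connection-pool utilization, 0..1

def signals_from_counters(total: int, errors: int, window_s: float,
                          p99_ms: float, saturation: float) -> GoldenSignals:
    # Derive rate-style signals from raw counters over a scrape window.
    return GoldenSignals(
        latency_p99_ms=p99_ms,
        requests_per_s=total / window_s,
        error_rate=errors / total if total else 0.0,
        saturation=saturation,
    )

s = signals_from_counters(total=1200, errors=6, window_s=60.0,
                          p99_ms=87.5, saturation=0.4)
print(s.requests_per_s, s.error_rate)  # 20.0 0.005
```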
E: So that makes me start thinking that we don't really need...
A: I was following that train of thought in my head. The OpenTelemetry project is open and, you know, supportive if we tell them that X, Y, and Z should be included. And metrics, in contrast to traces, are not yet GA; they are still in flux. There is a good thread, as part of the spec stability work, about making sure that OpenMetrics and the default serialization format of OpenTelemetry are harmonized.
A: So to me the question is: Kiali is a piece of software, right? I'm not entertaining that here. What do you want beyond what is offered in 199?
A: It's the semantics, yeah. So then I would definitely encourage you to go to that 199 issue and comment on what you would like there, and I'm more than happy to take an action item to work with Justin to implement, or rather to introduce, the desired semantics in OpenTelemetry.
A: Right, and that's where there's this big fundamental difference between the Prometheus exposition format and metrics that do not have prescriptive semantics. There is no way to define what a certain metric means; there are conventions for how to name metrics, but there is no prescription, whereas OpenTelemetry has a very opinionated way to go about that. And I think we should, if we want that, leverage it, especially given that OpenTelemetry is very open and supportive there.
A: It's our wishlist; there's no guarantee that everything we say there will be implemented one-to-one, and maybe someone would say, well, that specific metric is already covered by whatever is currently there, these four or five categories that already exist. But at least this is something where we wouldn't be making up a new standard; we would be building on, and extending, an existing one.
A: It is definitely, at the current point in time, relatively close to what it currently does. I'm not that familiar with how tightly coupled it is with those two, but I believe it is pretty well separated, yeah.
F: I'm assuming that's the case. And so we were approaching this as: okay, a lot of people like Kiali, that whole experience. How can we just plug and play Kiali on top of SMI, or any service mesh that is adhering to the SMI specs, right? So whether that's Linkerd, or, you know, Matthew, what you're doing with NGINX, et cetera: if you're surfacing those metrics with, and again I was thinking SMI Metrics, then we can build that layer.
E: So that, to me, sounds like standardizing on a format, which gets back to, I think, what Michael was saying: we could be working with OpenTelemetry then, right?
E: To define that. And, you know, from my perspective, I kind of prefer that: they're really taking the lead on telemetry, on the format, the packets, the datagrams of what that telemetry is. It then allows us to just feed into that project but focus on L4/L7, networking, security, and the delivery of those metrics. We don't have to care what those metrics are beyond that.
D: Maybe I can make a comment coming from a newcomer's perspective. I used to work on some of the telemetry collection agents, collectd and Telegraf, and most of the customers I see today are comfortable leveraging Prometheus; they're already looking to deploy something like Prometheus and Grafana within their environments.
D: So one of the questions would be: why do I need one more agent, one more software stack? For example, if I have Prometheus, can I have a way to integrate the mesh metrics into Prometheus? I guess it's another entity that they have to look into configuring, installing, and managing, right?
D: So instead, I mean, if you have something like Prometheus well integrated with, for example, the OpenTelemetry format, which is where most folks are shifting to, it becomes easy to leverage it along with the rest of the stack they have, the rest of the Kubernetes applications they have, in something like Prometheus and OpenTelemetry.
F: Yeah, no, look, I totally agree with that. I think, again, it depends on the level of the environment you go into, right? So, hey, you go into an environment with Prometheus 400- or 500-level people, and that works for them. But, you know, if we go to the core of what SMI is about, it is to simplify
F: the experience, right? Prometheus is going to give you a thousand knobs and a thousand buttons; I mean, you could just go to town. But for someone who's just entering this space, and they just want a pretty UI,
F: you know, a simple experience, not so worried about all the other bells and whistles, even something like Prometheus can be pretty intimidating: creating all those queries, et cetera.
D: Oh, definitely. I mean, I'm yet to explore all the functionality of Kiali; that was just a perspective on what's being used.
A: Right, and just to clarify the offer, or the idea here, with OpenTelemetry: when we say semantics, it's essentially this kind of giving something a name and defining exactly what its meaning is. So this metric here, http.server.duration: that measures exactly the duration of the inbound HTTP request. That's it. So when you see it, there is no doubt what it is; people don't have to come up with that themselves.
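[To make the point concrete, here is a toy sketch of what prescriptive semantics buy you. The descriptions below paraphrase a few OpenTelemetry metric semantic-convention names of that era and are illustrative, not an authoritative list:]

```python
# A few OpenTelemetry metric semantic-convention names (illustrative
# subset; the authoritative list lives in the OTel specification).
OTEL_METRIC_SEMANTICS = {
    "http.server.duration": "duration of inbound HTTP requests",
    "http.server.active_requests": "number of in-flight inbound HTTP requests",
    "http.client.duration": "duration of outbound HTTP requests",
}

def describe(metric_name: str) -> str:
    # With prescriptive semantics, a name maps to exactly one meaning;
    # anything else has no agreed-upon interpretation.
    return OTEL_METRIC_SEMANTICS.get(metric_name, "no prescribed semantics")

print(describe("http.server.duration"))   # duration of inbound HTTP requests
print(describe("requests_total"))         # no prescribed semantics
```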
A: The key would really be, on the one hand, and Phil, if you want to take the lead on that I'm happy to support you there, to review the existing OpenTelemetry semantics that are already there that we could leverage. So, like: this and this are already covered, and here is a set that is not yet there, that would be new. And then how they want it, whether they expand an existing category or wherever they put it, that's something else.
A: That's an implementation detail that the OpenTelemetry folks need to sort out, but I'm also active in that community, so I can definitely, as I said, support the process from the other side as well. But I think that having something that is interoperable, and that leverages existing CNCF projects, probably makes the most sense in this context. Yep, okay, cool.
A: All right, we have a few minutes left, so I will open up the floor to any other business. Is there anything you would like to see? Anything you would want, for example, for the upcoming KubeCon + CloudNativeCon Europe?
A: All right. And if there is anything, especially looking at the people who recently joined, Sunku and the others: I mean, you know, it's kind of help yourself, right? You don't need to wait until someone asks; there are plenty of issues there. You can start reviewing, and you can work on whatever you like. During the two weeks between meetings we are usually on Slack, so it's not super verbose and busy.
D: Yes, absolutely, thank you. Yeah, I tried; I started reading, and I'm lost in the amount of information out there, for sure. My interest, to start off with, is looking into KPIs for east-west traffic, not north-south traffic, and I saw some of the discussion in earlier notes. So I'm trying to understand more, to figure out how best to establish that from a telco perspective.
A: If you've been around something for long enough, you probably don't see it any more; you're blind to it. Fresh eyes always bring a nice new perspective on why we're doing things the way they're done. Absolutely, yeah. Thank you.