From YouTube: CNCF SIG Network Meeting 2021-02-04
B
So, are you based on the west coast, U.S.?
A
No
I'm
austin
texas,
so
okay,
yeah
feels
pretty
good
to
say
good
morning,
though,
I'm
generally.
A
I'm generally talking to a lot of folks in earlier time zones, and I've found I'm incapable of saying anything but "good morning" when it's morning time for me, and so this is nice.
A
That doesn't stop me from drinking coffee after the morning, though.
A
Well,
so
the
meeting
minutes
are
in
the
chat,
I'll,
probably
we'll
paste
those
again
in
a
moment
or
two
I'll
go
ahead
and
begin
to
share
the
screen.
A
As we go to do that, if you're able to access the meeting minutes, go ahead and slap your name down; we'll get you on the…
A
It's good to see you, yeah. I actually deal with that logo a fair bit. I've got to tell you, it's occasionally a pain in the rump, because if you're dealing with the SVG version, it's got all the vertices, and if you're not careful with how you drag it…
A
I feel like I've accidentally violated a couple of copyrights, or a couple of rules of how you're supposed to use logos.
A
Well,
fair
enough,
so
hey
first,
five
minutes
are
generally
a
bunch
of
bad
jokes
for
me
and
people
being
kind
laughing.
So
thanks
everyone
for
coming
we're
about
five
minutes
after
let's,
let's
get
up
and
rolling.
So
this
is
the
february
4th
2021,
the
cncf
sig
network,
meeting
public
meeting
all
are
invited.
You
don't
have
to
be
a
member.
You
just
have
to
put
up
with
a
joke
or
two
and
hopefully
speak
up,
we're
the
the
things
that
we
we
do
here.
A
The
things
that
we
discuss
here
are
furthered
by
your
participation.
So
so
please
participate.
A
couple
of
you
are
familiar
with
this,
and
some
of
you
aren't
so
I'll
say
it,
and
that
is
that
the
cncfc
network.
A
Well
has.
How
do
I
be
concise
here?
Cncfc
network
is
like
other
cigs
it
also
outside
of
its
own
charter,
which
I
won't
cover.
It
also
includes
all
two
working
groups
at
the
moment.
One
is
for
the
universal
data
plane
api,
which
is
sort
of
a
envoys
set
of
apis
and
there's
a
working
group.
There
there's
another
working
group:
that's
the
service
mesh
working
group
within
it.
A
It
has
a
few
different
initiatives
and
we've
agreed
over
the
last
few
months
to
use
this
time
to
advance
the
service
mesh
working
group
initiatives
unless
a
sig
network
topic
bumps
it
down
some
and
so
we'll
speak
to
a
couple
of
sig
network
topics.
If
any
of
you
have
sig
network
topics
by
the
way,
please
put
them
there.
If
you
have,
you
know
other
other
chef
topics.
Now
is
the
time
we'll
get
to
them.
A
Today. Okay, good, all right. So the first topic up is Ambassador. You're all familiar, no doubt, with Ambassador, a modern proxy that has Envoy inside, so to speak.
A
It
is
up
for
public
review.
It's
been
out
for
public
review
for
a
little
while
it's
proposed
to
be
adopted
at
an
incubation
level.
There
is
some
discourse,
some
happening
on
the
project's
name
and
potential
renaming.
A
There's
some
public
discussion
there,
that's
sort
of
the
state
of
of
them
of
that
proposal.
A
Cool
all
right
within
the
service
mesh
working
group,
the
last
two
times
we've
met,
we
spent
most
of
our
time
discussing
well
a
collection
of
concerns
around
really
around
service
mesh
performance
and
one
of
those
concerns.
So
there's
the
service
mesh
performance
spec
we'll
talk
about
that
in
a
little
bit,
but
there's
also
a
project
called
get
nighthawk
if
you're
not
familiar
with
nighthawk.
A
It's
a
load
generator
that
was
born
of
the
envoy
project,
so
envoy
has
a
load
generator
written
in
c
plus
it's
called
nighthawk
it's
gaining
in
popularity
and
in
part
to
assist
its
popularity
and
to
help
get
it
into
the
hands
of
many
there's.
An
initiative
called
get
nighthawk
to
that
has
a
couple
of
aspects
to
it.
The
core
thrust
of
the
initiative
is
to
create
some
convenient
distributions
of
nighthawk
of
that
load
generator,
and
so
the
last
couple
of
times
we've
met.
A
We've
talked
about
what
the
purpose
of
this
project
is
some
interesting
things
that
nighthawk
is
capable
of
it's
pretty
pretty
neat
we're
bringing
in
we're
partnering
with
at
least
one
university,
and
it
looks
like
a
second
which
would
be
nyu
and
a
couple
of
professors
at
each
university
to
do
some
to
ask
some
hard
questions
and
hopefully
answer
them.
A
So
so,
while
we
won't
cover
this
project
again
in
depth
today,
I
will
highlight
that,
since
last
we
met
there
have
been
a
number
of
actions,
tasks
laid
out,
the
community
members-
you
know
contributors
are
picking
up
and
I
don't
know.
A
So next time we'll touch base on GetNighthawk. Any comment or question on GetNighthawk?
B
Just a comment. Otto and I got to sync yesterday; we had a brief chat about some of the requirements, some of the things that tool could do in terms of load generation. We're looking at how we could have an environment and a standard methodology to get consistent performance using tools like Nighthawk, so that no matter how many times you run, the latency is kind of consistent. So we're looking now at what the L2/L3 aspects are.
A
Yeah, yeah. Maybe I'll leave it at that. Hey, I'm overdue to spend some time with you too, and Otto as well.
B
Definitely, yeah. I mean, one thing you mentioned: teams like AWS, followed by Google, have started to establish a set of standards for some of this benchmarking or deployment, rather, I think, for benchmarking environments, just to establish a method to measure and also get consistent performance. I'm not sure of the details yet; that's something Otto mentioned he would share soon. We're looking to see what they are.
A
Oh, very good. And I take it that that's separate from SMP?
B
Yeah, so I still don't know yet. I think we have a follow-up email, so we'll know soon.
A
Nice, good, good, good. Yeah, well, so the next topic up is service mesh patterns. One of the initiatives within the working group is trying to parlay a little bit with another service mesh group within the CNCF: the end-user group. Those folks get together, I think, about once a month; I haven't attended a meeting. They've recently invited us to come and collaborate, which is fantastic. We're hopeful to listen to a lot of the challenges
A
They're
having
with
service
meshes,
give
that
feedback
to
the
projects
as
well
as
well.
Well,
a
few
things.
Actually
one
get
a
better
survey
going
on.
All
of
you
have
probably
seen
various
cncf
surveys
that
have
been
done
about
the
usage
of
particular
technologies
and
the
one
for
service
mesh
is
egregiously
wrong
and,
as
I
go
to
say
that
I
feel
like
if
people
think
about
it
like
that,
that
sort
of
feels,
like
the
fault
of
something
like
sig
network,
like
maybe
sig
network,
should
help
with
that
help.
A
Make
sure
that
it's
done
well
and
part
of
that
would
be
parlaying
with
that
end
user
group,
and
so
part
of
discussing
with
them
is
also
trying
to
help
establish
some
patterns
and
some
best
practices.
Some
some
usages
of
service
meshes
and
helping
propagate
and
educate
current
users
and
then
forthcoming.
You
know
all
the
thousands
and
thousands
of
others
that
will
come
to
use
service
meshes
in
time.
A
There's something interesting here. If you think about the way in which software is written and design patterns, I think the approach being attempted here is in the same vein. It's, as people discuss circuit breaking, just as a random example:
A
How
quickly
should
they
close
back
algorithms,
to
discuss
your
patterns
of
behavior
to
to
examine,
and
that's
probably,
all
context,
that's
all
specific
to
the
context,
the
applications,
the
workloads
that
are
running
to
the
needs
of
the
environment,
etc
like
each
of
these
areas,
each
of
these
functional
areas
within
a
service
mesh
deserve
a
bit
of
analysis
and
a
bit
of
I'm
trying
to
think
of
a
word
other
than
pattern,
a
bit
of
promotion
of
sort
of
the
the
common
approach,
the
common
use
of
these
things,
as
we've
been
iterating
on
these
and
working
on
these
and
trying
to
educate
what
what's,
what
we've
tried
to
do
is
come
forth
with
a
simple
way
of
articulating
capturing
that
in
yaml.
A
Why
is
that
the
goal
it
isn't
that
yaml
is
the
is
the
that's
the
goal,
because,
when
you're
discussing
a
pattern
like
this
you're
discussing
the
use
of
a
service
mesh
agnostic
of
the
underlying
technology,
whether
that's
you
know
whatever
mesh?
That
is
because
you
know
at
this
at
this
day
and
age
like
there's,
you
know,
20
plus
service
meshes
out
there.
They
pretty
much
all
support
a
retry
okay.
A
So
when
you,
when
you
give
an
example
and
you're,
promoting
an
understanding
of
how
many
retries
you
might
want
to
configure
on
your
services
and
the
considerations
that
you'd
want
to
account
for,
if
you
want
very
high
resiliency
great
set
100
retries,
but
there's
a
negative
ramification
to
that
as
well
and
there's
considerations
around
each
of
these
and
when
we
give
examples
of
those,
it
would
do
a
disservice
to
the
other
19
service
meshes
if
the
example
is
given
just
for
linker
d
or
just
for
console
or
just
for
which,
whichever
and
so
we
want
to
be
able
to
articulate
these
patterns
in
an
agnostic
way
and
in
a
simple
and
understandable
way
and
and
doing
so
in
yaml
makes
a
lot
of
sense
and
that
way
they
can
be
shared
around
as
well.
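As a concrete illustration of the kind of mesh-agnostic pattern file being described, here is a minimal sketch. The schema (the `name`, `services`, and `resiliency` keys, and the field names under them) is invented for illustration; the point is only that nothing in it is specific to Istio, Linkerd, Consul, or any other mesh.

```yaml
# Hypothetical mesh-agnostic retry pattern; schema and field names are illustrative.
name: conservative-retries
services:
  checkout:
    resiliency:
      retries:
        attempts: 3            # enough to absorb transient faults
        perTryTimeout: 250ms   # bounded so retries don't amplify overload
        retryOn:
          - 5xx
          - connect-failure
```

A tool that understands the pattern could then translate this into each mesh's native configuration (an Istio VirtualService retry policy, a Linkerd ServiceProfile, and so on).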
A
People
can
modify
them
and
tweak
them.
Well,
it's
one
thing
to
have
that
yaml
as
a
point
of
reference
and
it's
a
whole
nother
thing
to
have
that
yaml.
As
not
only
a
point
of
reference
but
to
be
actionable
as
well
to
be
able
to
take
that
apply
it
to
a
system
and
have
the
system
execute
the
behavior
or
apply
the
configuration
basically
apply
the
pattern,
sometimes
that's
a
standard.
A
Sorry,
a
static
application
of
that
pattern
like
to
just
apply
a
configuration
to
a
mesh,
sometimes
that's
to
over
time,
adjust
the
configuration
of
a
mesh,
because
the
pattern
calls
for
like
a
canary
deployment.
For
example,
it's
like
hey,
it's
a
it's
an
overtime
thing
or
an
over
a
certain
activity
thing.
It
needs
to
be
orchestrated.
A
And
consequently,
this
leads
us
to
well
a
specification
like
open
application
model
ohm
which
is
taking
on
a
really
hard
challenge
of
like
describing
all
the
things.
Let
me
make
a
snide
remark
just
for
a
moment
and
say
I
said:
there's
20
plus
meshes
actually
just
there's
going
to
be
another
mesh
announced
soon.
A
Okay. So we're talking about patterns and the way to articulate those, to capture them in a succinct way, hopefully in an understandable way; hopefully a way which doesn't require, you know, 25 Kubernetes manifests to fully describe. So this is a description of, and maybe it's myopic, and if it is, please comment, but a description of a little bit of this challenge. It goes something like: if you want to describe an application and a workload, all of its infrastructure, and the way that it should behave,
A
I
don't
know
that
there's
a
single,
or
rather
I
would
say,
I
know
that
there
isn't
a
single
necessarily
a
single
definition
for
this.
Before
om
became
a
little
bit
popular
and
and
lee
zhang
who's
on
the
call
and
and
in
group
and
a
set
of
contributors
around
home
there
was,
I
was
working
to
you
might
chuckle.
I
was
working
with
some
folks
at
turbonomics
to
create
another
foundation,
which
is
just
what
the
world
needed
another
sibling
to
the
cncf
it
was,
it
was
a
really
long
name.
A
A bunch of lawyers involved, a bunch of people involved from various tech companies to get that formed, and eventually that effort was set aside, and things like OAM and some other related specs have come forth. And so, anyway, as we go to solve this challenge around how to describe a pattern agnostically and then have a system take it: we've been looking at this, and the challenge here is you can't do all that in Kubernetes.
A
It
brings
it
lets
you
describe
a
lot,
but
not
everything
in
smi.
It
lets
you
describe
well,
you
know
it's
a
as
an
smi
maintainer,
it's
fair
for
me
to
say
this
that
it's
a
like
like
every
project.
That's
here,
it's
growing!
You
know
it
continues
to
add
more
to
its
specification
and
and
so
right
now
it's
kind
of
focused
on
the
lowest
common
denominator,
set
of
capabilities
and
that's
fine.
That's
that's
appropriate.
A
It leaves a little bit of a challenge. I'm not saying SMI isn't a good spec, or doesn't have a set of good specs; that's not the point. And Service Mesh Performance, SMP, is focused on capturing and characterizing service mesh and workload performance, and so it doesn't capture all of what an application is and all of what Kubernetes has and all that. So we're left with a bit of an underlap, the way I think of it visually.
A
To
just
quickly
say:
hey,
there
are
you
can
describe
things
in
kubernetes.
You
can
describe
things
in
smi,
some
in
smp
there's
some
amount
of
overlap
between
them
in
a
good
way.
You
can
describe
like
if
you
wanted
to
facilitate
something
like
a
canary
deployment
or
if
you
wanted
to
apply
a
pattern
and
have
it
be
affected
over
time.
A
You
could
describe
some
of
that
in
a
workflow
and
a
definition.
Maybe
that's
an
argo
cd
thing.
Maybe
that's
a
cadence
workflow,
a
temporal
whatever
there's
a
lot
of
engines
out
there
and
we'll
leave
policies
and
how
you
describe
things
well,
I'll
mention
this
that,
like
part
of
what
you
might
define
either
in
a
workflow
or
in
a
policy,
would
be
when
to
maybe
it's
the
initial
application
of
a
number
of
retries
that
you're
trying
to
that.
A
So our hero steps in, I think. Anyway, which is kind of where we get to OAM, which is to say this is aimed at trying to describe, and Lee, you might want to step in, and if I totally bastardize the vision and the definition of OAM, you're going to have to, which is, like, holistically addressing workloads and really a lot of their concerns.
A
The
specification
hasn't
addressed
all
of
the
concerns
that
that
that
are
possible.
But
it
has
a
highly
extensible
model
for
building
out
support
for
well
for
traits
for
building
out
support
for
different
application
and
workload.
Concerns.
A
So
this
is
what
so
we
want
to
do
so.
I'm
going
to
pause
there
as
I
think
I've
characterized
kind
of
the
the
challenge
I'm
going
to
pause
there,
and
we
want
to
do
kind
of
a
demo
and
talk
about
how
it
is
that
meshri
as
a
service
mesh
manager,
a
multi-service
mesh
manager,
is
a
well-positioned
tool.
It's
a
tool
that
was
has
been
originally
created
for
teaching
people
service
meshes
and
doing
it
well
and
so,
promoting
patterns
and
having
measures
support.
A
Those
patterns
falls
right
in
line
with
this
vision,
but
characterize,
but
finding
trying
to
overcome
the
underlap
between
what
you
can
describe
in
these
various
specs
has
been
a
challenge
and
so
part
of
the
the
community
there
has
been
looking
at
ohm.
Just
very
recently
has
done
a
prototype
of
trying
to
integrate
home
and
overcome
this
challenge,
and-
and
so
we
want
to
do
a
demonstration
where
we
kind
of
walk
through
how
the
two
have
come
together.
A
And
so
it's
all
asked
this
so
lee
did
I
do
you
want
to
expand
on
the
definition,
the
sort
of
vision
of
open
application
model
and
maybe
introduce
it
to
introduce
ohm
to
some
folks
that
might
not
be
as
familiar.
C
Actually
it
works
for
it.
It
already
works
on
terraform
and
I
think
some
people
are
working
on
to
make
that
work
with
cloud
formation.
So
in
that,
so
in
that
sense,
it's
more
like
a
universal
application
definition.
C
So
you
can
define
application
on
top
of
different
runtimes
in
an
easier
approach,
and
I
also
know
that
there
is
integration
of
ohm
with
helm,
which
is
straightforward,
because
I
can
use
helm
to
package
those
yamas
into
application.
And
then
I
use
this
model
to
describe
that.
So
I
will
have
a
it
looks
like
I
will
have
the
application
crd,
but
underneath
the
application
crd
will
generate
very
use
a
helm
chart
to
render
your
real
yaml
files.
C
That
is
also
one
polish
I
sell
in
the
community,
and
I
think
it's
also
very
interesting,
but
yeah
just
animation.
It's
in
essentially
a
model
to
make
it
easier
for
people
to
define
application,
especially
if
you
want
to
build
something
like
a
application
platform
on
top
of
kubernetes
or
even
or
cloud
formation,
or
something
like
that.
Right.
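To make the component-plus-traits structure Lee is describing concrete, here is a sketch in the shape of the OAM v1alpha2 Kubernetes implementation: a Component declares the workload, and an ApplicationConfiguration composes it with traits. The names, the image, and the canary trait are all illustrative, not taken from the demo.

```yaml
# Sketch of an OAM v1alpha2 definition; names and the canary trait are hypothetical.
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
  name: my-app
spec:
  workload:
    apiVersion: core.oam.dev/v1alpha2
    kind: ContainerizedWorkload
    spec:
      containers:
        - name: my-app
          image: example.com/my-app:v5
---
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: my-app-config
spec:
  components:
    - componentName: my-app
      traits:
        - trait:
            apiVersion: example.dev/v1alpha1   # hypothetical trait API group
            kind: CanaryRollout
            spec:
              stepWeight: 20
```

The trait is exactly the extension point discussed here: a platform can register new trait kinds without changing the workload definition.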
D
So
I
do
have
a
little
bit
of
a
thought
here.
You
have
this
statement
that
just
sits
badly
with
me,
which
is
cloud
native,
is
hard,
and
it
strikes
me
that
that
may
be
true,
but
it's
basically
an
indication
of
more
fundamental
errors
lower
in
the
stack
right,
because
if
cognitive
is
hard,
something
has
been
fundamentally
done,
poorly
down
the
stack
and
I'm
not
sure
that
just
band-aiding
over
it.
On
top
of
that
is
the
right
answer
we
may
need
to
get
to
the
root
of
what
exactly
is
going
on.
D
The
one
thing
that
I
know
never
actually
does
make
things
easy
is
band-aiding
over
lower
level
mistakes
that
always
just
makes
things
harder,
and
so
the
question
I
would
ask
is:
why
is
cloud
native
heart
you've
listed
a
bunch
of
things
here
like?
Why
are
we
dealing
with
iptables
rules
right?
If
the
developer
has
to
know
about
iop
tables
rule,
something
is
fundamentally
very
broken.
D
You
know
those
sorts
of
things
if
a
developer
has
to
actually
deal
with
dns
complications.
We've
got
a
fundamental
brokenness
in
the
underlying
pieces
of
the
platform.
C
If your question is about why we need abstraction on top of that, I think that's basically how computer science works, right?
D
Abstraction is fine, but the point is, some of what you've got there is literally stuff that should never have been leaked to the point that it is. I mean, the whole game of computer science, as you said, is putting the proper facade on something so that you don't have to leak all the nitty-gritty details of the next layer.
A
Or, the way that I'm internalizing part of what Ed is saying: it's, like, sideswiping Open Application Model, or OAM, but not in a negative way; I mean, it's not necessarily entirely directed at OAM either. It's more like: hey, in Kubernetes, why are we continuing to expose, well, IPs, for one? Like, why is something…
D
Yeah
to
be
clear,
I'm
absolutely
not
taking
this
way
to
put
on
ohm
is
the
one
who
who's
identified.
In
my
mind,
problems
that
are
are
not
ohm's
problems,
they're
problems
that
were
created
by
other
people
and
it's
trying
to
do
its
best
of
a
layer
that
it's
not
to
solve
them,
but
I
think
somebody
should
probably
also
be
going
down
to
the
lower
levels
and
saying
why
are
you
leaking
these
completely?
D
You
know
these
attractions
that
should
never
be
linked
to
the
developer.
Why
are
they
being
leaked
to
the
developer?
Why
are
you
making
cloud
natives
so
hard,
because,
even
in
the
case
where
you
do
actually
not
leak,
inappropriate
things,
there's
still
value
in
having
higher
level
abstractions?
A
Yeah, in some respects, yeah, right. It means that OAM is even more valuable if that's pervasively happening. Although, part of your other point is like: well, yes, there's value in that abstraction, but at some point it's treacherous ground for the abstraction to be standing on.
A
Very good, very good. Okay, I'm not entirely sure what this is.
A
So
let
me
see,
let's
see
how
this
settles
on
people,
if
this
is
the
right
way
of
trying
to
present
this,
so
so
so
ryan,
zhang
and
and
lee
who
have
been
kind
enough
to
educate
some
of
those
that
are
focused
on
this
patterns,
challenge
about
home
and
being
warmly
welcoming
of
like
of
trying
to
collaborate
and
help
advance
some
of
the
traits
in
home
and
and
and
so
we
gave
it
a
little
bit
of
thought,
give
it
a
week
or
two
and
and
are
trying
to
use
om
to
capture,
to
describe
a
pattern
that
I
don't
know
that
the
color
coding
here
really
helps
with
an
understanding.
A
There's a service mesh with a particular kind, a particular configuration, such that, if you want to execute this pattern, you just, you know: hey, here's a mesh with a config; run that mesh. There's the behavior section, which in this example is about a rollout, describing the application that should be rolled out and the characteristics by which that sequence is performed. And in this case, this needs to be abstracted to something like metrics, and not talking about, you know,
A
All
of
this
should
be
abstract
from
you
know
the
specific
anyway,
if
you
take
that
file,
you
literally
take
this
definition
and
you
were
to
give
it
to
a
system
that
were
to
integrate
with
own.
A
This
is
this
measuring
system,
so
you
see
how
this
works
for
us
a
juicy
diagram
to
let
soak
in
to
your
mind.
Let
me
see
if
I
can
walk
people
through
it
verbally
and
then
I
would
hand
the
ball
off
to
utkarsh
who's
an
open
source
contributor.
That's
been
that
tackled
this
pretty
quickly
and
wants
to
give
a
demo
of
it
in
action.
So
so
as
a
service
mesh
management
plane.
A
Meshery
is
pretty
extensible
actually
which
a
lot
of
its
approach
and
vision,
sort
of
lines
up
with
with
ohm
in
that
regard,
and
that
is
I'll
show
this
diagram
briefly,
which
is
to
say
that
each
of
the
components
inside
of
the
this
measuring
architecture
are
considered
for
how
you
might
want
to
have
choice
or
extend
it
to
do
different
things,
but
the
architecture
itself
fairly,
simple
to
the
extent
that
it
for
the
purposes
of
this
discussion.
It
is
two
things
or
you
know
three
things.
A
I
guess
whatever
it's
five
things:
fine,
but
it's
it's
a
there's
a
server.
There
are
individual
adapters,
one
for
each
service
mesh
that
it
manages
and
those
serve
those
adapters.
When
you
turn
one
on
they
register
sort
of
in
the
sequence
here,
so
they
register
their
capabilities.
A
They
register
their
ability
to
manage
a
given
mesh
with
this
server
and
it
so
they
register
in
the
capabilities
registry.
If
you
will
and
great
so
the
system
is
just
sitting
here,
listening
and
waiting
for
a
user
to
tell
it
to
do
something.
So
user
comes
over
grabs,
a
command
line,
a
cli,
so
it
runs
the
cli
pattern
and
it
wants
to
apply
a
pattern
in
this
case
retries
or
I
think
the
demo
would
be
on
on
a
rollout.
A
Taking that simple pattern file, leveraging OAM and its extensible model for having traits, the flow is to take that pattern, get it into the OAM format, maybe I'm going through more details than are necessary here, but basically getting it into the OAM format, and handing that over to the adapter that can interface with Kubernetes. Understanding that there's a particular set of operations to execute, it tells Kubernetes to do that; sort of, it walks through. It creates a DAG, a directed acyclic graph, to step through each part of that workflow, and does its thing.
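The DAG execution being described is essentially a topological traversal: run each step only after its dependencies have run. Here is a minimal sketch of that idea using Kahn's algorithm; the step names and dependency structure are illustrative, not Meshery's actual implementation.

```python
from collections import deque

def execute_dag(tasks, deps, run):
    """Run tasks in dependency order using Kahn's algorithm.

    tasks: iterable of task names.
    deps:  dict mapping a task to the set of tasks it depends on.
    run:   callback invoked once per task, in a valid order.
    """
    indegree = {t: 0 for t in tasks}
    children = {t: [] for t in tasks}
    for task, parents in deps.items():
        for p in parents:
            indegree[task] += 1
            children[p].append(task)
    ready = deque(t for t in tasks if indegree[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        run(t)               # independent tasks could run concurrently here
        order.append(t)
        for c in children[t]:
            indegree[c] -= 1
            if indegree[c] == 0:
                ready.append(c)
    if len(order) != len(indegree):
        raise ValueError("cycle detected in pattern DAG")
    return order

# Hypothetical pattern steps: add-ons wait on the mesh, Grafana waits on
# Prometheus, and the rollout only needs the mesh itself.
steps = ["istio", "prometheus", "grafana", "rollout"]
deps = {"prometheus": {"istio"}, "grafana": {"prometheus"}, "rollout": {"istio"}}
order = execute_dag(steps, deps, run=lambda t: None)
```

Anything with indegree zero at the same time (here, the rollout and Prometheus once Istio is up) is exactly what an adapter could provision concurrently.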
E
Yes, so I'll just start with how the YAML actually looks. Basically, this is a really simple YAML here. Although the YAML is quite short, you're trying to do a lot of stuff in it. First you're saying: okay, I need a service mesh. You also want to enable mutual TLS in there; you want sidecar injection; you're also trying to do a rollout. So basically, you are trying to define a lot of things in just a single YAML,
E
Instead
of
kind
of
trying
to
deploy
multiple
garments.
Just
for
a
few
easy
things,
I
would
go
through
what
exactly
how?
Basically,
I
will
quickly
go
through
how
how
this
exactly
works
internet.
So
what
happens?
Is
machine
wrappers
with
com
machine
adapters
would
basically
say
that
I'm
capable
of
doing
this
thing.
So
basically
they
are
raised
to
their
capabilities,
which
is
a
broad
term
for
a
trade
definitions,
workload,
definitions
and
scope,
definitions
which
are
defined
by
home
and
then
now
those
are
stored
in
capabilities
registry.
E
The user doesn't have to think about exactly which adapter they'd be talking to. They can just give in the YAML that I showed, and they can apply it; the Meshery adapter would create a DAG out of it, because, yeah, right here you can actually create quite complex workflows.
E
That
is,
you
can
say,
okay
depart
from
each
of
these
two
add-ons,
but
when
you
do
it
once
you
have
audio
deploy
it's
this
cool
as
well
as
you
have
the
toilet
species
you're
also
saying:
okay,
do
grafana
spr
on,
but
do
it
only
once
prometheus
has
been
deployed,
so
meshes
would
create
a
dog
of
it
and
will
ensure
that
everything
happens
pretty
efficiently.
That
is,
you
can
do
something
confidently
so
spc.
E
Doesn't
basically,
this
roll
out
doesn't
actually
depends
on
anything,
so
the
provisioning
of
this
theo
mesh
as
and
doing
what
else
would
be
confident
while
other
add-ons
would
be
sequential,
because
you
asked
it
to
do
so
so
now
that
I'll
quickly
start
a
message
server,
because
machine
server
is
going
to
actually
register.
All
of
the
measures
of
we
would
be
listing
all
of
the
credit
capabilities
and
basically
trade
decisions,
group,
definitions
and
those
kind
of
stuff.
E
Yeah,
so
all
these
logs,
although
it's
pretty
huge,
they
are
basically
what
happened
was
when
initiated
after
booted
up
it
basically
passed
on
all
of
its
capabilities.
That
is
straight
definitions,
and
this
group
definitions,
workflow
definitions
to
mystery
server.
E
Now we should be able to run the Meshery pattern apply. Because it is working properly, we can apply this YAML, and it will basically do the stuff that you asked it to do: that is, provision the service mesh, do rollouts, figure out, okay, it's doing two things concurrently, waiting for Istio to be deployed, and then it will go on and provision the add-on. And once that is done, it's now provisioning the final one, and I think it might have completed, so yeah.
E
And
that's
exactly
what
we
got
in
here
in
the
system
names
because
that's
what
we
defined
in
the
yaml
file
and
the
rollout.
This
was
the
first
rollout.
So
I
mean
this
is
the
first
time
that
the
application
was
getting
deployed
so
or
we
have
all
of
them
running
at
the
same
time,
but
you
can
do
the
again.
Basically
right
now,
you
can
say
that
the
first
version
that
we
deployed
was
returning.
This
I
mean
on
on
when
something
goes
wrong.
E
It
basically
returns
this,
and
now
you
want
to
say
that
okay,
I
want
to
improve
this
message
now.
You
may
be
willing
to
do
a
rule
out
if
you
want
to
basically
do
a
calorie
release,
so
you
would
come
in
here.
You
would
again
do
mysteries.
E
Already
provisioned,
so
it
will
not
provision
this
pog.
It
will
not
provision
promises
again
and
those
kind
of
things
it
will
just
do.
Basically,
the
measure
would
just
do
garnering,
and
that
is
exactly
what
it's
doing.
So
you
asked
it
to
do
from
direct
20
traffic
to
the
version
5,
and
that
is
exactly
what
is
happening.
So
you
are
getting
20
percent
traffic
to
version
5,
while
the
rest
would
go
to
the
other
one
for
for
the
time,
duration
that
you
mentioned
the
right
now.
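For readers unfamiliar with how such a 20/80 split is typically expressed mesh-agnostically, a TrafficSplit in the shape of the SMI Traffic Split spec looks roughly like this; the service names are invented, and the exact API version in a given cluster may differ.

```yaml
# Hypothetical SMI-style TrafficSplit sending 20% of traffic to the canary.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: my-app-canary
spec:
  service: my-app          # root service that clients address
  backends:
    - service: my-app-v4
      weight: 80
    - service: my-app-v5
      weight: 20
```

A pattern engine can then shift the weights step by step until the canary receives all the traffic.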
E
This
is
pretty
because
it's
initial
stages,
so
this
is
not
very
advanced,
but
the
intention
is
to
be
able
to
define
in
here,
but
pretty
complex
things.
That
is
you.
You
should
be
able
to
basically
provide
that.
Okay,
I
want
to
perform
smp
in
here
and,
if
I
get
me,
99,
250
ems
or
something
then
move
it
to,
let
it
be
40
or
just
move
it
to
100.
Something
like
that.
Is
it
it's
in
initial
stages.
E
So
that
is
why
this
is
pretty
rudimentary,
but
the
end
goal
is
to
be
able
to
define
those
kind
of
stuff
in
there
and
it's
because
om
is
also
pretty
extensible.
So
we
should
be
able
to
do
that
as
you
can
see
that
now
yeah.
Basically,
it
has
moved
to
one
person,
traffic
to
version
5
and
any
comments
or.
A
So
you
did
one
of
the
last
things
that
utkarsh
was
just
saying.
Is
that
part
of
our
okay
so
to
step
back
and
say
hey?
Why
are
we
talking
about
this?
Because
it's
the
service
mesh
working
group?
We
were
working
on
patterns
trying
to
educate
folks,
trying
to
help
them
adopt
and
use
cloud
native
technology
trying
to
help
simplify.
A
The
in
order
to
do
that,
hopefully,
like
I
don't
know
if
it
gets
much
more
simple
than
what
ohm
is
you
know,
is
it
really
enabled
around
that
pattern
file
that
that
that
yaml,
but
to
be
able
to
take
that
tell
a
system
to
go
apply
like?
I
think
the
rollout
makes
for
an
interesting
pattern
to
look
at,
but
it's
it's
not
the
focus
I
mean
the.
A
The
focus
here
is
to
have
a
system
that
lets
people
take
any
one
of
those
60
patterns
and
leverage
them,
and
they
did
to
tweak
them
unto
their
own
need
to
explore
with
them.
Learn
from
them
change
them.
To
begin
to
establish
something
of
a
repository
of
what
those
patterns
are,
give
them
give
them
names
yeah
to
help
people
be
successful.
Help
people
also
understand
whether
or
not
they're
doing
it
right
or
not,
very
well
or,
and.
A
There's
a
catalog
of,
and
so
I'm
going
to
speak
on,
lee's
behalf
again,
like
you
mean
I've
only
spoken
once
for
like
five
minutes,
and
so
I'm
hoping
that
you're
pleasantly
that
that
this
is
pleasingly.
I
don't
know
if
it
is
to
the
project,
but
that
some
of
these
efforts
would
ultimately
help
advance
some
sort
of
the
catalog
of
traits
that
the
home
project
is
producing
and
and
the
way
that
karsh
walked
through
this
demo
wasn't
necessarily
very
visual.
C
Yeah, I actually think this is a very, very awesome idea, because at Alibaba we just received complaints from customers that they want to use service mesh by applying patterns, something, like you said, patterns, to their application. Instead of trying to use the VirtualServices and DestinationRules, they don't want to do that, they want to use a rollout.
C
Oh, I see, I see. Okay, I got it, yeah, cool. So, I'm not sure if it's possible for you to share out the project, but we will definitely try to take a deep look into it.
C
I
think
this
is
what
we
are
trying
to
pursue
in
the
community,
especially
on
a
service
mesh
site,
we're
eager
to
see
that
there's
something
which
which
can
be
named
like
patent
or
something
other,
that
it
will
give
users
a
interface
like
rollout
or
any
other
high
level
abstraction,
instead
of
just
the
virtual
service,
which
doesn't
make
sense
from
any
user's
perspective.
I
try
I'm
really
supportive
for
this
direction.
So
let's
look
deep
into
this
now.
D
I
mean
the
virtual
service
is
a
really
nice
distraction
at
the
level
that
it
lives
but
you're
absolutely
right.
It
passed
a
certain
point,
you're
sort
of
laying
out
the
links
in
the
topology
by
hand,
and
that's
that's
too
complicated.
D
With
complexity,
it's
it's
actually
a
really
good
example
of
exactly
why
you
need
higher
level
abstractions
that
are
even
building
on
good,
lower
level
attractions.
A
Yeah,
listen
so
good
lee
cool
good.
I
think
now,
we've
spoken
for
all
of
10
minutes
on
this
initiative.
As
a
matter
of
fact,
like
wow,
what
an
amazing
thing
it
would
cost
by
the
way
for
folks
that
aren't
familiar
with
this
fine
young
man
he's
in
his
junior
year
in
university
and
worked
on
this
for
all
of
a
few
days.
I
think
or
something
like
I,
it
kind
of
makes
my
jaw
drop,
so
the
guy
I
think,
he's
built.
A
I'm
not
sure
so
to
to,
I
think,
to
help
to
reinforce
what
lee
was
just
saying
about
like
hey
there,
there's
something
to
this.
This
word
pattern
or
there's
something
to
the
that
I'm
gonna
see
if
I
can
would
crash.
Do
you
mind
if
I
grab
the
share
back
from
you?
Yes,
I
hesitate
to
say
the
ball.
I
know
I.
A
This
is
a
little
more
I'd,
rather
that
I
was
able
to
present
this.
Probably
you
know-
maybe
I
should
present
it
here
is
to
say
these
patterns
are
being
there's
a
reason.
Why
there's
a
lot
of
forethought
given
to
how
many
there
are
and
what
they
might
look
like,
and
that's
because
there's
a
there's
a
book
that
will
be
in
early
release
shortly
called
service
mesh
patterns.
A
It's
gonna
go
through
the
first
30
or
so
of
them
we're
going
to
include
the
pattern
file
yaml
in
the
book
and
then
for
any
anyone
who's
reading
each
of
the
the
30
chapters.
In
that
first
book
they
want
to
try
it
out.
They
can
take
the
pattern
file
and
put
om,
put
om
to
use,
you
know,
go,
go,
do
a
mastery,
ctl,
applier
and
hopefully
learn
and
be
more
confident
in
adopting
and
running
cloud
native
infrastructure.
A
I fear I may end up in a divorce if others don't come to bear. The reason I make that bad, well, bad but honest, joke, as an invite to others, is basically to say: like Lee said, there are a ton of things to do with the mesh; really, they're totally capable.
A
I
know
om,
isn't
wasn't
built
for
purposes
of
of
only
mesh
things
and
that's
actually
what
makes
it
you
know
very
attractive,
like
explicitly
things
outside
of
kubernetes,
which
is
where
you
know
the
rest
of
the
world
is
makes
it
interesting,
and
so
what
I'm?
What
I'm
kind
of
saying
in
part
to
like
what
I've
been
trying
to
communicate
to
very
poorly
to
sunku,
is
there's
a
lot
of
initiatives
going
on
can't
wait
for
people
to
get
excited
likely,
is
and
move
these
forward
even
faster,
so
at
least
yeah.
So.
B
Yeah, definitely. I think it's gaining good traction, so we'll see more involvement points going out soon.
A
Utkarsh didn't stay up until 1:30 a.m. for no hard questions; I know that there's a hard one out there, Lee. Utkarsh, did you end up showing one of the trait definitions that was generated?
A
If you haven't checked out OAM, it's an ambitious project, and, I think, architecturally, from my perspective, the extensibility is, well, it's like designing for performance up front. It's sort of the opposite of postponing security considerations, like, "oh, we'll get to this, we'll get to the performance; that'll come after the features and functions." And I get it; extensibility and all the -ilities usually come after. But the thing is, sometimes you've got to think about them up front, and I think the OAM project did so.
A
So
as
we
go
to
wrap
here,
there's
amy's
on
the
call-
and
she
has
quite
kindly
put
together
a
a
mailing
list-
a
sub
mailing
list
of
the
sig
network,
so
they'll
be
working.
A
So shortly there will be a mailing list specific to these topics. We've been holding off, and when I say "we," I mean it's all about you and I'm holding off, on just unleashing all the service mesh chatter onto the broader mailing list, which is about CoreDNS and, you know, all of the other, non-service-mesh projects. So no doubt there'll be a link in here to that mailing list, in case people want to subscribe. And so…
A
So, okay, cool, all right. Hey, two weeks from now, the third Thursday of the month, we'll see everybody then. Thank you, Utkarsh; it was great.