From YouTube: CNCF SIG Network 2020-11-05
C
Maybe the other way of thinking of it is that I should consider myself lucky that I can halfway enunciate it correctly, so I don't get punched through the Zoom.
C
Oh, good deal, folks. We are a couple of minutes in; I'm going to put a link to the meeting minutes in the chat. These are community meeting minutes and it's a community call. It is a CNCF call, so we do record the meeting and post it to YouTube, but participation in the meeting and in the meeting minutes is open to everyone.
C
You
don't
need
to
be
a
member
of
the
cncf
or
be
representing
a
particular
project
to
have
an
active
voice
and
bring
up
topics,
and
things
like
that.
So
so
please
don't
be
shy
and
if
your
name
isn't
in
the
meeting
minutes,
please
drop
it
in
a
couple,
we'll
probably
lollygag
for
another
minute
or
two,
but
it
now
is
a
good
time
to
make
a
call
for
agenda
call
for
topics.
C
You know, hopefully there's some flattery involved in your topic, and in your area of focus being of such interest that we're asking for an encore from your EnvoyCon talk. Yes, there is? Thank you very much. Good deal, good deal. Yeah, if we go back into the meeting minutes, that was a subject of discussion.
C
We
we
sort
of
meandered
last
time
that
we
spoke,
but
but
that
ends
up
being
a
good
thing,
because
then
we've
we
stumble
into
topics
like
the
one
that
you're
focused
on
so
given
that
it
is
five
after
and
we've
got
some
a
small
collection
of
folks,
it's
it's.
Probably
it's
probably
time
to
get
going.
Taylor,
I'm
gonna
toss
on
maybe
another
like
we've
got,
maybe
a
couple
of
other
topics.
I
don't
know
if
how
much
either
time
or
how
much
desire.
C
So, with no further ado: Florin, if you don't mind just giving a brief introduction and taking it away, telling us about Envoy and VPP.
F
Can you see my slides? Nice, okay, perfect; that means it works properly. So, I guess, hi everyone. My name is Florin Coras; I'm a Cisco technical lead and an FD.io VPP project maintainer, and the point of today's talk is to give you a high-level overview of the benefits of using VPP as Envoy's network stack.
F
However, today I'll mainly focus on how Envoy can leverage user-space networking and some of the benefits there. Now, before we dive in, and in the interest of those of you who are not familiar with VPP, which I hope is not that many, a very, very brief, quick introduction.
F
Vup
is
an
lt2l7
networking
stack
which,
at
its
core
leverages
to
important
ideas,
vectorized
packet
processing
and
the
modeling
of
the
forwarding
as
a
directed
graph
of
nodes.
Now,
when
these
are
done
correctly,
they
ensure
really
efficient
use
of
a
cpu's
caching
hierarchy
and
consequently
a
minimal
overhead
per
packet
when
doing
software
forwarding
now.
Another
really
important
aspect
of
this
approach
is
composability.
F
Looking at this from a less abstract standpoint: it might be worth noting that VPP is typically used together with DPDK, so it supports a large set of network interfaces, although it should be noted that it also has a smaller set of really efficient native drivers. It supports L2 switching and bridging, IP forwarding, and virtual routing and forwarding, or VRFs, so it has the right constructs for L2- and IP-layer multi-tenancy. But in addition to basic L2 and L3 functions, it also supports a multitude of additional features; just to name a few: a very efficient IPsec implementation, ACLs, MAP, MPLS, segment routing, and various flavors of tunneling protocols, so things like VXLAN and LISP, for instance. On top of the networking stack, VPP also implements a custom host stack, built and optimized in a very similar fashion, as one might expect.
F
It
has
a
session
layer
or
a
socket
layer.
Now
this
one
provides
a
number
of
features,
but
perhaps
the
most
important
for
the
context
of
the
stock
is
the
shared
memory
infrastructure
that
can
be
used
to
exchange
io
and
control
events
with
external
applications
using
per
worker
message,
cues
or
what's
depicted
here
as
mqs
finally
to
simplify
interoperability
with
applications.
Vbp
provides
a
communications,
library
or
vcl,
which
exposes
a
posix
slide,
apis
northbound
towards
the
applications.
F
So
I
guess
that
by
this
point
some
of
you
may
be
asking
a
danish
people
question
or
maybe
not
why
yet
another
host
app
and
you'd
be
right
to
do
so
because
from
a
functional
perspective,
linux
is
obviously
the
one
stack
to
use.
However,
because
linux's
networking
stack
was
designed
around
the
single
pass
warranty,
completion
model
per
packet
performance
is
limited,
and
this
is
especially
noticeable
when
hardware
acceleration
cannot
be
leveraged,
but
in
addition
to
the
speed
that
could
be
provided
by
a
faster
transport
or
a
faster,
socket
layer.
F
The
fact
that
the
stack
is
in
user
space
could
be
leveraged
to
optimize
integration
and
perhaps
even
minimize
the
amount
of
data
copies
that
happen
between
the
application
and
and
the
stack
also
because
the
whole
protocol
stack
is
packaged
with
the
application
it
could
potentially
be
customized
or
extended
in
certain
situations.
One
can
certainly
imagine
scenarios
where,
for
instance,
the
socket
provides
more
context,
data
to
to
the
underlying
layers.
F
Network
utilization
by
the
applications
also
note
that
this
does
not
preclude
kubernetes
integration.
In
fact,
vbp
can
be
used
as
a
data
plane
by
cni's
like
calico.
F
So, coming back to this: rather intuitively, let's say, the first step was to make sure that Envoy components do not make any assumptions with respect to the underlying socket layer, and that, consequently, they always use generic software interfaces, such that they can potentially interoperate with custom socket-layer implementations once those are available; because initially there were none beyond whichever of Linux, Windows, or macOS the binary was built for.
F
So
obviously,
this
was
not
exactly
glamorous
work.
Most
of
the
changes
were
not
features.
They
were
more
focused
on
refactoring.
Still
out
of
the
set
of
changes
that
have
gone
in
perhaps
the
most
notable
are
the
fact
that,
as
a
community,
we
decided
as
a
core
rule.
We
now
must
avoid
using
raw
file
descriptors
anywhere
in
the
code.
F
IO handles still expose the FDs, but last time I checked, I think we've managed to clean them up to the point where they are only used in a couple of places.
F
We've added support for pluggable IO handle factories, or, in other words, support for multiple types of sockets that can be used at the same time, in the same instance of Envoy.
F
An
interesting
consequence
of
the
first
point
is
the
fact
that
file
event
creation
is
now
delegated
to
the
I
o
handle
implementations,
so
the
desired
side
effect
of
this
is
that
the
socket
layer
that
provides
io
handles
is
the
one
that
decides
how
the
events
for
this
io
handles
are
created.
In
other
words,
we
now
socket
events
are
are
no
longer
tightly
coupled
with
lib
event.
F
Some
coupling
still
needs
to
exist
and
I'll
go
over
that
in
a
second,
but
now
that
one
is
not
implicit
anymore
and
finally,
perhaps
an
interesting
scenario
that
might
serve
as
an
example
for
the
community
going
forward
was
tls,
which
mainly
for
convenience
reasons,
relied
on
bios
that
needed
explicit
access
to
the
file
descriptor,
but
it
eventually
turned
out
that
writing
a
custom
bio
that
uses
the
I
o
handle
as
opposed
to
the
file.
Descriptor
is
relatively
straightforward,
so
we
we
actually
switch
to
that.
F
So
I
guess
this
reinforces
the
first
point
as
much
as
possible.
Although
it's
going
to
be
a
longer
path,
people
should
try
to
use
everything
all
the
means
necessary
to
avoid
using
the
the
raw
file
descriptor.
F
Now
this
changes
are
enough
to
allow
the
implementation
of
a
vcl
specific,
socket
interface,
but
they
still
leave
one
or
one
more
problem
to
be
solved,
as
I
alluded
to
before,
namely
both
liv
event
and
vcl
want
to
handle
the
async
polling
and
the
dispatching
of
the
I
o
handles,
but
only
one
of
them
can
be
the
main
dispatcher.
F
Now
the
solution
to
this
problem
is
to
leave
control
to
a
lib
event
and
to
register
the
event
fd
associated
to
a
vcl
workers.
Message
queue
with
limit.
Now,
if
you
recall,
since
I'm
relying
now
on
you
remembering
my
previous
slides
the
message
queues
are
used
by
vpp,
to
convey
io
and
control
events
to
the
application.
F
Now
these
are
just
the
stepping
stones
for
the
envoy,
vcl
integration
and
as
first
next
steps,
the
plan
is
to
further
optimize
the
performance.
Now
the
lowest
hanging
fruit.
There
are
the
read
operations,
as
vcl
could
pass
pointers
to
socket
data
in
the
shape
of
buffer
fragments,
instead
of
doing
a
full
map
copy.
F
The
groundwork
for
this
is
already
done
in
fact,
since
I've
done
this
the
first
time
I
I've
gotten
it
to
to
actually
work,
but
what's
left
is
the
actual,
let's
say
integration,
it's
not
enough
if
vcl
avoids
the
mem
copy
once
if
envoy
gets
the
data
in
in
its
filters
and
proceeds
to
copy
several
times
afterwards
will
obviously
lead
to
inefficient
usage.
F
So
there's
still
some
work
to
be
done
that,
but
speaking
about
performance
to
evaluate
the
potential
benefits
of
this
integration,
I
used
the
following:
topology
wherein
wrk
connects
through
vcl
and
envoy,
which
performs
http
routing
to
a
back-end
nginx.
Now
this
type
of
scenario
might
not
be
relevant
in
practice
and,
in
fact,
I'd
be
delighted
to
learn
from
you
or
any
anybody
else
who
is
using
envoy
in
practice
or
deploying
a
way
in
practice.
F
Performance seems to be very good, 20 to 40 percent better, and to scale pretty well. The story in the margin there is that after a certain point, about four to five workers, performance does not scale linearly anymore, and it behaves somewhat worse for larger payloads, although it should be noted that, for this test in particular, TSO for VPP was not enabled. So, backing up, or as a summary: the results are really encouraging from our perspective.
F
But
there
are
still
some
things
that
need
further
investigation
for
for
a
better
understanding,
so
we're
clearly
faster
than
the
kernel,
but
we
need
to
understand
if
the
scaling
of
the
performance
has
to
do
with
my
test.
Bed
has
to
do
with
vpp
vpp
vcl
into
vpp,
sorry,
envoy,
integration,
or
maybe
the
the
problem
could
lie
in
enough
voice,
so
we
wish
we
might
need
to
to
further
optimize
some
code
there.
F
With
that,
should
you
be
interested
in
further
exploring
the
zomboid
vbp
integration?
Please
give
the
code
a
try.
You
have
a
link
now
there
to
to
my
github,
it's
a
bit
stale,
maybe
a
month
old.
I
still
need
to
upload
the
zero
copy
version
of
the
code
and
in
case
I
won't
be
able
to
answer
all
of
your
questions
here
feel
free
to
email
me
or
drop
me
on
on
voice
slack
with
that.
Thank
you
very
much
and
do
let
me
know
if
you
have
any
questions.
C
Thanks; that was very good. Well, before I ask a couple, Florin, thank you, and it's an open floor for others who might have questions or comments or feedback for Florin.
D
This is good, of course, and again, as Ashley already said, thanks for the presentation; for sure this is interesting work. One of the things that this group would probably be interested in (I don't know if you have played with it, or whether you have any thoughts around it) is how this thing...
D
How
this
thing
can
be
used
in
in,
like
in
the
cloud
native,
let's
say,
landscape
like
if
I
want
to
deploy
this
within
kubernetes
and
use
envoy
as
a
sidecar,
which
you
know
somehow
depict
it
here.
If
you,
if
you,
if
you
will.
F
Let's look at several things here. First of all, could you use VPP with Kubernetes? The answer would be yes, and, as I mentioned (in case you missed it, I'll pass over the slides), there's even a talk happening now at KubeCon with respect to the Calico-VPP integration; so Calico can use VPP as a data plane. Coming back here, right.
F
So
what
sort
of
integration
should
we
expect
or
what
sort
of
integration
would
be
possible
for
for
our
envoy
with
calico
vpp
and
then
subsequently
with
the
applications
and
now
there's
several
modes
of
operation?
Yes,
you
could
deploy
envoy
as
a
sidecar
and
then
have
that
attached
to
vpp
and
you
can
have
several
instances
of
those
envoys.
F
So
not
only
one.
The
question
afterwards
is:
how
do
you
connect
the
applications
to
your
envoy
and
what
I'm
depicting
here
is
the
general
case,
which
probably
is
the
safest
case
as
well,
and
maybe
we
should
dive
into
that
a
bit
remember
here.
The
integration
is
shared
memory,
so,
but
what's
happening
here
right
now
is
here
remember
it
will
not
offer
you
the
same
sort
of
security
that
the
kernel
offers
you
today.
F
Another
mode
of
integration
would
be
to
to
have
only
one
envoy
per
node.
Let's
say
instead
of
adding
one
envoy
for
each
container.
F
Now
it's
well
known
that
envoy
does
not
support
name
spacing
at
this
point
and
there
there
have
been
efforts
from
others
in
this
direction.
There
have
not
been
upstream.
So
if,
if
we
want
to
do
something
like
that,
probably
we
will
need
to
to
change
envoy
and
finally,
the
most
efficient
way
of
doing
this
within
kubernetes
would
be
to
not
leverage
tap
interfaces,
but
actually
leverage
something
that
we
call
cut
through
sessions.
F
And
let
me
explain
so
if
two
applications
attach
via
vcl
to
a
vbp
instance
where
that
vpp
instance,
what
it
offers
is
the
socket
layer,
functionality
the
whole
stack
functionality,
but
if
both
of
those
applications
attach
to
the
same
vbp
instance
that
socket
functionality
is
actually
not
required,
the
kernel
is
known
to
be
inefficient
in
that
you,
whenever
two
applications
attached
to
it
say
using
tcp,
the
kernel
will
actually
go
through
the
tcp
layer
implementation.
So
it
does
a
lot
of
extra
work
that
it
should
not
do
well
with
vpp.
F
With VPP, we support what we call cut-throughs, meaning VPP detects that both Envoy and the application are attached to the same VPP instance, and it uses pure shared-memory buffers to exchange data. Now, this comes with a caveat, as I mentioned before: this is shared memory, so, for one, we haven't put that much effort into properly securing it. And that was a very long answer; I hope it clarified at least some of your questions.
D
Okay, my second question, and that will be the last, because I saw that Taylor already added the CNF initiative that they started. One of the things that folks in other groups are trying to bring into the cloud-native world is these CNFs, or cloud-native networking functions. And if we see this Envoy as a networking function (I don't know if you have thought about it; I understand that it's kind of experimental), a question, and maybe a request for advice.
D
Would
this
be
possible
to
actually
actively
figure
out
if
there
is
vpp
available
or
not
so
that
you
can
actually
deploy
it
in
a
public
cloud
infrastructure,
the
same,
let's
say
a
container
envoy
and
then,
if
there's
no
vpp
to
just
do
whatever
like
use
the
standard
kernel
interfaces
or
is
it?
Is
it
so
heavily
modified
that
it
can
only
function
with
vpp
or.
F
Let me see if I understand the question correctly: so you're wondering if, from the application, or maybe from... no?
D
No,
no!
No!
No!
No!
No!
I'm
sorry
from
android
point
of
view,
assuming
that
I
want
to
deploy
only
envoy
in
a
container
not
as
a
sidecar,
not
as
anything.
I
consider
that
invoice.
My
function
that
I
want
to
deploy
my
my
application,
the
version
that
you
have
in
your
tree
and
essentially
I
don't
know
how
this
is
going
to
go
forward.
But
let's
say
that
I
want
to
use
this
version.
Is
it
so
heavily
modified
that
it
cannot
function
with
the
standard
sockets
or
I
mean?
F
Very
good
question
so,
as
as
mentioned
on,
or
as
I
try
to
highlight
at
one
point,
we
we
now
have
support
in
envoy
for
pluggable
socket
interfaces.
F
You can then, based on Envoy-specific mechanisms, choose per address: for instance, an address can request either default processing, or it can come with a hint that says, please use this address on a VCL socket, and Envoy will then make sure to open the right socket for you, through the right socket interface. So the short answer to your question is: yes, we can switch. You can bring Envoy up with the default kernel interface, and then you can load this additional module.
F
Having
said
that,
the
code
that
sits
in
that
branch
that
I
mentioned
is
it's
an
extension,
but
not
an
official
extension
of
of
envoy
reason
why
it's
not
an
ex
official
extension
is
because
envoy
builds
a
static
binary
so
and
we
would
need
to
build
part
of
vpp
in
order
to
build
that
extension,
and
at
this
point
it's
pretty
envoy
already
is
building
way
too
many
things.
F
So
I
have
not
tried
to
push
this
upstream
now,
my
com.
In
my
conversations
with
matt,
we
we
sort
of
decided
that,
if
there's
enough
interest
in
having
this
upstream,
we
could
upstream
it
and
and
then
make
sure
that
everything
is
built
together
and
then
you
will
have
at
runtime
just
some
switches
that
you
can
flip
and
then
you
can
use
either
the
kernel
or
or
vcl.
F
We could do the second option that you mentioned: so, if you have the right means of detecting that VPP is active when starting Envoy. Well, right now, one of the options is, at Envoy startup, just to say what I would like to use as a default.
F
If you configure that resolver to always default to assigning VCL as the interface, you actually do not care. So, basically, this will be a configuration that you can inject at runtime and say: whenever you open a new connection to a backend, for instance, or something like that, make sure to use the VCL interface, not the kernel interface; and you can do that explicitly. If you are just worried about default behavior, then, if you can detect VPP when, sorry, when Envoy starts, you can just configure it to use VCL, as opposed to the kernel, as the default.
C
Florin, a question; you may have answered this in one of Nikolai's questions, but, succinctly: on availability, since VPP is user space, from what I understand, its installation might require another kernel module or two that you wouldn't commonly find available in popular cloud providers.
G
As
far
as
I
know,
it
works
on
any
of
the
default
kernels.
It's
more
of
the
stack
right
below
the
app
what
it
uses
and
you're
going
to
get
into
the
dpdk.
So
and
then
you
start
looking
at
acceleration
or
something
else,
but
you
don't
have
to
use
it
by
default.
F
That's
exactly
so
tyler
had
a
good
description,
so
it
all
depends
eventually
on
what
on
the
dpdk
needs
that
you
have
in.
F
If
you
deploy,
if
you
deploy
vpp
with
say
with
without
dpdk,
and
now
it
will
depend
a
lot
on
what
sort
of
drivers
you
try
to
use,
sriov,
avf
or
anything
else,
you
will
need
just
the
dependencies
for
those,
but
normally
it
should
work
with
with
all
current
well
with
all
modern
kernels,
let's
at
least
stipulate
that,
if
you're
thinking
about
kernel
modules
that
might
be
needed
and
are
not
typically
provided,
I'm
guessing
you're
thinking
about
something
like
vfio,
pci
or
stuff,
like
that.
Those
are
typically
needed
for
dpdk.
G
Okay,
gotcha
I've
seen
more
problems
on
the
the
physical
host
side
like
are
certain
things
turned
on
in
the
bios
more
than
it
does
the
kernel
work
and
then
you
get
into
stuff
on
like
privileged
mode.
If
you're
doing,
how
are
you
going
to
access?
If
you
use,
say
memif
devices
to
talk
between
containers.
C
Yeah, given those requirements, is there a specific... well, take EC2, for example: is there a specific EC2 type, and OS, I guess? Well, I guess this is also one of those it-depends-on-what-functions-you're-going-to-use things, but yeah, maybe it's...
F
Just
enough,
I'm
sorry
for
interrupting.
We
know
so
vpp
can
be
deployed
in
pc2
and
has
been
deployed,
but,
as
you
said,
first
with
specific
functions,
I've
never
tried
doing
this,
for
instance
with
envoy
we've
done
it
with
ipsec,
for
instance,
and
just
to
see
how
how
fast
the
implementation
would
be
and
with
dvdk
it
seems
to
be
working.
Fine.
F
A
very,
very
good
point,
separate
effort,
but
as
far
as
as
far
as
we
can
tell,
and
when
I
say
we
I
mean
the
community-
the
opposition,
then
not
talking
in
the
name
of
my
employer,
but
the
community
we've
managed
to
get
this
to
to
work
in
those.
Are
those
type
of
scenarios
and
performance
used
to
be
pretty
good,
maybe
better
than
the
other
that
you've
mentioned.
C
Another
another
quick
question
it
might
have
that.
I
heard
this
out
of
context,
or
I
didn't
hear
the
right
is
there
was
part
of
your
discussion
was
about,
was
about
an
envoy
per
node
and
and
some
caveats
around
that
and
I
didn't
quite
catch
the
the
use
case
or
the
need
for
for
that
architectural
model.
F
Very
good
question,
actually
so
the
problem
that
I've
heard
now
that
I've
in
practice
was
that
envoy
communication
to
the
upper
layers
say
to
istio
becomes
the
ball
net
when
you
have
too
many
envoy
instances
deployed.
So,
for
instance,
you
you
end
up
into
large
or
moderate
deployments,
end
up
needing
hundreds
of
megabits
to
gigabits
per
second
of
control
traffic
when
in
case
of
a
restart
event,
massive
restart
event.
Let's
say
so.
F
The
idea
was
the
the
the
solution
to
to
that
problem
was
to
well,
let's
have
one
of
them
for
a
node
and
have
that
be
multi-tenant,
as
opposed
to
having
multiple
small
instances
of
envoys
that
we
load
that
sidecars,
I
think
psyllium
I've
been
working
on
that.
I
don't
know
exactly
how
far
they've
gotten
with
it.
C
Makes
thanks
for
that?
That
makes
a
lot
of
sense
questions
from
others
for.
C
Florin, this has been nice; it's a special treat, I think, for some of us. I think we kind of switch between doing project reviews, to meandering between a bunch of topics, to receiving presentations like this, and I have to say that the nerd in me appreciates a good set of diagrams. So thank you very much. Yeah.
H
I appreciate all of you asking; I was delighted to set it up, always happy to hear Florin speak, and, you know, it was good that this came up in the course of conversation.
F
I think I had to press the button, so, yeah.
C
Well,
we're
not
using
webex.
Otherwise.
I
talk
about
the
ball.
The
proverbial
ball,
oh,
very
good.
The
next
next
couple
of
topics
up.
I
think
that
the
next
couple
are
relatively
quick.
It's
more
about
probably
awareness
so
for
some
of
you,
who've
been
on
these
the
last.
C
The
more
recent
calls
we've
used
this
time
to
opportunistically,
discuss
some
of
the
work
streams
that
are
taking
place
inside
the
service
mesh
working
group,
so
the
service
mesh
working
group,
just
a
a
subgroup
of
sort
of
the
subgroup
focused
on
service
meshes,
whereas
sig
network
itself
has
a
much
broader
field
of
field
of
view.
C
It's
worth
noting
that
we
had
been
hosting
those
discussions
kind
of
those
set
those
sessions
at
this
time
kind
of
using
this
time
to
advance
some
of
those
initiatives.
A
couple
of
the
initiatives
within
there
are
people
are
requesting
more
time
to
discuss
and
advance.
Smi
conformance
is
one
and
the
other
one
is
smp.
C
This
initiative
is
well
actually,
since,
since
taylor
is
on
I'll
use,
a
common
analogy
that
that
is
used
for
cnf
conformance
and
it's
to
say
that
smi
is
a
specification.
There
are,
I
think,
seven
service
meshes
that
signal
compatibility
with
the
spec
and
that's
great.
The
last
couple
of
major
service
mesh
announcements
of
new
meshes
coming
into
the
ecosystem
were
smi
compliant,
actually,
the
last
three
four,
maybe,
and
so
as
there
is
a
sona
boyd
to
kubernetes,
to
the
90-something
distributions
of
kubernetes,
there's
kind
of
a
there's,
a.
C
An
smi
conformance
a
measuring
to
smi
to
help
validate
conformance
to
that
specification,
and
so
there's
there's
a
recurring
meeting
to
be
scheduled
to
help
advance
that
initiative.
C
If
we
are
organized
about
this,
we'll
send
out
a
poll
to
ask
what's
a
convenient
meeting
time
if
we're
not
organized
about
it,
you'll
see
it
on
your
calendar.
C
This week we were meeting with the maintainers of Envoy's load generator, called Nighthawk, and discussing a number of things; one of those things speaks, in some respects, to what Florin had said earlier about different... So, Envoy has different distributions, and there's a project that assists with that.
G
They go hand in hand; well, they're related, but they are, I guess, independent pieces. So I could probably do the first one real quick and then move on to the next, which is maybe more important. Could I share my screen?
G
So
the
cloud
native
principles
it's
trying
to
these
papers,
which
are
here
in
this
repo,
it's
a
whole
set
of
papers,
talking
trying
to
break
down
the
different
concepts
that
are
all
tied
into
what
you
have
right
here.
So
just
when
we
go
and
look
at
what
cncf
has
in
this
minimal
set
of
information.
G
Part
of
it
talks
about
what
it's
going
to
do
like
benefits
and
how
this
you
know
works
as
far
as
groups,
there's
actually
not
a
lot
that
really
talks
about
what
do
these
mean,
and
so
this
has
been
an
ongoing
work
for
quite
a
while,
and
maybe
the
newest
thing-
and
I
don't
know
if
you've
seen
this
specifically
lee
but
from
getting
feedback
talking
to
different
people
in
the
toc
and
other
places
we
had
created
the
fundamental
concepts
area.
G
So
this
is
would
tie
into
what
you
see
on
these
definitions,
and
most
of
these
would
be
agreed
by
most
people
trying
to
keep
it
more
generic
and
not
kubernetes
specific.
But
the
this
is
to
lead
up
to
these
other
set
of
papers
so
starting
out
with
a
clot
breaking
down.
What
do
we
mean
by
cloud
native
and
going
into
each
of
the
concepts?
I
think
I
just
clicked
on
the
wrong
one.
G
This
one
was
the
one
I'm
at
these
actually
start
breaking
down
all
of
this
individual
concepts
and
you'll
have
an
area
here,
that's
more
english
and
then
it's
talking
about
how
it
ties
together
with
references.
So
that's
the
big
thing.
This
isn't
just
coming
from
the
people
that
have
been
involved,
people
that
are
creating
software's
telco
service
providers
was
a
lot
of
the
focus
on
hearing
networking
folks.
G
But
these
references
are
a
lot
of
different
people.
Doing
things
in
devops,
networking
in
general,
in
cloud
native
and
the
whole
set
of
papers
is
building
through
to
eventually
what
it
gets
to
this
area.
So
what
do
we
mean
when
we
say
cloud
native
networking
and
going
down
and
trying
to
answer
different
questions,
and
then
it
actually
breaks
those
down
into
further
pieces,
so
you
have
stuff
talking
about
what
do
we
mean
by
microservices
immutable
infrastructure
and
then
getting
into
the
osi?
G
How
does
it
relate
to
the
osi
stack
and
that's
really
the
main
thing
here?
These
set
of
papers?
It's
also
available
to
give
back.
G
They
are
leveraged
by
several
different
communities
and
there's
been
a
lot
of
you
know,
collaboration
from
people
on
them
and
as
cntt.
I
don't
know
if
anyone's
aware
of
that
the
lfn
community
they
they
point
to
some
of
these.
But
it's
at
this
point.
It's
something
where
there's
a
lot
more
people
within
I'd
say
cncf
in
general
that
are
wanting
to
have
more
of
this
well-defined,
and
so
that's
the
effort,
so
we'd
be
happy
to
get
more
eyes
on
that.
B
So, to take... yeah.
C
This
is,
I
guess,
well
quick
point
of
clarification
that
so
I
think
the
discussion
that
we've
had
in
this
sig
a
few
times
has
been
about
the
cloud
native
networking
principles
and
but
the
the
over
the
overarching
initiative
is
well
is
to
define,
is
to
further
refine
cloud
native,
which
I
have
to
say
you
you
guys
are
you
guys,
are
sick
puppies
for
trying
to
take
this
on
because,
like
what
a
well
one,
there's
some
natural
contention
with
trying
to
just
define
all
all
the
components
of
what
you
know,
all
the
characteristics
of
what
makes
something
cloud
native
and
and
expanding
in
that
there's
been
a
similar
initiative
that
was
proposed
by
an
architect
at
microsoft,
and
it
was
to
start
with
a
bunch
of
it
was
on
patterns
and
it
was
to
start
with
service
mesh
patterns.
C
But
but
his
vision
was
to
define
much
of
that
pattern
like
cloud
native
patterns
for
all
the
things
of
which
it
was
hard
to
fathom
that
landing
or
be
or
like
ever
congealing.
But
so
of
the
just
to
clarify
that
I
guess
the
question
is
the
cloud
native
networking
principles.
Those
are
the
deepest
set
of
papers
thus
far.
Is
that
accurate
or
are
there
are
there
some
lengthy
papers
on
what
it
means
to
be
a
micro
service
or
to
be
loosely
coupled
or
to
easily.
G
I
don't
think
there's
been
anything
that
pulls
it
all
together,
that's
as
extensive
as
these
sets
right
here
and
really.
What
we're
saying
is
these
four?
So
this
one
is
a
build
up,
but
it
has.
You
can
see
a
ton
of
references,
these
all
go
into
lots
of
lots
of
different
books
and
people
that
have
been
doing
this.
They
don't
all
say
cloud
native,
but
you
know
this
managing
servers
in
the
cloud
you
know,
but
that
goes
all
over.
G
I
don't
know
of
anything,
that's
as
extensive
as
as
these
sets,
so
it's
kind
of
an
aggregation
of
all
this
yeah.
It's
it's
been
I'd,
say
brutal
to
say
we're
taking
on
trying
to
say:
where
does
what
look
at
all
of
the
layers?
G
But
what
we've
found
specifically
in
which,
on
the
the
next
is
the
cnf
conformance
when
we're
looking
into
telco
and
how
to
try
to
help
bring
some
of
these
things
where
the
philosophies
from
like
devops,
where
ci
cd
is
just
a
norm
for
enterprise
and
everything
else,
and
try
to
bring
a
lot
of
the
philosophies
and
methodologies
that
are
already
commonplace.
You
have
to
go
further
back,
it
just
doesn't
work
unless
you
have
those
concepts
well
defined,
and
I
I
think
that's
why.
G
I'm happy to chat more on that if people want. I would like to at least just mention the CNF conformance program, and you can check out the presentation that happened this week to the TOC; it was primarily about a new working group, but I'm going to actually go over into it; it's probably easier.
G
The way the Kubernetes conformance program breaks down, underneath you have the conformance working group, SIG Architecture, and SIG Testing, and they're all handling different aspects. Within the CNF conformance program, we have the CNF conformance test suite project; that would be equivalent to what SIG Testing is doing.
G
But,
as
I
think
lee
you
might
have
mentioned
said
something
about
this
with
sauna
boy
earlier
we've.
Actually
that's.
This
project
has
created
the
test
suite
to
look
a
little
bit
more
like
sauna
boy,
as
far
as
like
configuration
and
other
stuff,
but
it
then
it
actually
has
tests
within
it
that
are
actually
there
versus
sauna.
Boy
has
a
plug-in
to
run
the
external
test,
which
you
could
have
run
directly
using
the
framework
in
the
kubernetes
cv,
but
so
that's
where
the
the
mechanics
and
the
actual
tests
are
implemented.
G
With regards to cloud-native best practices for CNFs: one of the things that we're pointing out is data-plane CNFs. So I think the stuff that we were talking about today, with VPP and the Envoy stuff, is very important for these. When you look at a CNF, or an application providing network functionality, that sits at a non-data-plane layer...
G
It
may
be
a
lot
easier
to
talk
about
its
behavior
and
best
practices,
because
it's
going
to
look
more
equivalent
to
stuff
that's
already
in
agreement
or
sig
app
delivery
is
already
saying:
here's
some
best
practices,
but
when
you
get
down
to
data
plane,
cns
and
other
ones,
maybe
operators
and
stuff
that
are
tied
in
it
starts
to
get
a
little
bit
different
on.
What
does
that
look
like
on
best
practices?
G
So
this
working
group
is
going
to
be
focused
on
that
as
far
as
the
initial
scope
and
the
process
like
what
is
the
process
just
like
kubernetes,
you
walk
through
a
certain
stage.
You
run
sauna
boy,
you
have
pull
requests,
there's
a
bunch
of
things,
so
it'll
do
all
that
decisions
and
then,
as
I
said,
the
tesla
project
will
be
separate.
G
Not much, I mean; it's mainly that they're trying to get more people engaged on it. And, you know, I went to App Delivery yesterday, and came here today, because there's an overlap in the way these things work, but I think we'll see more by KubeCon, and we'll continue.
C
Okay
and
my
understanding
is
like
so,
we
used
to
host
the
oh,
the
cnn,
the
cncf
networking
working
group,
that's
where
before
sigs
became
a
thing
and
the
networking
working
group
sort
of
rolled
into
sig
network
or
became
sig
network.
C
I
think
the
the
structure
as
it
is
now
with
with
sigs
is
that
they
may
end
up
spawning
any
any
number
of
working
groups
within
the
sigs.
So
so
I
guess
in
part
what
I'm
trying
to
say
is
that
I
think
the
life
cycle
of
a
cig
sort
of
operates
like
in
context.
I'm
sorry,
the
life
cycle
of
a
working
group
operates
in
context
of
a
sig
and
so
yeah.
Getting
a
landing
spot
in
a
sig
makes
makes
a
lot
of
sense
as
kind
of
a
home
base.
C
There's
an
example
of
a
we're
just
talking
about
the
service
mesh
working
group,
but
another
one
within
cncf
sig
network
is
the
ud
udp,
the
universal
data
plane,
api
working
group
or
udpa,
which
is
an
envoy
onward
api
more
or
formed
around
the
envoy
api.
The
any
feedback
from
sig
app
delivery
from
your
presentation.
G
Yesterday
I
mean
they're
they're,
all
interested,
they
have
a
there's,
a
air
gap
working
group.
That's
the
one,
telco
focused
working
group
that
was
in
sick,
app
delivery
because
most
of
it's
non
non-networking,
telco
type
apps
and
that
one
cigar
sorry
air
gap
is,
is
a
more
of
an
edge
type
of
focus.
So
it
doesn't
match
up
to.
G
It
doesn't
cover
most
of
the
stuff
that
we're
talking
about
specifically
on
like
core
the
core
network
type
of
network
functions.
G
Yes,
okay,
a
native
network
function,
okay
versus
we're,
not
saying
containerized
network
function,
and
there
is
a
lot
of
different
thoughts
on
what
network
function,
whether
it's
a
name,
that's
just
a
marketing
term
or,
if
you're,
going
to
take
it
and
break
it
down
to
what
the
intent
of
those
words
are,
which
is
why
part
of
the
scope
is
making
sure
that
it's
communicated
also
within
the
working
group.
G
What
we're
saying
but
right
now
you
could,
I
would
say,
think
of
it
as-
and
this
is
from
some
of
the
even
the
service.
Telco
service
providers
is
a
telco
or
networking.
G
Yeah, so, as far as conformance goes, it's trying to provide something for this. Right now it's for the telco space; I mean, some of the service providers have said telco is a subset of the networking domain, so then it becomes broader as far as that goes. But the idea right now is to help telcos in actually becoming more cloud native, and, right now, it's saying: let's focus on the applications that are deployed on their Kubernetes-based platforms, or distros, whatever you want to say.
C
Thank
you
for
this
taylor.
This
is.
This
is
good.
If
you
know
please
follow
up
with
taylor.
If
you
have
questions
about
this,
I
recognize
we're
five
after
so
so
we'll
we'll
we'll
end
it
here
for
today,
but
thank
you
florin.
Thank
you,
taylor.
C
It
was
a
full
agenda
same
same
time
in
a
couple
of
weeks,
come
thanks
for
having
me
bye.
Thank
you.