From YouTube: Network Service Mesh WG Meeting - 2018-10-05
A
Cool, great. So I think we have enough on so that we can get started. First, welcome everyone to the Network Service Mesh meeting. It's been a while since all of us have been able to hop on, since we've had a lot going on. So before we get started, let's do some agenda bashing. If there's anything that you would like added to the agenda, please go ahead and speak up now.
B
Michael and I would like to make a standing item for the performance testing part. I don't know whether it was there last week. It is not currently, so can we add that as a standing item for every week, please? Sure. Actually, let me type it: it's a VNF/CNF testing and benchmarking item, Michael, so I don't know where you want me to put it in. Yeah.
B
D
B
A
That's in there now, so let's get started with the events. Our next big event is KubeCon Seattle. We have two talks at KubeCon, something I mentioned on the events list as well, and we also have the FD.io Mini Summit, which is co-hosted with KubeCon, and so we expect to see some Network Service Mesh content at the Mini Summit as well.
A
A
B
A
Okay, KubeCon. Going straight into the KubeCon demo, then: our goals. First, we want to have a basic local and remote cross connect by November 7th, which lines up with the VNF/CNF work that's being prepared, and a hardware cross connect by December 7th. Nice-to-haves include streaming topology visualizations and a point-and-click "run it all yourself", which conveniently will be a static website, preferably built with Hugo. But that is step one.
A
E
First of all, I just wanna make sure about the basic local and remote cross connect stuff: that would include local memif-to-memif cross connects and, for remote cross connects, VXLAN minimally. And I want to make sure that's actually lining up with what you guys need for the VNF/CNF comparison stuff, because I think we'd very much like to make sure that lands in a timely manner. So.
F
E
E
There are two sets of things going on here: one is the VNF/CNF demo work which you're assisting with, which is awesome, and then there is building a demo for Network Service Mesh at KubeCon. There is a strong ambition to bring those two together, so the VNF/CNF demo can run Network Service Mesh, and so, in order to try and put some structure around that, we're trying to figure out what would have to be delivered by when in order to make that work out. So, okay.
B
Can we call it like that, so that we avoid confusion? Because right now it doesn't look like we have different ones. So can we have a demo one and a demo two, both titled, and then deliverables for them? And then you say that, ideally, if they merge, that's great, so that we have demo one plus demo two. So.
E
E
You know, the VNF/CNF comparison folks: well, they're certainly very valued members of the Network Service Mesh community as well, but we don't steer what they're doing out of this meeting, right? And so it's less a matter of us managing a unified schedule and more of communicating between communities to try and get schedules that mesh. Does that roughly match your understanding of things?
B
B
E
My point about goals and dates is twofold. One is, I want to make sure that we have mutual understanding, in the hopes that these two demos can come together. The other one is: obviously this is a community, and this is something we figure out as a group, in terms of, you know, who's willing to work on what, what people find interesting, and so forth. So I took a stab here at the things that were interesting to me and the timelines that would be interesting to me.
E
Now, I know we've got a bunch of folks who've recently popped up in the community who are looking for things that they would want to be able to share, at KubeCon or in other contexts, or, you know, who are looking for things to work on, and working on some of these things would be useful. I
E
don't know who yet. Okay, cool. So anyway, feel free to speak up, you know, if you want to reach out offline. If you want to think about it and add something to the agenda for next week, that's all good too. And then there were a couple of things I'd sort of marked as nice-to-haves, and the reason I sort of wanted a list of those is: there are certain things that we can do that would be awesome, but they're kind of orthogonal.
E
You know, in that they can be worked on, sort of, while other things are going on, without disturbing them. And some of those are things like streaming topology and visualization, which would be kind of a cool thing to be able to show as part of the demo, so you can see the links arise and pass away. There have been a lot of conversations.
E
H
E
A
I
Yeah, I mean, yeah, I could briefly talk about my part. So, basically, if you remember, we had an action item of restructuring the API, and at that time it seemed to be a good point to look at the data plane API as well, because, I mean, they're kind of related. And so Ed and I started talking, and basically, to be able to complete the API refactoring, I suggested a couple of points, which I think Ed picked up and built upon, into a more complex structure. So right now, basically, the data plane API:
I
Let's say the simple data plane API, the data plane controller which is currently merged in the NSM repo, can be used as a reference model. It pretty much does everything any data plane controller would need to do from the control plane perspective: I mean, exchanging the liveness messages with the NSM, and all the nice things. So the only thing missing is the final piece, which Ed will hopefully add soon, and then we'll be able to complete the data plane.
E
I think at that point we'll be in good shape. Obviously, people will continue to discover things that can be enhanced about it, but I think that the basic structure will be quite reasonable. I've got a question: do folks have an interest in maybe doing a review of that API here next week, so we can sort of talk through it structurally and get community input?
B
E
E
A network service manager is running on a node, and I need a cross connect between some pod, which owns a kernel interface or memif I have, and some tunnel that I've negotiated with a network service manager at another node. I need a cross connect that is composed of, say, kernel interface to VXLAN, and obviously bidirectional, and so I need a way to communicate that to the data plane. And, of course, you get other niceties, like: exactly what mechanisms does the data plane support itself?
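To make the shape of such a request concrete, here is a minimal sketch. This is purely illustrative: the actual NSM data plane API lives in the project's Go/protobuf sources, and every type and field name below (KernelInterface, VxlanTunnel, CrossConnect, and their members) is an assumption for the sake of the example, not the real API.

```python
from dataclasses import dataclass

# Hypothetical types for illustration only; the real NSM data plane API
# is defined in the project's protobuf/Go sources.

@dataclass
class KernelInterface:
    name: str          # interface name inside the pod, e.g. "nsm0"
    netns_path: str    # path to the pod's network namespace

@dataclass
class VxlanTunnel:
    src_ip: str        # this node's address
    dst_ip: str        # the remote network service manager's node
    vni: int           # VXLAN network identifier agreed during negotiation

@dataclass
class CrossConnect:
    """A bidirectional wiring the manager asks the data plane to realize."""
    local: KernelInterface
    remote: VxlanTunnel

def describe(xcon: CrossConnect) -> str:
    # Render the cross connect in a human-readable form.
    return (f"cross-connect {xcon.local.name} <-> "
            f"vxlan {xcon.remote.src_ip}->{xcon.remote.dst_ip} "
            f"vni {xcon.remote.vni}")

xcon = CrossConnect(
    KernelInterface("nsm0", "/proc/1234/ns/net"),
    VxlanTunnel("10.0.0.1", "10.0.0.2", 42),
)
print(describe(xcon))
```

The point is only that the request pairs a local mechanism (kernel interface or memif) with a negotiated remote mechanism (here VXLAN) into one bidirectional unit handed to the data plane.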
E
E
No, absolutely. That's what's actually been landing in the patches that Sergey and I have been working on; they've been landing in the repo over the last week or two, and then there's sort of a document that got a little bit started and will probably get cleaned up to match what's in the code, talking about the data plane API in the context of the other APIs. Okay.
B
I've got two more questions. So, are you actually using the data models that have been defined for this functionality elsewhere? Specifically, I know about two sources. One is the IETF and its networking groups, which are doing YANG, but it can be translated to JSON or protobuf, you name it, because YANG is the strictest and the other formats are less strict.
B
But it would be nice if they were followed, so that it's easier to do machine-based translators in the future, and also to leverage them to make sure we don't miss any functionality. There's a number of YANG models defined in the IETF, but then there is also another thing called OpenConfig, some open YANG effort, which is also, I think, an open-source, community-based effort to define network-centric data models, and I'm sure it also has cross-connect.
E
Actually, we aren't, but there's a fairly good reason, and so here's the thing. What you said makes a ton of sense if I'm talking to physical devices that do NETCONF-y things. Yes, and this is part of the reason why, in the API document, one of the things that we talk about is the distinction between Network Service Mesh in the abstract and Network Service Mesh in the particular, as it involves Kubernetes, and one of the reasons that we make
E
that distinction is because, in the abstract, Network Service Mesh has the network service manager to network service manager API, and it has the network service registry. That's it. How a network service manager manages whatever thing it needs to manage is not its business in the abstract. So the data plane API that we're talking about here is very specific to how the network service manager on a node talks to the vSwitch on that node, and in that context we aren't actually, in any way, shape or form, particularly assisted by those models.
B
E
Essentially, the network service manager has a set of responsibilities in the abstract, which can be met in any manner that makes sense for the network service manager. In the Kubernetes case, which is what we're focusing on, we have sort of fleshed out what happens for the network service manager in Kubernetes. If I had a network service manager that was managing physical network boxes, then that can be fleshed out in whatever way makes sense for the person writing that network service manager. Okay.
E
E
B
Okay, it's okay! Whatever you can give me. I'm just interested now, because I've done bottom-up: bottom-up, up to abstraction, through levels of YANG and JSON in that ecosystem, and you're building top-down. I just wanted to see how you guys are going to go down; just interested, yeah. And is it in plugins, or, I'm looking at the repo, is it an API, or...? I
E
E
B
G
E
I
E
E
G
There is a preliminary version of the NSM API in the repo right now, under docs, NSM API dot md, and, you know, of course you can browse to it, because all the documentation gets rendered. But that is a little bit incomplete; I think that's what it is, and they're, like, filling in the details.
E
G
B
E
So this is the document that I was intending to review, and effectively it's easiest to talk about things in their natural form, and so a lot of those things you're basically talking through profiles. But, as I mentioned before, that document at this moment is incomplete. So if you get to a point where you basically feel like the document is incomplete or has become less coherent (yes, that's true), we will bring it up to snuff for next week. That's.
B
D
E
E
E
So, awesome. I think that's it for the data plane API from my point of view. Do you feel like we talked about everything?
I
Yeah. Basically, it would be great, you know, if people start reviewing it more actively and provide feedback, because it's extremely important. I mean, definitely it can be changed later, but it would be nice to have a better start. You know, the more people chime in with ideas, suggestions and all that.
E
B
E
G
E
A
So, we don't have too much time, and we still have a lot of material to cover, so let's jump to the next topic: architecture review. We've already spoken about the data plane API. Are there any other architecture review items that we're currently looking at that have not been discussed?
E
E
E
A
Yeah, just for people to be aware: just because something's merged to the repo, if it's a document or spec or so on, that doesn't mean it's going to be the absolute final version. If you find a hole or a bug or something in there, bring it up. You know, we want to iterate over this thing over and over again until we have something very solid. So.
E
Actually, let me retract that: the API, in the end, is going to change a little bit as we fix the data plane, though it definitely has components that are abstract, and that I think will help understanding. Like, part of the reason we did that was to get a sense of what are the places where it's okay to stick in Kubernetes-isms and what are the places where it's not okay to stick in Kubernetes-isms.
A
There are some additional ones that I added. So they use all of the same ones, with the exception of port binding, which is number seven, and I've added in five items. Number one: do not require kernel modification or modules. Number two: explicitly say which payload types you consume and produce. Number three is lists.
A
So, one thing to note about this is: these are not specific to us; they're general. How do you build a CNF that runs on anything that is Kubernetes-like? So if someone decides to bring in Mesos, or someone creates a new platform in the future, it should be easy to port these CNFs from one system to another.
A
So, for those who are interested in that particular area, there's a couple of things that I need to do with it. Number one: it's not really easily consumable, so I'm going to change that. It's using some Ruby-based server; I'm gonna get rid of that Ruby-based server and migrate it over to Hugo, which means that we can then use the same infrastructure we use to build our websites for networkservicemesh.io, and do all reviews, and so on.
A
It should be a lot easier to work with, and that'll also fix the issues around linking, because this thing has a very peculiar way it does links that was compatible with GitHub. So if you take the table of contents and try clicking all of the links, you'll see the breaks, although, if you go into the actual repo, you'll see each one listed there.
A
A
B
B
I'm just looking at the repo. The last one is metrics, and this may grow depending on the metrics that we agree on in the NSM project, correct? And where are those 13 to 17 metrics from? Is this your own thinking? Is it something from somewhere, from some other community, some other effort, of what we're calling the idea of best current practice? That's.
A
A great question. So, right now, these primarily came from a series of things. Some of it is from my experience working on Network Service Mesh and working within the container space; some of it is based upon conversations I've had with CNCF, with people like,
A
like, Dan Kohn and Arpit Joshipura, and the direction that they want to take CNFs as well, and some of it has come from internal conversations I've had within Red Hat. So there's a variety of different places, and I've been thinking a lot about, you know, what are the core, like, minimum things that you should do in order to help someone who's building a CNF actually build something that will be easier to orchestrate and easier to consume and manage as an operator. And so I
A
don't talk about it in this particular document, but I actually see, like, three levels of conformance that a CNF can have. So one of them is: you no longer require any kernel modifications or modules; in other words, you can actually run in a container. So you can say, like, that's
A
a bronze level. A silver level would be, like: your CNF now scales horizontally. And the gold one would be: it scales horizontally and it also can be upgraded and downgraded gracefully without breaking any infrastructure.
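The bronze/silver/gold idea above can be sketched as a simple check. This is only a paraphrase of the discussion, not an official CNF certification scheme, and the function name and criteria flags are illustrative assumptions.

```python
# Illustrative only: paraphrases the bronze/silver/gold levels from the
# discussion; not an official CNF certification scheme.

def cnf_level(runs_in_container: bool,
              scales_horizontally: bool,
              graceful_upgrade_downgrade: bool) -> str:
    """Map the capabilities discussed above to a maturity level."""
    if not runs_in_container:
        # Still needs kernel modifications or modules: not containerizable.
        return "none"
    if not scales_horizontally:
        return "bronze"
    if not graceful_upgrade_downgrade:
        return "silver"
    return "gold"

print(cnf_level(True, False, False))  # a CNF that only containerizes
```

Each level subsumes the previous one: gold implies silver's horizontal scaling plus graceful upgrade/downgrade.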
A
And so part of the idea is to provide guidance so that CNF developers can build CNFs that will interact better with their environments, and the second part is to also give the operators some level of confidence, where, if certain guidelines are met, they know what type of risk they're taking on. Like, by using the bronze, silver and gold levels: if you take a bronze one, you know whatever risk you're taking versus
B
Understood. So I have two suggestions, if I can. Sure. Because it looks like you actually spent quite some time thinking about it, and also explored both your experience, your colleagues' and friends', and also references. What would be really, really useful, and I mean for whether other efforts agree, is to actually back up each of those points (and you already have content there) with informative or normative references. So if this experience comes from somewhere, whether it is the compute world, cloud, VM, networking old or new, or some operational experience or development experience,
B
it should be called out, to motivate, to basically support, this item being a requirement. And twelve-factor, from what I understand, are actually quite strict rules, but some of them are applied to container apps today in a less strict manner. So I don't know whether we want to express the strictness, the MUST, SHOULD and MAY like we do in the IETF, yet. And by IETF I mean the Internet Engineering Task Force, which is the traditional standards organization that has been used to basically build the internet.
B
In case you guys are not familiar with it, just FYI. But there are some other standards bodies, I understand, like ETSI and others, that do not support, you know, strict requirements like MUST. But it may be good to have an indication, at least for the factors (over twelve, from our perspective), to express, you know, the strictest of requirements. So that's comment one. And comment two is: I would love to work with you to keep defining those X factors, and I really like it. Thank you. Cool.
A
I
A
Any help I can get from anyone, and if you know someone who wants to help with this, or could be helpful, definitely bring them over. So I'm already shopping it around with a couple of others; I can't give the names out at this particular point yet, until I get confirmation that they actually want to participate, but yeah, I'm starting to shop this thing around as well, at least the idea.
A
A
Feel free to ping me as well if you want to help. Thank you. Okay, so we have the next items in this scenario. I want to make sure that the last stuff gets in with the KubeCon demo; yeah, there was the five minutes of something you were gonna present, but I also want to make sure that we have enough time for the discussion on Kubernetes policy, and so I can.
B
So I'm gonna be speaking for me and Michal, and partially for Taylor, specifically focusing on the performance part: VNF versus CNF. Following the discussion on the last NSM call, Michal and me connected twice, and we have actually reviewed his current results in more detail. His current results on packet.net are somehow off compared to what we are reporting in the FD.io CSIT
B
It
labs
that
are
maintained
by
Linux
Foundation
in
open
source
and
we
are
Michael-
is
now
working
on
aligning
those
by
reducing
the
configuration
from
four
cores
to
one
core
and
making
sure
that
what
he
is
watching
in
Pocket
that
Nets,
if
he,
what
he
seen
back,
that
that
is
aligned
with
FDI
yo.
The
difference
between
the
labs
is
rocky
dotnet
is
a
caustic
environment,
with
some
switches
connecting
and
things
connecting
the
hosts
that
that
he
is
renting
in
FDI.
B
we have full control of the complete environment, including the wires, and there are no switches. So we consider our environment to be one hundred percent under our control; we consider packet.net to be 80% under our control. So that's one piece. Second piece: we're actually going to give Michal another server; he's currently using one (thanks to Ed for organizing the rental), and the server should be coming online
B
today. We have had some RMA hardware issues; assuming this is resolved, as of Monday the testbed will be allocated to Michal, fully, a hundred percent, under his control, so that he can do his magic. And basically, the idea is to progress on two fronts in parallel, using exactly the same software stack for both data plane and orchestration: that's what FD.io uses on its two-node Skylake-based testbed (I will type that in), and also the packet.net machine. And I'll type
B
the references in a moment, once I stop speaking. And we're going to talk again, I think on Wednesday next week, and we have a meeting with Taylor and the team on Tuesday to review the demo scenario and so on; I will be sharing with the community here on the following call. That's it, thank you, unless there are questions. We have three minutes. So the switching makes a big difference? You discovered that last year?
B
Well, you and me are in a very small group that believed that an active device between the two DUTs can actually ("can" was the word) introduce some impairment or distortion into the measurements, and that's exactly what we're going to capture. However, saying that, packet.net is an amazing platform, because it allows people to reproduce it, by just booking the thing for a minute or, you know, a quarter of an hour, and running the tests.
B
So the idea is to progress on both, using the FD.io CSIT testbed as a reference, and try to come as close as possible on packet.net and explain the discrepancies. So that's the goal, and we are quite confident that we'll be able to get there. The main challenge we have is time, because we don't have that much time. So hopefully we will get a demo working; most likely it will be something interesting, but I
B
F
B
H
B
H
B
B
J
E
H
J
We started by rebuilding the ONAP use case, and we've stripped it out. We had a fork that's under the CNCF work; there's a repo called onap-demo, it's a fork, and we did a lot of work on trying to make it repeatable by others, and then decided to start over, and that's what the CNCF cnfs repo is, with all the comparisons that Michal and everyone there is working on. We're not going to have ONAP for the Seattle demo.
J
We will contribute the network functions, the CNFs as well as VNF updates, back upstream. I don't know if we'll ever actually use their demo; it will probably help them, but what we're going to do is recreate some type of chained network function use case. It may be based on the vCPE use case from ONAP. We have most of those components; we've actually rebuilt all of them as containers, except the vG-MUX.
H
E
H
E
H
E
A
H
J
Romkey, if you'll follow up with us after, I'm happy to talk about some of the ones that we've done. We actually started with the vDNS, and we went through that, and we started building out the different workflows, and we decided to go with what was just being said: make sure that we can chain the different network functions and focus on those as building blocks; then we can create the different workflows.
J
So right now we have several different comparisons, as we've built up. We're also doing the baseline performance test, which Michal was talking about earlier, to make sure that at the very lowest level, the simplest case, we can validate the hardware. We are going to continue to add workflows, and we did look at something like the vFW and stuff like that, which is more user-focused; and then we also want workflows, or test cases, that are very specific on the network performance.
H
A
So one of the things about the policies in that scenario is that they describe what pods can be connected to, in terms of, like: you have a packet that's coming in, or that's going out; there may be an egress policy that says which namespaces, which pod labels, this current pod can connect to, or an incoming packet comes in.
A
A
However, where we can enforce policy, and we haven't thought much about this, is on the initial connections in the first place. Like, we may have pods (and we haven't thought much about this particular path, but we could think about it) where we need to ensure that this particular endpoint should even be reachable by something that's requesting it. And, like, how do we,
A
how do we handle admission control in that particular scenario? One answer could be from the CNF side, where the CNF itself can handle some admission control; but, simultaneously, even reaching the endpoint in the first place could be something that Network Service Mesh can help facilitate. And so, in that sense, the standard Kubernetes network policy doesn't really make sense for us, but there is a network policy story that we likely need to address, and I think that this is a
H
So, one point here: if you really look at Kubernetes network policy, it's sort of really a security policy, right? So, admission control, you know, whether that packet should essentially be processed or not, simple admission control security: I am with you on that. It is different from service mesh. But I'm also thinking of it slightly differently: what if, let's say, the network policy were to have a next hop?
H
A
And that's something that we're looking to address, I believe, through wirings, and correct me if I'm misinterpreting this, but yeah: with the network wirings that we're setting up, in terms of how they get chained and what the next hop should be, there is some control; that can be handled in that scenario. But perhaps what you're thinking of is a more advanced use case, where you might have an endpoint that consumes some form of payload, and
A
then it can select what the next one is. Maybe some of the traffic bypasses the firewall and another part has to go through the firewall, and so you make a distinction, based upon some information (which could be a header or some other mechanism), to determine which one of these two paths is taken. So, is that closer to what you're thinking?
H
Okay, so, right now in Network Service Mesh we do have certain ways, I mean, of how we build the chain, right? Correct. So basically we say: hey, here is the one service, and then how do I attach the next service, for example, firewall to DPI, right? So, how you build a chain. So I'm just wondering whether this construct can be leveraged for it. I mean, basically the network policy itself, but sort of saying: hey, here are the objects, and here is my next hop, so basically my next hop in the service.
E
who are to be isolated, and it is providing conditions under which things are permitted to reach them. Because the standard contract in Kubernetes is that every pod can reach every other pod at layer 3, unless there's a network policy that tells you to isolate the pod. And so that's basically what it is: network policies in Kubernetes are policies about isolation of pods.
E
That said, they do a very clever thing, and the clever thing they do is that they select which pods are isolated using selectors on the labels on the pod, which is very, very clever; and then they use a similar selector mechanism to tell you who is allowed to reach isolated pods, which I think is also very clever. As you would expect, those are very smart things. When you bump up to something like Istio, to a classic service mesh,
E
the current thinking has been selectors on labels, around the advertisement of that network service and the connection requests for that network service. But, you know, does that start to make sense to you, in terms of what the thinking is and how that meshes with you?
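The label-selector idea discussed above can be sketched as a tiny matching function. This is a simplification for illustration: it covers only equality-based selection, whereas real Kubernetes selectors also support set-based `matchExpressions`, and real NetworkPolicy semantics combine multiple policies and namespace selectors.

```python
def matches(selector: dict, labels: dict) -> bool:
    """True if every key/value in the selector appears in the labels
    (the equality-based subset of Kubernetes label selection)."""
    return all(labels.get(k) == v for k, v in selector.items())

def allowed(policy: dict, target_labels: dict, peer_labels: dict) -> bool:
    """A policy isolates pods matched by pod_selector and permits only
    peers matched by allow_selector to reach them."""
    if not matches(policy["pod_selector"], target_labels):
        return True  # pod not isolated by this policy: default-allow
    return matches(policy["allow_selector"], peer_labels)

policy = {"pod_selector": {"app": "db"},
          "allow_selector": {"role": "backend"}}
print(allowed(policy, {"app": "db"}, {"role": "backend"}))
```

The same selector machinery, applied to the advertisement of a network service and to connection requests for it, is what the current NSM thinking borrows from this design.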
E
Conversations around this stuff happen all the time; so, for example, we've got someone from Orange who pops into the channel most mornings, and there's a bunch of conversations that have happened about trying to steer network service wirings closer to virtual hosts (they were originally thought about in terms of route rules), and so a lot of these things really happen in the IRC channel, in these conversations. So, if you want to pop in there and start a conversation about this, I think that probably would also be okay.
A
Yeah, and just so you know, those conversations eventually get bubbled up here, even if the originator can't make it, so we do want to drive architecture and so on through these meetings. So don't feel like you're cutting the rest of the community off if you decide to have a conversation there; it'll come back here. And with that, I need to cut off the meeting, 'cause we're a few minutes over. So thank you, thank you, everyone, for attending, and we will see you next week.