From YouTube: Network Service Mesh WG - 2018-11-20
C
I would like to ask: is it possible that, during KubeCon, we hold some kind of meeting or meetup (I don't know the proper word for this), where whoever visits and is somehow participating in the project can sit together and talk a little bit through the project? And how about the birds-of-a-feather sessions, maybe?
A
Absolutely. That's what we've historically done in places where Network Service Mesh has had a presence; we just haven't gotten the wiki page (sorry, the events page on our website) updated for this. There are typically two kinds of get-togethers that I would describe there. One is where we actually get together and try to hammer some things out, sort of a working session. I think that's super important, and we'll see if we can find a place to do that.
A
Typically, KubeCon has some sort of semi-public spaces for this kind of stuff, where you don't get privacy, but you can get a bunch of people together around a table and hash things out, and we'll probably take advantage of that. The other thing that we've traditionally done in the past is an NSM happy hour.
A
So we find a nearby bar and an evening, and we basically say: hey, if you're into Network Service Mesh, come by here for an NSM happy hour. We tend to get pretty good attendance at those as well, with folks turning up and chatting a bit about Network Service Mesh. So I expect we'll probably do both. But I think what you're saying is: could you please get your events page up soon, so that we can all see where we're going? Yeah.
A
And I do apologize. I know most people actually do plan on a schedule; as a matter of personal proclivity I don't, and this makes it less than obvious to me that I need to get these things out there. So this is super cool. Let's go ahead and take an action item for getting the site updated. Actually, a question: Nicolai, do you want to sort of cut your teeth a little bit on updating the Network Service Mesh site?
A
Summits: so FD.io has a booth at KubeCon, and one of the things that they will be highlighting at that booth is a demo of Network Service Mesh. Now, I expect that demo will be a little bit more rah-rah, look at the good things that FD.io does in the Network Service Mesh context, because it is their booth, but there should be demos there. Okay.
D
A
No, I think a lot of things like this, that cross multiple boundaries, become a question of emphasis. So, for example, in the FD.io booth, obviously you would want to tell people why they care about Network Service Mesh, and then why FD.io is awesome in that context; in different environments you might be much more focused on Network Service Mesh, and not focus so much on FD.io. So I think it's more a matter of emphasis, because we're doing a lot of good stuff together. Okay.
D
And back to the blog post, I guess on item 4.11, a couple of things real quick. I've got a couple of blog content streams that we could pursue. I'd sort of done one initially with CNFs, Ligato, and VPP; Frederick, I know there's the twelve-factor-apps one. And by the way, I'm way more fluent in Hugo right now, so I don't know if you saw the format of that, but I would be happy to pursue another template to make it more compelling.
D
But
that
would
also
be
something
I
think
we
want
to
at
least
point
towards
and
maybe
even
summarize,
in
sort
of
a
separate
blog.
So
there's
a
number
of
things
going
on,
at
least
in
this
content,
and
so
I'll
reach
out
to
the
team
separately
to
just
figure
out
the
best
way
to
assemble
some
of
this
material
quickly
and
ready
for
cube
con.
B
So, for the people on the call, I also have some feedback for you later on that I think would be useful, because I would like to eventually move back to using the cross-cloud stuff. So I'll catch up with you later on, so I can tell you some of the findings that I had and what drove this particular path.
B
So anyway, if you commit anything, it's simultaneously also driving the Packet make machinery. So if you go into the git repository, check it out, and type make packet-start, after you've created a variable like this it will take an answers file (I'll put the template up for that pretty soon), and you just fill out that token and so on; then it'll also spin up a system.
B
That's ready for your tests to run. So I'll write some documentation on how to run the tests, and it's the same machinery that also drives the local kube and the local Vagrant setups, and ideally it will also be the same machinery we use to drive other clouds as well. From the command line you could do something like make gce-start, or eventually you'll be able to do make aws-start, and so on.
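The "same machinery for every environment" idea above can be sketched as a small dispatcher. This is purely illustrative, not the repository's actual tooling; the environment names come from the discussion, and the function names are hypothetical:

```python
# Illustrative sketch only: one entry point that dispatches to a
# per-environment provisioner, mirroring the "make packet-start",
# "make gce-start", "make aws-start" pattern from the discussion.

PROVISIONERS = {}

def provisioner(name):
    """Register a start function for an environment."""
    def register(fn):
        PROVISIONERS[name] = fn
        return fn
    return register

@provisioner("packet")
def start_packet():
    return "spinning up a Packet bare-metal cluster"

@provisioner("vagrant")
def start_vagrant():
    return "spinning up a local Vagrant cluster"

def start(env):
    """Dispatch to the provisioner for the requested environment."""
    if env not in PROVISIONERS:
        raise ValueError(f"no provisioner for {env!r}")
    return PROVISIONERS[env]()

print(start("packet"))
```

Adding a new cloud then means registering one more provisioner, without changing the entry point.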
B
So let's start with the things that are done. We have, starting from the beginning, made some fixes in terms of some of the code; a big entry has been the VPP data plane with the memif mechanism, and your name is on that. Did you want to talk about that a little bit?
F
Yes. The memif mechanism creates a socket file for the client and for the endpoint, and if we have a direct connection we don't use VPP; I mean, if both sides have memif as their preference. And if they choose, for example, kernel on one side and the other side is memif, then we will create a memif-to-VPP connection, and now it all works fine. So I'm working on the VPP-based endpoints; the memif-based clients and endpoints are almost finished, I think, and I hope to fix some remaining issues from here.
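The mechanism-selection logic being described can be roughly sketched as follows. This is a hypothetical illustration (`select_mechanisms` is not NSM's API, and NSM itself is written in Go): if both sides share a preferred mechanism they can be wired directly, otherwise the VPP data plane translates between them.

```python
# Illustrative sketch of choosing a connection mechanism from the
# preferences of a client and an endpoint. If both prefer memif they
# can be wired directly; if one side wants a kernel interface and the
# other memif, VPP cross-connects the two mechanisms.

def select_mechanisms(client_prefs, endpoint_prefs):
    """Return (client_mech, endpoint_mech, needs_vpp)."""
    common = [m for m in client_prefs if m in endpoint_prefs]
    if common:
        m = common[0]
        return m, m, False   # direct wiring, no VPP in the middle
    # heterogeneous case: keep each side's first choice and let the
    # VPP data plane translate between the two mechanisms
    return client_prefs[0], endpoint_prefs[0], True

# direct memif-to-memif: no VPP needed
print(select_mechanisms(["memif"], ["memif"]))
# kernel on one side, memif on the other: VPP bridges the two
print(select_mechanisms(["kernel"], ["memif"]))
```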
B
We've added a script for Vagrant to ease the creation of a local Kubernetes that is suitable for Network Service Mesh. This is also part of the machinery that I spoke about before. And finally, we're now exposing a monitor cross-connect service northbound from NSMD. Is that appropriate for the monitoring stuff that we were talking about last week?
A
That's exactly, specifically, for the monitoring stuff. Effectively, what it means now is that, in principle (and you never know until the integration testing is finished), we should now be exposing cross-connect events northbound, so they can be consumed by the skydive integration. But probably even more interesting for the guys doing the skydive integration is what's in that commit.
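What a northbound consumer of that stream (for example, the skydive probe) might do can be sketched like this. The event kinds and shapes here are assumptions for illustration only, not the actual protobuf API:

```python
# Hypothetical sketch of consuming a monitor cross-connect stream:
# an initial state transfer, then updates and deletes, folded into a
# local view of the currently active cross-connects.

def consume(events):
    """Fold a stream of (kind, id, data) events into current state."""
    state = {}
    for kind, xcon_id, data in events:
        if kind in ("INITIAL", "UPDATE"):
            state[xcon_id] = data       # add or refresh a cross-connect
        elif kind == "DELETE":
            state.pop(xcon_id, None)    # cross-connect torn down
    return state

stream = [
    ("INITIAL", "xc-1", "nsc<->nse"),
    ("UPDATE",  "xc-1", "nsc<->nse'"),
    ("DELETE",  "xc-1", None),
]
print(consume(stream))   # {} -- the cross-connect came and went
```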
A
So I've been pointing at this: there's an issue for it, where I've captured much of the information about the skydive integration, and I've also got, I think linked in the issue (this is me, work in progress), a cute little slide deck that walks through some of the answers to the questions they were asking in IRC, which we can talk about after we talk through the board this time.
B
Great. And we also have two patches that are currently in review. One of them is the VXLAN mechanism in the data plane, which means that we're on the way to having a remote cross-connect; and there's also a VPP-agent-based ICMP responder, and I'm not quite sure what that means.
F
All
right,
I
can
explain
what
does
it
mean
if
it
could
be
based
at
a
point,
as
I
find
out
that's
important
that
expose
the
PP
agent,
for
example,
you
can
have
some
kind
of
stuff
in
the
TP.
It's
yes,
it's
over
with
P
I
mean
one
point
has
always
repeat
the
agent
and
we
can
connect
to
the
patient
from
client,
for
example,
to
get
the
patients
from
endpoint
and
create
cross
connect
in
defeat
them.
If
I
have
correct
understanding.
A
It's a different VPP agent. Basically, effectively, the VPP agent is literally just a VPP instance together with something that exposes its northbound API so a controller can talk to it. In the data plane we use a VPP agent; the ICMP responder uses a completely distinct instance of the VPP agent, running in its own pod, in order to provide the ICMP response.
H
I don't know if it's present yet, but yeah, it's started: I've been starting to develop a client that will be a probe hosted in skydive, based on the recent patches. I don't want to overstate it, but I do think it's really useful, and it should not be so hard to have a prototype for skydive based on those recent developments.
D
Hey, while you're bringing them up: yeah, I would agree, we absolutely need some sort of architecture picture. This may be part of it, and thank you so much for helping craft these up yesterday and answering some of David's and my questions. We need something, because there are a lot of terms flowing around here, and I just don't personally have context as to where these pieces are.
A
Cool, so real quickly: this was a slide deck that I put together because David pinged me on IRC yesterday asking a bunch of questions, and I realized that pictures were going to be really super helpful. And one of the things I realized is that there needed to be clarity about the fact that there are different ways to look at the topology. So if you look at it from the point of view of a network service client and a network service endpoint, the world really looks like this.
A
Every network service client has a local connection (we mark local connections in our legend); some kind of thing happens, it doesn't know what and it doesn't care, and there's a local connection from the network service endpoint that it's going to. So this is just one logical line, a point-to-point cross-connect between them, as far as they're concerned. And that's the view that the network service client and network service endpoint have of the world.
A
It has a connection, a local connection, into the data plane. The data plane has a cross-connect that cross-connects it to a different local connection that goes to the network service endpoint, and that's the way the NSMD sees the world, topologically, for things on the same node. For things on different nodes, we need the multi-node case. What it looks like to a particular NSMD is: I have a network service client, it has a local connection to a data plane; I should put the cross-connect in here as well.
A
Apologies, I should fix that in the picture, but it essentially hits a similar cross-connect to a remote connection, and the remote connection could be something like VXLAN. And those are the points of view of those components: the network service client, the network service endpoint, and the way the NSMDs see the topologies they understand.
A
Effectively, with the cross-connect object, when the NSMD talks to the data plane it simply says: please create these cross-connects for me; take a connection over here and a connection over there, please wire them into yourself and cross-connect them. And then the cross-connect objects that are coming northbound: they come northbound from the data plane once that's finished, because now we've got them, and then they come up from the NSMD towards your topology monitoring. I think I have a picture for that subsequently as well.
C
A
You can totally have all the bridges you want, but that's a network service, and this actually turns out to be phenomenally helpful, because bridges are complicated. By making the bridge a network service, Network Service Mesh itself doesn't have to deal with that complication; it just has to get things to that network service. And the second thing, which is actually kind of important from a sociological point of view, is that there are a bunch of people competing in the bridge space.
A
We don't want to compete with them; they're doing all kinds of wonderful things. So yes, if you have a bridge, we can get you to your bridge, but we ourselves do not provide a bridge. Okay, thanks, cool. And then the last topology view is what things look like topologically at the level of the cluster, and this is probably what you want to publish to skydive, in some sense.
A
So from the cluster point of view: essentially, you've got a network service client, it has a local connection to a data plane, where it hits a cross-connect, and it gets cross-connected to a remote connection which goes over whatever your underlay is (the remote connection might be VXLAN). Then it comes into the data plane on node 2, and it gets cross-connected across a local connection to the network service endpoint on node 2.
A
So this is one of the things that I think makes it simpler: different parties in this have a different view of the topology. The network service clients and the network service endpoints have a super simple logical view; it's point-to-point. At the level of an NSMD, the NSMD only really understands the stuff happening on its nodes.
A
You know the name of the network service endpoint that you're connecting to; you don't necessarily know where it is. You know the name of the network service endpoint, and you know which network service manager is managing it. You don't necessarily know things about it, like: is it using memif? Because if you're dealing with the remote case, that's not your problem.
A
You know the network service endpoint's identifier, correct, whether it's local or remote. And then the other question that came up, in talking to David: he started asking questions about the flow of all of this, which turned into a whole series of slides that I think are probably helpful. I've used smiley faces for the packet, because it makes me happy. So: a packet originates in the network service client.
A
It goes over the local connection of that network service client, which is the only part of this chain that the network service client actually sees. It gets to the data plane; the data plane cross-connects it to whatever the remote connection is, which typically involves putting an encap around it, so I put an encap around our smiley face.
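The encap/decap step being walked through can be illustrated with a toy sketch. The mechanism name is just an example, and real data planes do this in VPP, not Python:

```python
# Toy illustration of the packet flow: the data plane on node 1 wraps
# ("encaps") the payload for the underlay, and the data plane on node 2
# unwraps it before local delivery. "vxlan" is an example mechanism.

def encap(payload, outer):
    """Wrap a payload in an outer header for the underlay."""
    return {"outer": outer, "inner": payload}

def decap(frame, outer):
    """Strip the outer header, checking it is the expected mechanism."""
    if frame["outer"] != outer:
        raise ValueError("unexpected encapsulation")
    return frame["inner"]

packet = "smiley-face"
on_wire = encap(packet, "vxlan")        # node 1 data plane
delivered = decap(on_wire, "vxlan")     # node 2 data plane
print(delivered)                         # smiley-face
```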
A
You
know
that
thing
goes
over
or
whatever
your
underlay
is,
which
then
arrives
at
the
data
plate
on
the
other
end,
where
it
gets
D
capped,
they
gets
cross
connected
to
the
local
connection
of
the
network
service
endpoint
and
gets
delivered
to
the
network
service
endpoint,
and
then
the
last
slide
that
one
I
think
David
has
questions
about.
I
went
full
idea,
graphic
programming,
because
you
know
he
was
asking
sort
of
how
this
related
to
things,
and
so
you've
got
the
network
service.
A
The
daemon,
the
exposes
the
moniker
cross,
connect
API,
which
will
stream
back
a
bunch
of
cross,
connects
when
this
guy
to
I
probe
asset
and
those
cross
connects
are
just
you
know.
A
cross
connect
which
I've
represented
with
a
little
diamond,
is
just
a
payload
type,
which
is
Ethernet
and
information
about
a
source
connection
and
information
about
a
destination
connection,
and
it
gets
one
cop
one
view
of
this
from
node
one.
It
gets
another
view
from
node
2.
The
thing
that's
going
to
be
true
is
the
just
the
connection
here
for
the
destination.
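The cross-connect object just described (a payload type plus a source and a destination connection) can be modeled roughly like this. Field and type names here are hypothetical stand-ins for the real protobuf definitions:

```python
# Hypothetical model of the cross-connect object: a payload type plus a
# source connection and a destination connection, each either local or
# remote. Two nodes report different cross-connects that share the same
# remote connection between them.

from dataclasses import dataclass

@dataclass(frozen=True)
class Connection:
    conn_id: str
    mechanism: str       # e.g. "kernel", "memif", "vxlan"
    remote: bool = False

@dataclass(frozen=True)
class CrossConnect:
    payload: str         # e.g. "Ethernet"
    source: Connection
    destination: Connection

# Node 1's view: local source, remote destination over VXLAN
xcon_node1 = CrossConnect("Ethernet",
                          Connection("1", "kernel"),
                          Connection("2", "vxlan", remote=True))
# Node 2's view: the same remote connection appears as its source
xcon_node2 = CrossConnect("Ethernet",
                          Connection("2", "vxlan", remote=True),
                          Connection("3", "memif"))

# The shared remote connection matches on both sides,
# even though the two cross-connects themselves differ.
assert xcon_node1.destination == xcon_node2.source
assert xcon_node1 != xcon_node2
```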
A
Other things, okay. But the truth of the matter is, from your point of view, I think it's just information you would report, and it should match between both sides. Now, one thing I do want to be cautious about with this picture: it's not the whole cross-connect that is the same between the two views.
A
This cross-connect is different from that cross-connect; it's this remote connection between them that is the same thing. So from the point of view of node 1, it's the destination, and this cross-connect goes out as a remote connection; from the point of view of node 2, it's the source, and it comes in from a remote connection. And those remote connections are the same on both sides; they'll have exactly the same values, okay.
A
The other thing to be cautious of, which is a good issue, is which connection ID you'd use. Connection IDs are always created where you are going. So if I am NSMD 1 and I request a remote connection from the NSMD on node 2, it's the NSMD on node 2 that will be creating the connection ID for the connection.
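The connection-ID rule (the manager you are going to mints the ID) in a tiny sketch; `NSMgr` and its method are hypothetical names, and a simple counter stands in for whatever allocation the real code does:

```python
# Sketch of the connection-ID rule: the side that *receives* a
# connection request allocates the ID, so IDs are scoped to the
# allocating manager, not to the requester.

import itertools

class NSMgr:
    def __init__(self, name):
        self.name = name
        self._ids = itertools.count(1)

    def handle_request(self):
        # the receiving manager mints the connection ID
        return f"{self.name}-{next(self._ids)}"

nsmgr1, nsmgr2 = NSMgr("nsmgr-1"), NSMgr("nsmgr-2")
# nsmgr1 requests a remote connection from nsmgr2:
# nsmgr2, the destination, creates the ID
conn_id = nsmgr2.handle_request()
print(conn_id)   # nsmgr-2-1
```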
D
Ed, super, extremely helpful. Now I'd also like to see where this monitor cross-connect server and client would be located, and I guess David and the skydive colleagues can figure out and determine where you think the best place for the probe would be, and we would want to include that in an architecture picture again, just so everybody has context about these moving pieces.
A
I'd actually propose the following, because I think we're running out of time on this call: for the folks involved in this, I'm going to do a chat on IRC after this, and we can hammer out these things pretty quickly, I think. And I know, among other things, that David has done some really nice sequence diagrams, which I think elucidate some of what you're discussing as well.
B
One of the things that we're starting to focus on as well, now that we have these various components up and running, is eliminating race conditions around startup order. So, for example, should the data plane start up before the NSM, or should the NSM start before the data plane? We believe that it shouldn't matter which one is up first, that they should just do the right thing, and so there's work being done in order to make that happen.
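The "either side may start first" behavior is typically achieved by polling with retries, so whichever component comes up first simply waits for the other. This is a minimal sketch of that idea, with hypothetical names, not the project's actual startup code:

```python
# Minimal sketch of start-order independence: poll for the peer (e.g.
# the data plane's socket) with a bounded number of retries, so it does
# not matter whether NSMD or the data plane starts first.

import time

def wait_for_peer(is_ready, attempts=5, delay=0.01):
    """Poll until the peer reports ready, or give up after `attempts`."""
    for _ in range(attempts):
        if is_ready():
            return True
        time.sleep(delay)
    return False

# simulate a peer that becomes ready on the third poll
state = {"polls": 0}
def peer_ready():
    state["polls"] += 1
    return state["polls"] >= 3

print(wait_for_peer(peer_ready))   # True
```

In practice one would add exponential backoff and a crash-and-restart policy (e.g. the pod's restart policy) instead of giving up silently.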
G
On OpenStack, the biggest thing that we have right now is the VPP Neutron driver. There's not a lot checked off here, but there's a lot happening in this codebase; this one's really the biggest blocker on the OpenStack side. So if there's anyone on the call, or anyone you know, familiar with this Neutron driver that uses VPP as the vSwitch for OpenStack, then please speak up or shoot me a message. Robert is working on it, trying to debug stuff.
G
It works in a devstack single node. It's not working on the Chef-deployed OpenStack, like the official Chef-deployed OpenStack, and we haven't tested on Kolla, which would be another container-based install; we're trying to work out the bugs on that. Most of the rest of the OpenStack work should be done, but we haven't had a lot of chance to test because of this. And let's see: we have a lot happening with regard to the Kubernetes side, so if I come back in here, I posted some of it.
G
So, on the Kubernetes side, we have clusters deploying with layer 2 working, specifically on Packet. So on Packet you can deploy a Kubernetes cluster and have the layer 2 switch set up, as well as the worker node. We've done the implementation in a way that we could take out the host worker networking stuff and plug in NSM. One area that we need to figure out, and talk with y'all about at some point, would be:
G
How are we going to work with the Packet switch side? Most of what we were thinking for NSM is on the worker node; they're still setting up the Packet switch, which is at least what we want from bare metal: from zero to a working cluster. So that'll be a discussion; we need to figure out how that is going to be handled. It's currently not handled in the Terraform Packet plugin, so we're having to do it externally, and we're going to contribute that back.
G
So anyways, we need to figure that out. The templating for the VPP vSwitch, to support the different topologies, is in progress; I'm trying to figure out how to make it the most usable between Kubernetes and OpenStack, since we're comparing them. That's the Ansible side: we have scripts that can reconfigure all of it for Docker. Most of the previous work was Docker and KVM (that's all of the CSIT Packet testing), so we're trying to make it work for Kubernetes, and likewise the Helm chart.
G
I don't know if anyone on this call has seen it, but here's a quick overview of what we're looking at. So this is like the snake case, where in OpenStack you're going to go through the VM and back out to the vSwitch (this is VPP), and then on Kubernetes we'll do the same, and then we'll have the optimized version where we can go from container to container, or pod to pod, for the best-case scenario.
G
So that's what we're targeting as far as those topology cases. On the dev side, what's affecting everything on Packet: one of the big problems that we've run into is limitations on how to configure VLANs for the switches. Depending on how you do assignments and such, it can change the way the switch is configured, including: does it send the VLAN tags back to the server or not, and the access port mode; and we have a maximum of 12 VLANs.
G
If we try to do sharing, we've run into issues where we can't assign the same VLAN to multiple ports, at least via the web UI; it seems to be available in the API. So there are a lot of weird limitations, and low visibility there, so we're trying to document all of those and find the workarounds to make it work on Packet.
G
I think that's it right now. There's a lot of new documentation that's been pushed; Ed Kern did a first round of the end-user documentation. What we're expecting: if someone who doesn't know the project wanted to come in and try to reproduce the results on Packet, here's what you would do, as a walkthrough. So we have this first round, and we're going to keep filling it out. We've also pushed up a lot of documentation on using the individual components, like how to use the traffic generator with NFVbench and such.
G
So this could be useful for other folks that want to use this. This would work if you have your own Kubernetes cluster and want to use NFVbench to run traffic against it: you could spin this up and then go create whatever scenarios you want in NFVbench. And we've also put up some material, like the issues we've seen with the BIOS and other things on the Packet machines, for the quad Intel NICs.
G
What do we need to do, what do we set up with GRUB, all of these things; so we're trying to make more of that (the tips and other things that we see on Packet, and all the testing) available for others. That's it for me, and nothing right now for NSM, like requesting help immediately, on the Kubernetes side. The one thing, again, would be anyone that knows VPP and OpenStack.