From YouTube: CNCF CNF Testbed Meeting - 2019-03-04
A
Okay, let me go ahead and get started. Could I request folks to please add your name to the agenda, and also to add in any topics that you'd like to discuss? The background here is that I've been working with a team from Vulk, and folks from Cisco and Intel and Red Hat and elsewhere, for, believe it or not, ten months now, but intensively for about six, to build out this CNF Testbed. The idea for it is to have a straightforward way to talk about increasing use of Kubernetes by telcos, and it's conjoined with this concept of cloud native network functions, or CNFs, and how you could transition VNFs to CNFs, which of course sounds nice, but in practice one ends up having a lot of challenges to move through.

A
So, since I guess we have a few new folks on, I think maybe I'll just take five or eight minutes and walk through the beginning of this CNF Testbed presentation; I think that would be some useful context for folks to have about how this testbed comes together. Would you mind sharing it? Then I can just walk through some of our thinking, and I think that would lead naturally into the next-steps conversation.
A
So on slide 2, I just remind folks that the Linux Foundation is much more than just the CNCF. CNCF has been collaborating very closely with LF Networking, and, as we'll talk about, we've made use of a bunch of VNFs out of their ONAP project, and have the hope, or expectation, of using more over time.
A
Slide 3 is the overview of CNCF: we now have five graduated projects and 15 incubating ones, and there are seventeen Platinum members backing us. One way of seeing this CNF Testbed project is just to say that CNCF has been extraordinarily successful at bringing together the whole public cloud community and also the whole enterprise software community, and the question then is what it would take for us to expand to telcos and their vendors.
A
5,
where
this
is
focus
specifically
around
the
past,
was
the
sort
of
first
release
of
ona
Amsterdam
and
that
version
of
it
ran
on
OpenStack
yum,
where
a
juror
Rackspace
and
you
supported,
being
apps
and
then
the
own
app
Casablanca.
That's
available
today
supports
kubernetes,
and
so
you
can
run
these
cloud
native
network
functions.
It
also
supports
to
be
an
absolute
stack
and,
of
course
the
kubernetes
part
can
run
either
on
top
of
bare
metal
or
in
cloud.
A
But the future scenario that we have here is the one we want to focus on with this testbed, which is allowing Kubernetes to be the universal substrate beneath all the application functionality, one that abstracts away the details of the bare metal and the public cloud, and specifically supports that hybrid cloud functionality. And then, on top of that, you can run cloud native network functions, and you can run all your operating support system and business support system functions on the same clusters.
A
If you do have some VNFs that are legacy, or that you haven't been able to port over, there are these interesting technologies of KubeVirt and Virtlet that allow you to run those as VMs and manage them via Kubernetes, and then, in the center, the ONAP orchestrator is also running on Kubernetes.
A
Let me just stop there for a second, since we have a number of new folks. Any questions so far on any of that, before I jump into exactly what we're building here with the testbed? This is sort of the context for why we're building it, or a vision of what we're trying to do.
A
Okay, silence is consent. So anyway, slide 6 is the overview of what we've done, which is that we took several VNFs, virtual network functions, out of the ONAP project. Specifically, we're using the broadband network gateway (BNG) function, which is part of the virtual customer premises equipment (vCPE) use case, and we took that identical networking code and packaged it as a container.
A
So the idea, then, is to compare the performance of VNFs and CNFs, and to be able to talk about best practices, changes necessary, and whether there are any patches that we need to upstream to the projects. Now, I will say that there's no expectation right now of this CNF Testbed being a standalone project. It's very much meant as a market development project, to show the value of a cloud native approach.
A
We call the second one the snake case, where you do a user-space-to-user-space data plane through a vSwitch to have the CNFs talk to each other, so they're going to get a performance improvement. And then the third case, in yellow, is what we call the pipeline case for CNFs. In both scenarios we're doing a chain of pairs of CNFs or VNFs, but in the third case the CNFs can talk directly to each other, using memif connections, and getting a further boost. And so, on to slide 8.
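To make the snake and pipeline topologies concrete, here is a toy Python model of per-packet copy cost in a chain of CNFs. The two-copies-per-vSwitch-hop and one-copy-per-memif-hop costs are illustrative assumptions for the sketch, not measurements from the testbed.

```python
# Toy model (not testbed code) of why the pipeline case wins: in the
# snake case every hop between adjacent CNFs bounces through the vSwitch
# (assumed here to cost two memory copies: CNF -> vSwitch -> CNF), while
# in the pipeline case adjacent CNFs share a direct memif link (assumed
# to cost one copy). The costs are illustrative assumptions.

def copies_per_packet(num_cnfs: int, topology: str) -> int:
    """Memory copies for one packet crossing a chain of `num_cnfs` CNFs."""
    hops = num_cnfs - 1          # links between adjacent CNFs
    if topology == "snake":      # every hop traverses the vSwitch
        return hops * 2
    if topology == "pipeline":   # direct memif cross-connects
        return hops
    raise ValueError(topology)

if __name__ == "__main__":
    for n in (2, 4, 8):
        print(n, copies_per_packet(n, "snake"), copies_per_packet(n, "pipeline"))
```

Under these assumptions the pipeline case halves the copy count of the snake case for any chain length, which is the qualitative gap the red and yellow bars show.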
A
Slide 8 is a very preliminary look at performance, where the CNF snake and pipeline cases are six to more than eight times faster than the VNF case. There's nothing particularly shocking about this; these are all the same reasons that people like containers over virtual machines. But, on the other hand, it is useful to see. I think for most of this call we're going to talk about how to move to slightly more realistic use cases, not just trying to maximize packet throughput, and to look at some of the other changes, on slide 9.
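For anyone reproducing the slide-8 comparison, the speedup figures are simple ratios of measured packet rates. A minimal sketch, using made-up rates rather than the testbed's actual measurements:

```python
# Hypothetical packet rates in Mpps; NOT the testbed's measured values,
# just to show how a "six to more than eight times faster" figure is computed.
rates_mpps = {"vnf": 1.0, "cnf_snake": 6.2, "cnf_pipeline": 8.5}

def speedup(case, baseline="vnf"):
    """Relative throughput of `case` versus the baseline VNF deployment."""
    return rates_mpps[case] / rates_mpps[baseline]

print(f"snake: {speedup('cnf_snake'):.1f}x, pipeline: {speedup('cnf_pipeline'):.1f}x")
```

Replicators can drop their own measured rates into the dictionary to check whether they land in the same six-to-eight-times range.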
A
Some of the other differences are in terms of the amount of time it takes to do the deployment from scratch, and I will mention that we're aiming to get this 16 minutes for Kubernetes down meaningfully by removing a reboot that's required right now. But in terms of deploying the network functions, and the idle-state RAM and CPU, they are all significantly better.
A
On the runtime side, you can see that it's actually using more CPU, but that's not a huge surprise, given that it's moving six times more packets through. The latency is very low in both cases, and then it's the same performance numbers from the previous slide. So then, how do you engage? That's really the purpose of this call. The first piece is that there's no need for anybody to take our word for it on these performance numbers; we would love to have you replicate them.
A
We would love to have you replicate this environment, and in particular, Packet is happy to make an API key available to allow you to use these pretty beefy machines to do so. Then, to the degree that we're just not doing things right, that we have suboptimal configurations, we would love pull requests that show ways to improve either the Kubernetes or the OpenStack deployments. Then another area, the third one here, is that this testbed isn't designed to be locked to Packet.
A
We would love to have a pull request that would enable it to run on your own bare metal servers in your lab, or on other cloud bare metal servers like the AWS ones. And then the fourth is to package your internal network functions into VNFs and CNFs and run them on your instance of the testbed. You don't have to share the code with us, but we would love to see the results. I will mention that this whole project is licensed Apache 2.0, so to the degree that it's useful to you, you're free to build on it.
A
You can do anything you want with it, but it's not actually designed to go into production with telcos. Our hope here is to engage you and your vendors, to the degree that they would be creating their own versions of it. And then the final piece is that right now all of our work has been focused around bare metal servers. We would love contributions or ideas on improving performance in terms of virtualized hardware, such as most of the offerings from Google Cloud, Azure, and AWS.
A
So then, in terms of where to continue the conversation, we're going to be doing a number of meetings at the Open Networking Summit in San Jose, where I'd be very happy to meet with any of you or your colleagues. And then we will try to have a BoF at KubeCon Barcelona, and there are some other events beyond that.
A
Shanghai and San Diego. And I'll just mention that Barcelona is on track to be a blowout event; extrapolating right now, we're actually looking to sell out at 12,000 attendees, which would make it the biggest open source developer conference ever. I think I will stop there. There are some very useful appendix slides and other kinds of information that you should feel free to go through, but I think that's a reasonable overview of why we built the testbed.
A
So could I open it up again for any questions? And particularly, if some of the folks who've been involved have any edits to what I just said, where I wasn't quite being precise enough, I'd be very happy to take them up, along with suggestions on how to describe things more clearly, or, well, correctly.
B
Hello, my name is François, and I'm working on CoreDNS, backed by the Infoblox company. I had one question. I was looking at this deck over the weekend, because it was presented at the last CNCF meeting. I understood that the aim is to show that the efficiency is there for running your networking functions on top of Kubernetes, and then to take advantage of Kubernetes; that's how I take the whole picture. I mean, you prove here that it is, or you want to prove, or compare that. Anyway.
A
I think that's correct. I mean, the actual performance between machines shouldn't be meaningfully different between VNFs and CNFs, since the networking hardware and the bits over the wire and everything are going to be nearly identical. So I think what we're trying to measure is the networking performance, and scalability, and memory usage, and all the other kinds of metrics on how things run within a machine. But I'm not sure I'm addressing your question.
B
So my question is, finally, what is the full purpose of this testbed? Because you say, okay, we're going from VNF to CNF. Is the whole purpose, okay, that you can now verify that your VNF, sorry, that the same networking function deployed as a CNF has the same networking performance as it did as a VNF? No?
B
So what I was wondering was: why? You say it's natural that a container is lighter and goes quicker than a VM, but my wondering was, is it because, on the underlying network, we are using these memifs instead of the virtual ones, vhost I think you call it? Is it the underlying networking that makes it more efficient, as you say, on containers than on VMs?
C
This piece versus the last piece, in terms of the contributions to the improved performance in CNFs: if you go back and look at the graph with the blue, red, and yellow bars, we can definitely say that the difference between the red bar, for the snake case, and the yellow bar, for the pipeline case, is switching from looping through a vSwitch to using direct cross-connects, memifs in essence.

C
So that piece, the delta between those two bars, the red and yellow, I think is pretty well understood, which is: you have an entirely different way of connecting CNFs available that you don't have for VNFs, and it's a big improvement. What all contributes to the difference between the blue bar, for VNFs, and the red bar, for CNFs in the snake case, that would require more investigation, to track down what contributes what to where.
D
Maciek, is that something you can speak to, as far as the performance numbers that you've seen, that either you or someone else has measured, in containers and VMs? Yeah.
E
So I wanted to make two points, actually: one just a comment on what was said, answering François's question, and the other one a question for the team. If we compare the VNF topologies, or service chains, versus CNF service chains, the difference in performance we're observing, and we observe this both as part of the CNCF CNF Testbed testing and also in the Linux Foundation Networking FD.io project that I'm working in, is that if you compare the very simple scenario of a single VNF instance and a single CNF instance, both running with the same amount of resources, we do indeed see a bit of a difference, the CNF configuration being faster. But it comes down to, you know, the restrictions of memory copy operations between the two, and there is not that much of a difference; it's at the level of, you know, five to ten percent.
A
But in some ways it might be too realistic, or not optimized enough, because those network functions are not necessarily designed to scale, or they haven't necessarily been redefined as microservices in the way that might be optimal. So I'd say that's one thing that we could focus on, along with another kind of area that I'd say is maybe a little duct-taped together, or not optimal.
D
Sorry, we just finished work for the Open Networking Summit, ONS, and that's in San Jose, and so a lot of this is tied into things that we've seen as we've moved towards the current test results and test cases. Making it repeatable to deploy OpenStack and bring it up is probably one of the biggest things, and now that we've got to a point where that can be brought up a hundred percent open source, we can start getting improvements from folks.
D
So there are a lot of things that are more like improvements. There are some items on the left, like IPv6 support; I know that Michael had been adding support for that, he's on the call, I think, and I think a lot of that's done, and that will allow us to do some test cases like segment routing. We've talked about use cases that may tie in with using IPv6 to MPLS. So there are a lot of things.
E
Okay, thank you, Dan; thank you, Taylor. I do agree, specifically on the IPv6 side and the routing side, with SRv6 and such, that that's something we should be focusing on here. As for the ONAP use cases, I think it's something we would need to discuss in more detail, but I think having a set of representative service chain use cases, with different functions in a chain, would be of benefit to the community. So I agree here. Thanks.
E
Use cases including, you know, firewall, NATting, and crypto. But in terms of getting a fully functional open source set for that, that's the tricky part, I guess. We do have open source IPSes like Suricata, and potentially other applications. I'll think about that and come back with some proposals. Thank you.
D
I do want to point out, along the way, that right now we're not looking at adding the ONAP layer immediately on top. What we want to do is support all the pieces underneath, and then we'll be collaborating eventually with projects like ONAP, to be able to run things and, as Dan pointed out earlier, contribute back upstream, to make sure that ONAP has a demo that runs a lot of cases, and we want to contribute any patches back up there. And we'll be looking at adding other layers over time.
D
Right now we're saying: what can we implement? If we have a use case that we can review, and look at what's there, then we can pull as much of that over into the testbed and re-implement it. And as far as the firewall and security side, ONAP does have security use cases, if we want to take a look at those. And then Ed may have thoughts on some security use cases that you all have been working on from the Network Service Mesh side, yeah.
C
I mean, things start falling a bit more into place as we scale up, and, you know, you can look at it from a different direction. So, for example, a lot of the security cases we're looking at, in other networks especially, are actually coming up from enterprise. But, as the folks who run SP networks know very well, essentially the kinds of things you would do for enterprise become product that you sell if you're an SP, so they become interesting there as well.
F
I'd like to get some clarification on slide 6, if I could; well, Ed and Dan are both on the phone, and I've talked with Ed about this. So when we say running on top of identical on-demand hardware from bare metal hosting, so from the ground up, bare metal, are we making a decision where we're saying it's going to be a software data plane?
F
One
hardware,
so
I
talked
with
bad
about
this
before
and
I
every
time
this
subject
comes
up,
I
think
it's
a
little
bit
hazy
whenever
I
talk
to
like
operators
or
anybody
about
the
project,
they're
always
thinking
you
know
hardware
as
far
as
a
switch
and
Asics
and
stuff
like
that.
So
this
part,
if
there's,
if
we
could
somehow
either
get
clarification,
say
we're
never
doing
that
or
we
that
we're
open
to
doing
that.
I.
Think
that
might.
C
With hardware in the box, one of the things you run into is that you've immediately moved into a completely bespoke world. You know, generally speaking, I mean, there are some exceptions, but much past some basic acceleration, maybe stuff that can be taken advantage of on the smart NICs, effectively it becomes a build-your-own solution from the bottom up, at great expense. You could do stuff like that, but anybody who's going to go build their own solution is then going to turn around and do it somewhat differently.
F
So
when
we're
when
we're
talking
about
from
the
bare
metal,
is
it
you
think
it'd
be
a
good
idea
to
say:
okay,
but
we're?
You
know
we're
talking
about
smart
Nix
here
as
well
like
their
specific
Nix,
that
we're
saying
that
this
works
with,
but
we're
saying
no
Asics
like.
Can
we
just
say
that
explicitly
and
everybody
knows
what
we're
talking
about
and
where
we're
going,
because
at
this
point
then
we're
saying
at
all,
mostly
all
software.
A
Watching
I
might
make
two
edits
to
it.
One
is
that
if
the
Asics
are
publicly
available
and
I
mean
in
particular,
if
they're
available
via
you
know,
on-demand
service
from
packet
or
from
a
similar
company
like
that,
then
I'd
certainly
be
open
to
having
a
version
of
the
testbed
that
works
with
them.
It's
just
if
it's,
if
they're
not,
then
it's
just
it's
not
something
that
we
can
test
or
that
anybody
else
can
iterate
on
and
and
see
the
impact.
Oh
yeah.
F
So
there
are
open
Asics
and
that's
where
and
they're
like
part
of
I
believe
some
of
them
are
part
of
Linux
Foundation
nobody's
I,
don't
think
there's
any
that's
part
of
CNCs,
though
so
this
is
something
I've
talked
about
with
the
group
and
we
never
I.
Never
really
can
come
to
a
clear
definite.
No
we're
not
doing
that
or
yes,
we
are
so
it
sounds
like
maybe
it's
oh
we're
open
to
it.
A
Yeah
it's
sort
of
more
than
maybe,
and
particularly
if
a
vendor
wanted
to
come
in
and
say:
hey
we've
made
these
cards
available
to
pack
it
and
they're,
going
to
be
running
in
some
classroom
teams
and
we'd
like
the
testbed
to
show
the
performance
improvement
of
using
those
that's
happening
and,
and
most
importantly,
and
we're
willing
to
contribute
the
code
that
you
know
supports
that
use
case.
I
I
would
be
thrilled
for
that
scenario,
I
mean.
A
That
are
removed,
yeah
from
saying
oh
well,
you
know,
can
Mellanox
come
in
and
show
the
value
of
using
it,
but
Mellanox
Nick
over
an
Intel
neck
and
as
long
as
they're
willing
to
make
the
contributions
and-
and
you
know
that
they're
open-source
and
in
the
same
project
under
Apache
Pio
and
we're
not
requiring
the
firmware
be
open-source,
but
just
that
all
the
configuration
parameters
and
such
then
we'd
be
thrilled
to
accept
pull
requests
on
the
subject.
Okay,.
A
I
mean
I
do
think
it
I
think
it's
very
fair
to
say:
oh
you
know,
most
production
needs
new
spaces
to
go,
are
not
using
commodity
hardware
and
come
on
come
on
Dex
now,
there's
a
sort
of
separate
question
of
what
should
they
be
doing
more
than
the
future,
but
I?
Don't
really
consider
it
our
job
to
decide
that.
D
Well, that was our first target on Packet, because that was what was publicly available, and they're releasing Intel NICs in March; we actually got pre-release access to those to start testing, and helped Packet create, or decide on, the configuration for the Intel versions and the networking goals. We have support for the Mellanox ConnectX-4.
D
So
that's
the
version
that
they're
using
and
we
can
deploy
all
the
different
and
we
support
open,
sac,
kubernetes,
KVM
and
docker
KVM
and
are
currently
machines
all
on
the
Mellanox
and
we've
been
able
to
use
those
the
limitations
are.
The
drivers
are
not
open
source
and
the
there's
some
weird
things
that
aren't
expected
with
how
they
show
up
in
linux
as
far
as
interfaces
they
don't
act
like
we
would
expect,
but
we've
worked
around
those
and
understand.
What's
going
on
there
and
then
there's
some
issues.
Well,
I
won't
say
issues
performance
is
lower.
C
The Mellanox driver situation continues to improve; they have sort of dropped a note indicating that they've moved away from requiring the Mellanox OFED stuff, and so I'm quite hopeful that that will improve the situation greatly. It's been a long road with them, but, god bless them, they stuck with us through the whole thing. So it may turn out to be easier to get Mellanox support in the future; we just have to go see.
D
Think
we
should
any
of
those-
including
this
bet,
the
cards
and
drivers
and
trying
to
encourage
them
to
move
towards
open-source
model
on
all
of
that
and
have
a
common
interface.
G
On
the
performance
issue,
I
think
that's
mainly
due
to
the
way
we
have
to
configure
it,
since
we
only
have
one
port
available,
so
we
have
to
do
some
of
the
VLAN
encapsulation
and
decapsulation
and
the
be
switch.
We
don't
have
to
do
that
on
the
info
button,
we're
planning
on
setting
up
an
environment
using
intel.
That
does
the
same
thing.
So
we
have
a
bit
of
groans
for
comparison.
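The encap/decap overhead being described is mechanical: a 4-byte 802.1Q tag inserted after, or stripped from behind, the MAC addresses of every frame. A minimal sketch (the frame contents below are made up) of what the vSwitch has to do per packet:

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def vlan_encap(frame, vlan_id, pcp=0):
    """Insert an 802.1Q tag (TPID + TCI) after the 12 bytes of dst/src MACs."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)
    return frame[:12] + struct.pack("!HH", TPID, tci) + frame[12:]

def vlan_decap(frame):
    """Strip the tag, returning (vlan_id, untagged_frame)."""
    tpid, tci = struct.unpack("!HH", frame[12:16])
    assert tpid == TPID, "frame is not 802.1Q tagged"
    return tci & 0x0FFF, frame[:12] + frame[16:]
```

Doing this shuffle in software for every frame on a shared port is the extra per-packet work; with two ports, or hardware VLAN offload, it disappears.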
C
Because of the limitations there, you can't keep doing things in kernel modules anymore, so that would be sort of the table stakes for being a CNF. And then, for the pipelining stuff, it turns out the memif stuff is pretty well documented, and there's also a libmemif that can be used in pretty much any CNF you want to build, in order to do the pipelining behavior. Or you always have the option of simply building on VPP, which is free and open source under the Apache 2 license.
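As a sketch of what the pipeline case looks like when built on VPP, these are VPP-style CLI commands for creating memif interfaces and cross-connecting them at layer 2. The socket ids, socket filenames, and interface names below are example values for illustration, not the testbed's actual configuration.

```
create memif socket id 1 filename /run/vpp/memif-a.sock
create interface memif id 0 socket-id 1 master
set interface state memif1/0 up

create memif socket id 2 filename /run/vpp/memif-b.sock
create interface memif id 0 socket-id 2 master
set interface state memif2/0 up

set interface l2 xconnect memif1/0 memif2/0
set interface l2 xconnect memif2/0 memif1/0
```

The pair of xconnect commands stitches the two CNF-facing shared-memory interfaces directly together in each direction, which is what removes the vSwitch hop from the packet path.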
D
Think
beyond
the
some
of
the
hardware
performance
and
our
network
performance
stuff
that
would
tie
in
with
nmif
and
other
things
would
be
looking
at
vnfs
that
may
offer
multiple
services.
Some
of
them
may
be
really
large
and
then
looking
at
breaking
those
down.
That's
just
following
what
would
you
do
cloud
native
direction,
anyways
and
when
you
start
doing
that
the
density
and
workload
on
the
machines
is
going
to
be
more
flexible.
D
So this use case may be something that we want to implement next, once we finish all the pieces, because we've actually done several of the other network functions. But if there's a totally different use case, either at ONAP, which has some, or at another project, or from a vendor or anyone else, we'd be happy to look at one. Specifically, higher priority goes to ones where there's code available to review, you know, open source that we can review, or specifications about how it's implemented, so that we can see.
A
I think we can stop there. I guess I'll just mention that we don't have a mailing list right now, but we do have GitHub issues that folks can feel free to open, and we have the CNF channel on Slack. And we'll need to see whether a twice-a-month frequency makes sense here, or maybe we're going to want to drop it back to once a month, or incorporate it with something else, but we're definitely open to thoughts or suggestions.