B
So, yeah, I'm a software engineer at Midokura, developing the network virtualization agent, but I'm just a software engineer; I'm not even a network guy, not even a storage guy. So don't ask me any hard questions; otherwise I'll have to pass them on to Andrew sitting over there, he's my buddy. Anyway, I'm going to talk about this POC, where we basically try to abstract everything: compute, storage, and network.
B
The customer is a large academic research lab in Europe, and the environment is pretty proprietary: they had proprietary high-end server machines and network gear, and a lot of complex configuration because of that. So whenever a new project comes up, the first thing they have to do is ask for more storage, ask for new servers with custom configurations, write papers, ask for permission and so on, and that means delay and bureaucratic overhead.
B
Like I said, the idea is to take compute, storage, and network, put another abstraction layer on top of them, do some cool, intelligent monitoring, and basically manage all the resources under one umbrella. And of course the point is to move from a waterfall or agile development paradigm to a more recent, DevOps kind of style, so that they can quickly prototype stuff, send it to production and, of course, delete it again.
B
Of course there are other requirements. They want to balance load across the different data centers, and of course the machines need to be scalable. There are two ways to scale: one is scale-up and the other is scale-out. Scale-up basically means paying more for servers, paying more for network gear, paying for proprietary stuff; scale-out, on the other hand, means adding more commodity machines.
B
One tenant shouldn't be able to interact with other tenants. And of course they want advanced management capabilities; with the recent big data trend they want analytics features, like being able to track how many flows are created each second and where they are going, and maybe do intrusion detection on top of that. So that's the goal of this POC, and the use case is to install a production-grade OpenStack deployment.
B
Software-defined networking is all about the separation of the control plane from the data plane, and they want a central management capability, which is provided by Horizon, for topology management and deployment. And of course MidoNet provides the routing protocols on the gateways, the BGP stuff, sorry, and the VXLAN stuff, and there is also a separate network for management purposes.
B
So, to software-define everything, we decided to use MidoNet for the network, Ceph of course for storage, and Red Hat for the OpenStack management platform, I mean the integration platform. The last piece, I guess it's called hardware management and monitoring, right: they use xCAT or something like that. There are more OpenStack-y solutions like Puppet and Ansible, but this customer doesn't just want to manage
B
the compute, storage, and network; they also want to be able to do upgrades, firmware upgrades, and this tool can do that. So that's why they decided to use it. Okay, so, yeah, I just explained the picture: the boxes at the bottom are for Ceph, the ones in the middle are the OpenStack controllers, and the ones at the top are for compute. Anyway, I'm switching to my next slide.
B
So the purpose of software-defined networking is basically decoupling the control plane from the data plane, so that you can manage the network configuration centrally, from one application. You don't have to go to each router and network box to do manual configuration and stuff, so it becomes agile and scalable. And instead of proprietary hardware, software-defined networking allows you to use cheap white-box gear, so that you can easily scale out by adding more and more cheaper gear.
B
Historically, Neutron was part of Nova itself, that is, the compute project, which was first introduced in 2010, and it was monolithic. Sorry, they didn't think that networking needed to be an independent project, so the networking features were limited, only L2 and DHCP, but I guess it was stable. It is stable and it did what it was supposed to do, so there are still many customers and clients that stick to nova-network. Then Neutron was spun out of Nova in the Folsom release.
B
The problem is the default Open vSwitch-based design: its design model is centralized control instead of distributed management, so the central network node becomes a traffic bottleneck and also a single point of failure, and every packet, in order to be handled, needs to go through this central node.
B
That means there is a traffic trombone. MidoNet, on the other hand, is also open source and is a Neutron plugin, and its design is that MidoNet is a network overlay solution. Basically, all we need is L3 connectivity in the underlying physical network, and then we build the virtual networks as an overlay on top of that physical network. And, unlike the default Open vSwitch Neutron plugin implementation, which is centralized, the model is distributed control.
B
All the computation is done at the edge, on each hypervisor host, meaning that a single VM or hypervisor may go down, but since there is no single point of failure, it's more fault tolerant. And it can easily scale out, because if you want to scale out, like I said, you just add servers, connect them to the network, and run the agent, I mean the MidoNet agent, on those servers.
B
And yeah, like I said, that's how the agent works: the network simulation for the virtual network is done at the edge, at the closest server, and that's what "single virtual hop" means. Packets don't need to go to a central control node, so the latency is also low.
B
Yeah, and there are also some more advanced features, like L2 and L3 gateways. We have BGP on the gateway nodes, and we also have VTEP capability, so if necessary MidoNet can offload to hardware VTEPs, that is, VXLAN tunnel endpoints.
B
I don't think you can see the details, but it's just an image of what overlay means. Below we have our physical network, which only needs to provide L3 connectivity, and on top of that the agent runs on each hypervisor with the VMs sitting inside; their virtual ports are mapped onto ports in the physical layer. Those virtual switches and routers are created through the Neutron API, and on this side there's an uplink, where the BGP gateways sit on top.
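For reference, a minimal sketch of what "created through the Neutron API" can look like with the openstacksdk client. The cloud name, network names, and CIDR below are made-up examples, and this is generic Neutron usage rather than anything MidoNet-specific; with MidoNet as the Neutron backend, these objects end up realized as virtual bridges and routers in the overlay rather than as configuration on physical switches.

import openstack

# Assumes a "poc" entry in clouds.yaml (hypothetical name).
conn = openstack.connect(cloud="poc")

# Create a tenant network, a subnet, and a router, then attach them.
net = conn.network.create_network(name="tenant-net")
subnet = conn.network.create_subnet(
    network_id=net.id, ip_version=4, cidr="10.0.0.0/24", name="tenant-subnet"
)
router = conn.network.create_router(name="tenant-router")
conn.network.add_interface_to_router(router, subnet_id=subnet.id)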
B
Like I said, the firewall is also provided in software, and the firewall functionality is applied at the hypervisor, so if malicious packets come in, they don't reach the physical network at all. That's another feature. And because Docker is everywhere and it's fashionable, we can now handle Docker too. MidoNet itself provides Puppet and Ansible modules for easy installation and management, and we open sourced it last year, around this time.
B
Yeah, so anyway, that's the explanation, the description of the MidoNet technology, what it does and how it's used. And this is the Ceph architecture, which I guess all of you are pretty much familiar with, so I won't go into details. And that's the tool they used for management, the Extreme Cloud Administration Toolkit, called xCAT, which is open source, and it can do firmware updates and stuff. And the design follows the same idea: we don't need expensive hardware.
B
And, like I said, I think I didn't explain this part, but when a packet comes in, the computation is done at the edge. The packet comes in and the flow is looked up; if there's a cache miss, that is, if the flow is not in the kernel datapath module's flow table, it is sent over netlink up to the MidoNet agent, which does all the simulation, computes the flow, and installs the flow.
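To make the fast-path/slow-path split concrete, here is a toy sketch of the idea the speaker describes: a per-host flow cache is consulted first, and only on a miss is the packet handed to an agent that simulates the virtual topology and installs the resulting flow. This is an illustration of the concept only, not MidoNet's actual implementation.

# Stand-in for the kernel datapath flow table on one hypervisor.
flow_cache = {}  # (src, dst) -> action

def simulate_virtual_network(src, dst):
    # Placeholder for the agent's full simulation of virtual bridges,
    # routers, security groups, etc.; here we just decide to forward.
    return f"forward {src}->{dst} via overlay tunnel"

def handle_packet(src, dst):
    key = (src, dst)
    action = flow_cache.get(key)
    if action is None:                 # cache miss: slow path via the agent
        action = simulate_virtual_network(src, dst)
        flow_cache[key] = action       # install the computed flow
    return action                      # cache hit: fast path, no central node involved

print(handle_packet("10.0.0.5", "10.0.1.7"))  # slow path, installs the flow
print(handle_packet("10.0.0.5", "10.0.1.7"))  # fast path, reuses the installed flow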
B
As for the hardware: the storage nodes have one-socket Intel Haswell CPUs, 64 gigabytes of memory, and eight hard disks for OSDs plus two SSDs each. And there are 16 compute nodes with two-socket Intel Haswell CPUs, 20 cores, and 384 gigabytes of memory on all nodes.
B
So these are the hosts, and this is basically the technology we used. As for OpenStack: for compute, of course, Nova; Ceph backs Cinder and Glance; networking is done by the Neutron plugin, I mean the MidoNet Neutron plugin; identity is of course backed by Keystone; and we use Horizon as the dashboard.
B
We also used Ceilometer and Heat for monitoring and orchestration, just to show that we can do it, but we didn't do much actual work with those. The configurations are mostly the defaults, but we needed some dedicated links for the BGP gateways, I forgot the number, and we did a lot of tuning of the BGP timers: we figured out that the BGP timers needed to be shortened for quicker failover when a link fails. Also, large receive offload needed to be turned off on the gateway nodes because of some technical issues; the agent basically wouldn't send packets larger than what the target VMs could accept. And we needed dedicated ports for the gateways. Also, I guess it's common sense with hindsight, but we needed to plan for the ZooKeeper connections, because ZooKeeper is basically used for configuration and topology management.
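As a rough illustration of the gateway tuning mentioned above: the interface names, ASN, and timer values below are assumptions (the speaker says he forgot the exact numbers), and the routing-daemon snippet is just one Quagga/bgpd-style way of shortening the keepalive/hold timers from their usual 60s/180s defaults.

import subprocess

GATEWAY_UPLINKS = ["eth2", "eth3"]  # hypothetical dedicated gateway links

for nic in GATEWAY_UPLINKS:
    # Disable large receive offload so the datapath never sees merged
    # packets bigger than what the target VMs accept.
    subprocess.run(["ethtool", "-K", nic, "lro", "off"], check=True)

# Example of shortened BGP timers for faster failover (values illustrative only):
BGPD_SNIPPET = """
router bgp 64512
  timers bgp 5 15        ! keepalive 5s, holdtime 15s
"""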
B
Basically, ZooKeeper needs as many connections as the number of agents. For the Ceph configuration, the replication factor is three, there are 64 OSDs spread equally across the 8 nodes, and the SSD-to-HDD ratio is one to four. If I understand correctly, I guess that's not really typical, but we used one to four anyway. We saw some performance improvement, but not as much as expected. Based on these numbers we calculated the placement group number, which is 4096.
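For reference, a minimal sketch of how that placement-group number falls out of the usual rule of thumb (roughly 100 PGs per OSD, divided by the replication factor, rounded up to a power of two). The 64 OSDs and replica count of three come from the talk; the 100-PGs-per-OSD target is the common community guideline, not something the speaker states explicitly.

def pg_count(num_osds: int, replicas: int, pgs_per_osd: int = 100) -> int:
    # Raw target, then round up to the next power of two.
    raw = num_osds * pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

print(pg_count(64, 3))  # -> 4096, the number used in the POC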
B
Also, we used two 10-gig NICs, aggregated them, and put two trunks on top, one for the VLANs and one for the cluster network, to segregate the networks. So, lessons learned: I guess it's common sense, but you should plan well up front. Because we software-define everything, it seems like even small things can be completely changed afterwards, but that's not actually the case.
B
RabbitMQ in particular was a nightmare in this experience, in this POC, and restarting RabbitMQ is not as easy as it may sound; the cluster just sometimes doesn't come back. But at the same time, the rest of the OpenStack components worked seamlessly, and the MidoNet gateway failover was impressive. Horizon offers decent features, but it only shows user IDs and doesn't put enough information in a single place, so it lacks usability; it's not that easy to administer with it. On the other hand, flow tracing made debugging easier, because I guess one of the complaints, or fears, against SDN is that too much abstraction might make it hard to debug.
B
But now you can run the simulation from the GUI and see where packets would have been dropped and so on, so now you have tools to debug network virtualization. Oh yeah, and there was an issue with the MongoDB stuff: we had an even number of MongoDB instances, and then there was split-brain behaviour, and the same goes for the ZooKeeper cluster that MidoNet uses.
B
So we recommend an odd number of instances, for quorum reasons. And yeah, restarting RabbitMQ is not a simple task, and power outages can really happen in practice, so don't assume they won't. One minor thing is that the VXLAN offload only works on a single UDP port. And manual deployment is a very complex process, so we wrote some installers, but I guess there's still some way to go.
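A small worked example of why an odd member count is recommended for quorum-based services like ZooKeeper and MongoDB, as suggested above: the majority grows with an even member, but the number of tolerated failures does not. This is generic quorum arithmetic, not specific to the POC's cluster sizes.

def tolerated_failures(members: int) -> int:
    majority = members // 2 + 1
    return members - majority

for n in range(1, 7):
    print(f"{n} members: majority={n // 2 + 1}, tolerates {tolerated_failures(n)} failure(s)")

# 3 and 4 members both tolerate only 1 failure; 5 and 6 both tolerate 2.
# An even-sized cluster adds risk (e.g. split votes) without adding resilience.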
B
As for Ceph, I guess most of these results are familiar to you. Throughput: reads are always faster than writes, sometimes five times faster, and it performed quite well even on spinning, enterprise-grade drives; it would be even faster with SSDs, at least double. So if you don't have SSDs, we recommend that you plan well how to partition your network and so on.
B
Like I said, we used two 10-gig NICs formed into two trunks, but, and I guess this was covered in a previous presentation, most of the time the network is the bottleneck, not the storage, so in short, two 10-gig NICs are not enough for the networks. Another thing is the SSD-to-HDD ratio: we used the unconventional ratio of one to four, and it doesn't improve performance as much as the number suggests, but it improves longevity.
B
If the ratio of SSDs is too small, the SSDs wear out pretty quickly. It also depends on the type of SSD; if it's an enterprise-grade SSD, I guess it makes a big difference. Anyway, as a result, the workflow has been streamlined. Before, well, I don't know the details of how it used to be, but I guess three teams had to be involved and things needed to go back and forth between those teams.
B
But
after
that,
then
you
know
yeah
pretty
much
straight.
You
know
and
of
course
it
realized
some.
You
know
business
benefits
like
yeah,
although
stuff
like
rapid
deployment
and
reduced
cost
em
among
maintainability
and
simplified
a
more
manageable
network,
because
of
that
the
faster
deployment
resulting
you
know,
put
more
productivity
and
faster
product
release,
and
you
know
you
know
perfect
for
the
company.
Yes,
I
guess
this
pretty
much
covers
what
I
wanted
to
say.
So
if
there's
any
any
questions,
I'm
happy
to
answer
as
long
as
I
can
answer
them.
A
Just on storage: it supports a whole bunch of different use cases. You heard from Blair and Steve earlier, and now it's supporting software-defined networking. There's a lot of NFV stuff happening within the OpenStack community now, and Ceph can underlie that as well. So it's a very flexible storage system; it's almost like a unicorn, if I were to be biased about something.
something.