From YouTube: IETF92-NVO3-20150323-1520
Description
NVO3 meeting session at IETF92
2015/03/23 1520
A
So I don't think we're on video today, but for anyone who is speaking, there is this clever little outline of tape here; that's where you're supposed to stand, just in case someone's watching a video stream. Please do stand in that little box. Okay, all right, good, so we are going to get started here. We have a relatively short meeting today. Yes, I put the blue sheets down; please make sure you sign them and pass them around, and we'll check again later. We have a fairly short meeting today because we asked for two sessions instead of one big one — the other session is on Wednesday; we'll go into the agenda here in a minute. We also had, unfortunately, one of our presenters have travel difficulty — I don't know what that meant exactly, but he's not here — so that freed up a little bit of time as well. I'm not going to butcher the person's name.
A
We have blue sheets going around, and unfortunately our secretary is not here today — Sam couldn't make it this time. So I want to say thank you to John and Ignas — sorry, where is he, our note-taker? Our note-taker just stepped out? Okay, you guys get little special shiny stars to put on your badge as a thank you. It worked in kindergarten, so why not?
A
We
actually
are
going
to.
We
actually
are
running
a
webex
here,
so,
if
anyone's
having
a
hard
time
seeing
the
screen,
it
looks
fuzzy
to
me,
but
it
could
just
be
the
angle
on
that.
I.
Don't
know
we're
also
putting
them
on
webex,
mostly
because
we've
gotten
in
the
habit
of
doing
that
for
interim
meetings.
A
This is the agenda for today. As you can see, there are a couple of slots missing; like I said, we had a presenter who was unable to make it, so that actually freed up about 30 minutes of agenda time. So one possibility is we'll get out a little early, which would not be the end of the world, but it also means we have a little extra time to talk about some things, so we'll use it if we need to. This is the agenda.
A
So briefly, before we get into the presentations we have scheduled, we want to give a quick status discussion, and this isn't just the sort of, you know, chairs throwing up slides and then moving on — we actually would like to get some feedback on some things, so please pay attention. We've had a number of interim meetings, and I think that they've been productive, that it's been helpful to furthering the conversation. We've had some focus on particular topics at those meetings, which I think has been a good idea.
One thing that is a little difficult is that, you know, we're supposed to take minutes and blue sheets at those meetings just like we do here, and the blue sheets aren't exactly, you know, normal — I mean, here there's literally a clipboard with a blue sheet on it. So we've been trying to do something in the interim meetings, which is to have an Etherpad page where everyone can go type in their own name, and I feel like it's working okay, except that we are roughly getting maybe half of the people actually putting their names on it. So I guess I'm in part asking for feedback: does anyone think that's a horrible idea or a good idea? I see a thumbs up, so that's a good thing — good idea, good feedback. If someone disagrees, please say so. I would encourage people, when we have these interim meetings, to please make sure that you go fill out the blue sheet. There's a mic — sorry, we have one line. Hi, Tom Herbert.
A
That's
actually
a
great
question,
so
we
tried,
we
tried
it
one
of
the
earlier
interim
meetings
to
take
screenshots
of
that
little
attendee
list
and
it
was
actually
awkward
because
people
kept
coming
and
going
different
times
in
the
meeting.
So
we
reconciled
like
three
different
screen
shots
to
get
the
full
list
of
names
and
some
people
complained
actually
that
it's
not
that's
not
the
way
we
take
names
for
blue
sheets.
A
So,
if
you're
in
back
of
the
room-
and
you
don't
want
to
fill
out
the
blue
sheet
technically
you're
supposed
to
fill
it
out,
but
we
can't
force
you
to
fill
it
out
so
make
of
that.
What
you
will
we
decided
to
maintain
the
same
kind
of
voluntary
process
and
it
might
be
nice
if
we
could
get
a
report.
That
said
at
the
end
of
a
webex,
here's
everybody
who
is
there,
of
course,
that
assumes
that
they
use
real
names,
etc,
etc.
So
so
yeah
we're
back
we're
back
to
an
etherpad
approach,
so
yeah.
A
So, milestones. We had a good discussion about the requirements milestones here, and as a result of that discussion — you'll see them on the screen highlighted in blue — we changed the milestone dates to line up with the solutions milestone dates, which are there in green. The purpose of that was really to recognize that we actually don't care so much about producing requirements by themselves. Obviously, requirements are helpful when it comes to producing solutions, but we really want to produce solutions, so by moving them all together the anticipation is we'll produce them all as RFCs at basically the same time. Of course there are some details around that — we might treat some of them differently depending on whether there are multiple documents versus whether we can merge them into one, etc.
There's another question: the items that are in that sort of dark red — the current milestones for those are due next month, and when I say next month, I mean, you know, in a handful of weeks. So we have to decide what we're going to do about that. I've got a slide in a minute to revisit that topic, so we'll get into more depth there.
A
We
actually
have
time
to
talk
about
it
here,
so
I'd
like
to
do
that,
if
possible
so
jumping
through
some
document
status,
and
this
is
where
we'll
touch
on
those
milestones.
We,
of
course
we
have
a
couple
of
our
SES
published.
We
don't
have
anything
in
the
editors
queue,
and
these
are
the
two
drafts
that
I
referred
to
in
the
previous
milestone
slide
the
architecture
in
the
use
case
draft,
so
so
I
kind
of
like
some
feedback
on
this.
What I'm thinking — and Matthew and I have gone back and forth on this, and we recognize pros and cons to either approach — is that we want to get these documents to the point that we're comfortable with them, like we would be after a last call. Now, that can either mean that we then publish them as RFCs and continue on with our work, or we can put them in — I mean, it's just sort of an informal state of being, pending the solution work, so that we can update them if necessary as solutions come along. Now, one of the downsides to that second approach is that we don't actually have something set — you know, a stake in the ground around which to make decisions about solutions. That would be unfortunate.
A
But
it's
kind
of
you
know
from
our
point
of
view,
it's
kind
of
arbitrary.
The
point
is
to
get
the
good
work
done
in
solutions
so
before
I,
move
on
to
the
other
document,
status,
I
actually
like
to
get
feedback
from
everybody
here,
there's
one
approach
or
the
other
seemed
like
a
good
idea
of
that
idea.
Do
you
not
care?
At
all,
I
mean
the
documents
are
where
they
are
and
we
want
to
have
discussion
about
them
either
way.
D
I suggest going with the first approach and making it into an RFC. If we keep the requirements and use cases in a constant state of flux, that does not give a solid foundation for the solutions team. You'd much rather have a set number of use cases first that are defined, and then make solutions for them; and if new use cases come up, handle them in their order and have different solutions address the new use cases.
B
Larry Kreeger, Cisco. I guess I have mixed feelings. On the one hand, I feel like the architecture has been out there for a long time; if there were significant things to change, they should have come up by now, so we might as well just go ahead and RFC it — it hasn't changed, and now it's unlikely it's going to change. The argument for keeping it open was that solutions, when they came, would come up with a different way of doing the architecture, and then we would say, oh, we need to change it — but now we've published it.
So maybe what the chairs ought to think about is doing a slightly longer last call than usual — maybe four weeks or something — to give people plenty of time to review it in detail and have a good think about whether any solutions are going to come up that would contravene it or would need extensions straightaway.
A
All right. Obviously we'll send a mailing list message on this topic before we do anything, to hear any other feedback that might not be in the room right now, but it sounds like we're kind of converging around a way of approaching it here. So we'll follow up on the mailing list, and I appreciate the feedback. Continuing with the document status: we have, as I mentioned before, a number of requirements drafts that we are going to kind of hold back until solutions come out.
They don't literally have to be published at the same time, but we want to see them progressing with each other in some way. That doesn't mean that we lack confidence in these documents; it actually, in my mind, has more to do with how we might end up publishing them, because we may actually want to merge the requirements into the solutions documents in some cases. So please treat these as being, you know, key to the work here. The control plane requirements draft —
A
We
actually
need
an
editor
for
so
not
right
now,
but
perhaps
after
the
meeting
or
whenever
you're
comfortable
doing
it.
If
you
could
let
let
us
know
if
you're
interested
in
editing
it
with.
We
need
to
pick
somebody
who
who
can
kind
of
help.
You
know
own
it
editorially
more
so
than
technically
to
get
it
done.
A
The name is perhaps a little too specific, but whatever. One question that I want to ask — and since there is a presentation on this later, we can actually talk more explicitly about it then — the draft refers to VDP as a potential protocol for the solution in this space. VDP is an IEEE protocol, and in particular —
There would be some extensions to it that would come, let's say, to the IEEE as requirements from NVO3, and you might imagine then that NVO3 specifies that protocol as the solution for that interface. So in order for that to happen, there's this kind of limbo that we go through, where we would send a liaison to them telling them that we're interested in the work.
Here's the requirements, and we'll monitor it, and then later on perhaps do a consensus call as to whether that actually solves the problem for us, and we would kind of iterate through that appropriately. So what we're thinking is that we would send a liaison message to the IEEE with that content and go from there. But I wanted to make sure that everyone understood that process, and if there are any questions, please feel free to ask now — it seems fairly straightforward.
A
I think the security requirements draft is on the agenda — I'm sorry, was on the agenda, but I think that's one of the presentations our speaker couldn't make it for. I reviewed it very recently, and I think it's actually in very good shape, and frankly, because there's not a specific solution that might be published for this —
A
We
it's
the
one
example
of
a
requirements
draft
that
we
might
decide
to
promote
to
submit
to
the
iesg
sooner
than
the
others,
so
so
I'm
looking
at
it
as
being
fairly
close
to
done
and,
of
course,
we'll
send
a
last
call
to
the
mailing
list
before
we
do
anything
like
that.
But
if
you
haven't
had
a
chance
to
review
it,
please
do
that
there
might
be
one
question
and
there
around
underlay
overlay
relationships
with
regard
to
Oh
am
so
if
anybody
has
when
we
talk
about
the
OEM
topic
on
Wednesday.
A
Something like that. So please read it and have a look. There was some good work done by the routing encap design team — I can't remember if that's the right title, whether it's encap DT or DT encap — but in any case, that draft has very good considerations. On the previous slide, when I talked about the data plane requirements: we need an editor for that document.
One of the things we'd like to see happen — they may actually be perfectly in line with one another already, but I'd like someone to evaluate that a little more carefully and make sure that the data plane requirements for NVO3, which are certainly more specific than the routing encap requirements that this document refers to, are in sync with each other. And then, on our last document status slide: we have not sent messages —
We have not sent messages to the mailing list about these two drafts yet, so I wanted to give the room an opportunity to comment. The first one is a multicast framework draft. It's been presented at one or two interim meetings, and the latest version, I think, takes in the feedback that was received in those meetings. It talks about mechanisms for distributing multicast traffic, and then some of the sort of IGMP and proxy functions that you may need in order to facilitate that. It is somewhat architectural; it's not a solution, it's not requirements.
It's a discussion about how to achieve this goal. So one of the questions that we have as chairs is whether it should be published as its own draft, or whether we should try to merge it into the architecture. I don't have a strong opinion about that; I suspect that simply publishing it would be the most straightforward route. So I'd like some feedback on that. If people agree, then we would send a call for adoption — but I don't want to do a call for adoption and then try to force some kind of merger.
A
I was referring to just the first document, the multicast framework draft, and I was referring to the possibility of merging its content into the NVO3 architecture draft. That's one possibility. The other is that we adopt it and publish it alongside the architecture, basically. And like I said, I don't see really a strong advantage or disadvantage to either approach, except that it might be easier to just adopt it. I haven't actually talked about the second one on this page yet.
G
Okay, now I have an opinion on the earlier question you had about whether the architecture document should stay open, and I think it should be closed. Like all other working groups in the IETF, we have architecture documents, we have use case documents — let's get them moved forward, because I can see this keeps going: now we have a multicast one; maybe a year down the road we have something else, and this architecture will become really hard if it has to accommodate all those things. This happens in every working group where we have an architecture document.
A
Thank you — that was good feedback. The second document on this page is complicated for a different reason. This draft refers to a mechanism based on DHCP, where an NVE — specifically a VTEP, in VXLAN — would use DHCP to basically be assigned multicast group addresses for the underlay, based on the VNI, if I understand it correctly. It's a fairly straightforward mechanism, at least I think so.
A
I'm
no
dhcp
expert,
the
so
technically
it
seems
like
a
pretty
simple
thing
to
address
to
adopt
and
deal
with.
However,
it
doesn't
seem
to
satisfy
the
whole
architecture
of
envy
03.
So
if
we
were
to
adopt
it,
it
would
not
be
in
lieu
of
other
control
plane
work.
It
would
be
perhaps
as
an
addition
to
that
work,
either
as
part
of
a
bigger
sweet
or
just
something
on
the
side.
A
So,
if
you
haven't
read
it,
I
would
actually
appreciate
that
you
do
because
my
inclination
is
to
adopt
it,
but
since
that
doesn't
have
a
really
clean
kind
of
fit
into
the
overall
work
stream
that
we've
got
here.
I
want
to
make
sure
that
we
understand
what
we're
doing
beforehand,
if
you
have
comments
on
it
now,
I'm
happy
to
take
them,
but
otherwise
I'd,
like
I,
said
just
encourage
you
to
read
it
and
comment
on
the
list.
F
Anoop Ghanwani from Dell. I'll just say the same thing that I said at the interim meeting, which is that it really doesn't fit in with what we're trying to do here, right, which is a centralized control plane — you can get all the information that you need from a centralized control plane. But then again, if this work were to be done, the question is in which working group, because it doesn't look like it's DHC's business.
A
And what you just said is actually the concern that would lead me to think adoption makes sense, because if we don't adopt it here, it's not clear where the work might happen. The DHC working group perhaps could — certainly they have the expertise on the DHCP side of things. So maybe there's another approach; I don't know. We can talk to the ADs as well. I agree with your point, though: it would not be in lieu of the larger work plan we have here. Okay.
B
Yeah — I'm the first author of this draft. Regarding DHCP: we talked to the DHC chairs, and they really reviewed the draft, and we handled all their concerns — at least we think so — but they don't want this to be discussed in DHC. Unless we adopt it here, and then ask for a slot in DHC and present it there — that's how they prefer it. This is a kind of application for them, so the question of whether we do it should be handled outside of DHC.
A
Good
so
we'll
follow
up,
we
can
follow
up
with
the
DHC
chairs
and
the
80s
and
and
so
on,
we'll
figure
it
out
so
yeah
anyway.
Please
do
read
it
it
like.
I
said
I
think
it's
straightforward,
but
maybe
I'm
missing
something
and
with
that
we
can
move
on
to
our
first
presentation,
but
maybe
before
I
do
just
any
last
comments
on
the
things
we've
talked
about
here
and
silence
is
golden,
so
who
is
next
here?
Tom
and
Tom
I
have
a
little
clicker
thing.
You
can
use.
C
No, it's much better. Hi, my name is Tom Herbert, and today I want to talk about identifier-locator addressing. This is another data plane solution for network virtualization, but it is kind of motivated by something that we see in the data center, which I'm going to call task virtualization — though this could also be applied to the general VM case, which I'll talk a little bit about. Task virtualization is the concept that we have a data center running many tasks for many jobs — basically how a data center works.
So if we consider kind of the canonical topology of a data center — this is obviously a very scaled-down model — what we typically have are racks with hosts, and the hosts have a top-of-rack switch which connects to a fabric, and we build out data centers in this model with many racks, many switches, and what have you. What we usually have in a large data center is a job scheduler, and the role of the job scheduler is to schedule the different tasks for different jobs.
So if you can imagine, say, a company like Google has jobs for Gmail, Google Maps, search, what have you, and what we need to do in the data center is schedule the tasks for these jobs in order to meet the jobs' resource requirements. One thing I would mention is that resources are not very homogeneous in a typical data center; we have kind of mixes of hardware and networking capabilities. So it's often a problem for the scheduler: how do I schedule optimally to satisfy the resource requirements of the things I'm scheduling?
So we get into scheduling dilemmas pretty quickly, and this is the case where we want to schedule across a data center with very high utilization. But then we have a problem when something more important comes along — a more important task. What do we do? Unfortunately, the current solution kind of is that we have to kill existing tasks to make room for new ones.
Typically we would have to restart those on a different rack and kind of pick up the work. It does make scheduling a nightmare, in the sense that this distributed job scheduler has to put a lot of effort into trying to optimize on the first pass; if it doesn't do things right and has to rearrange a lot of work, then we lose a lot of work and kind of kill the utilization in the data center. So what we really want is the ability to do task migration, and again, this would be under the auspices of the job scheduler.
C
Scheduler,
so
would
have
figure
out
that
the
appropriate
place
to
run
these
new
tasks
are
on
this
rack
or
in
these
hosts
and
there's
existing
task.
But
if
I
could
just
move
those
to
another
place
where
it's
acceptable,
they
run,
then
what
I
eventually
get
is
a
new
kind
of
new
partitioning
of
the
resources
I've
been
able
to
satisfy
all
the
resources
or
all
the
test,
resources
and
life.
Ms
forward,
so
that's
kind
of
the
the
goal
and
of
course
we
want
this
to
be
sort
of
transparent
to
everybody.
C
Well — that's a really strong requirement. Thank you. So: zero performance impact when not migrating. Again, this is not a new greenfield where we're bringing in VMs and creating new products; this is actually trying to transition the existing data center — how we run these jobs — into this new model, where we're just adding this capability to move things around kind of seamlessly. In addition, security and control should just be straightforward and continue to work as they already do. It's important to mention that this is more about containerization than VMs.
We probably don't particularly need overlay networks or virtual switches in this model, so that's kind of where we depart from VMs. We do expect, though, that everything that currently works keeps working in this new model — in particular, ECMP and the NIC offloads continue to work. It's also an interesting characteristic to point out that most tasks we create probably won't be migrated anyway. The expense of task migration isn't just in moving the network address; moving the memory and the storage along with it is probably more expensive.
So, the address split: like I said, this is modeled very similarly to ILNP — the locator is the top 64 bits, the identifier the bottom 64 bits. The locator identifies a physical host in the network, so all of the switches and routers will basically be routing on the top 64 bits; it's routable, and it will not be used as a connection endpoint identifier. The identifier is kind of the virtual address for the task; it's not routable, and we can use that as the connection endpoint.
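The split described above can be sketched in a few lines. This is an illustrative reconstruction, not code from the talk or from any draft: the locator and identifier values are made up, and only the 64/64-bit layout comes from the discussion.

```python
import ipaddress

def ila_address(locator: int, identifier: int) -> ipaddress.IPv6Address:
    """Combine a 64-bit locator (routable; names the physical host) with a
    64-bit identifier (stable, non-routable; the task's endpoint)."""
    assert 0 <= locator < 2**64 and 0 <= identifier < 2**64
    return ipaddress.IPv6Address((locator << 64) | identifier)

# Hypothetical values: the identifier stays fixed across a migration,
# while the locator changes to name the new physical host.
task_id = 0x0000_0000_0000_00AA
before = ila_address(0x2001_0DB8_0000_0001, task_id)
after = ila_address(0x2001_0DB8_0000_0002, task_id)

# The low 64 bits (the connection endpoint) are unchanged by the move.
assert int(before) & (2**64 - 1) == int(after) & (2**64 - 1)
```

The point of the split is visible here: routing state only ever sees the high half, while transport connections only ever bind to the low half.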
We also have the ability to use this with kind of the more canonical style of network virtualization, and in this case we can embed virtual network identifiers into an IPv6 address. So in this model we'll use some of the bits in the identifier to be the virtual network identifier, and then some of the bits to be the virtual address.
In the case of IPv4 this works out really well, because we can actually put the full virtual IPv4 address in the VA field, and for the virtual network identifier we have 24 — or in this case it would be 29 — bits. So this actually would be a great solution; in a sense it accomplishes the same thing that VXLAN and NVGRE do. But we do run into a problem.
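The IPv4 embedding just described might look like the following sketch. The field positions are illustrative assumptions (a 24-bit VNI placed at bits 32-55 of the identifier, matching the VXLAN/NVGRE VNI size; the talk notes up to 29 bits are actually available):

```python
import ipaddress

def embed_v4(locator: int, vni: int, v4: str) -> ipaddress.IPv6Address:
    """Pack a virtual network identifier plus the full virtual IPv4
    address into the low 64 bits of an ILA-style IPv6 address."""
    assert 0 <= vni < 2**24
    identifier = (vni << 32) | int(ipaddress.IPv4Address(v4))
    return ipaddress.IPv6Address((locator << 64) | identifier)

addr = embed_v4(0x2001_0DB8_0000_0001, vni=7, v4="10.0.0.5")

# The tenant's IPv4 address is recoverable from the low 32 bits...
assert str(ipaddress.IPv4Address(int(addr) & 0xFFFF_FFFF)) == "10.0.0.5"
# ...and the virtual network identifier from the bits above it.
assert (int(addr) >> 32) & 0xFF_FFFF == 7
```

Because both the VNI and the complete tenant address fit in the identifier, no encapsulation header is needed to carry them — which is exactly the trade-off discussed next.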
This doesn't include any extensibility: we're using IPv6 as IPv6 — there's no encapsulation — which means we wouldn't have any space for security, for instance. So that's kind of why it's not a general-purpose solution for network virtualization. But it would also allow us to do some things, even in the VM case, that we've found kind of difficult. One thing:
if you can imagine, in a data center we may have a server that is running a common service across many different tenants or virtual networks, and we want connectivity to that server from each virtual network. I guess the common solution probably is to use a lot of NAT — basically, from a VM, NAT to the server using one address in the VM address space, and then NAT into a public address space.
In this case it actually might be a little simpler, because now we can just use an identifier for the service and actually map that directly, so it probably gets us out of using stateful NAT in that case. Similarly, this also could allow two VMs in two different virtual networks to communicate with each other. Again, in those two cases there's no security as part of encapsulation like we would get with something like GUE, but the advantage is —
C
LISP, I believe, is encapsulation — so it does have locator and identifier, but it adds an encapsulation header. In this case there's no encapsulation, so the packets on the wire would look like TCP over IP or UDP over IP; they look just like the packets that we send today. And in fact that's important, because all of the network mechanisms — ECMP, what have you — will still work on them. We want to do this virtualization without encapsulation in order to remain compatible with the rest of the network.
C
This all needs to be — would be — transparent to any guest OS. If you think of it this way: where the encapsulation layer today would put the physical host address in an outer header, we're just putting the physical host address into the packet's destination address itself. So it's the same operation, except without encapsulation. Okay.
C
ILNP — yeah. So ILNP is very interesting, and in fact we're talking with Saleem about it. The reason we differ is, one, the control plane: they kind of have their own control plane, and this is much more like NVO3, where we want the centralized control plane to do the distribution. And then the second reason is —
C
It's the same — it's logically the same, right: it's a mapping of virtual to physical addresses somehow. So I believe that's kind of the same thing; most of ILNP is kind of already mirrored here, and we're doing the 64-bit split, which makes sense, so we leverage as much as we can out of that. I think the parts where they're using DNS to convey some of the locator information are probably not as interesting here; I'd rather use the centralized control plane for that.
D
Bob Moskowitz, HTT Consulting. So there's no binding between the locator and the identifier in what you're doing — no binding. So you have to have some method such that, when I'm now moving over here, you can trust that I really am over here and this is my new address where you will find me. Is there a notification to the parties it's currently talking to on a move, or is there a discovery process which learns the fact that it's moved to a new address?
C
I think that is part of the control plane. Also remember that the movement — the scheduling of this — is being done by a centralized job scheduler to begin with, so the orchestration can definitely happen from that level. So it's not clear to me whether a host has to tell the rest of the world where it's located, or whether that could be done by the central control plane; I think the latter model will actually be a little more secure.
D
A cleaner model, or — because we've seen in silly mobility cases how everything breaks. You know, your communication breaks because the peer is no longer at the address that you've been talking to all this time; it's moved someplace else, so now you have to restart your sessions because of the move.
C
So I think, from that point of view, I'm just going to fall back on saying there's an isomorphism here between what we need for ILA with the NVO3 control plane and any other encapsulation protocol. Except that for migration — if we want to do live migration, where we don't drop connections — then yes, you need a control plane or some mechanism that tells the senders: okay, the locator for this identifier — the virtual address for this host you're talking to — has moved; please update. And then they update all of their tables, and that's —
B
Diego Garcia from Nuage Networks. So, one question there: would the processes themselves be filling in a random or a well-known locator that then gets overwritten by the vSwitch or the redirection process? Because, I mean, a v6 packet is still coming out of the virtualized process, right?
C
So the way to think about this: from the task, we use IPv6 to connect, so it'll get some sort of IPv6 address that doesn't have a locator — it'll be like some default prefix with an identifier. That's what we have in DNS. So a task will get that, send a packet addressed to that, and then, somewhere at a lower layer —
We need to overwrite that kind of default prefix with the actual locator. That turns out to be, I believe, a stateless sort of operation — for instance, we'd have to update the UDP and TCP checksums if they were present. In a real implementation, though, because we know we're doing this, we want to consolidate all that and hopefully not have to do any separate rewriting of checksums or anything like that; we'll probably squash it into a single operation.
I
Sharon from ConteXtream. So this is another form of encapsulation, really — I mean, let's be practical, it's completely equivalent to VXLAN in some sense. So I don't want to argue about the merits of having GUE and VXLAN and NVGRE and a bunch of others; that's not the point. The problem I see is that this is an absolute overlay, and I already see confusion in the market, by customers with data centers and data paths, thinking that IPv6 solves their problem — and in fact nothing is solved by this format.
C
Well, but the overhead is important to me because, as I mentioned, we are trying to retrofit an existing data center with this capability. So I don't want to add any of the overhead normally associated with encapsulation; I want things to work just like they do today — no overlays, no virtual switches. If I get rid of all that, I just have what looks like the same network that I have today, only adding this new capability. Okay.
I
So that statement is misleading, because when we discuss overlays, there is this track which examines all the various encapsulations, and this is one of them. You want to do it in v6, you want to do it with Bloom filters, you want to do it with VXLAN — whatever you choose, each one has advantages: more bits, fewer bits, okay. But that's only a fraction of the problem of creating overlay networks. The biggest problem is conveying global information.
B
E
Eric,
not
Mike,
I
stays
on
the
mics
are
tuned
differently,
so
clarifying
question
the
I'm.
Assuming
that
the
host
you
know
the
guest
VM
whatever
will
go
to
a
DNS
lookup
and
get
back
128
bits.
Yes,
Korean
cuate
you're,
not
assuming
that
it's
going
to
change
whatever
bits
it's
using,
but
instead
the
actual
and
the
env
evilly
rewrite
the
locator
asked
to
host
moves
and
it
gets
stuff
from
the
centralized
can
open
start
resumption.
Mm-Hmm,
okay,.
E
Yeah — there is a question about whether that works with IPsec, or the cases where we basically cover the addresses: we have a pseudo-header checksum on those bits, right, and you need to change that. Because the ILNP proposal, I thought, was advocating at some point in time, at least, that you only checksum the lower 64 bits, etc., and only use the lower 64 bits in the host identification.
C
As for IPsec — there are certainly a lot of words in ILNP about that, which I think are useful. The checksum definitely is something to consider. We have to have the checksum work as it does today: on the wire, a TCP checksum has to cover the same pseudo-header, so we can't change the TCP checksum — which means, at the host —
B
This is — Eric, do you remember GSE? GSE was always supposed to do the checksum on the lower 64 bits; the locator was always zero. So when you went back to the inside, the other host would return it back to zero, and the checksums would be equivalent — I think, if it's symmetric communication. So —
J
One clarifying question. You mentioned one of the differences between this and an overlay is that you can run ICMP and other protocols on the upper 64 bits, or rather on the IPv6 address as it is today. Is there any reason you think, in the case of an overlay, you can't run those protocols, like ICMP, on the outer header addresses? So how is that different?
J
C
J
C
So that is a plan. We definitely want, in the encapsulation case, where we're using an additional header, to make sure that the outer headers actually allow for ECMP. That's why, for instance, when doing UDP encapsulation, the source port would be set to some sort of entropy representing the inner flow. So we get the same effect, but it's still not quite the same. In this case, the outer header is the only header, so the actual four-tuple of the TCP connection would be kind of represented by that. Thank you. Okay.
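The entropy trick just described can be sketched in a few lines. The hash function and the exact port range are illustrative assumptions, not a specification:

```python
# Sketch: an encapsulator derives the outer UDP source port from a hash
# of the inner flow, so ECMP hashing of the outer header spreads inner
# flows across paths while keeping each flow on a stable path.
import zlib

def entropy_source_port(inner_five_tuple: tuple) -> int:
    """Map the inner 5-tuple to a port in the ephemeral range 49152-65535."""
    key = repr(inner_five_tuple).encode()
    h = zlib.crc32(key)
    return 49152 + (h % 16384)

flow_a = ("10.0.0.1", "10.0.0.2", 6, 33333, 80)  # src, dst, proto, sport, dport
flow_b = ("10.0.0.1", "10.0.0.2", 6, 33334, 80)

# The same inner flow always yields the same outer port (stable path)...
assert entropy_source_port(flow_a) == entropy_source_port(flow_a)
# ...and the port always stays in the ephemeral range.
assert 49152 <= entropy_source_port(flow_b) <= 65535
```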
C
Oh, sorry, I've...
A
A
G
Okay, so this is about the mechanism for the NVA to distribute the mapping entries to the NVE. This particular draft has been discussed in the interims twice and we got lots of comments, so today I'm not going to repeat what we presented in the interims. For the people who didn't come to the interim, that would be your loss; so maybe remember, next time you have to come to the interim. So, in a nutshell: for the distribution we allow two mechanisms.
G
One is push and the other one is pull, and since the pull mechanism is already quite mature (LISP is using pull and it is very mature), the document itself focuses a lot on the push, especially incremental push: every time you get one change, how do you incrementally update the NVE? That's also the area where we got lots of comments, and so today I'm only going to address some of the new changes since the last interim.
G
So basically, for the push mechanism, the NVE has to announce to the NVA the VNs it supports. That happens whenever the NVE goes through a restart, or a new NVE comes up, or any kind of change causes the NVE to lose its connection with the NVA; then it will do a scoped announcement indicating which VNs it is supporting, so that the NVA can send the scoped mappings to the NVE. And we're suggesting using the IS-IS protocol, simply because we want to utilize the CSNP mechanism to do the incremental update.
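The CSNP idea can be sketched abstractly: the receiver summarizes what it already holds, and the sender pushes only what is missing or newer. This is an illustration of incremental synchronization, not the draft's wire format; all the names and sequence numbers are invented:

```python
# Sketch: the NVE summarizes the sequence numbers of the mapping entries
# it already has (CSNP-like), and the NVA pushes only the entries that
# are missing or newer, i.e. an incremental update.

nva_db = {"vm1": (3, "nve-a"), "vm2": (1, "nve-b"), "vm3": (2, "nve-c")}
nve_summary = {"vm1": 3, "vm2": 0}  # vm2 is stale, vm3 is missing

def incremental_push(db, summary):
    """Return only the mappings the NVE is missing or holds stale."""
    return {name: entry for name, entry in db.items()
            if summary.get(name, -1) < entry[0]}

update = incremental_push(nva_db, nve_summary)
assert update == {"vm2": (1, "nve-b"), "vm3": (2, "nve-c")}
```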
G
G
So, during the interim, the comments came up about how we represent the list of VNs, because we can have 24 bits in the field (actually, we have 32 bits), so the number of VNs each NVE has to represent can be very hard to express. For that reason we created three different ways for the NVE to express the VNs it supports, that is, the VNs the NVE is participating in. So, one way:
G
G
The second one: supposedly, if the network is designed very well, every NVE has a very nice range of VNs to support. That's the best case: basically a starting and an ending value; that's the easiest, best way we can support. The third one: suppose you have a simple virtual-switch-based NVE, where the number of VNs you support is very limited; then we can simply just express the list of VNs. So that's the last one.
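The representations just listed might be modeled like this. It is a sketch: the selection thresholds and return shapes are invented for illustration, not taken from the draft:

```python
# Sketch: expressing the set of VNs an NVE supports either as a single
# start/end range (well-planned network), an explicit list (small
# virtual-switch case), or a list of ranges (general case).

def as_ranges(vnis):
    """Collapse a set of VN IDs into sorted (start, end) ranges."""
    out = []
    for v in sorted(vnis):
        if out and v == out[-1][1] + 1:
            out[-1] = (out[-1][0], v)
        else:
            out.append((v, v))
    return out

def pick_encoding(vnis):
    ranges = as_ranges(vnis)
    if len(ranges) == 1:
        return ("range", ranges[0])    # one contiguous block: start and end
    if len(vnis) <= 8:
        return ("list", sorted(vnis))  # few VNs: explicit list
    return ("ranges", ranges)          # general case: list of ranges

assert pick_encoding({1, 2, 3, 4, 5}) == ("range", (1, 5))
assert pick_encoding({7, 42, 99}) == ("list", [7, 42, 99])
```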
G
F
G
A
H
C
A
So, yes, I think there will be more, but there hasn't been yet. I have some questions about how this fits the architecture in a centralized model, but it may not be something that is easily talked about in a large group; maybe email and some whiteboarding. So I don't know the answer to your question.
A
C
E
Eric, not Mark. So I think IS-IS can be used to do a distributed implementation as well, right, and it's not clear whether this is only doing centralized, because for some of these things I would imagine that in a centralized model it's actually a central controller that sends out, whatever it is, here's a bunch of IS-IS LSAs, whatever, that say: oh, you should have these VN IDs, and by the way you should have these MAC addresses, whatever, because I'm central and I know everything. All right, that's the way it's described there.
E
G
G
Well, in this particular draft, yes. And what you are saying, that the NVE can report what it has, that's a different aspect. So when we say distributed or centralized, there are two things. In NVO3 we assume there's an NVA with the information, and here we're talking about how we distribute it, granted that you can have multiple NVAs: you can have one NVA responsible for VN 1 to 10,000 and another NVA responsible for 10,001 to 20,000.
E
G
Here's the thing: we may have multiple instances of the NVA, right, NVA 1 responsible for 1 to 10,000, NVA 2 for 10,001 to 20,000, so that when an NVE announces itself, "I have VN 10," it is NVA 1 who will distribute the content to this NVE; NVA 2 would not do anything. So the NVA itself knows which VNs it is responsible for, so when it sends that link state, it will broadcast to all of the NVEs on those VNs.
A
For what it's worth, Eric, your question is exactly what I'm wondering, so I think we should have a discussion afterward to make sure we understand the same thing, and then we can have more discussion on the list. Okay, for the sake of time, let's just take one more question.
G
G
G
Are they always broadcast, or do you never know? So: this NVE is announcing the VNs it's participating in; that is broadcast, because it's not sure who has what information. So that's the link state. And a query, that's unicast: when I do a query, I know which NVA, by name, has the information, and I can send a unicast message to that particular NVA.
A
H
H
This is from the current split-NVE structure: the split NVE is a particular type of NVE that has its functionality spread across an end device, which is what supports the virtualization, and an external network device. So actually the control-plane protocol running between the hypervisor and the external NVE is a kind of intra-NVE protocol. We call the part located on the end device the tNVE, "t" for terminal, and the part running on the external NVE the nNVE, "n" for network.
H
Here we use "hypervisor" to loosely define the thing running on the end device; it can possibly be a container. So the control-plane protocol is defined by the framework draft as an intra-NVE protocol, and this draft gives a brief introduction on the state transitions of a VSI instance on an external NVE. So basically there are two things: one is that a VSI instance connects to a particular VN, and the other
H
is to disconnect from a particular VN. The diagram here gives the possible state transitions for a VSI instance on an external NVE, and we are basically trying to say there are two possible states: one is associated, another is activated, and the state transitions can be triggered by various VM events like migration, VM creation and shutdown, and possibly suspension. So there are transitions between the different states here. And next slide, yeah.
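The two states and the event-driven transitions just described can be sketched as a small table. The event names here are illustrative, not taken from the draft:

```python
# Sketch: associated / activated states for a VSI instance, with
# transitions triggered by VM lifecycle events (creation, start,
# suspension, shutdown, migration). Event names are invented.

TRANSITIONS = {
    ("disconnected", "vm_created"): "associated",
    ("associated", "vm_started"): "activated",
    ("activated", "vm_suspended"): "associated",
    ("activated", "vm_migrated_away"): "disconnected",
    ("associated", "vm_shutdown"): "disconnected",
}

def step(state: str, event: str) -> str:
    """Apply one event; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = "disconnected"
for ev in ("vm_created", "vm_started", "vm_suspended"):
    s = step(s, ev)
assert s == "associated"
```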
H
Oh, sorry: the previous slides are all a wrap-up of the existing document, and here we're trying to compare it with the VDP structure defined by IEEE 802.1Qbg, because we think, if we are talking about the layer-2 use case, actually VDP is a good candidate to be used in this scenario. So look at the bottom part of the picture.
H
Actually, this 802.1Qbg defined the EVB station and the EVB bridge, which are kind of equivalent to our end device and external NVE. They define VDP, which is a VSI discovery and configuration protocol, to run between the EVB station and bridge, and it carries the virtual machines' networking state between them. There is also a draft, written by some of the experts who designed this 802.1Qbg, that gives a gap analysis between the current VDP and some earlier version of the requirements document.
H
H
It is possible that some intermediate bridges sit between the tNVE on the end device and the nNVE on the external NVE, and the requirements draft talks about multihoming in many cases; requirement number 5 is about the VN connection and disconnection requirements. So the right-hand-side column here describes whether a VDP extension is required.
H
We can see that some of the extensions are required and should be straightforward, and some of them have already been supported by the current VDP, like number 5, where the VN is indicated by the GroupID in VDP. Also, the current VDP can send the de-associate from the bridge to the end device, which has also already been supported. And here the rest of the requirements are, what's it called, about:
H
yeah, the state transitions. We define the associated and activated states in the current requirements draft, and VDP has pre-associate and associate, which are like the equivalent states. And here is what's missing in the VDP: because VDP originally was designed for layer 2, but in NVO3 we are trying to support both layer 2 and layer 3, so we want to carry, well,
H
we hope it can carry the IPv4 and IPv6 addresses in the association messages, and also we want a clear indication of the VM migration events, for better implementation. Anyway, so here comes the summary of the possible extensions.
H
We require a specific unicast destination MAC address rather than the nearest-customer-bridge group multicast address; also we need a TLV for an integrity check; we need a new filter-info-format type for the IP address binding; and also a clearer migration indicator, which can be designed as a bit in the new filter info format. Also, we want to clarify the associate-to-pre-associate state transition.
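A sketch of what a new filter-info entry carrying an IP binding might look like on the wire. This layout (GroupID, MAC, address family, IP) is hypothetical, invented for illustration, and is not the 802.1Qbg format:

```python
# Sketch: packing a hypothetical filter-info entry that binds a VN
# GroupID and MAC address to an IPv4 or IPv6 address. Field order and
# sizes are assumptions, not the VDP specification.
import socket
import struct

def pack_ip_binding(group_id: int, mac: bytes, ip: str) -> bytes:
    """GroupID (4 bytes) + MAC (6 bytes) + family (1 byte) + IP address."""
    fam = 4 if "." in ip else 6
    ip_bytes = socket.inet_pton(socket.AF_INET if fam == 4 else socket.AF_INET6, ip)
    return struct.pack("!I6sB", group_id, mac, fam) + ip_bytes

entry = pack_ip_binding(100, bytes.fromhex("0242ac110002"), "192.0.2.10")
assert len(entry) == 4 + 6 + 1 + 4  # GroupID + MAC + family + IPv4
```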
H
So what we are going to do: we need to perform another round of draft editing, and we hope NVO3 can send a liaison to IEEE 802.1 requesting the following things. The first thing is some amendments; another is the extensions based on the requirements document, about a new filter info format and the new TLV types. I think that's the last slide, right? Okay, yeah, that's it.
A
A
As a tagged, you know, local link, whether that was an extension of the VAP or the VN. And the latest email I saw on this was Larry's comment that actually the architecture says the policy is inter-VN, not intra-VN, and so there seems to be a relationship here. I think Larry hit it on the nose, at least for what I was thinking about.
H
B
H
K
Yeah, I just wanted to clarify: there's some interest in 802.1 in doing this if NVO3 wants it done; if not, there is no interest in doing it. I mean, we're not pushing it, but it seems like it could be a good fit, and if NVO3 says, yes, this is the way we want to go, then I believe 802.1 will cooperate with that.
K
A
B
K
Then, I think, once we decide to go that way, or decide we have consensus to go that way, there will need to be some refining of exactly what we need to put in the filter info format. I mean, I think we know some of what needs to go in there, but we'll need to work that out to some extent, with guidance from NVO3, on exactly what fields and what variations, because right now we have four filter info formats that are defined for layer 2, for different situations.
E
Eric, not Mark, a clarifying question. I was just looking through the gap analysis to see whether there were any multicast-related gaps, and I didn't find the string "multicast", which could mean that everything is fine and we don't need to worry about anything. But I was a bit worried when I saw, okay, we're passing a unicast MAC address. Well, how does multicast work, L2 or L3? Is that just stuff that works because you're sort of associating your TS with, effectively, a VLAN or something, right?
H
The discussion is still underway, because originally VDP assumed there is a single hop between the end device and the NVE, so multicast is not an issue there. But here, in the NVO3 context, we are actually assuming there can possibly be an intermediate classic bridge, so in that case, like I said a couple of hours ago, probably we need some refinement and more investigation on this issue.
K
A
A
Like I said earlier, we have a session on Wednesday. Sorry, this is very small print, probably, for everybody, but this is the agenda, and it's online. We're going to talk about some OAM stuff, and then we've got some time carved out to have an open discussion about the data-plane adoption that's going on. If everyone's perfectly fine, we'll have a very short, nothing kind of discussion, but we have some time just in case. So, everyone, thank you for coming, and I look forward to seeing most of you, hopefully all of you, on Wednesday.