From YouTube: IETF115-PIM-20221109-0930
Description
PIM meeting session at IETF115
2022/11/09 0930
https://datatracker.ietf.org/meeting/115/proceedings/
C
Keep an eye on the queue over there. All right. Everyone, welcome. Welcome to PIM. This might be our shortest meeting in a long time. I mean smallest meeting; not sure how many people we have online right now. So I hope you all have signed the blue sheets, or rather scanned the QR code, or yeah, registered online.
C
It's
a
note.
Well,
let's
see
I
guess
you
need
to
drag
that
yeah
yeah!
You
should.
You
should
all
have
read
the
note.
Well,
nothing
new
I!
Think!
Then
we
have
the
agenda.
C
All right, okay. For the working group status: we have made some progress, a couple of new RFCs since last time. The YANG model has been published, and some IGMP/MLD extensions have been published. And yeah, at the bottom here we have three drafts that have requested publication. Assert packing needs a revised draft; I got some good comments there. The other two are in the queue. Then the DR improvement and backup DR drafts are not really making much progress there.
C
So, by the way, going forward we're trying to have shepherds for all our working group documents, ideally at the time of adoption. We think we should try to have a shepherd, so the shepherd can help, you know, make sure that the authors are making progress, make sure that discussions in the working group get incorporated in the document and so on, and also check whether the document is ready for last call, including reviewing the drafts.
C
If,
if
anyone
is
interested
in
being
a
Shepherd,
please
let
us
know
so
yeah
we
probably
won't
be
able
to
find
a
separate
for
all
our
documents
right
away,
but
at
least
when
we
do
our
last
call,
we
would
like
to
have
a
shepherd
so
second
part
of
the
status.
C
We have a number of documents that are not being discussed today. We have the IGMP/MLD-bis documents; there are like three documents related to that here, and we can probably do last call on those soon. Let's check with Brian Haberman, who's been working on that, what the status is, if he feels they're done. We just adopted the 3228bis that you see near the bottom there. So we did an adoption call; we didn't get much input, and we decided as chairs to adopt it anyway.
C
Okay, what else? Yeah, for the point-to-multipoint policy and the policy ping: Hooman, do you have any comments on those, or what do you feel the status is? Are they almost done, or...?
H
Yeah, there were some discussions in SPRING. They wanted us to clarify some stuff on the replication policy, and I think we submitted the clarifications to SPRING, and we are just waiting for last call again. That's where it is standing. Okay.
C
Great, yeah, thanks. We've also had this draft that I'm an author of, so as an author I hope it can go to last call soon, but I'll defer to Mike and the working group. Also, let's see... yeah, I think that's pretty much what we have for status.
F
Just
got
one
comment
about
the
going
back
to
human
sorry,
the
point
to
multi-point
policy,
peeing
I.
Finally,
remember
I
need
to
check
my
emails
again
but
I
think
I.
Remember
the
spring
chairs
telling
stick
and
I
to
hold
off
progressing
these
until
the
replication
segment
draft
and
spring
progresses.
Is
that
true.
H
Yeah, I have to apologize; I was running around trying to find my way through the staircase, so my mind was a little bit foggy here. So the comments that I gave were for the point-to-multipoint policy draft in PIM. There is a ping, and then there is the policy itself. My comments were with regard to the point-to-multipoint policy. With regard to the ping, last I sent out, I did some modifications to the draft.
H
I
did
send
send
the
email
to
the
mpls
working
group
because
they
had
some
comments
to
ensure
that
they're,
okay,
with
the
changes
and
I,
did
ask
for
a
early
adoption,
early
allocation
of
Ina
sub
tlv,
which
I
never
got
any
response
for
that.
But
I
did
trigger
all
this
stuff
on
the
on
the
working
groups,
but
so
far
I
have
not
got
any
response.
I
would
imagine
if
we
can
get
that
early
allocation
of
the
sub
tlv.
That
would
generate
some
response
from
mpls
working
group.
Perhaps
okay.
C
All right, so that was the status. Any comments or thoughts on that? But yeah, if you might be interested in shepherding a document, please get in touch with us. Otherwise we might start getting in touch with, I guess, various participants and see if they can help out there. Anyone would be welcome to shepherd; it's a good way to learn about the IETF process.
C
Okay,
so
then
I
guess
hormone
was
first
with
the
10
lights.
H
Okay, so I think the draft was adopted by the working group, so thank you very much for that. We had some very good feedback from Sandy and Tanmoy from Nokia, so I tried to summarize all this feedback in this slide to see what other work needs to be done. Obviously, any other feedback is greatly appreciated, so we can push this through to last call.
H
And maybe we can start looking at... So, so that everybody is aware of this draft: one of the ties it has is on the BIER side, for PIM signaling over a BIER domain, and this is how it was born. So any extra comments that can come along to push this draft through would be greatly appreciated. Next slide, please.
H
All right. So, first, the comment was: let's make sure it's very clear which message types we are going to support. Looking at IANA, which I always have a hard time saying, there are 12 message types, it seems, for PIM, and we are only going to support message type 3, which is the join and the prune. Basically, the trick here is that for most of these message types you need to have a PIM adjacency, via PIM hellos, before the PIM router accepts the message; that holds for most of these 12 message types. One of the exceptions for the PIM Light interface is that you do not need to have this PIM adjacency, with the back and forth of the hello messages, for the router to accept the PIM join and prune, message type 3. So that was one thing that we clarified, to make it crystal clear that there is no other message type that is supported. The second thing was PIM sparse mode: again, just because there are no hello adjacencies...
H
That
means
that
all
the
goodies
for
the
Pim
SM
need
to
be
within
the
pin
domain.
What
that
means
is
that
you
are
not
going
to
be
able
to
have
a
RP
or
a
Dr
on
the
other
side
of
the
team
light
domain.
So
if
there
are
two
pin
domains
that
are
attached
through
a
Pim
light
domain,
you
cannot
mix
and
match
your
RPS
and
and
designated
router
over
the
PM
light
domain.
So
we
did
the
clarification
on
that
one.
H
We
also
put
a
bunch
of
texts
around
the
fact
that
the
lack
of
Hello
message
means
to
Pim
light
routers,
and
one
of
the
things
was
the
join
attribute
that
we
might
get
in
some
of
the
join
messages.
Obviously
there
is
a
I
guess:
there's
a
flag.
I
can't
remember
quite
well.
Maybe
the
experts
know
better
than
I
do,
or
there
was
a
soft
tlv
that
was
communicated
in
the
Hello
message,
saying
that
the
router
supports
the
join
attribute
with
the
lack
of
the
Hello
message.
H
Basically,
it
comes
down
to
the
fact
that
if
the
software
on
the
router
does
support
join
attributes,
they
need
to
be
able
to
process
it.
If
they
don't,
then
they
won't
process
it
and
the
last
but
not
least,
is
the
Dr
selection
again.
Obviously,
that
needs
a
Hello
message
and
with
PM
light,
the
Dr
selection
needs
to
be
within
the
Pim
domain.
It
not.
It
cannot
happen
over
the
Pim
light
interfaces
or
across
beam
light
routers
to
do
that.
Dr
selection
next
slide,
please.
H
So,
with
regard
to
the
failures
of
the
Pim
light
interface,
today,
usually,
when
the
hello
adjacency
goes
down,
we
withdraw
a
bunch
of
multicast
routes
from
the
the
the
routing
table.
In
this
case,
we
are
kind
of
suggesting
that
there
could
be
other
type
of
protocols
that
can
be
used
to
detect
that
epim
light
interface
has
gone
down
as
an
example.
H
If
you
have
a
Pim
light
interface
over
a
layer,
2
Network,
where
there
are
multi-hop
away
from
each
other,
then
you
can
use
a
protocol
like
BFD
as
an
example
to
ensure
that
there
is
connectivity
on
your
Pim
light
interface
and
if
PFD
goes
down,
then
that
the
interface
is
withdrawn
from
the
multicast
route
and
that
oif
is
erased
from
the
multicast
raft,
and
this
will
kind
of
manage
the
the
the
traffic
the
downstream
traffic
going
out
of
the
multicastra.
E
Totally
sacred,
so
I
would
really
like
to
see
that
we
don't
use
standard
pin,
but
that
this
all
would
be
running
over
TCP
RFC,
6559
Port
right
because
we've
seen
especially
when
this
is
used
together
with
beer,
we're
supposed
to
be
able
to
much
easier
support,
really
large
amount
of
state
right,
because
the
state
isn't
in
the
core
anymore,
but
we're
doing
egress
to
Ingress
PE
signaling
in
all
of
the
large-scale
multicast
deployments.
E
What I hear of was always the problem of how to get the reliable big burst under reconvergence, right? So you're sending, you know, tens of thousands of PIM joins or PIM prunes, right, and it's just terrible, I mean. Everybody, you know, as a vendor, is adding smarts trying to make that work well. And then we finally gave in and said: okay, let's do what we did with mLDP, where people were using BGP as one of the factors. And so now we're redoing PIM.
E
Can
we
please,
you
know,
pick
up
our
own
stuff,
which
was
simply
you
know,
use
it
over
TCP
and
then
here
this
is
the
simplification
and
I
think
that
should
make
using
TCP,
as
we
did,
find
in
6559,
even
a
lot
less
difficult.
H
Okay,
yeah
I
appreciate
that
just
taking
a
step
back
again,
I
want
to
remind
how
this
thing
was
born.
This
thing
was
born,
as
you
know,
through
the
beer
right
and
some
of
the
ideas
we
were
toying
around
on
on
the
beer
side
was
that
we
made
it
very
clear
on
the
beer
side
that
this
joins
and
prunes
are
only
being
used
for
signaling.
They
are
not
something
that
are
part
of
the
pin
protocol
and
I
think.
H
At
that
point
of
time,
the
decision
was
made
between
some
parties
that
we
need
to
grab
the
pin
and
kind
of
water
it
down
to
become
distressed.
That's
fine
I
have
no
issue
with
going
over
TCP,
but
then
that
definitely
going
to
change
the
message
type
of
the
join
and
and
the
prunes
to
be
over
TCP
right.
So.
E
Okay. And, just, you know, there's the problem of the hellos, which I guess we're not going to have anyhow, and the discovery. But just because it's kind of used as an overlay doesn't change the problem of the bursty behavior, which is that we're triggering all these joins when there's a route change, right? And that can result in thousands of messages, and TCP nicely takes care of that by doing it reliably.
B
On
camera,
Cisco
Systems,
so
just
to
give
update
the
with
respect
to
this
particular
draft
I
think
it
is
completely
agnostic
to
whether
we
do
it
over
TCP
or
UDP.
So,
whatever
spec
we
are
defining
in
this
draft,
that
is
more
to
clarify
the
base
Pim
Behavior,
where
we
always
wanted
that
hellos
are
coming
before.
Even
we
could
do
any
signaling
over
that
interface.
E
No,
and-
and
hopefully
it
is
exactly
that
way,
but
then
I
think
we
should
still
have
a
Pim
light.
Implementations
must
support
a
default
to
Port
right
26559
right
so
that
we
finally
get
a
good
reason
to
to
to
get
Port
deployed
right
for
it
being
Pim
over
TCP
right.
So
that
was
the
marketing
name
that
we
gave
it
right.
So
there's
there's
no
reason
to
say
we.
We
should
continue
to
have
a
Pim
datagram
as
the
default
for
for
something
like
this.
H
Okay,
I'll
read
that
draft
and
I'll
open
up
a
conversation.
I
6.5,
nine
other
than
that
running.
They
just
process
observation
I
know
that
Sandy
did
the
adoption
call
please
reflect
that
in
the
data
tracker
Lister
as
the
shepherd
make
her
a
delegate
in
the
working
group,
so
she
can
push
all
the
buttons.
This
is
because
both
of
both
the
chairs
are
authors
in
this
draft,
so
yeah.
We
need
to
make
sure
that
everything
looks
correctly.
So
please
just
make
sure
that
everyone
knows
that
it's
Sandy
the
one
who's
driving
everything
here.
H
Next
slide,
please
yeah,
so
I
think
we
covered.
This
is
like
two
I
was
looking
for
comments
and
stuff
like
that.
So
that's
all
good.
K
Right, thanks. Good morning, everybody. So, just an intro slide, and this is a link to the draft document that we have out there currently. Go ahead and flip. So, just to kind of go through the problem that we're looking at here again:
K
This
is
primarily
for
boat
networks,
and
these
are
kind
of
characterized
by
everything
is
on
a
single
subnet.
We
have
displays
and
sensors
and
structure
devices
all
kind
of
connected
with
each
other.
Typically,
there's
no
internet
connection
available
to
these
devices
and
the
problem
we're
trying
to
face
is
that
in
a
situation
like
this,
you
might
have
a
one
gigabit
sensor
on
a
network
with
several
other
lower
speed:
sensors,
maybe
100
megabit
sensor.
And
if
you
have
that
gigabit
sensor
streaming
multicast
data,
then
it
a
standard
switch.
K
Will
output
that
multicast
data
and
overwhelm
the
link
to
that
100
megabit
sensor
there
so
go
ahead
and
flip.
K
So
the
solution
here
is
to
use
multicast
snooping
to
prevent
those
streams
from
going
out
to
the
100
megabit
sensor
there
and
that
forwards.
That
causes
a
switch
to
only
forward
the
packets
to
devices
on
ports
that
request
them
go
ahead
and
flip
so
kind
of
a
review
of
some
of
the
the
multicast
or
some
of
the
IPv6
addresses
that
will
kind
of
build
up
to
the
problem
that
we're
facing
with
this
based
off
of
our
c4291.
K
This
is
the
definition
of
a
unicast
address
and
you'll
see
that
the
least
significant
64
bits
is
known
as
the
interface
ID
or
IID
flip.
K
Then, from there, you have the definition of the unicast-prefix-based multicast address, where you've got the 64-bit network prefix there, and the group ID is the least significant 32 bits. And then, finally, RFC 4489 provides a mechanism for generating a link-scoped IPv6 multicast address. So basically, any link-local IP address comes with a block of multicast addresses that it can assign, based off of its link-local address. So, based off of this:
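The link-scoped construction described here can be sketched in a few lines. The byte layout used below (FF, flags/scope nibbles, a reserved byte, a plen byte of 0xFF, the 64-bit IID, then the 32-bit group ID) is my reading of RFC 4489, so verify the exact layout against the RFC before relying on it:

```python
import ipaddress

def link_scoped_mcast(link_local: str, group_id: int) -> ipaddress.IPv6Address:
    # RFC 4489-style link-scoped multicast address (layout assumed here):
    #   FF32:00FF:<64-bit interface ID>:<32-bit group ID>
    # flags = 3, scope = 2 (link-local), plen byte = 0xFF.
    iid = int(ipaddress.IPv6Address(link_local)) & ((1 << 64) - 1)  # low 64 bits
    value = (0xFF32 << 112) | (0x00FF << 96) | (iid << 32) | (group_id & 0xFFFFFFFF)
    return ipaddress.IPv6Address(value)
```

For example, a node with link-local address fe80::1 that picks group ID 0xC0000001 would derive ff32:ff::1:c000:1.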
K
Section 4.1 talks about permanent IPv6 multicast addresses, and there's the range for those. Section 4.2 talks about the multicast group identifiers allocated by IANA; again, there's a range for that. And then section 4.3 has a range specific for dynamic multicast addresses, from hex 0x80000000 all the way through 0xFFFFFFFF. Next slide.
K
So
the
problem
is:
when
you
go
to
transmit
these
multicast
addresses
on
ethernet.
What
can
happen
is
the
the
group
ID
being
in
the
least
significant
32
bits
on
ethernet.
You
only
have
48
bits,
and
so
the
first
two
octets
are
three
three
three
three
and
then
the
last
four
octets
are
the
least
significant
32
bits,
which
is
the
group
ID
there,
and
so
what
can
happen
here
is,
even
though
your
link,
scoped,
IPv6
multicast
address
is
unique.
The
128-bit
address
is
unique
when
that
goes
out
on
ethernet.
K
...if different nodes choose the same group ID, then you end up with different link-local addresses sending to the same Ethernet MAC address, and you can get collisions there. Next slide.
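The 48-bit squeeze described above is easy to see in code. This is the standard RFC 2464 mapping (33:33 plus the low 32 bits of the IPv6 address), shown with two distinct addresses whose identical group IDs collide at layer 2; the example addresses are made up for illustration:

```python
import ipaddress

def ipv6_mcast_to_mac(addr: str) -> str:
    # RFC 2464: the Ethernet destination is 33:33 followed by the
    # low-order 32 bits of the IPv6 multicast address (the group ID).
    low32 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFFFF
    return "33:33:" + ":".join(f"{(low32 >> s) & 0xFF:02x}" for s in (24, 16, 8, 0))

# Different IIDs, same 32-bit group ID -> same MAC: a layer-2 collision.
mac_a = ipv6_mcast_to_mac("ff32:ff::aaaa:1:c000:1234")
mac_b = ipv6_mcast_to_mac("ff32:ff::bbbb:2:c000:1234")
```

Both calls yield 33:33:c0:00:12:34, even though the 128-bit addresses differ.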
K
So we have a number of requirements for the solution here. First, it has to be zero-configuration: most of the customers do not have expertise in network configuration, and they're primarily there to use the products; they don't know much about networking in general. No internet connection; networks are typically a single subnet. And then we need a unique Ethernet destination address: RFC 4541, which talks about multicast snooping...
K
It
had
kind
of
a
survey
on
switch
vendors
and
their
products
and
based
off
of
that
most
switch
vendors,
do
not
support
looking
at
the
IPv6
destination
address,
or
only
looking
at
the
ethernet
address.
Furthermore,
the
switch
parts
at
the
desired
price
point
for
these
products.
They
do
not
support
Source
specific
multicast.
So
what
we
really
need
here
is
a
unique
destination
Mac
address
for
every
individual
multicast
stream
on
the
network.
K
We
also
need
it
to
be
a
decentralized
solution
so
that
we
can
avoid
a
single
point
of
failure,
which
is
common
on
networks
that
kind
of
rely
on
that
that
provide
safety,
an
operation
of
a
vehicle.
So
the
existing
solution
for
dynamic
assignment
madcap
discussed
in
RFC
2730
that
relies
on
a
server.
So
that's
a
central
point
of
failure,
and
so
we
cannot
use
that
directly
and
then
we
also
want
to
provide
for
multiple
streams
coming
from
the
same
host.
K
Previously we talked about ZMAAP, as there was a draft out there that sought to solve this problem, but it wasn't advanced beyond draft state. So we had talked about kind of resurrecting that, but ultimately decided against it. So go ahead and flip to the next slide.
K
So we would update that table, or that section, in RFC 3307. Section 4.3 was specifically for dynamic allocations, and it allocated basically half the range of group IDs to host-based allocations. The idea there is to kind of subdivide that further and designate a range specifically for zero-configuration allocations; we'll go into specifics on the next slide. Step two would be: the application that wants to send a multicast stream generates a random group ID in the zero-configuration range. And then, step three:
K
It registers the multicast DNS records before the initial use, and it also continuously monitors those mDNS records, in case somebody else decided that they were going to use it and their probing had somehow failed. And then the last step there is that the application uses the group ID to generate the link-scoped IPv6 multicast address and then goes ahead and transmits.
K
So
in
the
next
two
slides
we're
going
to
focus
on
specifically
on
that
update
to
RFC,
3307
and
use
of
multicast
DNS,
the
the
random
group
ID
and
then
using
the
group
ID
to
generate
the
links.
Go
multicast
addresses
fairly
straightforward,
next
slide.
K
Okay,
so
RFC
3307,
section
4.3.
It
specifies
kind
of
two
ranges
in
4.3.1.
It
talks
about
server-based
allocations,
which
would
be
madcap
and
then
4.3.2
talks
about
host
based
allocations
and
in
both
cases
they
specify
the
range
there
800
through
FFF,
for
both
of
those
and
the
proposal
here
is
to
subdivide
that
and
say
half
of
that
800
through
b
ffff
would
be
for
madcap
or
server-based
allocation
and
then
to
to
take
a
range
c0
through
cff
and
say
that
is
a
reserved
for
an
mdns
base.
K
Is
your
configuration
algorithm
and
d00
through
Fe
FFF,
that's
reserved
for
future
zero
configuration
or
host
based
allocations
in
case
there's
other
ones
in
the
future,
and
then
something
that
was
kind
of
left
out
of
our
c3307.
That
I
think
is
important.
To
note
here
is
that
the
range
ff00
through
ffff
is
for
solicited
node
multicast
addresses,
and
so
we
wouldn't
want
to
to
allocate
those
or
indicate
that
those
are
available
for
a
a
dynamic
allocation,
and
that
should
be
reflected
in
the
table
next
slide.
K
So, two slides here talking about how we would use mDNS to ensure that the group ID is unique. First, a note on how DNS uses PTR records to perform reverse lookups; we see the same pattern in mDNS as well. RFC 8501 has these examples: if you want to provide a reverse-lookup record for the IP address 192.0.2.1, you create a PTR record with the name...
K
...1.2.0.192.in-addr.arpa, and that in-addr.arpa is specifically for IPv4 addresses; note that the octets are listed there in reverse order, least significant octet first. Similarly, when you go to IPv6, you're taking your IPv6 address, taking each hexadecimal nibble, starting with the least significant, and putting a dot in between, and then you end with ip6.arpa. So the proposal uses a PTR record for layer 2 addresses, and this specifically focuses on Ethernet addresses.
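The reversal convention just described can be checked directly, since Python's standard library generates the in-addr.arpa and ip6.arpa owner names:

```python
import ipaddress

def reverse_name(ip: str) -> str:
    # .reverse_pointer emits the in-addr.arpa (IPv4) or nibble-reversed
    # ip6.arpa (IPv6) owner name used for reverse-lookup PTR records.
    return ipaddress.ip_address(ip).reverse_pointer

print(reverse_name("192.0.2.1"))  # 1.2.0.192.in-addr.arpa
```

The same call on an IPv6 address produces the full 32-nibble name ending in ip6.arpa.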
K
If
there
are
other
layer
2
technologies
that
we
need
to
support,
we
would
need
to
have
separate
arpa
domains
for
those.
But
in
this
case
the
example
being
your
your
Mac
address
there,
which
is
a
IPv6
multicast
address,
because
it
starts
with
3333.
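By analogy with ip6.arpa, the layer-2 reverse name could look like the following; note that the `ether.arpa` label is a placeholder of my own, since the draft's actual reverse domain for Ethernet addresses is not quoted in the session:

```python
def mac_ptr_name(mac: str, domain: str = "ether.arpa") -> str:
    # Mirror the ip6.arpa convention for a 48-bit MAC address: hex
    # nibbles, least significant first, dot-separated, under a
    # reverse-lookup domain.  "ether.arpa" is an assumed placeholder,
    # not necessarily the draft's actual name.
    nibbles = mac.replace(":", "").lower()
    return ".".join(reversed(nibbles)) + "." + domain
```

For the multicast MAC 33:33:c0:00:12:34 this yields 4.3.2.1.0.0.0.c.3.3.3.3.ether.arpa.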
K
So
once
the
the
application
it
generates,
its
random
group
ID
and
it
it
determines
the
ethernet
address
from
there
and
it
then
probes
the
network
for
for
a
PTR
record
with
that
name
and
it
uses
the
mdns
probing
algorithm
described
in
RC,
oh
in
the
multi-sd
and
srfc
in
section
8.1.
It
sets
up
a
continuous
query
for
that.
Ptr
record
and
continuous
query
means
that
if
anybody
else
also
generates
that
and
starts
advertising
it,
then
then
your
multi-gast
DNS
or
ndns
responder
software
will
let
you
know
next
slide.
Please.
K
And the record data would be the device's hostname, a colon, and the source port of the multicast stream. So there's an example there: myhost.local:56296. And I just wanted to note that the reason the source port is incorporated in this manner is that it allows for multiple applications to be on the same host; if you had two applications on the same host that happened to generate the same PTR record, there would be no way to know that without including that source port there.
K
That way you know it is you, and not somebody else on the same host. And then the existing draft basically has a recommendation that the application should retain that group ID value in long-term storage and use it the next time the multicast stream is transmitted.
K
So
this,
when
you
first
set
up
a
network,
there
may
be
some
conflicts
that
occur,
but
by
storing
the
group,
ID
value
and
long-term
storage
and
reusing
it
next
time
that
basically
ensures
that
your
you're
you're
only
going
to
run
into
that
conflict.
The
first
time
the
network
has
started
up
or
anytime
you,
you
might
add,
a
new
device
to
the
network
after
that.
K
That
conflicts
shouldn't
happen
and
your
network
won't
have
to
go
through
that
that
conflict
resolution
process,
where
you
have
to
generate
a
new
group,
ID
probe
again
and
transmit
or
register
that
PTR
record
next
slide.
K
So
that's
that's.
Basically,
it
wanted
to
open
up
for
any
questions.
L
And could you go to the last slide? You said you have to advertise the port number. You probably should advertise the port number with the multicast address, so you know which port to use with which multicast address, because there could be a source that's running multiple applications and needs to use multiple ports, correct?
L
Are you going to use the same port with multiple different addresses?
K
So
in
this
case,
are
you
meaning,
like
the
destination
port
or
the
source
Port?
This.
L
Okay, so to answer my question then: if the source is running two applications, and 56296 is being used for one application, are you advertising another port for the other application?
K
Right,
so
what
we
would
hope
to
see
here
is
that
the
two
different
applications
generate
different
group
IDs,
and
so
the
the
name
of
the
PTR
record
is
going
to
be
different.
It
would
have
a
different
set
of
different
address.
Dot.
C
This is Stig. One thing about the colon: I don't think that's allowed in a domain name. I could be wrong, but yeah, I'm not so sure you can use that. But otherwise, you're assuming that any application using a group ID in this range would use this scheme, right?
C
Okay. So I'm not sure what the MADCAP status is, whether it's being used for IPv6 today, but we may have to update, you know, the MADCAP RFC or something like that, since we are constraining, you know, what range it can use.
C
We
probably
need
some
review
of
this
I
think
from
people
working
on
mdns
or
possibly
not
sure
some
IPv6
experts
as
well,
but
let's
show
that
it
works
well
for
us.
You
know
that
we
have
a
design
that
works
for
your
use
case.
First
of
all,
okay,
that's
my
comments.
Alvaro.
I
So, you know, it sounds like the working group in general, even though I don't think you have asked for adoption, thinks that this is, you know, a fine solution; there are some questions that you can, you know, maybe address, or whatever. The question that I have, not for you but mostly for the chairs, is: should we do this work? Meaning, this is really an application of mDNS and an update of an IANA registry.
I
So
what
I
would
probably
want
to
ask
you
guys?
Stegan
and
Mike
is
to
figure
out
with
probably
the
I
think
it's
the
DNS
is
the
working
group
that
does
some
DNS
and
maybe
six
men
to
figure
out.
What
do
we
do?
I
Right
I
mean
it's
probably
okay
for
us
to
process
this
document,
since
it's
a
multicast
application
in
the
end,
but
we'll
need
to
coordinate
with
them
the
RC
that
3307
was
produced
by
working
group
that
doesn't
exist
anymore,
and
so
we
need
to
figure
out
who,
who
you
know
still
owns
that
and
and
how
can
we
really
modify
the
registry
if
six
men
or
whoever
they
appear
six
guys
who
are
going
to
own?
This
are
okay
with
that,
and
and
how
do
we
get
to
that?
I
So
we'll
need
an
awkward
Nation
with
with
those
other
working
groups.
It
might
be
that
you
know,
given
that
we're
saying
that
this
is
a
good
multi-ass
solution,
that
we
can
you'll,
let
mdns
or
the
NSD,
for
example,
process
a
document.
So
we
need
to
figure
that
out
and-
and
you
know
the
the
less
overhead
that
Nate
can
see
the
better,
so
you
guys
do
all
the
work,
and
let
me
know
if
you
need
anything
from
you
know.
We
have
now
thanks
yeah
I.
F
Yeah, this is Mike McBride. I think Stig's gonna have to drive that part, because I'm on the draft, and I think it's a good idea. So Stig will drive that. Brian Haberman is the one and only author of 3307; I don't see him in the queue, but we'll make sure that he's involved in this as well. And my last comment is: is MAGMA the working group you were talking about that no longer exists?
I
I think it sounded like multicast allocation, or something, which I just looked up, and it happened to be in the transport area for some reason. Okay, I don't know why. So we probably could ask Brian, because it turns out that the registries are expert-review registries.
I
We probably also need to talk to Erik Kline and the other responsible ADs to figure this out, because they're probably going to have to appoint an expert, and it would be great if that was Brian, because he's already, you know, maybe familiar with this and has at least seen the draft, and all he needs to do is approve whether it's okay for the range to be different, right, not the application itself. So, I mean, there are just a bunch of different parts here that we don't control, and we need to just make sure that they're okay, right?
L
To respond to your comment: you know, we live in a decentralized world now, and we need unique group allocation for applications in general. The problem needs to be solved. It turns out this solution is solving it only for a layer 2 network; I think we need to generalize it and do it for layer 3 as well. So when you talk about it in that context, then this working group should work on it. Yeah, yes.
C
Stig
again,
yeah
I
agree
and-
and
that's
actually
one
of
my
concerns
with
this-
maybe
is,
if
you
come
up
with
something
like
site,
local
or
or
more
Global,
you
need
to
make
sure
that
they
don't
conflict
with
each
other
I'm
also
wondering
today.
If
someone
uses
a
group
address
in
this
range,
just
by
chance
or
whatever,
how
do
we
gracefully
you
know,
detect
that
or
or
how
does
what.
L
Yeah, I have the same concern you do, because this solution is using multicast to allocate a multicast address.
E
Yeah, also, thank you very much for the draft. I also don't know yet what to think about the technical solution, but, you know, as much as I would like it, we may also run into issues of how good it is in the end, right? I mean, this is not the first time that the best thing we were actually able to get done isn't the best thing we would like to see, and in doubt I would rather err on the side of getting something useful deployed than waiting for something much better.
C
Yeah
all
right:
okay,
thanks
yeah,
so
I
think
I'll
reach
out
to
to
Brian
and
potentially
some
other
people
and
get
some
input
and
taking.
C
We
yeah,
we
would
like
you
all
to
read
this
and
you
know
see
if
you
like
the
idea
see
if
you
can
come
up
with
something
better
or
improvements
to
this,
so
yeah
I
kind
of
wonder
how
many
people
have
read
this
I,
don't
think
we're
ready
to
adopt
yet
but
kind
of
curious.
Just
if
you.
C
...mostly have read it, or if it's mostly new to the working group. I guess we need that, Paul, if you want to set that up, yeah. So, just to get some quick idea, I just want to do a quick poll to see how many people have read this document.
C
Yeah
think
we're
okay,
then
we
don't
need
to
know
exact
numbers,
but
I
suffer
see
six
people
I've
read
it
and
six
people
have
not
read
it.
So
that's
really
good
yeah.
So
please,
please
read
this
document
if
you're
interested
and
any
feedback
on
the
list
would
be
great
and
in
the
meantime,
before
the
next
meeting,
I'll
yeah
reach
out
to
some
external
people
and
see
what
we
can
do.
C
I was thinking, what is... go ahead.
G
Okay, hello, everyone. I'm Hongji, from Ericsson. Today let me introduce the new individual draft about the EVPN multicast YANG model.
G
Now let me introduce some background about this draft. There is an existing IETF EVPN YANG draft; it defines the IETF YANG data model for Ethernet VPN services, and it covers the following. Five types of EVPN routes are defined in RFC 7432 and RFC 9136: type 1 is the Ethernet auto-discovery route; type 2, the MAC/IP advertisement route; and type 3, the inclusive multicast Ethernet tag route.
G
And this RFC document describes the IGMP and MLD proxy for EVPN. It defines...
G
...type 6, the selective multicast Ethernet tag route; type 7, which is the multicast membership report sync route; and type 8, which is the multicast leave sync route. Type 7 and type 8 are optional, and they are used to optimize multicast on the access segment.
G
Here is the structure. We augment the EVPN instance and add three new attributes for the three new EVPN routes. The first one, SMET advertisement: if it is enabled, the BGP route will be published. Next, the EVPN IGMP proxy: if it is enabled, it will trigger an IMET route update with the Multicast Flags extended community. And if the sync process is set, it means the IGMP sync policy for the EVPN is enabled.
G
These are the details about the three EVPN routes. We augmented the EVPN instance routes. The first block is the SMET route, the information about the SMET route; the RD and RT information are similar to the other EVPN routes, and the Ethernet tag item is used to identify a broadcast domain. The second block is the multicast membership report sync route, the type 7 route, and the Ethernet segment identifier is the ESI information. The last block is the multicast leave sync route.
G
And we welcome more comments. Thank you.
C
Yeah
hi,
it's
the
Stig
here,
so
the
evpn
bgp
routes.
What
I
should
say
that
you
are
you
know
describing
here
they
are
all
specified
and
best
right.
C
Yes,
so
have
you
have
you
tried
to
get
this
presented
in
in
the
best
working
group.
G
I
saw
that
this
stuff
is
used
to
handle
the
European
multicast
service
include
igmp
packets
or
the
MLG
package.
So
I
made
the
first
present
presentation
in
our
pin
working
group
in
the
P
Morgan
group.
C
I
would
say
at
least
it's
it's.
It
relies
heavily
on.
C
You
know
an
RFC
from
best,
so
so
yeah
I
think
it's
good
to
present
it
there
I
guess
you
have
to
think
about
which
working
group
it
would
you
know
fit
fit.
You
know,
you
know
what
which
working
group
should
be
the
home
or
the
document
or
owner
of
the
document,
but
but
yeah,
it's
probably
good
to
get
it
presented
there
at
least
get
input
from
people
invests.
F
Right
so
Stig
and
I
will
talk
we'll
reach
out
to
the
to
all
the
chairs
and
we'll
we'll
help
you
out
hanji
to
know
where
to
take
this.
M
Do you want me to just...?
F
M
M
M: There seemed to be support in MOPS to adopt this on Monday — it looked pretty good in preliminary polling — but I'm also going to present in MBONED, in case that's another relevant working group for this. I wanted to share it with this group because TreeDN is a model that utilizes the protocols that PIM has been developing, along with some protocols other groups have been developing, and I wanted to make people here aware and get some feedback. TreeDN is a tree-based CDN model that's optimized for live streaming to mass audiences, and the problem we're trying to solve with TreeDN is that live audience sizes are beginning to approach critical mass.

M: Combined with increasing bit rates for things like 4K, 8K and augmented reality — are we at an inflection point for the amount of resources that live streaming is consuming on the network?

M: The most recent example of this is Thursday Night Football, American football. The NFL is, for the first time, being exclusively live streamed, by Amazon Prime. That started about two months ago, and on the first night Amazon announced there were over 11 million streams — and this is pretty high interest.

M: So given increasing audience sizes and increasing bit rates, are we at an inflection point now? If not now, will we ever be? And if the answer to either of these questions is yes, the question is: what should we do about it? One thing to keep in mind is that live streaming is not the same as on-demand streaming.
M
There
are
some
people
who
have
kind
of
suggested.
What's
the
difference
you
know
streaming
is
streaming.
If
people
aren't
watching
a
live
stream,
they'd
probably
just
be
watching
a
Netflix
movie.
M
So
it's
and
that's
been
working.
Just
fine.
A
key
thing
to
keep
in
mind
is
that
live
stream
is
different.
It
has
different
requirements.
The
most
notable
is
expectations
for
latency.
M
M
You
could
get
a
text
message
from
your
friend
about
the
game-winning
score
a
minute
before
it
happens
a
minute
before
you
see
it
or
you
hear
your
neighbors
or
the
bar
across
the
street
everybody's
cheering
for
something
that
you
know
you
you
end
up
seeing
a
minute
later
and
if
for,
if
you
want
to
do
things
like
micro,
betting
or
in-game
betting,
where
you
can
actually
bet
on,
you
know
we'll
we'll:
what's
the
next
pitch
going
to
be
like
or
you
know,
will,
will
they
score?
M
Well,
the
team
score
in
this
possession,
the
the
latency
requirements
are,
even
you
know,
far
far
less
far
lower.
For
that.
Another
key
thing
to
keep
in
mind
is
join
rates
are
vastly
different.
You
know,
if
you
look
at
the
graphs
for
on-demand
streaming,
it's
it's
predictable.
It
looks
about
the
same
every
night
and
it's
fairly
smooth
kind
of
increases.
M
You
know
around
eight
o'clock
and
decreases
around
midnight
when
it
comes
to
live
streaming.
Events.
Things
like
you,
know,
sporting
events,
it's
more
like
a
step
function
with
everybody
joining
all.
At
the
same
time,
next
slide.
M: In terms of network-based replication, multicast has been around for a while, and it's been fairly successful in some places. It's absolutely vital on financial networks; it's widely used on video distribution networks; for VPN service providers it's a checkbox requirement; and it's also used pretty widely on some enterprise networks.

M: When it comes to internet multicast — not so successful. So it's worth asking what went wrong, and that's really what we're talking about here: this is an internet multicast, over-the-top use case.

M: You can get lots of opinions from different folks about what went wrong with internet multicast, but I think the three most impactful issues are these. First, the all-or-nothing problem: every layer-3 hop between source and destination must be multicast enabled. Essentially every interface on every single router and firewall on the internet needs to be running some type of multicast routing protocol in order for it to work, and if any of those links aren't, then multicast won't work. I think we're all too familiar with this problem.

M: Likewise, there's the "it's too complex" problem: operators have been complaining for decades that multicast is just too difficult and too complex to deploy, troubleshoot, operate and manage. And then there's the chicken-and-egg problem: there's no multicast content because there's no multicast audience, and there's no multicast audience because there's no multicast content. Those have been the three big things that have plagued internet multicast.
M: The good news is, this isn't your grandfather's multicast anymore. We have technologies that have evolved and advanced to address these problems, and we're in pretty good shape — which folks in this room might know about, but outside of this room they're not as widely understood. Next slide.

M: So TreeDN is a tree-based CDN, and it leverages native multicast as well as overlay concepts to deliver a service to end users, and the service can be delivered even when certain parts of the network don't support multicast. There's the native piece — think of this as on-net: the part of the network that is multicast enabled — and the key component of this is simply SSM. I think we all know what SSM is in this room, but outside of this room, again, a lot of people have made up their minds and made decisions about multicast based on the ASM model.
M
A
lot
of
opinions
are
based
on
things
that
are
no
longer
relevant
or
were
only
relevant
15
years
ago,
with
the
ASM
model.
The
key
thing
about
the
key
benefit
of
SSM.
Is
it
vastly
simplifies
deployment
with
SSM?
You
can
get
rid
of
data
driven
State
creation,
RPS
msdp,
Pim,
register
messages
in
cap
and
dcap
share
trees.
Tree
switch
over
all
of
these
things
that
that
contribute
the
complexity.
You
know
I'd
say
over
90
percent
of
the
complexity
of
multicast
is
Asm.
M
You
take
away
when
you
when
you
move
to
SSM,
you
eliminate
the
vast
bulk
of
that
complexity,
so
that
kind
of
addresses
the
the
it's
too
complex
problem
in
terms
of
SSM.
It
can
be
realized,
typically
realized
with
Pim
SSM,
but
could
also
you
know
in
any
any
protocol
control,
plane
protocol
or
data
plane
protocol,
whether
it
be
mlbp,
GTM,
bgp
and
VPN.
Beer
srmpls
could
be
leveraged
to
support
SSM.
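As a concrete illustration of why SSM pushes the complexity out of the network: a receiver asks for exactly one (S,G) pair, so the join reduces to packing a `struct ip_mreq_source` for the socket API. This is a minimal sketch assuming a Linux host; the group and source addresses are illustrative examples, not values from the meeting.

```python
import socket
import struct

# Hedged sketch: an IGMPv3 (S,G) join from a receiver host on Linux.
# The group/source addresses below are illustrative examples only.
GROUP = "232.1.1.1"       # 232/8 is the IPv4 SSM range
SOURCE = "198.51.100.10"  # example source address (TEST-NET-2)

def ssm_mreq(group: str, iface: str, source: str) -> bytes:
    """Pack struct ip_mreq_source: multicast address, local
    interface address, then the source address being joined."""
    return (socket.inet_aton(group)
            + socket.inet_aton(iface)
            + socket.inet_aton(source))

mreq = ssm_mreq(GROUP, "0.0.0.0", SOURCE)
assert len(mreq) == 12  # three IPv4 addresses, 4 bytes each

# On a multicast-capable host a receiver would then do (not run here):
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.bind(("", 5001))
# sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_SOURCE_MEMBERSHIP, mreq)
```

Note there is no RP, MSDP peering, or register machinery anywhere in this picture: the host names the source directly, which is the simplification the presenter is describing.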
M
So
that's
the
native
piece
and
then
there's
the
overlay
and
the
key
component
of
the
overlay
is
AMT,
which
dynamically
builds
tunnels
to
end
hosts
or
or
even
application
layer
to
hop
over
the
unicast
only
parts
of
the
network.
It's
it's
most
valuable
at
The,
Last
Mile,
where
the
last
mile
doesn't
support
multicast.
You
can
just
tunnel
over
that
part,
but
it
can.
It
could
be,
in
theory,
be
used
to
support
middle
mile
or
or
first
mile
problems
pretty
much
anywhere
where
multicast
isn't
natively
available.
M
Amt
can
provide
that
bridge
on
The
Last
Mile,
it's
a
nice
benefit
is
it
simplifies
you
can
avoid
some
of
the
Wi-Fi,
the
issues
of
Wi-Fi
and
multicast
I.
Think,
like
you,
you
know
this
well
and
there's
been
some
work
on
this
and
as
well
as
other
in-home
issues
that
might
make
multicast
challenging
so
AMT,
comma
hops.
M
Over
those
places
and-
and
you
know,
from
a
service
provider
perspective,
you
just
push
it
out
to
your
border
and
then
you
know,
send
it
tunnel
it
and
send
it
to
the
end
user
and
and
they
take
care
of
it
from
there.
M
So
the
key
thing
about
these
overlays
and
you
know
specifically
AMT-
is
it
it
solves
the
All
or
Nothing
problem.
Now
we
can
hop
over
those
parts
of
the
network
that
don't
support.
Multicast
also
solves
the
chicken
and
egg
problem
because
in
theory
with
AMT
in
theory,
any
host
on
the
internet
can
be
a
potential
audience
number
talked
about
AMT,
but
any
overlay.
Networking
technology
that
you
know
enables
multicast
could
be
leveraged
and
lisp
is
a
good
example.
M
So
AMT
is
probably
the
most
common
in
use
case
just
because
it
it's.
It
was
designed
to
solve
this
last
mile
problem
and
it
can
be
integrated
into
the
host
or
the
app
or
the
even
a
home
Gateway,
but
but
lisp
you
know
has
some
nice
benefits
it.
Can
it
can
help
on
the
actually
on
the
sourcing
side,
which
is
something
that
AMT
can't
doesn't
doesn't
support?
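The AMT handshake mentioned above starts with the gateway locating a relay. A minimal sketch, following the message layout described in RFC 7450, of the first packet a gateway sends — a Relay Discovery toward the AMT anycast address; the specific constants are from the RFC, but treat this as an illustration, not a complete implementation.

```python
import os
import struct

# Hedged sketch of the first step an AMT gateway takes (per RFC 7450):
# send a Relay Discovery message toward the AMT anycast address, then
# wait for a Relay Advertisement naming a unicast relay to tunnel to.
AMT_ANYCAST = "192.52.193.1"  # within the IANA-assigned AMT anycast prefix
AMT_PORT = 2268               # IANA-assigned AMT UDP port

def relay_discovery(nonce: bytes) -> bytes:
    """Build an AMT Relay Discovery: version 0, type 1, three reserved
    bytes, then a 4-byte nonce the relay echoes back for matching."""
    assert len(nonce) == 4
    return struct.pack("!B3x", (0 << 4) | 1) + nonce

nonce = os.urandom(4)
msg = relay_discovery(nonce)
assert len(msg) == 8 and msg[0] == 0x01

# A gateway would then send this over UDP (not run here):
# sock.sendto(msg, (AMT_ANYCAST, AMT_PORT))
```

Because discovery targets an anycast address, routing itself picks the topologically nearest relay — which is what lets a provider "push the relay out to the border," as described above.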
M: Another really nice benefit is incremental deployment. The TreeDN model supports this concept of incremental deployment, which is something that hasn't been available in the past with multicast technologies — as I said, it's been an all-or-nothing problem. Next slide.
M: So here's a 30,000-foot view of what TreeDN looks like within the broader internet. You see the big-I Internet, and within that the multicast-enabled portion of the network — call this the Mbone, the multicast backbone — and this is where a TreeDN provider can take multicast content from a content provider and deliver it natively to on-net receivers, or to off-net receivers.

M: So TreeDN is really SSM plus AMT.
M: Yeah, there's really nothing new here from a technological standpoint. It's the combination of SSM and AMT to describe, essentially, a CDN architecture, and the next few slides explain why now — why give this a name. Okay.
C: Sure, but it might help us reach the right audience, hopefully. One thing I just want to add: you don't have it here, but of course you might often have a content provider with static tunnels to various other networks and things like that. So the content provider here could potentially be some router receiving multicast from some other provider, rather than the actual content provider itself, I guess.

M: Yep, yeah — you could substitute just "source" for the content provider.
L: Hey, Lenny — I want to echo that comment from Stig, because we have an AMT document that describes the protocol, but we need something like a last-mile multicast document that basically says this slide can be done. It's basically everything you've been saying for quite a long time: the sources are inside the native multicast cloud, and you reach receivers that are inside of it, or receivers that can attach using the AMT gateway. So we need some kind of usage document.
F: Yeah, and that could also be included in the Lessons Learned draft that will be discussed next.

F: I think that's a different one.
D: Lenny, two remarks here. Maybe incorporate the work that Jake has done for MBONED. And maybe it's also interesting to point out that the content provider could be off-net and connected via the AMT gateway service, and that the relay could also be pushed out towards an edge cloud or something, if you don't want to incorporate it directly onto the routers. I think with virtual routers and edge computing this could also be a use case.
M
Yeah,
that's
so
so
the
document
does
talk
about.
You
know
overlay.
It
speaks
at
a
high
level
about
overlays
and,
and
the
focus
here
is
on
Last
Mile
and
you
know
to
address
the
off
net
receivers,
but
but
it
does
talk
about
how
it
can
it
can
solve
the
middle
Mile
and
first
mile
problems
which
I
think
Jake's
Jake's
documents
have
done
a
good
job,
so
it
kind
of
in
a
hand,
wavy
fashion
kind
of
addresses
that
at
a
high
level.
M
Yeah,
so
that's
an
interesting
idea
and
I
would
invite
you
to
come
to
mbone
D
next
and
there's
going
to
be
an
interesting
presentation,
kind
of
an
update
on
the
multicast
menu
that
Lauren
is
going
to
give
and
it
kind
of
graphically
Illustrated
there's
going
to
be
a
demo
of
that.
So
it's
a
teaser
for
M
Bundy.
L
Just
one
real
brief
comment,
so
that's
a
good
point
that
Neil's
just
made
that
the
source
could
be
off
net.
The
receivers
could
be
off
net.
So
it's
not
just
a
last
mile
solution.
L
It's
also
a
first
Nile
solution
and
it
could
be
a
middle
mile
solution
too,
meaning
there's
no
native
multicast
anywhere,
and
do
we
want
to
you,
but
by
having
a
pure
overlay
that
does
it
all
is
yet
another
example-
and
you
might
want
to
do
that
just
because
you
may
not
have
multicast
so
it
you
still,
you
want
to
deliver
the
IP
multicast
Service
via
you
know
this
large
content.
So
all
these
combinations
should
be
worked
in
some
kind
of
document.
M
Yeah
agreed
and-
and
you
know
one
way
to
think
of
this-
is
you
know
the
overlay?
Is
it
may
not
be
a
destination?
I?
Think
that
you
know
it's
kind
of
you
bring
up
a
great
Point.
It
could
start
out
with
you
know
this
green
Network.
This
green
blob
in
the
middle
could
be
really
really
small,
but
over
time,
as
as
people
begin
to
see
more
and
more
value
to
multicast,
it
grows.
M
So
you
know
ideally
the
the
goal
and
the
most
efficient
scenario
is,
is
native
and
seeing
this
grow
over
time.
But
you
know
the
overlays
essentially
provide
that
bridge.
That
gets
you
from
the
green
to
the
non-green
and
and
that
grain
may
be
smaller.
M
It
may
be
big
right
now,
it's
small,
but
it
allows
it's
it's
it's
a
it's
an
architecture
that
kind
of
gets
more
and
more
optimal,
the
bigger
the
green
gets,
and
it
also
provides
a
level
of
incentive
for
providers
to
become
more
and
more
green,
but
you're,
not
you
know
what
multicast
has
always
suffered
from
is
you're
bounded.
M
The
value
of
multicast
is
bounded
by
the
size
of
the
green.
So
if
you
can,
if
you
can,
you
know,
you
know
this,
this
benefit
of
an
incremental
deployment
makes
that
green
get
bigger
ice.
Yep.
J: Hey, it's Markus. Doing SSM over the internet, I think another deployment hurdle there is source discovery, because the source needs to be fed into the first-hop router, and it has to be known by the application. Are you going to discuss that as well — how to get source discovery going?
M
Well,
we're
I
mean
this.
This
document
prescribes
SSM,
which
says
you
know,
don't
do
network-based,
Source
Discovery,
you
know,
search
in
SSM,
Source
discoveries
handle
that
about
at
a
band.
You
know
typically
by
the
application
layer.
So
that's
I
think
one
of
the
one
of
the
great
lessons
learned
and
yeah
I'm.
M
Right
yeah,
so
this
this
model
says:
do
that
you
know
punting
to
the
application
layer
is
or
some
type
of
out-of-band
mechanism
application
layers
is
probably
the
best
most
effective
place
to
do
it.
But
but
yeah
I
mean
this
this
you
know
we
narrow
the
scope
and-
and
we
you
know,
make
the
value
judgment
of
yeah
and-
and
this
has
already
been
done-
I
mean
there's
the
RFC
I
forget
what
the
RFC
is,
but
essentially.
J
M
Been
deprecated
and
you
know
it
says,
essentially
use
SSM,
so
you
know
we're
following
those
guidelines.
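To make "out-of-band, application-layer source discovery" concrete: the receiver application learns the (S,G) pair from something it already fetches — a playlist, a manifest, a directory. A minimal sketch; the manifest shape, field names, and addresses are invented for illustration and are not from any standard.

```python
import json

# Hedged sketch of application-layer SSM source discovery: instead of
# the network discovering sources (as with ASM/MSDP), the app learns
# the (S,G) out of band, e.g. from a manifest fetched over HTTPS.
# The manifest format and all values here are illustrative only.
manifest_text = """
{
  "channel": "example-live-event",
  "source": "198.51.100.10",
  "group": "232.1.1.1",
  "port": 5001
}
"""

def discover_sg(text: str) -> tuple:
    """Parse a channel manifest and return the (source, group, port)
    the receiver should join with an IGMPv3 (S,G) membership report."""
    m = json.loads(text)
    return (m["source"], m["group"], m["port"])

s, g, port = discover_sg(manifest_text)
print(f"join ({s}, {g}) on UDP port {port}")
```

The point is that the first-hop router never has to discover anything: it only sees an explicit (S,G) join from the host, which is what keeps the network side of SSM stateless about sources.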
L: Well, just a quick point — this is Dino again — with respect to growing this green cloud. Like you said, if you started with an overlay to solve the all-or-nothing problem, and head-end replication hits the wall, then providers have the customers coming and they have to solve the problem — they'll be forced to use the green cloud, because it'll scale better.

L: So it would come out naturally. You want to be able to do this gracefully, but that's how you break the chicken-and-egg problem as well.
M
Okay,
now
next,
so
so
this
is
this.
Is
the
30
000
foot
view,
here's
kind
of,
like
you
know,
circling
into
the
the
green
part,
so
this
is
a
CDN
without
multicast.
You
know,
there's
different
models
for
cdns.
M
If
you
hit
next
Mike
yeah
there,
you
go
so
there's
different
models
for
cdns,
but
typically
you
have
these
CDN
boxes
and
you
know
you
have
a
source
that
goes
to
one
of
them
and
it
they
distribute
to
the
other
CDN
boxes
and
the
CDN
boxes
kind
of
support
local
receivers
from
there.
So
the
benefit
of
this
is
you
know
it
just
works.
You
don't
have
you
don't
need
multicast
routing
protocols
to
work
to
get
this
working.
So
that's
why
cdns
are
popular,
but
next
slide.
M
This
is
cdns
with
multicast
treaty
ends.
The
first
thing
so
hit
one
more
time,
Mike
one
more
yeah.
So
the
first
thing
we
do
is
we
replace
those
CDN
boxes
with
AMT
relays.
We
push
them
all
the
way
out
to
the
border
of
the
multicast
enabled
Network
and
we
send
traffic
only
to
receivers
that
have
two
relays
that
have
local
receivers.
So
that's
one
optimization,
but
the
key
optimization
here
compared
to
cdns
without
multicast,
is
if
these
AMT
relays
are
deployed
on
existing
Network
infrastructure.
M
That
is,
if
you
have
routers
or
network
devices
already
in
your
network
that
do
support
AMT
relay-
and
there
are
you
know,
a
number
of
platforms
from
another
different
vendors
that
do
support
AMT
relays
natively.
You
essentially
have
a
CDN
on
a
chip,
so
you
can
as
a
service
provider
as
a
comp
as
a
service
provider
as
an
operator
deliver
this
service
for
essentially
zero
capex.
It's
it's.
M
M: This is something that's already built into the infrastructure, so it should be dramatically cheaper. Indications from folks who have looked at this model are that it could be significantly less expensive to offer this service than traditional CDNs.
M
All
right
so
key
benefits.
You
know
you
get
more
efficient,
Network
utilization
right
this
not
only
for
Content.
That's
there
today,
but
it's
it's
really.
The
real
benefit
is.
It
makes
possible
new
content
from
new
contributors.
It
makes
it
more
viable
so
things
that
we
can't
imagine
things
that
don't
exist
on
the
internet
today
things
you
know,
a
good
example
is
augmented.
Reality
live
streaming
to
mass
audiences.
What?
If
what
if
you
could
have
that,
instead
of
just
watching
a
game
the
way
we've
always
watched
it
on
television?
M
What
if
you
could
have
an
AR
experience
where
you
feel
like
you're
in
the
front
row,
Center
Court
and
you
can
look
side
to
side
and
see
the
celebrities
sitting
next
to
you
and-
and
you
can
hear
them
talking
and
you
can
have
a
real
immersive
experience
things
that
aren't
really
you
know.
Could
you
do
that
today?
Could
you
have
an
AR
live
stream
of
a
sporting
event?
M
Maybe
that's
a
200
Meg,
or
maybe
one
gigabit
stream
and
deliver
that
to
you
know
tens
of
thousands,
hundreds
of
thousands
and
perhaps
millions
of
interested
receivers.
Can
the
internet
do
that
today.
I!
Don't
know
if
it
can,
it
certainly
probably
would
be
really
really
expensive,
but
this
is
pretty
trivial
for
a
multicast
enabled
world
and
it
allows
service
providers
to
offer
a
new
service.
M
You
know
replication
as
a
service.
Send
us
your
content,
we'll
deliver
it
we'll
push
out
that
replication
point
to
the
most
efficient
point
in
the
network,
so,
like
I,
said
it.
If
you're,
if
you're
existing
infrastructure
supports
multicast,
you
can
deliver
this
at
essentially
zero
additional
cost.
M
The
nice
part
about
this
is
it's
it's.
This
is
an
open,
standards-based
architecture,
that's
using
widely
available
and
understood
protocols.
This
isn't
a
you
know,
a
proprietary
CDN
technology
that
you
know
only
that
that
that
is
proprietary
and
doesn't
necessarily
interoperate
amongst
between
different
cdns.
M
It
also
has
far
less
coordination
required
between
the
content
provider
and
the
CDN
provider.
You
know
some
traditional
unicast
based
cdns
can
often
have
essentially
storage
requirements,
so
you
have
to
deal
with
data
storage
protection,
some
cases
Key
Management
with
treaty
end
you're
just
forwarding
packets.
So
you
don't
have
to
deal
with
those
issues
when
it
comes
to
the
interaction
between
the
content
provider
and
the
CDN
provider.
It's
just
those
interactions
just
happen
between
the
content
provider
and
the
end
user.
M: The other nice thing is that this is a decentralizing and democratizing technology — it essentially decentralizes and democratizes content sourcing. Is it healthy for the internet and for society that only a small handful of companies control nearly all content distribution? This is an architecture that enables us to get back to the decentralized roots of the internet, and potentially reduce the costs for smaller entities to deliver content without large intermediaries. Next slide.
M: In terms of use cases, I've spoken mostly about video — that's the sexiest and most obvious use case — but there's more to it than that. The use case is really any multi-destination traffic. It could be audio; it could be video, like I've talked about; it could be AR; it could be telemetry data. Here's another teaser: come to MBONED, where there's going to be a presentation and a talk from a provider...

M: ...that's actually doing this. They're doing TreeDN for weather satellites — atmospheric weather satellites delivering real-time telemetry data to research institutions that are studying that atmospheric telemetry. So that's a use case that doesn't have to be video. Or it could be large-file software updates.

M: That's less sexy, but probably a more acutely painful issue. This is something Jake has been working on and talking about for years: the idea of things like iOS updates — multi-gigabit files that need to get pushed out overnight to tens of millions, hundreds of millions of devices. So that's certainly an applicable use case. Next slide.
M
So,
to
summarize,
you
know
we're
we
may
be
at
a
Nexus
point
for
live
streaming
on
the
internet
when
it
comes
to
kind
of
the
crossing
of
supply
and
demand
curves,
the
demand
is
being
fueled
by
exploding,
live
stream,
audience
sizes.
You
know
in
the
tens
of
millions
now
that
you
know
that
wasn't
necessarily
the
case.
We
have
that
on
a
weekly
basis.
Right
now,
and
that
wasn't
the
case.
You
know
prior
to
two
months
ago
and
increasing
bit
rates.
M
4K
knar
combine
that
with
the
supply
previously
multicast
was
complex
and
difficult.
You
know
10
15
years
ago,
but
we've
been
working
at
it.
Folks,
in
this
room
and
folks
in
mbone
day,
we've
been
working
to
improve
and
make
network
based,
replication
easier
and
more
available
than
ever.
So
when
you
combine
those
two
things,
multicast
is
easier
to
deploy
easier
to
operate
and
more
valuable
and
useful,
because
it
can
work
out
anywhere
with
combine
that
with
this
new
you
know,
bursting
demand
we're
kind
of
at
a
you
know.
M: Perhaps we're at a nexus point where this model — multicast-based CDNs — becomes much more viable, and that's what TreeDN is: it describes a CDN model optimized to address the increasing strain that live streaming is putting on the network, not only for the content of today but also to enable new content that we couldn't imagine. In terms of next steps, we're seeking working group adoption. It kind of fits between MBONED and MOPS.

M: I think MOPS is probably the better place, because there's more expertise there on the CDN side, but I'm also going to be talking about this in MBONED. So I wanted to share it with this group and let people be aware of it — perhaps a little teaser for that conversation at the next meeting — but I would welcome any feedback and thoughts anybody might have.
N: Thank you for the presentation — I think this is a really interesting topic. Just two very quick clarification questions about the scenario. The first is whether the source of the live streaming in this scenario will be very dynamic — it can be anywhere in the network — and whether that can be a problem for setting up, for example, a P2MP path. The other question is whether the receivers will be very dynamic, which also influences the convergence of the multicast tree. Just for clarification.
M
Yeah,
so
yes,
you
can
have
ephemeral,
receivers
coming
and
going
I
think
that's
kind
of
the
nature.
You
know
this
is
optimized
for
over
the
top
receivers.
So
that's
that's.
That's
certainly
a
use
case
and
in
terms
of
sourcing.
Yes,
they
can
I
mean
that
that's
a
feature
of
this.
If
the
sources
could
be
anywhere
and
that's
that's-
the
beauty
of
a
decentralized
model
sources
can
come
can
pop
up
anywhere.
You
don't
necessarily
have
to
go
through
a
handful
of
the
large.
M
You
know,
content
hyperscalers,
that
that
kind
of
deliver
content
today.
N
G
M
Mean
that's
what
Pim
does
right
so
I
I
think
that's
what
it's
there
for
that's!
What
the
multicast
writing
Protocols
are
there
for
the
the
magic
of
this
working
group
has
been
to
to
to
support
this
kind
of
dynamicity.
L
Well,
I
wanted
to
respond
using
the
AMT
gateways.
Certainly
the
receivers
could
move
around
quite
a
bit
and
they'll
just
attach
to
a
new
relay.
If
the
relay
already
has
other
gateways,
then
the
tree's
already
built
through
the
native
multicast
Cloud.
So
there's
no
joint
latency
there
from
the
sources
point
of
view.
L
The
the
vision
that
Lenny
has
is:
let's
have
the
sources
in
the
multicast
cloud,
so
whatever
the
convergence
of
a
source
moving
well,
Source
Mobility
with
multicast
hasn't
been
done
that
much
and
that's
something
to
look
at,
but
the
source
is
pretty
much
stationary
now
if
the
source
is
coming
in
through
AMT,
it's
the
same
thing
is
that
if
it
hits
a
relay
that
is
already
using
that
Source
address
and
the
tree
is
built,
we're
good
to
go
likely
it's
not
because
it's
coming
into
a
new
location,
so
that
means
there's
kind
of
an
RPF
change
that
goes
through
to
the
new
relay.
L
So
so
Looney
I,
guess
the
message
here
or
something
for
all
of
us
to
think
about
is
as
AMT
sources
and
AMT
receivers
move
around.
Will
the
tree
inside
the
native
multicast
Cloud
have
to
change
and
if
it
doesn't
It's
a
Wonderful
scaling
property.
C
L
M: Yeah, so I think this is where LISP can have a play, to be honest with you. AMT does not support sourcing behind an AMT gateway — and that's not to be mistaken for the work Jake has been doing, which has been mostly middle mile, where you have multicast networks separated by a unicast network and use AMT to tunnel between two multicast islands.

M: When you have a source sitting off-net — perhaps a unicast-only source, or a source on another network — that might be where LISP is a really good fit, because LISP has the ability to support multicast sourcing much better than AMT.
M
We
could
just
hand
wave
and
say
overlay
networking
and
advances
in
Tech,
different
overlay
Network
to
Technologies
can
be
used
to
solve
the
problem
of
getting
multicast
from
into
wherever
it
needs
to
go.
Even
over
points
of
the
network
that
aren't
multicast
enabled.
L
Yeah
a
good
point.
So
having
said
that,
I
mean
all
the
the
points
that
you
and
stigmate
means
that
if
you
bring
the
source
in
to
the
multicast
cloud
via
lisp,
then
the
source
address
doesn't
have
to
change.
So
you
don't
have
to
change
the
applications,
but
as
the
source
moves
it
could
always
attach
to
the
same
point
of
the
multicast
cloud.
So
the
S
comma
G
tree,
that's
built
inside
the
multicast
cloud,
doesn't
have
to
change
either.
So
that's
wonderful
as
well.
L
M: Yeah, and to be honest, anything's possible, but I don't necessarily see the source address changing much for these use cases. If you think it through, why would the source address need to change? Usually it's a stationary source — or even if it's a drone, it's hopping around in a relatively bounded area. But anyway, sorry, I think I've probably taken enough time.
F: Multicast — I think it's been 25 years or so ago that I worked at Cisco with Dino. He was a developer and I was a tester of the code that he developed, and I have a lot of respect for both of these guys. So I picked their brains, and I got a lot of feedback from both of them about the things we've learned over the last 30 years of multicast development. So we put this together. Admittedly, there's a lot of work that needs to be done if this is a useful document.

F: But that's why we're here, and if this is useful, it may be done either here, since it talks about protocols, or in MBONED. Next slide — if you can hit the space bar. All right, so this is the background I just mentioned: the three of us discussed it at the last IETF and came up with this draft.
F
It's
not
intended
to
be
a
BCP,
just
kind
of
a
historical
document
to
help
us
understand
some
of
the
previous
development
work
to
help
us
understand
the
current
work
and
we're
hopeful
that
this
will
help
people
that
are
not
familiar
with
multicast
unders,
better
understand
multicast,
because
it
still
amazes
me
that
I
talk
with
people
fairly
regularly.
That
just
say
that
multicast
is
very
complicated
to
the
point
that
Lenny
just
discussed
in
his
document.
F
So
hopefully
this
may
help
and
we
do
start
with
dvmrp
one
of
the
first
multicast
writing
protocols,
because
that's
really
how
we've
it
started.
We
solved
some
problems
with
the
dvmrp
and
made
some
design
choices.
Dino
and
others
made
some
design
choices,
Elisa
Cisco
and
we
progress
from
various
different
versions
of
igmp
to
where
we
are
today
with
ignb,
V3
and
MLD,
and
then
a
variety
of
Pim
writing
protocols
that
developed
throughout
the
years
next
slide.
F
So
I
just
chose
for
this
presentation.
Just
to
give
you
an
idea.
If
you
haven't
read
the
draft
of
some
of
the
protocols,
there's
some
that
I,
some
of
that
I
did
include
that
are
in
the
draft.
Are
things
like
Wi-Fi
that
Lenny
mentioned?
We
have
a
document
specifically
on
that
that
talks
about
challenges
of
multicast
in
a
Wi-Fi
environment
where
there's
no
acknowledgments
and
it
there's
a
lot
of
drops
I
didn't
get
into
the.
F
It
was
pretty
contentious
many
years
ago
with
the
mpls
Wars
at
the
time,
between
Cisco
and
Juniper,
mostly
with
mltp
versus
point
to
multiple
rsvte,
and
then
it
kind
of
they
all
kind
of
came
together.
So
we
included
that
in
the
draft,
but
just
with
dvmrp
it
was
a
flood
and
broom.
F
It
is
a
flood,
improved
protocol,
good
initial
solution,
but
we
quickly
realized
that
it
just
wouldn't
scale
with
higher
bit
rates
and
using
the
network
to
discover
sources
as
Lenny
just
mentioned,
was
also
something
we
originally
thought
would
be
a
good
idea.
It
makes
the
network
more
valuable
but
discovered
to
be
a
bit
too
intensive,
and
so
we
felt
that
it.
This
protocol
worked
good
in
small
scale,
development
in
deployments,
but
began
to
suffer
in
larger
multicast
environments,
so
we
needed
better
Solutions,
so
we
evolved
from
dmrp
to
some
other
protocols.
F
Next
slide,
I
just
have
a
couple
more.
The
shared
and
Source
tree
development
work.
The
shared
trees
were
designed
to
reduce
state
a
lot
of
State
discussion
over
the
years.
We
still
have
those
today,
particularly
at
a
time
when
memory
was
scarce
and
expensive
and
shortest
patch
trees
were
simpler
and
more
optimal,
but
they
consumed
more
state.
F
Switching
from
One
Tree
shared
to
the
other
source
was
a
difficult
routing
problem
problem
that
again,
I
didn't
have
to
fix,
but
developers
did
and
because,
when
you
join
the
source
tree,
you
had
to
prune
that
Source
from
the
share
tree.
So
we
could
get
into
that
a
little
bit
in
the
paper.
The
the
draft
next
slide.
F
Lenny
already
discussed
in
his
document
the
All
or
Nothing
problem
with
every
layer.
Three
hop
between
the
source
and
receivers
need
to
support
a
multi-gas
routing
protocol
and
that
tends
to
create
and
it
has
a
a
big
barrier
to
deployment,
and
so,
as
he
just
described,
there's
a
way
that
you
can
use
overlay
networking.
You
can
use
AMT
to
to
get
over
that,
and
we
kind
of
discussed
that
as
well.
F
F
So this is the last slide. There are so many other protocols; the beginning of the draft lists them all, I don't know, like 15 or 20 protocols, and there are many more, but we decided to just focus on the major ones. And again, we're hopeful that something like this, whether it's adopted or not, will help people understand the history of the last 30 years.
F
So the question we have is this: is this something that we should work on in the IETF? If it is, should we work on it here in PIM, or do you think MBONED would be better? I'll leave that to you, and that's all I've got.
F
So is there anybody in the queue?
N
I think this is necessary, especially for anyone who is not familiar with the history. Actually, from my side, when I want to learn something about multicast, I will look at PIM and MPLS P2MP and then BIER.
N
A lot of protocols are mentioned here, also IGMP and MLD for sure. A lot of the protocols mentioned in the slides I am not familiar with, so I think this is a very good document to let people know what has already happened and what could be learned from it. So I think it is a good topic. Thank you, but I'm sorry, I haven't read it yet.
F
L
So one of the problems we're having in the IETF over history is that I see young designers, developers, and implementers coming up with proposals, and older people are saying, no, that's not going to work. And they say, well, why not? And they say, well, we did that so long ago. These young people want to get detailed information about what happened. So we have to document history; otherwise we'll repeat the mistakes. You know, that's just a general comment.
A
Yeah, just wanted to echo that: would love to see this document produced. I haven't had a chance to read it yet; I intend to read it and give you some feedback. I drift in and out of needing multicast proficiency in my role as an operator. My apologies, I didn't announce myself: Brian Hoffman from Telus. So thank you.
C
Thank you. Yeah, I also think this is useful, and it might be good to discuss bidirectional PIM a little bit as well.
C
We'll see where the best home is for this, but yeah, we'll check with MBONED as well before we consider adopting it. But at least it seems like there's a fair amount of interest, so yeah.
M
Just speaking for MBONED, I guess it's really early, so I'm not sure which hat I'm speaking with, MBONED chair or author or what; the only hat I know I'm taking off is my Cogent hat, because it's really early. But my thoughts would be: if this is lessons about deployment and operations, then it's MBONED; if it's more lessons about protocol development, it would seem like it would be more PIM.
M
This document, I think, does a little of both, so I think it kind of fits in between, but to me it's more protocol lessons than deployment lessons. So given that, I would think that PIM would be the better fit, but I would probably recuse myself from that decision and let Greg make that call officially for MBONED. That's just my thoughts in the middle of the night.
L
I think documenting history for both of those, the protocol development and the deployment, is going to be useful. The question is whether it should be put in one spec or in different specs. I think there are going to be more people who care about the deployment, because there are more operators in the world than developers, so I don't know if we want to make this integrated or separate documents.
L
I have a feeling that the deployment document would be much larger, but I don't know if there are enough people who could document it accurately. But you know.
C
M
Just before, sorry, I'm starting to jump in front of you, but the other thing I forgot to mention was that we have a little bit of time in MBONED to cover this as well. It's tucked at the end of the meeting, so we may not get to it, but this will be discussed in MBONED.
I
Of Earth and official Technologies. Both types sound interesting. I don't know if the history of deployment, and why MOSPF failed or whatever, is as interesting as how I take the tools that I have today and deploy them together.
I
So, in other words, you know, out of the whatever number of RFCs we have, what are the RFCs that I need, and how do I put them together so that I can deploy a multicast network? And the old ones, the ones that we learned something from, we don't need to talk about those, right? It might be more interesting to have something about how we move forward. I'm getting on the plane, whatever.
I
What do I do, not why it failed before? Well, I need to know why it failed before to design a new solution, right? So I don't know. From my point of view, it sounds like the history of protocols is more interesting than the history of deployment, and current deployment is more interesting than current technology, which is already documented.
D
Just a quick remark on that. I generally tend to agree with you; overall that is maybe more important. But the discussions that we had at IETF 114 and 115 about the Nautica solution for auto-configuration of multicast showed that old drafts, which have not made it to RFC status, were pulled out again, because the lessons learned about why they weren't implemented, and why we thought it was a bad idea to implement them, were not documented. So I think both have a valid point if you're looking at implementing new stuff.