From YouTube: IETF115-MBONED-20221109-1300
Description
MBONED meeting session at IETF115
2022/11/09 1300
https://datatracker.ietf.org/meeting/115/proceedings/
A: Okay, so let's kick things off. We have a pretty full agenda today, so let's go ahead and get started now.
A: Here is the Note Well — you may have seen this before this week. Please take note of this and read it carefully, as it describes contribution and participation and the implications of that.
A: In terms of meeting tips, the usual from the last few meetings: sessions are being recorded, and if you need to speak, push the appropriate buttons, add yourself to the queue, and unmute yourself.
A: If you'd like to add video, you're welcome to, and of course masks should be worn within the meeting area. Max has graciously signed up to be a delegate to enforce this rule, with an iron fist if necessary.
A: Speak up if we missed anything. We're going to cover the YANG models — the AMT YANG models draft — the multicast scaling considerations document from Jeffrey, and I will be talking about the TreeDN draft.
A: We also have an interesting real-world use case of multicast and AMT. Lauren is going to give an update on the multicast menu and some really exciting progress she's made on delivering AMT to the browser. And if time allows, Mike will speak to a Lessons Learned document that looks like it might be more appropriate for PIM, but he'd like to share it, and certainly the folks in this working group are ideal for providing feedback on that document.
D: Okay, can you hear me?
A: Thank you. Maybe, Sandy, one last email to the working group requesting review on this document, as a reminder, and then —

C: Okay, we'll maybe kick off that process.
A
You
and-
and
you
know,
to
the
others
in
the
working
group-
please
we'd
love
more
feedback.
Sandy
would
love
more
feedback
on
that.
Just
a
reminder:
the
multicast
Telemetry
draft
Greg
Sandy.
Do
you
I,
believe
you
guys
are
both
co-authors
on
that
one?
Do
you
have
any
updates
on
that.
D: Sorry, it took me many tries to open the mic. About the redundant ingress failover: we think that it has covered most of the deployment situations, but if some operators or others who have experience with it could tell us more about it, that would be better, and it would be great if somebody could review it and give us more suggestions. Yeah.
E: Sorry — Mike will come on for us a lot later, so maybe you can ask him then.

A: Gotcha.
A: On the multicast-to-the-browser drafts: back in July we did a working group last call for the DORMS document. At the time there just weren't enough responses to advance it — I think we only had about two or three people speak up to support advancing, and we need a little bit more than that. So if you would like to see DORMS advance, if you think it is ready to go to the IESG, please speak up. In the meantime, unless we hear otherwise, we're going to have to park it separately.
A: On the other three drafts related to multicast to the browser — AMBI, CBACC, and multicast NAT — Jake has let us know that he has been assigned to different projects and no longer has as much time to work on these docs. So more than likely he is going to need somebody to carry the torch on those drafts, and if there is anyone interested in proceeding and continuing that work, please do speak up.
A: Let the chairs and Jake know — he would love to have someone pick that up for him, because at this time it looks like he may not be able to devote the appropriate amount of time to get those across the finish line.
A: That is all. Okay, let's see who is up first — is it Eason? AMT, yes: Eason will now talk about the AMT YANG models draft.
G: Hello, everyone. This is Eason Liu from China Mobile, and today I will introduce the AMT YANG model. This is the first presentation of this draft. Next slide, please.
A: You should be able to advance the slides yourself.
G: So this is version 00, and the draft is an effort from operators and vendors: China Mobile, H3C, ZTE, Huawei, and Juniper. Next slide, please.
G: As we know, AMT is defined in RFC 7450.
G: It is a protocol that uses UDP-based encapsulation to overcome the situation where there is no multicast connectivity between the receiver side and the source side — meaning there is no direct connection. So AMT enables sites, hosts, or applications that do not have native multicast access to reach a network that does have multicast connectivity to the source.
G: They can then request and receive the multicast traffic. There are two main roles in the AMT protocol: the AMT gateway and the AMT relay. The AMT gateway sends requests to the relay, and the relay forwards all the multicast data through the AMT tunnel, which is a unicast tunnel.
G: Another RFC, RFC 8777, extends the AMT relay discovery process: a new DNS resource record is defined to publish the IP address or domain name of the AMT relay, and the AMT gateway can query that DNS resource record by the source address, get the AMT relay address, and send the request for the multicast traffic.
G: Based on those two RFCs we define the AMT YANG model. The model augments the routing data model for control-plane protocols, and there are two main parts, for the two roles of the AMT protocol: the AMT relay and the AMT gateway. For the AMT relay we define three parts. The first is the relay's own parameters, like the relay address, the tunnel limit, and the secret key timeout.
G: The second part of the AMT relay is the AMT tunnels: we define the gateway address and the gateway port as the key, and we also have the multicast flows associated with the specific AMT tunnel.
G: Okay, so as the next step we will add some AMT statistics or accounting items in the next revision, and we are seeking more feedback from the working group. That's all — thank you.
G: Yeah, I want to request working group adoption — though I'm not very sure whether that is appropriate for a version 00.
C: But it is a goal at some point to get it adopted by MBONED — this is the targeted working group you'd like?
G: Yeah, yeah — MBONED is the target group. That's absolutely great.
A: Okay. Co-authors of the draft, you might want to post this to the MBONED mailing list and solicit feedback there.
H: All right, so this is an informational draft that captures some thoughts from the co-authors listed here: scaling considerations for multicast. Next slide, please.
H: When it comes to scaling the number of receivers, we believe that IP multicast is the way to go: it can scale to a virtually unlimited number of receivers, because the group address is just a logical representation of all receivers. If the multicast tree grows horizontally and vertically, then you can reach as many receivers as you want. The next dimension is the number of flows.
H: And that's just like the unicast case. If you put multicast flows onto tunnels, then within the tunneled region you don't need per-flow or per-tree state anymore; you just need per-tunnel state, in some cases. And finally, when you have a large network, sometimes it's not feasible to use a single tunnel to cover the whole network — you may need different tunnels, maybe different types of tunnels in different regions, to carry the overlay traffic. Next slide, please.
H: When it comes to tunneling technologies, traditionally IP multicast trees can themselves be used as tunnels — we have mLDP and RSVP-TE P2MP — and recently there are the SR P2MP trees, whether SR-MPLS or SRv6. There is also ingress replication.
H: With ingress replication you don't have per-tunnel state, but you don't have efficient replication. BIER, on the other hand, gives you efficient replication without requiring per-tunnel state — so that's the best of both worlds. And if you want to do traffic engineering without using per-tunnel state, then BIER-TE gives you that: with BIER-TE you have global bit positions that encode replication branches, and there you may run into some scaling
H: problems when your domain is large. To fix that there is BIER RBS, which to me is basically an enhanced BIER-TE, where you use local bit positions in a recursive bitstring structure. That allows you to do traffic engineering per quote-unquote tunnel, but you don't need the per-tunnel state. Next slide.
H: When you do tunneling, you need overlay signaling besides the signaling that is needed to set up the tunnels. The reason is that the tunnel ingress needs to know which flows to put on which tunnel, and for certain kinds of tunnels the tunnel ingress also needs to know which nodes are the tunnel egresses — for example with ingress replication, RSVP-TE P2MP, or even a BIER tunnel.
H: On the overlay signaling: say the traffic carried by the tunnel is mLDP traffic or IP multicast traffic. Then you can just use those protocols themselves to signal.
H: You can do mLDP over BIER — there is a draft in the BIER working group that does mLDP signaling over BIER. Similarly, we can use PIM signaling for IP multicast traffic over other tunnels; PIM over P2MP tunnels in MVPN is one example, and PIM over BIER tunnels is another. Finally, we also have the option to signal IP multicast over BIER using the IGMP/MLD protocol itself.
H: In the BGP-MVPN case, we use P2MP tunnels to carry the VPN multicast traffic across the core. Those P2MP tunnels are referred to as PMSI tunnels, and they are identified by the PMSI routes. A PMSI route binds the overlay flows to an underlay tunnel, and the type and instance of the underlay tunnel are encoded in the PTA — the PMSI Tunnel Attribute — of the PMSI route.
H: The regional border routers that stitch different types or different instances of tunnels are what we refer to as segmentation points. The segmentation points need to maintain overlay state — in this case PMSI routes — and they stitch the upstream and downstream segments based on the overlay PMSI routes. Next slide.
A: Jeffrey —

H: Yeah, okay. The point here is that when you do segmentation, or you want to do tunneling, those segmentation points — or the tunnel ingresses and egresses — need to maintain overlay state. And that goes back to the original question: when you have many overlay flows, what do you do?
H
How
can
you
scale
scale
that
so
there
are
two
options
here:
two
ways
why
not
scale
up
scale
up
the
segmentation
points
or
whatever
you
can
Ingress
egress,
so
that
they
can
handle
that
many
over
the
states
I
want
to
point
out.
One
thing
here
is
that
multicast
14
state
is
not
much
different
from
the
unicast
it
basically
a
route
0.240
instructions
here.
The
40
instruction
is
basically
some
replication
branches,
very
similar
to
the
ecmp
branches
in
the
unicast
case.
H: So if you can scale unicast, you can scale multicast — it's just that your router implementation needs to be prepared to do that. Quite often a router is optimized to scale on the unicast side and just doesn't allocate enough resources for multicast, but as long as you design it to scale up, it should be fine.
H: Another way is to scale out: you just put in more segmentation points, so that each segmentation point only handles some of the overlay flows. Next slide.
I: This is Dino — can you go back to the last slide? When you say that, to scale up, multicast forwarding state is not much different from unicast: if you're doing head-end replication, or as you refer to it, ingress replication, it is considerably more state to store a multicast route than a unicast one, because unicast only stores a few next hops, equal to what the ECMP fanout would be —
I: — usually the number of interfaces on the box. But for head-end replication you're actually replicating to N unicast destinations, which can be really large, and if there are a hundred or three hundred tunnels, that's significantly larger than a unicast entry. I mean, there are advantages and many disadvantages to head-end replication, but the cost is large. I'll —
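Dino's point can be made concrete with rough arithmetic (the figures below are illustrative assumptions, not numbers from the meeting): a unicast route carries at most an ECMP-fanout's worth of next hops, bounded by the box's interfaces, while a head-end replication route carries one unicast next hop per egress, which grows with the audience.

```python
# Illustrative comparison of forwarding-entry size (hypothetical numbers).
def unicast_next_hops(ecmp_fanout: int) -> int:
    return ecmp_fanout            # bounded by interfaces on the box

def head_end_next_hops(num_egress_tunnels: int) -> int:
    return num_egress_tunnels     # grows with the number of receivers

ratio = head_end_next_hops(300) / unicast_next_hops(8)
print(f"a 300-tunnel head-end entry is ~{ratio:.0f}x an 8-way ECMP entry")
```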
A: Back to the queue — she actually clicked the button first, okay.
J: Oh, a very quick question, actually — thank you for Jeffrey's presentation. I think this is a clear description of multicast scalability, and you can find similar descriptions in the MSR problem statement; maybe you have also noticed that. I think we both agree that it is really difficult for one particular solution to be scalable in all three dimensions. Can we go back to maybe the previous slide? Just one comment —
J: — the previous, the last one — it's okay. I think you mentioned that RBS can be treated as a BIER-TE extension. There is some concern about that point, just because I think it is really different from the existing BIER architecture and forwarding behavior, so maybe we can have more discussions about that. But I really agree with the scalability statement.
A: Okay. Jeffrey, what are your plans — what are your intentions for this draft? Are you seeking adoption here, in PIM, elsewhere?
H: Originally I was thinking that this belongs in PIM — it's indeed just an informational draft. I didn't tie this to the internet area, or to MBONED itself for the operations side; that's why I was initially thinking about PIM. But we can talk about that.
A: All right — we're tight on time and we squeezed this in at the last minute. So, Jeffrey, if you want to take things to the list, that'd be great.
A: And again, sorry everybody — we'd love to have more robust conversation and discussion; apparently 90 minutes wasn't enough time, so we'll do better next time. We have a pretty full agenda, so I am going to talk now about TreeDN. I'm going to try to do this quickly, because I discussed it in PIM earlier this morning and there was lots of robust discussion there.
A
So
problem
statement
is
with
live
audiences
kind
of
exploding
and
then
the
great
you
know
use
case
is:
is
NFL
Thursday
night
football?
It's
American
football.
A: They are, for the first time ever, live streaming American football games exclusively. That began as of two months ago, and for their first game Amazon Prime announced that they had over 11 million viewers. I don't know that we've ever had that many simultaneous live stream viewers, and where we have, it was for one-off events — this is now happening once a week. So live audiences are exploding to that kind of size; combine that with increasing bit rates.
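The scale being described can be put in rough numbers (assumed figures for illustration: about 11 million concurrent viewers at 10 Mbit/s per stream): with unicast delivery the aggregate load is viewers times bitrate, while with in-network replication each link carries one copy regardless of audience size.

```python
# Back-of-the-envelope load for a large live event (illustrative numbers).
viewers = 11_000_000          # concurrent streams, order of the NFL figure
bitrate_mbps = 10             # per-viewer stream rate, assumed

unicast_total_gbps = viewers * bitrate_mbps / 1000   # one copy per viewer
multicast_per_link_gbps = bitrate_mbps / 1000        # one copy per link

print(f"unicast aggregate: {unicast_total_gbps:,.0f} Gbit/s")   # 110,000 Gbit/s
print(f"multicast per link: {multicast_per_link_gbps} Gbit/s")  # 0.01 Gbit/s
```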
A: Are we approaching an inflection point for network resources consumed by live streaming? If not now, will we ever? And if the answer to either of those two questions is yes, the next question is: what should we do about it?
A: One thing a lot of folks wonder about is that streaming has been around for a long time, and some would argue it's been saturated already. Some folks might say: what's the difference between video on demand and live video? If people weren't watching a live stream, they'd be watching something on demand — so what's the difference? There is a key difference with live streaming, and that is the expectation of lower latency.
A: When you have an event like a sporting event, you can't have a minute or two of playout buffer, because you run the risk of getting a text message from a friend about the amazing game-winning score you haven't seen yet, or of hearing your neighbors, or the people at the bar across the street, cheering long before you see what happened. And there are other use cases, like micro-betting and in-game betting, with even much tighter latency requirements. Join rates are also vastly different.
A: If you look at the network consumption from on-demand streaming, it's pretty predictable — seasonal, goes up around eight o'clock, drops down around midnight; it's smooth and predictable. Join rates for live streaming are vastly different: more like a step function than a smooth increase.
A: In terms of network-based replication we've got multicast, which has been around for a long time and has been fairly successful in some places: it's absolutely vital on financial networks, it's widely used on video distribution networks and VPNs, and some enterprises use it extensively. But internet multicast, not so much — and that's what we're really focused on here: over-the-top use cases, internet multicast, which hasn't enjoyed the same level of success as those other deployments.
A: You can get lots of different opinions as to what the problem is with internet multicast, but I think the three biggest ones are these. First, the all-or-nothing problem: every single layer 3 hop — every router and firewall interface between source and destination — must be multicast enabled, and if any of them is not, it's not going to work. That's a pretty significant bar, and things like incremental deployment have not traditionally been available.
A: Second, there's the complaint that multicast is too complex to deploy, operate, and troubleshoot, which operators have long bemoaned. And then there's the chicken-and-egg problem: there's no multicast audience because there's no multicast content, and there's no multicast content because there's no multicast audience. These are the biggest issues that have plagued internet multicast. The good news — and that's what this draft is all about —
A: — is that things have changed. This isn't your grandfather's multicast network: there are now replication technologies available to address each of these issues, and that's what TreeDN leverages. TreeDN is about tree-based CDNs, and it takes advantage of both native and overlay concepts to deliver an end-to-end service even when parts of the network don't support multicast.
A: So let's start with the native part — that's the on-net. Native on-net is the multicast-enabled part of the network, and there I can leverage SSM. I don't think I need to tell the folks in this room, but SSM vastly simplifies multicast deployment — I would say it eliminates 90-plus percent of the complexity of multicast — and it addresses the too-complex problem. It gets rid of RPs, MSDP, shared trees, PIM register encap and decap, shortest-path tree switchover, and data-driven state creation.
A: All of those headaches of multicast are eliminated with SSM. SSM is typically realized with PIM-SSM, but you could use any other multicast routing technology — mLDP, GTM, BIER, whether in the control plane or the data plane, BGP MVPN, P2MP SR. Anything that can realize the SSM service model can be used to support the native component.
A: That's the on-net component. For the parts of the network, however, that are not multicast enabled, this is where overlays come in, and specifically the most pertinent protocol is AMT, which dynamically builds tunnels from a host or application to a router that sits at the border of a multicast-enabled network. A receiver on the unicast-only part of the network can dynamically tunnel to the multicast network and receive content. This also helps with the last-mile problem: you can avoid some of the in-home problems like Wi-Fi, and it means end users are not so dependent on their last-mile provider — if their last-mile provider doesn't support multicast, they can just tunnel right over it and receive the service.
A: Extending the multicast service and building overlays to deliver it solves the all-or-nothing problem and also the chicken-and-egg problem. I talk about AMT just because it dynamically builds tunnels and can be deployed in the application layer, on the host, or even on a home gateway router — but any type of overlay solution, like LISP, could be used to realize this service.
A: This is something that has traditionally been missing from multicast — like I said, it has usually been all or nothing, but TreeDN supports this model. So what is TreeDN? Again, it's really nothing new technically; it's just the synthesis of a bunch of things that we've been working on for a long time and understand well — specifically, SSM plus AMT put together.
A: That's what TreeDN is, and this is a picture from a 30,000-foot view: you have a multicast source from a multicast content provider connected to a TreeDN provider — the Mbone, the multicast-enabled part of the internet, which is a subset of the actual internet. It can deliver traffic to native receivers, or, for off-net receivers, deliver it via AMT relays. Greg, can you wait till the end, or do you have something now? Oh —
A: — just, we're almost there. So, diving down into the CDN part of the network: this is what a CDN looks like without multicast. There are different CDN models, but in one of the more common ones you have a source that sends traffic to a bunch of CDN boxes you deploy.
A: Those are typically a rack of x86 servers that sit in the network; you send traffic to one, which sends it to the others, and they handle local receivers.
A: That's your CDN without multicast. Here's what a CDN with multicast looks like: first, we rename those CDN boxes — we call them AMT relays — and we can push them out to the absolute edge of the multicast-enabled network.
A: The big benefit is that if you have edge routers and edge devices that support AMT natively in the forwarding plane, you essentially have a CDN on a chip. You do not need the cost of deploying a rack of servers that you have to rack, power, and plug into routers, consuming revenue-generating ports on a router.
A: So this is a key benefit: it's a service that can — that should — be delivered at a fraction of the cost.
A: Essentially, if you have this infrastructure already in place, it's just a couple of lines of config on your existing devices — so zero capex and arguably zero opex, because there's no new space and power and things like that. Those relays then handle local replication, which utilizes the network more scalably and efficiently — and not just for the content out there today. I think we sometimes fixate on just today's content and how this could
A
You
know
reduce
traffic,
but
really
it
makes
new
content
that
that
maybe
isn't
viable
today
possible,
so
think
AR
live
streaming
to
mass
audiences.
You
know
what!
If,
instead
of
just
watching
a
game,
you
could
feel
as
if
you're
sitting
in
the
front
row,
Center
Court
and
you
can
look
around
and
see
the
celebrities
in
the
crowd
and
hear
what
they're
saying
and
have
that
experience
that
immersive
experience-
and
you
know
you
know-
say:
that's
a
500
Meg
or
a
one,
gig
stream.
A: This essentially makes that a trivial cost, and it allows service providers to offer new services — replication as a service: send us your multicast content and we will replicate it efficiently, maybe charging for the time the AMT tunnels are up, but at a fraction of the cost of what traditional unicast CDNs charge, because again it can be wrapped into the existing infrastructure.
A: It's also an open-standards-based architecture using widely deployed protocols — not a proprietary solution that only works with, say, one CDN. And because you're just forwarding packets, there's less coordination required: you're not really storing any data, which some CDN models do, so you don't have to worry about data storage protection, key management, or those relationships between the content provider and the CDN. That's just between the content provider and the end user; the CDN is just forwarding packets. And another nice side benefit
A: is that this is a decentralized solution. It makes content sourcing really inexpensive and thus democratizes it. Is it healthy for the internet, or for society, that only a handful of companies control essentially all content distribution today? This gets back to the decentralized roots of the internet. For use cases I focus on the sexy stuff like video and AR, but really anything that's multi-destination could benefit — and what's great is that Ruth and Rich are about to talk about
A: a use case of telemetry data that can take advantage of this. Maybe less sexy — and something Jake has talked about for years — is large software file updates, things like OS updates; that's been an issue that has plagued networks for years. All right, so in summary: we could be at a nexus where the supply and demand
A: curves are crossing. Demand is coming from exploding live audience sizes and increasing bit rates; on the supply side, through the work of this working group and other working groups like PIM, we now have tools out there that make it easier than ever to deploy multicast. TreeDN is basically a CDN model optimized to address the increasing strain that live streaming is putting on networks — supporting the content of today as well
A: as new content from new contributors. In terms of next steps: I presented this at MOPS, seeking working group adoption in either MOPS or MBONED. I think it's a better fit for MOPS, and when I presented it on Monday there seemed to be some support there, so I think we're going to pursue that — but I wanted to share it with this working group. Happy to take questions.
K: Some number of years ago, back when I was at Cisco, I led a group — with Toerless's help — where we prototyped this architecture over WebEx.
K: The additional gains we saw were not just in the efficiency of replication and distribution but actually on the content source side, in the number of VMs we had to spin up to source content. I think the numbers were something like a 90% improvement with only four receivers — it was stunning. It was intended to be a WebEx product but never went past prototype.
K: We had enterprise receiver networks that were multicast enabled, so we had AMT termination at the CE, where they could take in the AMT tunnel and then go back to native multicast at their distribution point. Everything we did was successful — except getting it past the PMs to actually move the resources into adoption.
A: That's great, man.
A: I wouldn't call it flooding — I would call it just native multicast forwarding. You could in theory build an overlay, but essentially what this technology does is give you the most benefit where you deploy multicast natively: those who deploy multicast natively enjoy the most significant benefits, and those who don't get tunneled over. I'm sorry, we're out of time — we'd love to get more feedback. Maybe Stig,
A: if you want to speak while I bring up the other slides for Lauren — but apologies again for not being able to support all the comments in the working group.
M: Yeah, Stig here, just a quick comment. I like the idea of being more democratic — allowing anyone to send content and so on — but I feel there is more work to be done there, at least documenting how it can be done. AMT doesn't support it; LISP is an option. Are there any solutions that only require browser support, for instance? And also things like access control, of course, or other security aspects.
A: Thanks — I'm cutting us off now, and Lauren, you're up.
N: Can I just share my screen? I wanted to build a —
A: Ah, you're not next — I'm sorry, I got too excited. Rich and Ruth, you are up; apologies.
O: Yes, hi — good afternoon, everyone, and thanks, Lenny. My name is Ruth Britton, and I'm a computer systems engineer at EUMETSAT. Together with Rich Adam from GÉANT, I will present the EUMETCast Terrestrial over AMT service. Next slide, please.
O: Just a quick overview of the agenda: I'll give a brief introduction to EUMETSAT and the EUMETCast Terrestrial over AMT service, our testing setup, and an end-use case, and then I'll pass over to Rich to give an introduction to GÉANT and the AMT relay setup in the GÉANT infrastructure. Next slide, please. To start: EUMETSAT is the European Organisation for the Exploitation of Meteorological Satellites.
A: Ruth, I gave you — you should be —

O: — able to? I see it, great. Thank you, okay, perfect.
O: So, in order to supply this data to our users, EUMETSAT has EUMETCast, a push service delivering environmental data in near real time to its registered users. The data delivery is implemented as an IP multicast, one-to-many dissemination system, with a single transmission from the server guaranteeing timely distribution to the end users' reception stations.
O: It therefore allows end users to receive the data at their reception station shortly after EUMETSAT makes it available. The EUMETCast platform, seen here, has a single data source where files are encoded into a multicast stream and then transported to the user via two main networks: EUMETCast Satellite, which is the primary method of dissemination, and EUMETCast Terrestrial, which is the backup — and this can be done over GÉANT or over the commercial internet.
O: We can provide GRE tunnels, but their setup and configuration is very time consuming and complicated, and for those without access to an NREN at all we provide EUMETCast Terrestrial over the internet. Now, in order to increase our user base, simplify the end-user setup, and improve our portfolio of services, we are introducing EUMETCast Terrestrial over AMT, highlighted here in red. It is capable of transporting the multicast streams over NRENs and/or the internet without reliance on end-to-end native multicast connectivity.
O: So we can see that EUMETCast Terrestrial currently has a lot of potential as a service: it provides a highly available and redundant solution for the dissemination of NRT environmental data.
O: However, while this system has many users globally, we are currently unable to service users in a virtual environment, as the internet is not natively multicast enabled — and this brings us to our new evolution, which is AMT. AMT, or Automatic Multicast Tunneling, as was discussed previously, facilitates dynamic multicast connectivity between multicast-enabled networks across islands of unicast-only networks. The tunneling is performed between AMT relays and AMT gateways using UDP encapsulation and unicast replication, and this allows users to receive data from a multicast source even in the absence of end-to-end native multicast connectivity.
O: This diagram provides a high-level overview of our service. The multicast streams will be available from one of three AMT relays hosted on the GÉANT infrastructure. Part of the diagram is actually missing on this slide — I don't know what happened, but Rich will discuss the relays on the GÉANT infrastructure in more detail. Basically, each relay has direct access to the data coming from the EUMETCast platform at EUMETSAT, and this is all in place and installed currently.
O
So the client solution is a software-based package including the AMT gateway functionality and our TelliCast client application. Looking at a standard user such as those seen here on the diagram, whether Amazon Web Services, Google Cloud, or even private or commercial users, they will install the AMT gateway and the TelliCast client software on their station. The TelliCast application is reliable multicast software provided by iDirect, formerly ST Engineering. The server runs on the EUMETCast platform, ingesting the files, encrypting the contents, and then producing the EUMETCast channels using IP multicast.
O
The client software then receives the multicast streams and decrypts the data, running in conjunction with the AMT gateway software on the end-user station. The TelliCast software also has several recovery mechanisms available, including forward error correction, acknowledgment messages, and negative acknowledgment messages, which ensure that the service is highly available with little lost data.
O
The ACK/NAK functionality is provided to ensure reliable data transfer: the clients communicate back with the server at EUMETSAT via a feedback channel over the internet, not over the AMT tunnels. The great thing about this is that it is entirely software-based; the solution is compatible with any infrastructure, making for a very simple and user-friendly setup.
O
So we've tested it on multiple different platforms: first in the dedicated GÉANT test lab to verify the software, then in a standalone Amazon Web Services VM, and finally as part of a bigger project for the European Weather Cloud. Actually, that part of the diagram is missing as well; it should be up here in orange. This is a special case: the European Weather Cloud is a cloud-based collaboration platform for meteorological application development and operations within Europe.
O
So basically, this is an ideal solution for when we have multiple EUMETCast Terrestrial users hosted on the same infrastructure. We set up this AMT replicator VM within the EWC, and this replicator has the AMT gateway software and the AMT relay software installed and running in parallel on the same VM.
O
So we plan on hosting several software-based AMT relay VMs, accessible via a load balancer, each servicing up to 5 AMT gateways. On the EWC we expect to host up to 20 users, each currently receiving 400 megabits per second, but this is going to increase to one gigabit per second over the coming years with our new missions on our new satellites.
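As a back-of-the-envelope check on those numbers (a sketch, not an official capacity plan), the aggregate unicast egress that the replicator setup has to sustain works out as follows:

```python
def egress_gbps(users, per_user_mbps):
    """Aggregate unicast egress (Gbps) for N users each pulling one stream.

    With AMT, each gateway behind a unicast-only network receives its own
    unicast-replicated copy, so egress scales linearly with user count.
    """
    return users * per_user_mbps / 1000.0

per_relay_now = egress_gbps(5, 400)    # 2 Gbps per relay VM today
total_now = egress_gbps(20, 400)       # 8 Gbps across the EWC deployment
total_future = egress_gbps(20, 1000)   # 20 Gbps once streams reach 1 Gbps
```

The linear scaling with user count is exactly why the relays sit behind a load balancer and why each relay VM is capped at a handful of gateways.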
O
So here I've left a link to the AMT software, both the gateway and the relay. This is open-source software supported by Juniper Networks and a product of the MBONED working group, which is in the Operations and Management area of the IETF. The gateway here is provided on GitHub, and a relay is available in the same repo as well.
O
Except this relay has a memory leak, so it doesn't run for long; that's why I also posted the second repository, provided by the same software developer, which provides the relay functionality. We've run multiple tests with this software on several different platforms. One of the main tests was a performance test on the European Weather Cloud, where we had three virtual machines, each with the same specifications (same CPU, same memory, same software) running the AMT gateway and the TelliCast client, and we tested the file-based availability.
O
So we measured what the client expected to receive versus what it actually received on each of the VMs, and while monitoring this over a few days, we noticed that one virtual machine was losing 10 times more packets than the other two VMs, even though from our side there was no visible difference between the VMs; all of the statistics appeared the same. In addition, we also ran a test with the AMT relay software on this particular VM, and it started fine, but the performance degraded over time and eventually it stopped running altogether.
O
And so we contacted the EWC cloud provider, and they informed us that this particular VM was on a very highly loaded machine, so the software didn't have enough resources to run; this explained why we were getting these faults. The VM was migrated to a different machine, and we noticed a drastic improvement in the availability statistics, and the AMT relay software ran there without issues.
O
After that, as we expect to have multiple users running this software in cloud environments, where they will not necessarily have insight into or access to the logs or statistics of the underlying infrastructure, we realized that debugging poor performance in these virtual environments could prove quite challenging. One useful parameter is the steal time, which can provide insight into the status of the infrastructure underlying the VMs.
O
So higher values of steal time indicate a higher load on the physical machine. In order to tackle this, we are using the Elasticsearch monitoring tool in conjunction with Metricbeat, which provides statistics visible on a Kibana GUI for each of the hosts on the EWC. This will be installed on all of our client VMs, and alerts will let us know if the steal time exceeds a certain threshold.
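On Linux, steal time is the eighth numeric field of the aggregate `cpu` line in `/proc/stat`, which is also where Metricbeat gets it. A minimal reader (an illustrative sketch, not the actual EWC monitoring setup) that could feed the same kind of threshold alert:

```python
def cpu_steal_fraction(stat_line):
    """Return steal time as a fraction of total CPU time.

    `stat_line` is the aggregate 'cpu' line from /proc/stat. Field 8
    (index 7 after the 'cpu' label) is 'steal': ticks during which the
    hypervisor ran something else while this VM wanted the CPU.
    """
    fields = [int(v) for v in stat_line.split()[1:]]
    total = sum(fields)
    steal = fields[7] if len(fields) > 7 else 0
    return steal / total if total else 0.0

def read_steal():
    """Read the current cumulative steal fraction of this VM."""
    with open("/proc/stat") as f:
        return cpu_steal_fraction(f.readline())
```

In practice one would sample this periodically and alert on the delta between samples, since the counters in `/proc/stat` are cumulative since boot.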
O
We also noticed that availability improved when the UDP receive packet buffers were increased in the VM, decreasing the number of UDP packet losses being reported. Also, in order to provide redundancy in our solution, we have produced a failover script which monitors the availability of the AMT relays, both software- and hardware-based, and in the event of one going down, it will automatically connect to the next available relay.
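The receive-buffer tuning can be done per socket; a hedged sketch (illustrative only; the actual gateway and TelliCast software manage their own sockets):

```python
import socket

def open_udp_receiver(port, rcvbuf_bytes=8 * 1024 * 1024):
    """Open a UDP receive socket with an enlarged kernel receive buffer.

    A bigger SO_RCVBUF absorbs bursts of multicast data while the
    application is briefly descheduled (e.g. under high steal time).
    Note: Linux caps the granted size at net.core.rmem_max, so that
    sysctl usually has to be raised on the VM as well.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, rcvbuf_bytes)
    s.bind(("0.0.0.0", port))
    return s
```

The granted size can be checked afterwards with `getsockopt(SOL_SOCKET, SO_RCVBUF)`; if it comes back much smaller than requested, `net.core.rmem_max` is the limiting factor.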
O
As we can see, this has numerous benefits, which are shown here. The solution is carrier-independent, meaning the data can be sent over NRENs or the internet no matter the connectivity, and it's an entirely software-based solution with a simple and quick setup on the end-user station. It is also NAT-traversal and firewall friendly, ensuring that it's straightforward to set up and maintain. Most importantly for us, it builds on existing capabilities and preserves the same interfaces and software vis-a-vis
O
the user as in the current EUMETCast Terrestrial service. Data subscriptions are managed through our Earth Observation Portal and are fully under the control of the user. The data integrity is preserved and the data is sent encrypted through the tunnels, and EUMETSAT is fully in control of what data is received by which user. To wrap up with the current status: our thorough performance testing is complete, and we have had a client running on the EWC for the past two months, receiving 400 megabits per second with this solution, with a consistent availability of 99.986%.
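To put that availability figure into perspective (simple arithmetic, not an official SLA statement), 99.986% corresponds to roughly six minutes of downtime in a 30-day month:

```python
def downtime_per_month(availability_pct, days=30):
    """Expected minutes of downtime per month at a given availability."""
    return (1.0 - availability_pct / 100.0) * days * 24 * 60
```

`downtime_per_month(99.986)` gives about 6 minutes over 30 days.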
O
So this is in line with our other EUMETCast Terrestrial services. We have the setup ready in the EWC, we've also tested it in Amazon Web Services, and we're going to offer it to users in the EWC in the next few weeks as a best-effort service. We also have ongoing discussions with our member states and partners, and there's a huge amount of interest in it at the moment. We have some pilot users lined up already, and we plan to have the service fully operational by June 2023.
P
Thanks very much, Ruth. My laptop's making some pretty interesting noises at the minute, so if it goes a bit wrong, I'll turn the video off. I won't talk a great deal about us because I'm conscious of time, but we are GÉANT: we connect NREN networks across Europe and the world, and we also foster collaboration and knowledge-sharing communities for NREN members and peers. Which is the button for moving the slide?
P
The next one, there we go, thanks very much. So we have a bunch of members that we do work for and are responsible for connecting; here are some of them. You may or may not work with them, but given the IETF's involvement with NREN networks, I'd be very surprised if you hadn't. These are just some of them. We also work very closely with EUMETSAT on a bunch of projects, not least of which is AMT. So what have we been doing?
P
I probably don't need to linger very long on this slide, because where AMT is has already been covered by several people, so that's all good. This is just an example of a data packet that we've captured during our testing across the network, so you can see how it's composed. As you can see, it's pretty simple; there's nothing overly complicated about it. It's a familiar packet with the AMT header added in the middle. And what are we using to do the magic? We're using Juniper MX devices, specifically MX204s.
P
Everything just works, and we can use automation and standard tools, things like Jenkins and Robot, to do all our testing, our provisioning, and our integration testing. The way we build our relays is as a relay-on-a-stick: essentially we have a single connection, a single AE (aggregated Ethernet) connection using LACP out of the MXes, and we connect that to the rest of our network. Why have we done it this way? Well, it's an extremely simple build.
P
Very basic protocol configuration, so there's very little that can go wrong. On the redundancy side, as Ruth already talked about, this is done using the client software, and we deploy three separate relays so that people can connect to whichever ones they want.
P
Here is an example of the configuration. Like everything else I've talked about, it's fairly simple; we're not doing anything particularly difficult with it. It's quite easy to enable: all we really need to do is enable IGMP and the AMT protocol itself. I've left out things like firewalling and PIM, because they're not necessarily specific to AMT, but they are in there. And here's some output from our AMT relay.
P
This shows us two tunnels that have connected to our relay and have joined several distinct groups that we're then sending traffic over. A point of note here: if you notice, the interface they're bound to is the ud interface. This is using Juniper's standard tunnel-services configuration, and it builds dynamic ud interfaces to bind these tunnels to. So again,
P
standard stuff, all built into the device; we don't really have to do anything special to do that. As Ruth talked about earlier on, we've got three relays, and here's a picture of them. It's not terribly complicated, so I won't linger too long on this, but it shows that the relays themselves are using AE bundles, and it also shows we have them in London, Amsterdam, and Frankfurt.
P
It's not all been fun and games; we've had some caveats and some deployment issues, most of which are now fixed, thankfully. On protocol authentication: by default there is no protocol authentication on the relay itself, so we've had to get around this by using the Juniper loopback firewall filters. It's not a major drama, but it's just something we need to be aware of when provisioning a new firewall filter on a new router to run the AMT relay; something to keep in mind and to build into our automation.
P
We tried a bunch of things like specifying templates and various other things, but nothing really did the job, so we enable this globally on the router. This, probably more than anything else, is one of the reasons we decided to go for dedicated devices rather than just enabling the protocol on our normal core routers.
P
We also have some MTU restrictions, so the customers or clients that connect to our relays need to be aware of the MTU size of the traffic coming from the stream. The stream itself should be tailored so that it stays below the reduced maximum, to account for the AMT header and the encapsulating IP and UDP headers.
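The overhead can be made concrete. For IPv4, AMT adds an outer IP header (20 bytes), an outer UDP header (8 bytes), and the 2-byte AMT Multicast Data message header from RFC 7450 around the original multicast datagram. A small sketch of the arithmetic:

```python
# Per-packet overhead added by AMT encapsulation (IPv4 case, RFC 7450)
OUTER_IP = 20     # outer IPv4 header (relay -> gateway)
OUTER_UDP = 8     # outer UDP header (AMT port 2268)
AMT_DATA_HDR = 2  # AMT Multicast Data message header

def max_inner_datagram(path_mtu=1500):
    """Largest original multicast IP datagram that fits unfragmented."""
    return path_mtu - OUTER_IP - OUTER_UDP - AMT_DATA_HDR

def max_inner_udp_payload(path_mtu=1500):
    """Largest UDP payload the source can send (minus inner IPv4 + UDP)."""
    return max_inner_datagram(path_mtu) - 20 - 8
```

On a 1500-byte path, this works out to a 1470-byte inner datagram, or a 1442-byte UDP payload at the source. Tunnels with smaller path MTUs, or IPv6 on the outer leg, reduce these numbers further.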
P
What issues have we had? We've had some issues with the packet forwarding engine in some cases, and we're still not entirely sure of the exact trigger points for this. In some cases the relays wouldn't pass traffic once it had gone through the AMT process: once it had gone through the protocol and the packet had been created, it would be dropped on egress.
P
This has now been fixed and it's all good, but certainly, due to the complexity of this issue, we were super glad to have vendor-supported equipment: we could call talented and knowledgeable engineers to get it fixed in a reasonably fast time frame. So that's all good.
P
That's how we've been testing the service. Obviously Ruth has been testing it using her iDirect-provided clients and service, which has been great. We've also used open-source tools like iperf, some of whose later builds allow you to join SSM channels, and VLC (VideoLAN) to establish those streams.
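An SSM test receiver along the lines of what those tools do can be sketched with a source-specific join on a plain UDP socket. This is a hedged illustration (the `IP_ADD_SOURCE_MEMBERSHIP` value is the Linux one and is an assumption on other platforms; the group and source addresses are placeholders):

```python
import socket
import struct

# Linux value for IP_ADD_SOURCE_MEMBERSHIP; not exported by every Python build
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

def ssm_mreq_source(group, source, iface="0.0.0.0"):
    """Pack a struct ip_mreq_source: multiaddr, interface, sourceaddr."""
    return (socket.inet_aton(group)
            + socket.inet_aton(iface)
            + socket.inet_aton(source))

def join_ssm(group, source, port):
    """Open a UDP socket joined to an (S,G) channel, as an SSM receiver."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("0.0.0.0", port))
    s.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                 ssm_mreq_source(group, source))
    return s  # then recvfrom() to count packets / measure loss
```

The (S,G) join maps to an IGMPv3 source-specific report on the wire, which is exactly what the AMT gateway forwards to the relay in a Membership Update.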
P
This is a suite we use to test a new version of software before we roll it out to production, and we include AMT in that. I'll give you a little bit more detail on that: this is an example of what it looks like when it runs and passes successfully.
P
It sets up a bunch of tests, sets the relay up, sends a bunch of traffic, and then makes sure that the packet loss, the response times, and things like that are all within specified limits. So all good: that runs three or four times a night, and we typically run it for a few months on any new version of software we're certifying, so we get a really good flavor for exactly how it performs with that
P
given version of software. This is possibly the dullest bar chart you'll see all year, but it is nevertheless accurate. We've been using Expo testers to push multicast traffic through this, and we can currently run about a sustained 7.5 gigabits through the router. I would suspect that this is due to the configuration of the tunnel services: we configure tunnel services to reserve up to 10 gigabits of traffic, and so that's probably the ceiling.
P
We know that certainly Ruth's team wants to push a lot of traffic through our relays, and we also want to potentially offer this to some of the other NRENs to do their own work on it, so we know that multiples of 10 gig are not going to last us for long; we definitely need to look into 100 gig in the future. From there: currently vMXes don't support AMT fully.
P
We do have that on the roadmap; we'd like to see it, and we'll continue working with our vendor to get it pushed through. The idea would be to dynamically spin up those relays in countries that are not necessarily close to our existing hardware relays. So that is something we are very interested in, but right now it's for the future. I also touched on the fact that the AMT protocol doesn't have built-in authentication.
P
So at the moment, firewall access through to those AMT relays is essentially hard-coded, and every time a non-NREN or non-partner range needs access to it, say for example in Amazon or wherever else, we need to add those networks into that firewall manually. We are looking at other options to get around that, whether it's a software thing or whether we apply some firewalling in front of it, but that's on the roadmap, I think.
K
Yeah, first of all, great stuff, thanks; I love seeing this torch still burning. I'm curious if you've looked at Jake's work to see if any of it can benefit you and, if so, considered integration, maybe taking up that torch.
P
Yeah, so we used Jake's software quite a lot during our testing phases, certainly with the iperf testing and things like that. So it is something that's in use. I don't know, Ruth, if it's something you want to touch on; you've been using it a little bit more than I have, I think.
O
Yeah, exactly; Jake's been incredibly helpful, actually. We've been in touch with him quite a bit regarding the software and anything that we find with it, and it's certainly what we're mainly using for our solution. It's definitely something we can look into further in the future, but currently it's working really well for us, so we'll test it with a few users first and see how that goes.
K
O
Oh great, yeah, you know, it's great, because we've already spoken to a lot of our users about it and it's something they're really interested in. We want to completely replace the GRE tunnel setups that we have at the moment with AMT, so it's something we'll be using a lot in the future.
K
B
O
Pardon, can you repeat that? Sorry.
O
So the AMT relay is provided, again, by Jake Holland via his software repositories; I left the links on the slide there.
B
A
I'll step in and throw in a question, Ruth. Can you talk about what if you didn't use multicast? What if you just did unicast for your terrestrial service; why not just do that? It's easier and more ubiquitous. What kind of savings are you realizing through multicast?
O
Well, currently we have quite a lot of users that have access to native multicast over NRENs, and they're getting very high availability; they have a good setup. Actually, a lot of met offices don't even really have the resources to change their setup, so once they have it going, they tend to stick with it, and it just works for them. But obviously not everyone has this possibility. The GRE tunnels in particular are an issue for us: they take a really long time to set up.
O
We also see a lot of problems with them, and they're difficult to troubleshoot, so they're definitely something that we want to get rid of completely and replace entirely with AMT. As well, a lot of the users we've been talking to now are all setting up stations in virtual environments, so for new users we're definitely looking to start them off with AMT, because it is such a simple setup.
O
It takes less than half an hour to install and have everything ready for them, which is a huge difference from what they have currently. For some existing users, native multicast works really well, and we have no reason to change this currently, as we have a good setup and a contract with GÉANT for it as well. But for the future, we can definitely look more towards AMT and other solutions as more users move to virtual environments and different setups.
O
Those on native multicast will probably stick with that, yeah.
A
Great; well, thank you, Ruth and Rich. This was really great, and it's kind of a realization of a lot of the work that a lot of folks have been working on over the years, so it's gratifying to see this stuff in action.
A
Okay, so continuing along: there was some mention of multicast to the browser, AMT to the browser. Lauren has been doing some work on this and has been enhancing the Multicast Menu, so you're up.
E
N
Viewing in the browser: if you remember, at the last IETF I presented the Multicast Menu as a web application, a central collector of multicast streams that also allowed viewing these streams in VLC. VLC works great, but it was a not terribly friendly process to get a stream from the Multicast Menu into VLC.
N
I saw something about my mic; I will try to turn it down a little bit. We also demoed off-net sourcing, so live streaming from a phone onto the Multicast Menu and viewing that. Today's improvement that I want to talk about is viewing streams directly in the browser. This is all based on the Python AMT gateway that Natalie, Chris, and Eric wrote over the summer; what I've done is taken that and integrated it into the Multicast Menu.
N
So from Alice's browser, whoever's browser, there's a GET request sent to the Multicast Menu asking for the viewing page. The Multicast Menu, upon receiving that, reaches out to the specified AMT relay for whatever stream the user requested and establishes an AMT tunnel. The AMT traffic is sent to a local UDP port by the Python code doing the tunnel establishment, and then we use ffmpeg, all on the Multicast Menu server, to take that loopback UDP port and turn it into HLS files on the server.
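The UDP-to-HLS step can be sketched as an ffmpeg invocation launched from Python. This is an illustrative sketch only: the actual Multicast Menu command line may differ, and the flags shown are standard ffmpeg HLS muxer options; `ffmpeg` is assumed to be on PATH and the feed to be MPEG-TS:

```python
import subprocess

def build_hls_cmd(udp_port, out_dir):
    """ffmpeg command that re-muxes a local UDP feed into HLS segments."""
    return [
        "ffmpeg",
        "-i", f"udp://127.0.0.1:{udp_port}",  # fed by the AMT gateway code
        "-c", "copy",                          # no transcode, just re-mux
        "-f", "hls",
        "-hls_time", "2",                      # short segments: quick startup
        "-hls_list_size", "5",                 # rolling live playlist
        "-hls_flags", "delete_segments",       # drop old segments, bound disk use
        f"{out_dir}/stream.m3u8",
    ]

def udp_to_hls(udp_port, out_dir):
    """Start ffmpeg in the background; the web app serves out_dir over HTTP."""
    return subprocess.Popen(build_hls_cmd(udp_port, out_dir))
```

The browser then just fetches `stream.m3u8` and its segments over plain HTTP, which is what lets a stock JavaScript HLS player handle playback.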
N
Those HLS files are then transmitted back to Alice, back to the user, who plays them with JavaScript in their browser. So it still has the replication component that VLC does; it's not true multicast, because Alice is presumed to be on a unicast-only network, where we have to replicate for that last step.
N
But I'm going to start by doing the same demo that I did at IETF 114, where I'm going to start streaming from my iPhone. So I've started to stream on my iPhone; if you go to the Multicast Menu... this is a little hard to demonstrate. Is there any way I can share my screen? Can I request that?
N
I'm getting "all the slots requested". Oh, I need to stop the slide deck; it has to be stopped first.
N
Okay, great. So we're on the Multicast Menu, and we see that this is my live stream here. We're going to go in and do a little bit of specifying the relay that we're trying to go to; this can be specified automatically later. So we see the live stream I have going, and I'm going to click "watch in browser". Speak up if you're not seeing this.
N
It does take a moment, and oftentimes it times out the first time, just because it's doing all this work to establish, but after refreshing we should be able to... so, you can hear my voice, and you can see the lovely plant in my dorm room. This is a live stream that I am currently controlling.
N
If I move it around a little bit, the video will eventually catch up. So yeah, it is live, and pretty cool. Going back to the slides, which I have here, I can just finish the slide deck; I'll leave that stream up if anyone wants to join it themselves. But yeah, that's the demo: we showed viewing a live stream from a phone in a browser. There are a couple of things to improve.
N
Once no one is interested in watching anymore, the tunnel stays open indefinitely at this point, which obviously could be improved. Then there's dynamically handling the AMT relay choice: VLC has a process where it tries all of the AMT relays to see where it gets data, and this implementation currently doesn't. It tries the first one, and if that one fails, then it fails, so improving that would be nice. And then there's making this a little more scalable and looking at web-application-security type stuff, but yeah.
N
A
One of the use cases I could envision here is that you're essentially moving the replication point from the AMT relay to the actual web server, which might be kind of an interesting use case for things like MEC, you know, edge-compute stuff. I think you've stumbled upon a really interesting use case that might scale even way better than AMT could, so this could be really interesting.
A
Cool. Any other questions for Lauren? Next.
B
Sorry, yeah: have you considered what happens if you deploy the web server actually in the MBone? So do you always use AMT anyway, or do you first try a native join to get this video natively?
N
A
Max, do I hear you volunteering to provide a web server connected to the MBone?
B
A
Well again, really awesome stuff, folks. Take a look: go to 3dn.net and check this stuff out. It's going to continue to be enhanced, and really, again, amazing work from Lauren and the other folks with whom she's been collaborating.
A
Cool, so we are sort of out of time. Mike, I had you on the agenda if we had time; if you want to come up and talk about the Lessons Learned doc until they cut us off, you are welcome.
F
You know, I'll go ahead and just email the list; I presented in PIM, as you know. So let's go ahead and call it a day. Thanks. All right, cool.
A
Well, everyone, thank you; lots of great stuff. I think I learned a lesson that we need to reserve two hours, not 90 minutes, so apologies for the rushed agenda, but that's better than having no time at all and us being bored. So thanks, everybody, and we'll see you at IETF 116.