From YouTube: IETF114 MBONED 20220728 1730
E: Okay, we're looking for a note taker; we'll wait for people to trickle in. Anybody interested in taking notes? You don't have to be in the room. Who wants to be a note taker? Note-taking is an excellent opportunity to...
A: How about you take notes, and I'll take notes while you're presenting.
E: Switch to the... go back to the... I'm less familiar with that interface. That's the online tool; do the remote one. Sorry, click on the video link.
E: Meeting tips, for those who are just arriving today and haven't been in any of these previous meetings: even if you're in the room, please do join the Meetecho tool, because that acts as the blue sheets, and it also enables you to participate in chat and in any polls that we have. Just a reminder: please wear your masks unless you are actively speaking at the mic; that can help [the audio] be a little clearer. And remote participants, keep your audio and video off unless you are presenting.
E: We have an action-packed agenda. Let me know if we missed anything, or if you'd like to bash this agenda; otherwise, this is the plan. Actually, where's Warren? He was supposed to be here. Is Warren in here? Warren, with a W.
E: Yes, we missed Warren. All right. Yeah, are any other ops-area meetings going on right now?
E: Lunch. All right, so: status of working group docs. The YANG models document. Sandy, are you on? Sandy, would you like to speak up? Do you have any updates or anything you'd like to say about the YANG models draft?
E: Sandy, are you there? I see you in the... yeah, okay. Sandy, let us know if you want to jump in; the telemetry draft, you will be presenting on this, and...
E: Okay, yeah, so that's been a pretty consistent thing. So please do, if you're interested, if you have a passion for YANG, or even if you don't: that's a draft that's been sitting for a while and needing comments. So please speak up. Please take a look at it, review it, and reach out to the authors with any comments you might have. The redundant ingress failover draft: this has been adopted since Vienna.
E: So that's a new working group document. Do any authors or co-authors want to speak up? Anything to note about that draft, any updates, or is it just the same as when it was originally submitted?
E: Okay, Jake, do you want to come up and give us an update on your multicast-to-the-browser drafts: DORMS, AMBI, CBACC, and MNAT?
E: I should note DORMS just went to working group last call. We've heard some comments. I would encourage others: please speak up on the list if you want to see this document advance to the IESG.
G: Does that work? Yeah, thanks Lenny. This is Jake. So most of my time since 113, as I mentioned, was on the QUIC work.
G: Well, soonish, I hope. There have been a number of networks that wanted to use that, so I'll probably want to push that forward, and then CBACC, probably more than AMBI, because for the endpoint authentication QUIC will also cover that. So AMBI might turn into just a sort of forwarding [protection] for the network, which is still important, but probably not as important as the endpoint.
G: ...when I thought it was going to be using that. So yeah, these are still [active]; I intend to get them over the line one day. Thank you.
G: Yeah, so I actually asked the RFC editors this earlier this week. Apparently documents get auto-clustered, if there are normative references between them, when they reach the RFC editor's queue. But I can also just send a note to the RFC editors and ask, so I might do that. MNAT is not part of the same cluster; I just wanted a cluster for DORMS, AMBI, and CBACC, because they're all part of the same protocol.
E: Great, thanks. So again, we encourage everybody: take a look at and speak up on DORMS, and take a look at and review AMBI and CBACC, so we can work on advancing those soon as well.
E: And those are all the active working group documents.
E: So we'll move on. Warren was supposed to be here and, if I might steal his thunder, I think he was saying that his term as AD is expiring soon and he wanted to encourage others to consider becoming an AD. If you're interested in becoming an AD and being his successor, and have questions about that, please do reach out to him.
D: Yeah, this is Haoyu Song from Futurewei. Today I'm going to give you a brief update and recap of this on-path telemetry using IOAM for multicast, on behalf of our co-authors. Next slide, please. So, first, updates in this new revision: several technologies this document is based on have gone through [the process]; some of them have been published as RFCs, and some are in the last stage before publication.
D: So now is a good time to make sure our scheme actually complies with the existing standards.
D: First, the problem we want to solve in this draft is applying the on-path telemetry technologies to multicast. We think it's useful to monitor the multicast traffic. So-called on-path telemetry means we insert the instruction and the telemetry data in the user packet itself.
D
So
by
doing
that,
we
can
collect
the
real-time
performance
on
the
experience
of
user
traffic
and
it's
also
very
useful
in
the
in
terms
of
multicast.
It
can
help
us
to
reconstruct
the
multicast
tree
from
the
data
trees.
We
collected.
D
But
but
the
issue
about
that
is,
if
we
just
use
iom
trees
option,
it
will
introduce
a
considerable
data
redundancy,
because
each
destination
node
will
collect
the
trees
of
the
entire
past.
You
can
imagine
in
the
in
this
tree.
You
know
many
sections
actually
are
overlapped
from
the
root
to
each
node,
but
if
you
get
all
those
data
and
all
the
leaf
nodes
that
will
there
will
be
a
lot
of
data
redundancy.
We
want
to
avoid
that.
D: So, the trace option means we just keep adding the telemetry data to the user packet along the forwarding path. So you can see why it introduces a lot of redundant data in a multicast tree.
D
So
the
solution
is
we
basically
combine
the
iom
trace
option
and
the
postcard
based
telemetry.
We
don't
try
to
collect
the
data
trees
for
the
entire
path.
Actually
at
each
branching
node.
We
just
configure
configure
node
to
export
the
the
data
we
collected
so
far.
Then
we
clear
the
trace
twist
and
on
then
on
each
branch.
We
can
do
the
data
collection
again.
D
So
this
is
a
figure
to
show
the
show
an
example.
You
can
see
this
multicast
tree
and
to
node
b
there
will
be
two
branches
and
node
d.
There
will
be
three
branches,
then
we
configure
the
node
b
and
the
d
to
let
them
know
and
this
point
they
need
to
just
export
the
data
collected.
So
far,
then
we
can
clear
the
the
data
part.
Then
we
can
start
over
again
on
each
on
each
branches
and
but
to
reconstruct
the
multicast
tree.
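The stitching step described above can be sketched roughly as follows. The export record format used here, a starting branch-node ID plus the node list collected since that point, is an assumption for illustration only, not the draft's actual encoding:

```python
# Hypothetical sketch: reconstructing a multicast tree from per-segment
# telemetry exports. Each export is assumed to carry the node list collected
# since the last branching point, plus the ID of the branching node the
# segment started from (None for the segment that begins at the root).
def build_tree(exports):
    """Return a child-adjacency map of the multicast tree."""
    children = {}
    for start, path in exports:
        prev = start
        for node in path:
            if prev is not None:
                children.setdefault(prev, set()).add(node)
            prev = node
    return children

# Example matching the figure: root A -> B; B branches to C and D;
# D branches further to E and F.
exports = [
    (None, ["A", "B"]),  # exported at branching node B
    ("B", ["C"]),
    ("B", ["D"]),        # exported at branching node D
    ("D", ["E"]),
    ("D", ["F"]),
]
tree = build_tree(exports)
print(tree["B"])  # {'C', 'D'} (set order may vary)
```

Each segment overlaps its predecessor only at the branching node, which is what removes the per-leaf redundancy of the plain trace option.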
D: The second optional solution is to use IOAM DEX, the direct export option. With this, each packet only carries an instruction header telling you what data to collect, and then each node will just send an independent [export packet].
D: This is especially challenging for multicast, because we also need to identify which branch the postcard data comes from. So, to solve that problem, we need a new data type; we call that the branch ID, or branch identifier. The branch identifier contains two parts; the first part is a node ID.
D: So the left side shows the frame format of this direct export instruction header. We will need to allocate a flag bit.
D: We call it M, to indicate this is for the multicast use case. So if the M bit is set to one, it means there will be an optional data field, the multicast branch ID, included in the data part.
D
You
can
see
there's
a
third
optional
part
which
includes
the
branch
id
we
just
introduced.
So
with
this
such
information
information,
it
allows
us
to
easily
reconstruct
the
multicast
tree
on
the
right
side.
You
can
see
an
example
in
the
bracket.
The
item
is
just
means
the
branch
id
you
can
see
in
the
node,
a
the.
D
D
D
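To make the M-flag and branch-ID idea concrete, here is a toy parser. The field layout below (16-bit namespace ID, 4 flag bits, a proposed M flag, then a 32-bit branch ID split into a 24-bit node ID and an 8-bit local branch index) is an assumption for illustration; the real DEX option layout is defined in RFC 9326, and the M flag and branch ID are only proposals in this draft:

```python
import struct

# Illustrative only: offsets and the M-flag position are assumptions,
# not the draft's or RFC 9326's normative encoding.
M_FLAG = 0x8  # hypothetical bit for the proposed multicast (M) flag

def parse_dex(data):
    """Parse a simplified DEX-style header: namespace ID (16 bits),
    flags (4 bits) plus reserved (12 bits), then an optional 32-bit
    multicast branch ID when the M flag is set."""
    namespace_id, flags_and_rest = struct.unpack_from("!HH", data, 0)
    flags = flags_and_rest >> 12
    branch_id = None
    if flags & M_FLAG:
        (raw,) = struct.unpack_from("!I", data, 4)
        # branch ID = node ID (24 bits) + local branch index (8 bits);
        # the 8-bit index is what limits a node to 256 branches.
        branch_id = (raw >> 8, raw & 0xFF)
    return namespace_id, flags, branch_id

hdr = struct.pack("!HHI", 7, M_FLAG << 12, (0xAB << 8) | 3)
print(parse_dex(hdr))  # (7, 8, (171, 3))
```

The 8-bit local index in this sketch is what the later discussion about "up to 256 different branches" per router refers to.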
Okay, so this is a brief introduction. So far we think this document is already pretty mature, and therefore we ask the working group to consider a working group last call for it. Thank you.
G: Yeah. Do you have any implementation status to share on this work? It looks pretty good, I think.
D: We don't have an implementation for the multicast part yet, but we do have an implementation for IOAM in general.
G: I haven't looked at the latest version of the draft. Do you talk about index stability when interfaces come up and down, like when new interfaces are added? And is it relative to the current fan-out tree or to the interfaces that are available? Like, how are the local indices assigned?
Yeah, so the branch ID, as I understand it, is...
G: ...unique per branch. And then, as it changes while flows are in progress, are you spec'ing out how the IDs change, or how to re-aggregate those, basically?
H: This is [name garbled] from Deutsche Telekom. Basically, Jake, you asked half of my question as well. I was looking at the data scheme that you provided, I think two or three slides back.
D: You know, the node ID data is already available to be included. But if we think about the multicast case, if there's no branch ID then, for example, the next nodes after the branching point will both send the postcard packet, and you cannot tell whether they belong to different branches. Correct; with that information, you can tell.
H: So, rephrasing my question: I have 2,000 multicast streams in my network, and I have a router, a P router, in my network which has 350 multicast-activated interfaces. Okay, how many postcards will I receive?
D: Oh, so, yeah, for each tree... you might be able to aggregate the exported data, but it's easy to distinguish the different multicast trees, because in addition to this branch ID we also have the flow ID, and in the figure we have some other information to tell you which flow it belongs to. So you can easily attribute the data to the different multicast trees; they will not mix together.
D: A flow ID and a sequence number, yeah. The flow ID is basically a unique number that tells you [which flow it is]. The sequence number tells you the order of the packets.
D: This comes from every packet, and each node will assign that part; that's the multicast branch ID, because you need that information to reconstruct the tree.
D: One router, I said, based on the current scheme, supports up to 256 different branches.
H: Oh yeah, okay, so that was exactly what I was pointing at. 256: I don't know, for a large IPTV deployment that is already narrow; on our P routers, for example, we have more than that. But on the other hand, it's a good start and a good approach.
H: What I would just like to point out is that the interface is really key and needs to be included in this postcard, because otherwise you will have a hard time, because of the sheer amount of postcards that you will [receive].
D: So you think that eight bits is too few?
J: This is Dino. Thanks for the presentation. I just wanted to say I support the idea of wanting to solve multicast telemetry. At the Olympics, the U.S. broadcaster wanted to look at data on the tree, both downstream and upstream, and what we did in Tokyo is we used the LISP control plane to find out things like RTT times on each branch.
J: One-way hop count, forward and reverse; latency, forward and reverse. And the advantage we had with doing it with an overlay is that we didn't have to touch the underlay routers at all, and we were still able to get that granularity of information. And I was thinking maybe I'll present that next time, at the next meeting, if you want. Thank you; but, you know, definitely [support] for solving this problem.
E: So, just a quick show of hands: who has read this draft thus far? Yeah, if you could use the tool.
E: The most important thing: if you have, please raise your hand; if you haven't, you can either abstain or not raise your hand, and we can do the math. All right. So it sounds like there's a lot of interest, just needing more folks to read the draft. So please do take a look. All right.
K: There we go, the call is set up.
K: All right, hello everyone. I'm Max from TU Berlin, and I'm presenting the multicast QUIC extension. Yeah, Jake presented it already at QUIC just before lunch, and now I'm going to present it to you. So let's start with the basic idea; next slide.
K: But yeah, the idea is that we still want to get multicast into the browser, basically, and we're looking for ways to do that. And since browsers have QUIC implementations, we thought one way to do that would be to use those QUIC implementations to find a way to get multicast in. So what does this extension do?
K: It basically uses a QUIC unicast connection as a sort of anchor or side channel. From there the client starts, and the client can say: I support multicast, and these are my limits, like my maximum supported rate, etc. And then the server can tell the client, over the unicast connection, about some multicast channels, basically, and tell the client to join these multicast channels to receive data.
K: So it's server-driven: the server picks which SSM channels (we only support SSM channels) the client should join. The client can then decide: okay, I'm going to try to join these SSM channels, and on these SSM channels the client will find QUIC packets which contain data, basically. If the client is unable to join these channels, the server can then decide to also send the data over the regular unicast connection, which means that, whether multicast is supported or not, the client will still get the data.
K: Of course, they would have to set a flag to support multicast, because obviously for some applications they don't want multicast; but from that point on, it would just see normal QUIC data arrive on the connection, basically. Okay. So what QUIC also gives us is a way to encrypt and use integrity protection, so each packet, whether it's sent on the multicast channel or the regular connection, is encrypted.
K: Obviously, the issue is that every receiver gets the same packet over the multicast channel. So it's not a high bar to decrypt, because the same key is sent to everyone; so that alone isn't enough to guarantee integrity. So we also send integrity frames, which are basically hashes for each packet, that guarantee that when the receiver sees a packet over multicast, it knows that it's a valid packet.
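The integrity check just described can be sketched like this. The class and method names are illustrative, not from the draft; the only assumption is the mechanism as stated: hashes arrive over the authenticated unicast connection, and a multicast packet is accepted only if its hash matches one of them.

```python
import hashlib

# Hypothetical sketch of the receiver-side integrity check: the server sends,
# over the authenticated unicast connection, hashes of the packets it will
# transmit on the multicast channel; the receiver only accepts a multicast
# packet whose hash appears in that set.
class IntegrityChecker:
    def __init__(self):
        self.expected = set()

    def on_integrity_frame(self, hashes):
        """Record hashes delivered over the unicast connection."""
        self.expected.update(hashes)

    def accept(self, packet: bytes) -> bool:
        digest = hashlib.sha256(packet).digest()
        if digest in self.expected:
            self.expected.discard(digest)  # each hash authorizes one packet
            return True
        return False

checker = IntegrityChecker()
pkt = b"quic-multicast-payload"
checker.on_integrity_frame([hashlib.sha256(pkt).digest()])
print(checker.accept(pkt))        # True: hash was announced over unicast
print(checker.accept(b"forged"))  # False: injected packet is rejected
```

This is why the shared multicast decryption key alone is not a problem: an on-path attacker who knows the key still cannot forge a packet whose hash was announced on the unicast channel.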
The client also ACKs, over unicast, the packets it receives over the multicast channels, so the server knows which packets arrive and which packets get lost. So in that way we have reliability. And the flow control and congestion control are obviously different.
So, as I said before, the client sets its own limits, and that's the way congestion control is done. Of course, the server could also do something when it sees that a lot of packets get dropped, because it doesn't receive the ACKs; then it could tell the client to leave some channels.
K: The client can't send multicast. Sure, yeah.
K: They will get the data, but they can't necessarily decrypt it or check the integrity.
J: That's fine; I just want to know what the trade-offs were in the architecture. And so that means you're actually putting in some access control. This is really a source-initiated multicast communication, and the source really has control of who joins the group. Okay, sounds good. Okay, I shouldn't have said "sounds good", but okay.
K: Right. As I said, the packets arriving over multicast, for the application they don't differentiate. Yes, Jake?
G: Yeah, again, just to clarify: from a multicast-channel perspective, you're right, there's a source and a receiver; but since this is maintained as part of a QUIC connection, this is why we chose the terminology "server" and "client". It's in the QUIC context that we have a server and a...
I: Yeah, Kyle Rose. I mean, I think one of the important things about this proposal is that it's not a general multicast mechanism. It's intended specifically for the case in which you already have a relationship with a QUIC server, and we're just providing an alternate means for transmitting data that might be shared among many different clients.
J: So you don't have to unicast-replicate over the unicast QUIC channels. ("That's correct.") Okay, so another thing I thought about: it might be hard for me to join the group, because the control channel tells me the (S,G); I may not know it, I could guess it, but I may not know it, so it would be hard for me to join and receive encrypted data that I can't decrypt. Do you agree with that? Okay.
F: Yeah, all right, okay, stick with us. So this sounds pretty cool. Just one thought I had: in the BIER working group, they are looking at stuff where a source can send to certain clients without the client having to join first. So the membership is really driven all by the source, and they're looking into using it for HTTP in some cases. So I think this could fit well within that. Of course, BIER is not that much used yet, so you want to solve it for multicast in general.
K: Right, it scales better than having 10,000 clients receiving the data over unicast. Altogether, right, I mean, we were thinking about not always ACKing stuff, and you can use... but yeah, it would be somewhat of a violation of QUIC.
I: Right, and because of the way that QUIC handles ACKs, it's probably sublinear anyway, assuming that you're not losing a lot of packets. But yeah, I think this is one of those things where we have ideas about how this is going to behave, but we're not really going to know until it's in practice and we're experimenting with it. So, I mean, you know, we're open to helpful analysis from other people as well, or thoughts on how this might work out.
L: Yeah, I actually don't know much about QUIC, so bear with my questions, but I guess the one difference between this and 10,000 separate unicast sessions is that those 10,000 different sessions are separate ones, and here all these ACKs are for the same session.
G: Yeah, you can ACK one in many; but to come back to the general scaling questions, I would draw an analogy to NORM, the existing NACK-oriented reliable multicast spec. That one talks about a single server scaling only into the tens of thousands, for the reasons you were talking about, but we think we can distribute this over multiple servers, the same way we do other kinds of unicast distribution.
G: But that's not part of this spec at this time. Also things like FEC frames, if they turn out to be useful, and maybe we can do more aggregation of the ACKs, or a NACK or something; but right now we're trying to be as vanilla as we can in a QUIC context.
J: So I have a bunch of detailed questions. I want to let you finish your presentation, but actually I think Jake answered half of the things. The broad statement I want to make: have you guys looked at NORM? The answer is yes. It looks like you looked at NACKs, because I was going to say: did you look at the early Van Jacobson work, and the work that the Cisco guys did in the 90s on PGM, Pretty Good Multicast? These are all various forms of reliable multicast transports, and the questions about retransmissions [apply].
K: Yeah, so speaking of retransmits, the retransmit could happen both ways. So you could have QUIC datagrams, so you don't have retransmits; but if you have stream frames, the retransmit could happen over multicast, if enough clients or receivers lose the packet, or you could individually retransmit them over the unicast channel as well.
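The repair-path choice just described (and revisited later in the PGM discussion) can be sketched as a simple threshold rule. This is an implementation decision, not something the draft mandates, and the one-third threshold is an assumption for illustration:

```python
# Sketch of the retransmit choice described above: if enough receivers lost
# a packet, resend it once on the multicast channel; otherwise repair each
# affected receiver individually over its unicast connection. The one-third
# threshold is an illustrative assumption, not from the draft.
def plan_retransmits(losses, num_receivers, threshold=1 / 3):
    """losses: dict mapping packet number -> set of receivers that lost it.
    Returns (packets to resend via multicast, {receiver: unicast packets})."""
    via_multicast, via_unicast = [], {}
    for pkt, losers in losses.items():
        if len(losers) / num_receivers >= threshold:
            via_multicast.append(pkt)  # correlated loss: repair the whole group
        else:
            for r in losers:           # isolated loss: repair per receiver
                via_unicast.setdefault(r, []).append(pkt)
    return via_multicast, via_unicast

mc, uc = plan_retransmits({7: {"a", "b", "c"}, 9: {"a"}}, num_receivers=6)
print(mc, uc)  # [7] {'a': [9]}
```

Receivers that already hold a multicast-retransmitted packet simply discard the duplicate, which QUIC's packet numbering handles naturally, as noted later in the discussion.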
E: What do you recommend? Do you want to finish the presentation and then take questions, or is this helpful?
E: And just a note: Gorry mentioned in the chat that QUIC does not need to ACK every packet.
K: Right, those are the problems we solve. But yes, as I said already: we want to get it into the browser, we're going to have encryption for the packets, and we have integrity checks like AMBI uses, as Jake said, so AMBI isn't [required]... yeah.
K: Right, okay, I think I said most of that already. The scalability is in the data plane and not the control plane. Yeah, okay. Right, the concept of the draft: we're already on version 3.
We had some reviews, internally and from Lucas and from Kyle, and for us, at least, the current structure seems clear in the architecture, and we don't see any reasons that make it impossible to actually get it deployed and used, or things that are in violation of the...
K: ...draft for multicast. So we got some feedback from the QUIC working group. Do you want to...?
G: Yeah, I mean, there weren't many people who commented. Lars had some positive things to say. I think Alex questioned whether it should happen in QUIC; you know, I think that can be discussed on the mailing list. Martin Duke said he found it technically interesting and would be doing a review, so I'm looking forward to that. You know, I expect that a lot of people in QUIC remain skeptical. Some of the offline comments...
G: ...I've had say that there's some skepticism that we will adequately solve the sort of origin security model for counting as non-mixed content in browsers.
G: I think the concerns are minimal enough that they can be addressed, especially if we consider things like... so, Apple talked about how they sort of treat a join of a multicast group under the same rubric as sort of local-discovery things, as a potential privacy...
G
Exposure
that
requires
a
user
input.
So
this
kind
of
thing,
I
think,
if
it
becomes
more
normalized,
can
contribute
to
protecting
users
against
inappropriate
privacy
violations
and
could-
and
I
think
that
that's
the
main
difference
in
terms
of
security,
although
I'm
not
sure
that
I
that
I've
fully
captured
everyone's
objections
to
it
or
that
they
fully
thought
it
through
honestly,
because
a
lot
of
them,
I
would
say,
maybe
don't
really
want
to
think
about
it
too
hard.
Yet
I
think
that
will
change
if
we
can
manage
to
get
a
decent
deployment.
G: One that does something; but it's not going to be in a browser first. It's going to have to be something like a fat client that's launched from a browser, if it's going to address web video, for example. But we would like to do that, you know, with our demo; yeah, that's kind of where we're headed first, and then one day, maybe, into a browser.
J: All right, sure, so this is Dino. So I'm getting a really strong gut feeling that this is a really good architecture; there are all these positive vibes going through my mind, so I think I'm really happy, and I'll explain why. You solve the source discovery problem, because it's done at the source. That's really good.
J: The fact that you're mixing unicast and multicast means that if you wanted the unicast connections to go on an underlay and you wanted multicast to go on an overlay, this could happen with this architecture. That's really cool, at least for me, because I'm an overlay guy these days. Now I have a detailed question. So in PGM there were NACKs that came from the receivers when they saw a packet out of sequence, and the intermediate routers would build these loss neighborhoods.
J: So when the packet was retransmitted, it would only go to the loss neighborhoods and not to the receivers that had received it. So, can you explain... I realize that if you want to retransmit a multicast packet that has been lost somewhere on some branch, you may or may not be able to identify where it was lost, and you could certainly retransmit on the unicast channels, which could be inefficient if the loss neighborhood was really large, right?
J: So that's my question: what are you guys thinking about how to do this? Do you just retransmit on multicast and let the guys who got it drop the duplicates? Because we did a lot of research on this with PGM, and we thought that the NACK neighborhood, and having routers store state about lost messages, was worthwhile; it turned out to be complicated and maybe an over-optimization.
G: Yeah, I would say we're not sure, but in terms of the architecture and the document, this would be a server implementation decision. So it's not going to be routers, I'll tell you that much; it's going to be endpoints, so it's going to be a server endpoint. And in a deployment model we would expect, probably, to have servers co-located in a network, and to maybe... like, I'm not going to give you this on the first pass.
G: But one day, if this takes off, yeah, we'll have servers that are sort of dedicated to a particular network, and when there's loss that's correlated across that network, I would expect the servers, you know, as an optimization, to prefer to retransmit over multicast for that network. And if there was anybody who got it, they would be able, as a natural part of QUIC, to discard repeats; that shouldn't be a problem, yeah. But, you know, that's the sort of optimization for later, yeah.
J: So what ended up happening in PGM is that we didn't want to have this router-assisted NACK thing, and so PGM turned out to be a reliable transport, and what happened was, when there was any NACK that came back, it retransmitted down the tree and, as Jake just said, the receivers who got it just threw it away. It was much simpler and it was more end-to-end.
J: Having said that, since this is really a reliable transport protocol, it should probably be done in the transport area; but the problem there is that everybody hates multicast there, so we need good representation to do it. So, you know, maybe you teach us transport, or we teach them multicast; I don't know.
E: Why don't you just finish? You've got two slides, and then...
K: So yeah, the last two slides are basically about our implementation. So we implemented, or started to implement, it in Chromium, and we've got the frames, the new frames, and we've got the transport parameter and so on. The thing we're missing there is how to feed the packets we receive over multicast into the event loop of the regular QUIC connection, basically, and that's...
K: ...tricky in Chromium. So now we're thinking, for the first demo, to instead use something like aioquic in Python, where it's much simpler to just feed the packets in and you don't have all the overhead of Chromium.
K: Of course, the goal is to get into a browser, so eventually we will do it, hopefully in Chromium, but just for the first test it's like that. Yeah, also, there are some issues with the transport parameter, but I think we're just going to skip over those for now and rather have the more multicast-focused discussion. Right, and that's it.
H: First of all, kudos for doing the work; I think you've got a really good approach here. As a matter of transparency, I already provided some of these points to Jake offline.
H: I think what would be interesting is to couple this not just to native multicast but also to AMT, so that you have the opportunity to say: I will first try to join a multicast group, once it's already set up, and then you try whether there's an AMT relay, which is still better than unicasting it all the way.
H: Maybe you should rethink this, because this is fundamentally for live events, so live TV, like the Super Bowl or whatever.
H: Now, from our experience, there's only a really tiny time frame where you have the buffering and where it really makes sense to retransmit the data packets. In most of the cases, especially if you're further away from those retransmission servers, I'm talking about 50, 60 milliseconds typically, retransmission doesn't make sense, and then you don't have to actually think about retransmission. If you're talking about, and this was another use case, downloading files via multicast, so large downloads, etc., then of course it makes sense.
H: Then you need those packets; but I think it's better to focus on what is really required for each use case.
G: Sure, to also respond to that: we've been looking at some of the MoQ work, the media-over-QUIC stuff. We think there's good synergy with the push approaches they're using there, for RUSH and WARP, and possibly for the QUICR stuff that they're talking about. And the idea is, as long as you're using server-initiated streams...
G: ...the whole stream has retransmits sort of built in, but you can still reset the stream, and then that stream can be dropped. So you can have unreliable transmission at the sort of level of a frame or a segment, if it times out. Some of the work that they're doing there, we think, will mesh well with this approach that we have here; that's basically the point. So I'd encourage [looking at it].
K: So we actually have a mechanism for bundling ACKs, where you wait, like... you don't have to ACK immediately, but you can wait up to a server-set timeout before you have to say: okay, now. So we have that, because, yeah, in QUIC especially the ACKing is worse than TCP, I think, just from the overhead, you know.
J: Regarding the comment you just made about live and retransmissions, I would just say: have an option to never retransmit, and for live events use FEC, because FEC will correct most errors in real time. And just as a data point, the Solana blockchain uses FEC and UDP transmission; that blockchain does not use a reliable transport protocol, and it's working really well, and this is over, like, tens of thousands of nodes.
J: Absolutely, open source, yeah. Yeah, and then my other question was: I don't think you're restricting it to one-to-many, but you described it as one-to-many. If you wanted to have a whiteboard session with 100 people drawing on a whiteboard, would this just be multiple instances of one-to-many, or is there any provision to do many-to-many in any specific way?
K: Okay, so if you do it that way, you will have n-squared unicast connections, and that... yeah.
E: All right, one last question: this mixing of multicast and unicast, does that...
E: Does that imply that if you wanted to do something like unicast bespoke advertising with a multicast stream, this would work well? You know, being able to mix in unicast stuff and multicast stuff?
E: [She's] been speaking for the last few IETFs about the work she's been doing, and it's culminated with off-net sourcing and some recent enhancements to MulticastMenu.
M: Yeah, so I want to start by just talking through the two and a half components I have, and then hopefully we'll try some live demos and see how that goes. I saw the homework went out on the mailing list, but if anyone who hasn't already wants to download VLC 4 and participate in the live demos: if you go to trudienne.net, there's a link at the top of the page for VLC 4.
E
M
There we go, awesome. Okay, so yeah, off-net sourcing and the Multicast Menu. Like I said, I have two and a half components: Multicast Menu and the off-net sourcing bit are the main stuff, and then I just want to talk about TJTV, because I think it's a cool example of lowering the barrier of entry to streaming multicast. So Multicast Menu started off looking like this, and the main benefits of it were: it lets you register and/or add your multicast streams. Yeah, yeah, better.
M
There we go, okay. So it lets you register and/or add your multicast stream. So if you've got a multicast-enabled network and you are putting your own stream out there, you can either manually report it (typing in source, group, UDP port and a description of it) or let Multicast Menu itself just kind of pick it up. Every night it goes through Internet2 and GÉANT to look for multicast streams; essentially it's just hitting looking glasses and running "show multicast route detail" to pick up any streams that are going through there.
M
Alternatively, and this is where the off-net sourcing bit comes in: if you're not on a multicast-enabled network, you can upload a file, a video file, and have it translated. And there's an API to do this all programmatically.
M
We do basic validation, like: did you give an actual IPv4 address? But not really anything beyond that at this point. And then a protocol handler for opening directly from the browser would be nice. And then a student at TU Berlin recently kind of redid the UI and made it look a lot better.
M
So we've got a thumbnail for each stream now. Like I said, a UI overhaul, and also the ability to sort streams by categories like trending, editor's choice and various genres.
M
And then the hope is eventually that this multicast live or off-net sourcing app can also become an off-net receiving app, so you have some sort of AMT gateway implementation in the app, and then doing it for iOS as opposed to just Android. And then, before we jump into demos, I just wanted to highlight TJTV. My old high school is one of the ones running an AMT relay, and it's just very easy to.
M
If you have access to a multicast-enabled network, set up streams; I've just got VLC sending them to the AMT relay, and actually the first demo is watching one of those streams. So for the demo, this is the kind of topology that we're working with: the multicast translator, the AMT relay in the box sending that, and TJTV are all on a multicast-enabled network, and the rest of the stuff isn't. We have our three imaginary people in blue, and all the stuff in green is infrastructure that actually exists.
M
So I'm going to jump out of presentation mode and split my screen, because I want to also see the terminal over there that's showing what's happening in the background. So the first demo is just viewing the TJTV streams: going into Multicast Menu, where that TV is behind, okay, so going into Multicast Menu, finding a stream you're interested in watching, and opening it directly from VLC. It's just opening in VLC.
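Under the hood, "open in VLC" on a multicast-enabled network boils down to a source-specific multicast (SSM) join on the (S,G) listed in Multicast Menu. A minimal sketch of that join, assuming Linux (the `IP_ADD_SOURCE_MEMBERSHIP` option number and the `ip_mreq_source` member order are Linux-specific, and the group/source addresses below are illustrative, not a real stream):

```python
# Sketch of the IGMPv3 source-specific join a player performs when
# opening an SSM stream. Linux-only constants and struct layout.
import socket

IP_ADD_SOURCE_MEMBERSHIP = 39  # Linux setsockopt option number

def ssm_mreq(group: str, source: str, iface: str = "0.0.0.0") -> bytes:
    """Pack struct ip_mreq_source (Linux member order:
    multiaddr, interface, sourceaddr), 12 bytes total."""
    return (socket.inet_aton(group)
            + socket.inet_aton(iface)
            + socket.inet_aton(source))

def join_ssm(group: str, source: str, port: int) -> socket.socket:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                    ssm_mreq(group, source))
    return sock  # sock.recv(2000) would now yield the MPEG-TS datagrams
```

Off a multicast-enabled network, the AMT gateway performs the equivalent join through the relay tunnel instead.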
M
M
So the flow that happened there was: before any of this, I added... let me go back into full screen.
M
So this is the off-net sourcing bit, and this is the first of the two off-net sourcing capabilities, where we're streaming from a file, so a pre-recorded video.
M
If I didn't have access to a multicast-enabled network, I could have done TJTV this way, just uploading the video. So: going to Multicast Menu, add stream, and uploading a file; specifying some basic stream information to help people know what it is; and then selecting our file. And as soon as I hit submit here, it's gonna buffer for a second, but when it goes through you'll see that the translator is now receiving a UDP source in, and it's translating it as multicast.
M
It picked a multicast group address for it, and then it pinged Multicast Menu's API to add it. And when we refresh the page here, we see that our video in Multicast Menu has picked up the source and the group that was assigned to it. So if we go back over to the main page, we can again go through the process of opening in VLC.
M
Now this one takes a couple seconds to load up, I've noticed. Should pop up; there we go, and we have our source that is streaming from Multicast Menu that I just uploaded.
M
This meeting. So we're not actually going to open the app, because the video encoding is still messed up, but on my phone I have an app called Haivision Live. Is there a camera that shows? Anyway, it's on my phone, it's called Haivision Live, and all I did was type in the URL of this transport translator, and I'm going to start a stream and I'll prop it up.
M
Okay, hopefully that... okay, so we see that we have a second message here from our translator. Actually, before we do that: see that we accepted an SRT source connection. That's what's coming from the phone, that's that first hop into the transport translator, and then we're forwarding on to our multicast translator to add to Multicast Menu.
M
M
There we go, and so now we're streaming, and like I said, audio comes through just fine; I just have my computer muted at the moment, but yeah.
M
Yes, and like I said, the goal is to get to a separate multicast live app where it can be both a sender and a receiver, but in the interim, Haivision Live (Android and iOS) does the job perfectly fine.
E
Yeah, so just for those not following along: what Lauren's essentially built here is something that the IETF hasn't done in about 15 to 20 years, which is stream an IETF meeting over the MBone. And better yet, it can be received by anybody on a unicast-only network. So we're getting the IETF back to being multicasted. And yes.
E
Not only is it being multicasted: it's being transported over multicast but received by anybody on the internet, including unicast-only, which is something that even 20 years ago, when these meetings were multicasted, we couldn't do.
M
Yeah, with AMT. So, I stopped the stream, so you can stop being cameraman, and the part I didn't address is that if you hit stop stream, all of the teardown is very automatic. So when it stops receiving the translation here, it'll call back to Multicast Menu and say: hey, I'm not getting a source anymore.
M
Please delete the entry, so people aren't looking up stale sources. And the flow for that last demo was from a phone into our transport translator, which is just an AWS box sitting on the unicast regular internet, then into the multicast translator, back out as an API call, yeah. So most of this is still pretty actively being developed; like, I had to-dos on each slide. So we're building up little by little, trying to pick and pull different bits of technology and push them all together.
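The automatic teardown just described (no packets for a while, so the translator calls back and has the stale entry deleted) is essentially a source-liveness watchdog. A small sketch, with time injected so the logic is testable; the `on_dead` callback stands in for the hypothetical delete-entry API call:

```python
# Sketch of the translator's teardown logic: once a source goes
# quiet past the timeout, fire the delete callback exactly once.
class SourceWatchdog:
    def __init__(self, timeout_s: float, on_dead):
        self.timeout_s = timeout_s
        self.on_dead = on_dead   # e.g. lambda: api_delete(entry_id)
        self.last_seen = None
        self.dead = False

    def packet_arrived(self, t: float):
        """Record traffic from the source at time t (seconds)."""
        self.last_seen = t
        self.dead = False

    def tick(self, t: float):
        """Call periodically; fires on_dead once after the timeout."""
        if (not self.dead and self.last_seen is not None
                and t - self.last_seen > self.timeout_s):
            self.dead = True
            self.on_dead()
```

A source that resumes sending simply resets `last_seen` and would be re-registered the same way it was added.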
A
A
I just had a question about other source options. I mean, first of all: fantastic, this is very exciting. If I had something like an existing IP camera, can we pull that stream as well, somehow set it as a destination into the translator and get that sourced?
M
K
M
Yeah, yeah. So I've still got my slides, right? Yep, okay, yeah. So what's actually happening is: the Haivision app is using SRT, which is UDP-based but still has some server-client aspects, and that's going into a transport translator, which is just taking that SRT UDP and changing it to regular UDP, and that's what's going into the multicast translator. Okay.
K
I just wanted to thank you for this presentation and say that I think this is the first time I've ever seen a live demo at the IETF go down without a hitch, so really good job. That was fantastic.
G
Hi, I'm Jake. When you say regular UDP, you mean raw TS encoded inside the UDP, right? This is MPEG-TS, that's...
A
M
M
E
M
E
Great, that's pretty amazing work. And just for those without the background: this has been a multi-year project. It started with William Zhang at Thomas Jefferson High School about five years ago.
E
He deployed the first AMT relay, and then two and a half, three years ago, Lauren picked up that work. She built the Multicast Menu and has extended and enhanced it, and we're up to off-net sourcing, which is something we've been talking about for years, and she was able to do this. So really, really impressive, and the work continues; Max and others have been collaborating, and Max is gonna, at the end,
E
Talk about some work that his student is doing in collaboration with Lauren. So this is a great project and it's exciting, and like I said, we're getting the IETF back on the MBone, we're getting content on the MBone, and there's lots of neat stuff. So I encourage folks: go visit the Multicast Menu, check this out. It works, it's real. When you receive that content, you're watching multicast streams over the MBone; something that, if we had had it 20 years ago, the world might be a different place. But it's pretty exciting. So next up is Eric, you are.
C
Hello, I am Eric from Vivoh. We build software for multicasting Zoom and Webex and other meetings, and what I want to talk briefly about is the history of some receivers and clients, as a way to think about the past and potentially inform QUIC multicast and other considerations. So I thought it'd be useful to just give a very, very brief overview of some of the things that I experienced, and then some ideas about future implementation.
C
So, next slide. So this is some legacy receivers that you might have remembered: Starlight Networks back in 1996, bought by PictureTel; they did MPEG-1 multicast, and we built a web-based kiosk for them. Very close to my heart, because I met my wife at a conference when we built out a little multicast kiosk using Starlight Networks. And then Progressive Networks' RealPlayer, again another multicast receiver; QuickTime, Windows Media, VLC.
C
Of course, enterprise companies like VBrick had other multicast receivers, and so this is thick software deployed by enterprises to receive multicast, typically within a very closed network. Typically one to 12 channels; it could be an IPTV scenario, it could be just an all-hands-meeting type scenario. Very much an internal, single-domain, controlled environment, but yeah, a range of multicast receiver software. Next slide.
C
So then it evolved a bit more. It moved into taking the Windows Media Player or RealPlayer as a control, or an NPAPI plugin, into the browser; so we wanted to get this multicast experience into the browser context so you could do interesting things with it. Then Flash, with the Flash animation movies and eventually video, provided multicast capabilities from their Flash server. They had a concept called Fusion, where they were blending peer-to-peer and traditional multicast and unicast failover. Same thing with Windows.
C
So the browser context is really important at this point, where we want to do a lot of interactivity and other things in that browser and have it be a web experience, not just a thick-client experience. Okay, next. So then, what are we doing today? Well, in the context of trying to get multicast video to render inside of a browser, here's a video.js plugin in a browser joining a live stream.
C
The way we have to do it now, and by "we" I mean the various companies that do enterprise video, like Haivision, Qumu, VBrick, Kaltura and others. These are companies that provide internal all-hands meetings and use cases for one-to-many video, where they still want to utilize multicast in some way. So what they do is they have agent software, you could call it a multicast gateway, and they push that out to every single desktop, and it becomes a web server.
C
So they're running a localhost web server on their PC or Mac. It ends up being this kind of strange Java implementation that's pretty tricky, but here you are, listening for a localhost HTTPS request from a browser, and meanwhile these agents can be available. So when the browser makes this localhost HTTPS request for a service, it will then open up and join a multicast.
C
So you could do a transport-stream multicast and this agent would receive it, and transport stream happened to be pretty handy for HLS, so you could just take the transport stream and re-package it into HLS over HTTPS to the browser. So that's how you get transport stream into HLS via a localhost web server into a local browser. Lots of security problems with that, but at least it solved the problem of getting multicast into the browser. So we're obviously really eager for QUIC multicast and better solutions to do this.
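The repackaging step the localhost agent performs (chop the received MPEG-TS feed into segment files, then publish an HLS media playlist the browser polls over localhost) can be sketched in a few lines. Only playlist generation is shown; the segment names and durations below are illustrative, not from any real deployment.

```python
# Toy HLS media-playlist generator, the "TS multicast in, HLS out"
# half of the localhost-agent trick described above.
def hls_playlist(segments, target_duration=6, media_seq=0):
    """segments: list of (filename, duration_seconds) pairs,
    oldest first, as cut from the incoming transport stream."""
    lines = ["#EXTM3U",
             "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{target_duration}",
             f"#EXT-X-MEDIA-SEQUENCE:{media_seq}"]
    for name, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(name)
    return "\n".join(lines) + "\n"
```

For a live feed the agent keeps a sliding window of recent segments and bumps `media_seq` as old ones are dropped, which is what lets a stock HLS player in the browser follow a multicast source it could never join itself.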
C
One more thing, and we're almost done. So the last slide is: what is Vivoh doing? So we are actually building thick clients for all the different platforms. So there's a Zoom version, a Google Meet version. We're using VP8, it's RTP, we use forward error correction, and it has Chromium built into it so we can do the interactivity with Chromium. It's a GStreamer codebase with ULPFEC, and it uses Qt for the wrapper. Lots of opportunity to make this much more simple with QUIC multicast, and since we have Chromium, maybe we can get to that very quickly: take out the GStreamer, make this just be one interactive experience, and that would definitely make our customers a lot happier. So the last slide is the future, and Jake, eager to talk with you about it.
G
Great, thanks Eric. I just had a question, two slides back, on this deployment model where you've got a localhost server. What do you do for mobile, or do you have mobile support? Because there's sandboxing issues with this model there, right?
C
E
I
C
That's true, that's right. I was definitely shy, but when my wife was at the conference I had gumption enough to go and say "check this demo out", and it worked as well. So I got lucky, and I've been married ever since, so I've been very lucky. That was a multicast join that's still successful.
C
C
Like that possible.
C
E
Okay, Max, you're up.
K
All right, yeah. So this is just going to be quick, basically.
E
K
K
So that's all the different ways. Jake did some testing at the last second about the tunneling and so on, so since that worked, we're hoping it will work here as well; so we will have IPv4 and IPv6 and so on. It should be finished relatively soon; the project is going to be over in like three weeks, and the last thing to do, basically, is: we are also looking at where we're gonna upstream it, hopefully, and we're also looking for deployments.
K
If anybody has an IPv6-capable AMT relay somewhere, or could enable AMT IPv6 on a relay somewhere in the MBone, that would be great. We're still looking to serve our own relay at TU Berlin, so it would be the Deutsches Forschungsnetz, DFN, which is not really GÉANT but connected to GÉANT, and it's all a bit bureaucratic and we're trying to figure it out, and hopefully we get a relay up soon as well. All right, thanks.
G
K
The next student is implementing graceful failover for AMT, so making use of the L flag that's there but, as far as I understand it, not really used anywhere so far. The idea is that the relay sets the L flag when it's about to shut down, or it knows that it's going to shut down soon for whatever reason, and the gateways can start discovery and, in that way, keep the stream running and not have any interruptions from the relay suddenly disappearing.
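The gateway-side check this implies is small: AMT Membership Query messages (type 4 in RFC 7450) carry an L flag in their second octet, and a gateway that sees it set can kick off relay re-discovery instead of waiting for the relay to vanish. A sketch, under my reading of the RFC 7450 header layout (low nibble of octet 0 is the message type; octet 1 is six reserved bits, then L, then G):

```python
# Sketch: detect a departing relay from the L flag in an AMT
# Membership Query, so the gateway can start re-discovery early.
AMT_MEMBERSHIP_QUERY = 4
L_FLAG = 0x02  # octet 1: 6 reserved bits, then L, then G

def relay_is_leaving(msg: bytes) -> bool:
    """True if msg is a Membership Query with the L flag set."""
    if len(msg) < 2:
        return False
    mtype = msg[0] & 0x0F  # low nibble of octet 0 is the type
    return mtype == AMT_MEMBERSHIP_QUERY and bool(msg[1] & L_FLAG)
```

On `True`, the gateway would run relay discovery again and join its groups through the new relay before the old one stops forwarding.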
K
This is part of a bachelor thesis, so the implementation should also be done in the near future. Okay, the last slide. Good question.
J
A
J
J
I'm worried about the relay encapsulating to where there's no client anymore. Oh.
J
Do you know what I mean? If the client just drops off the air, if the AMT gateway drops off the air and the AMT relay still has the tunnel to it, it'll keep encapsulating to it until some timeout. So I just didn't want to send all the data on the network to no place, right, right. I want it to happen sooner, so I was wondering if there's a "sorry, I'm going away" sort of thing.
J
K
Yeah, but yeah, thanks to Jake for also coming up with that idea, in a way, or helping the student along as well. Right, and since the bachelor thesis has to have some scientific part, it's gonna include some measurements, hopefully, on how much time you save and so on. Yep.
J
Yeah, actually there is a reason: if something goes away, maybe the relay can send a query sooner, so it can detect that no IGMP report is going to come back within 10 seconds. You could shut down the tunnel in 10 seconds instead of three minutes. That's useful, right?
K
Yeah, okay, all right. And the final one is basically what Lauren already talked about and has shown: this update to the Multicast Menu. I guess the interesting part here is that, on one hand, for the pulling of the preview frames he's gonna do some measurements on how much resources that's gonna take and so on. But the other thing is: how do you do ranking for multicast streams, right? Because, unlike Twitch or something, you don't have the viewer numbers; you don't know who's actually watching the stream.
K
If you have something like multicast QUIC, of course, you would know that, but for the Multicast Menu especially there's no good way of doing that. So he implemented something like a cache algorithm, the same algorithm that's used for cache management, where you use the likes on the website to figure out which stream is relatively popular. But yeah, there might be future work on seeing if this also could be related to telemetry and so on.
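One way to read the cache-management idea: treat each like as a cache hit and age the counts over time, so a stream's score decays unless it keeps collecting likes, similar in spirit to the aging used in LFU cache eviction. A sketch with an invented half-life and scoring (the student's actual algorithm may differ):

```python
# Sketch of likes-based trending with exponential aging: recent
# likes outweigh a large but stale like count.
import math

class TrendingRanker:
    def __init__(self, half_life_s: float = 86400.0):
        self.decay = math.log(2) / half_life_s
        self.scores = {}  # stream id -> (score, time of last update)

    def like(self, stream: str, t: float):
        """Decay the stored score forward to time t, then add 1."""
        score, t0 = self.scores.get(stream, (0.0, t))
        score = score * math.exp(-self.decay * (t - t0)) + 1.0
        self.scores[stream] = (score, t)

    def top(self, t: float, n: int = 10):
        """Stream ids ordered by decayed score at time t."""
        def current(item):
            score, t0 = item[1]
            return score * math.exp(-self.decay * (t - t0))
        return [s for s, _ in sorted(self.scores.items(),
                                     key=current, reverse=True)[:n]]
```

Because scores decay lazily (only when touched or ranked), no background job is needed, which suits a small service like Multicast Menu.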
K
K
Yeah, so there's also the idea for a future project, but it's like a big idea, so we would need a bigger group for that, maybe over the winter term: to try to get apt-get distribution with multicast implemented, right? So properly popular packages would get distributed over multicast if a lot of people are requesting them at the same time. Next slide.
K
Yeah, and that's basically it. So if any of you have ideas for similar-scope topics related to multicast, or just any other collaborations you would want to do: we have a lot of students applying for theses, and we can't come up with enough ideas for interesting theses. So yeah, we would be very happy if any of you have ideas or something like that. All right, thanks. Lauren?
M
Yeah, with the trending streams: what is he tracking for trending streams, just the like button? So would there be any value in tracking how many people are actually clicking to open the stream as well?
K
Right, I guess, yes. The idea then, or the issue, I guess, is: do you weigh them differently? But yes, it's probably a good idea for the future, I can tell. Yeah, thanks. Yes, and you don't know, like, if somebody launched from the command line; you also, I guess, wouldn't know.
F
Sticking with this: could you track how long people actually are watching? If people just watch it for a few seconds, then they probably didn't care about it; if they watched a long time, they might really like it, or no?
E
I would imagine that you should be able to track, one, how many people click on the entry, and two, how many people clicked on the launch in VLC.
I
K
If they didn't use the command line, right, right, for the second one. And that's why, so far, it's the like button, because that's a stronger indication that it's actually something they want to watch and are interested in watching. But yeah, it's definitely, I think, a bigger area where you can probably come up with some clever mechanism somewhere else that you could use, yeah.
E
K
K
All right, thanks a lot.
E
All right, thank you. So we do have a little bit of time, but that doesn't mean you can leave; so everybody, please: feet flat on the floor, eyes forward, hands folded till the end. But we have some time. Anybody have anything else they'd like to bring up and discuss?
E
Jake, are you trying to leave? Are you okay? Yes, t-shirts are here. I should say, like I mentioned, there's a Slack group that a lot of this activity is happening on; reach out to me and I can get you added to the Slack group, or you know, any of the others in this group, and we can add you, and you can be part of the revolution.
E
B
A
A
A
E
So for the folks still on: we've actually... the meeting's over. So thanks for coming, and we'll see you in London.