From YouTube: IETF113-MOQ-20220323-0900
Description
MOQ meeting session at IETF113
2022/03/23 0900
https://datatracker.ietf.org/meeting/113/proceedings/
C
This is today's agenda: we'll go over a few administrative things to lead in, then go into use case overviews, looking at a couple of different use cases, and then a little bit of the potential solution space for future work. That's setting the stage for discussing what's possible, not solutions. Then we'll have the discussion and wrap up. So that's the plan for today.
C
So why are we here? We are here to discuss whether there are media delivery challenges that we have a common enough understanding of. Do we understand what this is about and what the problems are? Are these issues or challenges that we have actually encountered in trying to do things, rather than theoretical ones? And is there sufficient interest to solve them?
C
So this is a non-working-group-forming BoF. There are a couple of potential outcomes after this BoF. One is that there are one or more problems with sufficient support and interest to be addressed; then we go into some type of scoping of the work, to figure out whether there's an existing working group for it or we need to write a charter, and then have the discussion around that.
C
So that's kind of the scope setting; let's go into the use case overview now. Go ahead, James.
D
Excellent, good morning, people. In this presentation I'm going to be covering a couple of different aspects of the use cases. A little disclaimer before I continue: these are mostly the views of myself and of Spencer, and not really of a non-existent working group.
D
We got here because we've had an awful lot of discussions about this over a very long time. Some of the first documents and discussions were happening in 2017 and perhaps even earlier. We've also had the MOQ mailing list around for a while, and we've had a few side meetings. Aside from the use cases draft that myself and Spencer have been working on, there have also been many other drafts published out there.
D
These are drafts that make some sort of attempt to solve something that may fit into the use cases we're going to be talking through today. I'm not going to go through these, but there are a few here that just show how long this timeline has been progressing. We've had these documents around for a couple of years now, and this isn't a complete list; there's RIST, which is one other, and there are probably some more out there that we've completely missed.
D
So perhaps the big question we're going to ask is: we have all these other protocols that can ship media, so why would we put all this effort into using QUIC? Well, QUIC, although it is quite general purpose, gives a lot of useful functionality that has very useful applicability in the media domain.
D
So let me talk about the use cases. Spencer and I have spent an awful lot of time since the last side meeting refining the ontology of these use cases. This is not complete, but it is what we think is enough to explain the scope and the area that we're working in. I've put latency on the other axis, not because it is the most important thing, but more just to give some structure to this; there are certainly many other requirements, both functional and non-functional, that we could use.
D
This is just one of them that is clearly very important for people. Working your way from top to bottom, we've got what we call interactive media, so that could be things like cloud-based gaming and remote desktop operations, both of which tend to be more of a one-to-one operation, and then we have video conferencing, which is either one-to-one or many-to-many.
D
The live media use cases, which I'll be talking about a bit more in depth here, are not the only important ones. We split them into three groups. Live media ingest is where media is coming out of a source and being sent on to somewhere; an example would be output from a camera into a transcoder or somewhere else in the broadcast chain.
D
Syndication is a little bit more complex: this is where media is being fanned out into distribution networks. For example, when it comes out of the broadcast chain in television and needs to be sent on to cable TV providers, or over satellite or terrestrial networks. And live media streaming is playback.
D
This is people consuming and watching live media. And lastly, which I'm not going to talk about too much, there are the on-demand media use cases, where media is stored and made available for playback; but for all intents and purposes I'm not going to be dwelling too much on that one.
D
So, to set a bit of context here, this is a very simplified example of a television broadcast chain end to end, and I've grayed out the bits that are perhaps not so pertinent for this discussion.
D
But I think it's kind of interesting to talk about the end to end, or the glass to glass, call it what you will. We start over on the left-hand side with sources, cameras, either inside a studio or remote, being fed into a central point, which is typically a master control room, and then managed and sent out into the distribution side of things, such as transcoding and eventually delivery to the viewer in the IP domain.
D
There is also, if we zoom in for a little bit, this lovely green box here of live streaming, and this is representative of use cases where we're talking about user-generated content. These three ingest arrows, as it were, the blue, the purple and the green, are all effectively ingest, but they do have some key differences in their requirements and how they work.
D
The two most notable differences between them are around throughput and resiliency requirements and, in some cases, bi-directionality. For outside broadcasts, for example, it may be required that you send audio and video both ways: say, a weather person out in bad weather having an interview with a news presenter in the studio. They need that bi-directionality there.
D
When we talk about syndication, this is again that fan-out onto delivery networks. The key distinction here is that there isn't any further transcoding or splitting up of the video. There may be transmuxing or repackaging, but there are no further transformations to the video and no further editorialization. And lastly we get to the streaming, which should be fairly clear here: it's just people fetching the media and playing it out.
F
Hello, is this working all right? So, when you're merging multiple media streams, which section would that go into, or is that just a sort of independent question? Because you have to synchronize them, right.
F
Well, I mean, I guess there are several that I think of as related, those being things like multi-language channels for the audio streams, merging the video, and...
F
You know, if you have any sort of user-driven switching of views, that is important to have synchronized on the viewer side. And I guess things like overlays also might be relevant here, although I'm not sure; maybe that is a different section.
F
So, but these are the kinds of things I was considering.
D
So they are an important concern, but I don't think that's particularly orthogonal, because you'll have situations at any point in the chain where there may be multiple forms of media that need to be pushed through. There are cameras out there that can do audio and video, and then there are others that are video only, because they're used in a studio where it's assumed that audio is supplied separately. They're kind of like lanes in the highway, as it were, and sometimes they merge off.
D
And sometimes new lanes emerge, depending on where they are in the chain and what kind of things are happening.
F
So part of the solution space that this work would need to address is how to synchronize those properly, RTP style, with timestamps or whatever.
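As background to the RTP-style synchronization the questioner alludes to: each stream carries media-clock timestamps, and an RTCP Sender Report pairs one RTP timestamp with a wall-clock instant, which lets a receiver place audio and video on a common timeline. Below is a minimal sketch of that mapping with made-up numbers; it is an illustration of the existing RTP/RTCP mechanism, not anything proposed in this session.

```go
package main

import "fmt"

// senderReport models the one fact an RTCP SR gives a receiver:
// "this RTP timestamp corresponds to this wall-clock instant".
type senderReport struct {
	rtpTS     uint32  // RTP timestamp carried in the SR
	ntpSec    float64 // wall-clock seconds for that same instant
	clockRate float64 // media clock rate, e.g. 90000 (video), 48000 (audio)
}

// wallClock maps any later RTP timestamp onto the shared wall clock,
// using a wrap-safe signed difference against the reference report.
func wallClock(sr senderReport, rtpTS uint32) float64 {
	delta := int32(rtpTS - sr.rtpTS) // handles 32-bit wraparound
	return sr.ntpSec + float64(delta)/sr.clockRate
}

func main() {
	video := senderReport{rtpTS: 900000, ntpSec: 100.0, clockRate: 90000}
	audio := senderReport{rtpTS: 480000, ntpSec: 100.0, clockRate: 48000}
	// Media meant to play together maps to the same wall-clock time,
	// even though the raw per-stream timestamps differ.
	fmt.Println(wallClock(video, 900000+90000)) // 101
	fmt.Println(wallClock(audio, 480000+48000)) // 101
}
```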
D
So Pete Resnick just left a comment in the chat; I'm happy to answer this. He says: maybe the caffeine wore off, but is there something about these use cases that is different from the streaming that is currently going on? I think the answer is no, there's nothing different here; we're just trying to be very clear about what is out there, what the existing state of affairs is, in terms of where the applicability for this work might go.
H
Mo Zanaty. In two more slide decks there will be more descriptions of the use cases, and I think it'll be clear what the real differences are between today's traditional streaming and some of these new use cases that require maybe more advanced types of streaming, lower-latency types of streaming, and maybe even more hybrid switching between extremely low latency streaming, almost video-conferencing-type latencies, and traditional distribution delivery. And for Jake's comment, I would take it as almost like control protocols, or, you know...
H
I didn't think that the group was going to be looking at things like signaling and control protocols, because if you think about this, at the bottom, where there's media distribution and media transport, there will also be feedback and control protocols, and I didn't think that the people in this group were going to be looking at the control protocols for things like user selection and things like that.
I
Spencer Dawkins. I just wanted to add that the draft that James and I worked on has a section trying to explain at a high level the difference between interactive and live and not-live; I don't remember what we called it, media-something.
I
So if you're trying to figure things out, if you have a copy of the slides and click on them, that will take you to the right place in the draft.
C
Okay, let's go to the next presentation now. Ying, I will put up the slides if you don't mind.
G
First, to borrow James' diagram of the live media broadcast chain: here I just want to highlight that the focus of this talk is on the live ingestion part of the chain, and in particular I'm talking about ingestion from clients to live streaming platforms.
G
There are clients around TV broadcast facilities where they use high-end hardware encoders, or it can be ingestion from consumer-level devices, game consoles, software such as OBS, or ingestion from browsers and mobile phones. So the diversity of the encoder ecosystem actually poses some unique challenges in terms of live ingestion.
G
We consider these important, and we are aware that there are challenges. First, we want to be able to support high-visual-quality live streams, for example 4K HDR, but not limited to that; we can see that in the future there probably will be demand for even higher resolution, higher frame rate content. This is mostly driven by broadcast-level premium live events like sports, gaming tournaments and concerts, but we also see increasing demand from general users, especially for gaming live streams.
G
So this leads to the next requirement, about codec agility. In order to support high-visual-quality content, we want to use newer video codecs such as HEVC, VP9, even AV1, that have better compression rates, so that we can stream at high quality at a lower bitrate; and some of the new features, like HDR, also require new codec support.
G
I think basically everyone wants low latency, but you theoretically can always drive down latency by lowering quality, lowering bitrate or resolution, or dropping frames. This may not necessarily always be the trade-off we want; for example, if it's a live concert, then maybe we can afford a little bit more latency in order to maintain high visual quality.
G
The last update was around 2012. Furthermore, it uses a 4-bit ID for codecs, so there are only 16 possible codec values; you can see we will easily run out of values for new codecs. And also, in terms of latency, because it sits directly on top of TCP, and you can't easily switch it to use QUIC, it has the head-of-line blocking issue.
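As an aside, the 16-value limit just mentioned comes from the FLV tag format that RTMP carries (assuming, as the context strongly suggests, that this part of the talk is about RTMP): the first byte of an FLV video tag packs a 4-bit frame type and a 4-bit codec ID. A minimal sketch of that byte layout:

```go
package main

import "fmt"

// parseVideoTagByte splits the first byte of an FLV video tag:
// the upper 4 bits are the frame type (e.g. 1 = keyframe, 2 = inter
// frame) and the lower 4 bits are the codec ID (e.g. 7 = AVC/H.264).
// Only values 0-15 can ever be assigned, hence the exhaustion worry.
func parseVideoTagByte(b byte) (frameType, codecID uint8) {
	frameType = b >> 4
	codecID = b & 0x0f
	return
}

func main() {
	ft, cid := parseVideoTagByte(0x17) // keyframe, AVC
	fmt.Printf("frame type %d, codec id %d (max possible: 15)\n", ft, cid)
}
```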
G
So a few years ago we also added support for using WebRTC to do live stream ingestion. We added it primarily to be able to support live streaming from browsers, in order to lower the barrier for new creators.
G
We understand that there is now a newer version that supports low latency, but we don't see a lot of adoption from the encoders. Furthermore, because HLS and DASH were really created for delivery, some of their constructs, like the playlist and the manifest, are not really needed for ingestion, but we still need to support them, and they do add overhead in live streaming.
G
Lastly, the SRT protocol is a relatively new protocol; I think it was created mostly to solve some of the codec and latency issues. We do see that it has very good adoption among TV broadcast providers.
G
The challenge we see with using it for large-scale deployment is that, because it is a UDP-based protocol, load balancing is not a given with off-the-shelf load balancers for UDP, and also its current...
G
We do see that QUIC can provide a unique solution for some of the challenges, like deployment and latency, so we're very happy to see that there are movements in this area. There are proposed QUIC-based solutions; some of them are specifically focusing on live media ingest, and some were initially designed for delivery but can also be used for ingest. In general, we're really looking forward to collaborating with the IETF community to find solutions that can address these challenges.
J
Yes, so I'd like to talk a little bit about the trade-off between latency and quality. When you implemented the WebRTC ingestion, did you actually change the congestion control algorithm to favor quality over latency?
G
That is a good question. I think we really need some more experience with testing it, like using BBR with QUIC. I can't give a definite answer now, but...
K
Luke here; to answer Bernard's question a little bit: I've implemented BBR for Warp, and yeah, we see the issues with the probe-RTT state. I think it comes down to when you're doing frame-based delivery.
K
So I definitely see that, at the least, there are going to be algorithms that should be suited towards live video, and there are other issues: my video is mostly application-limited, so a lot of the time you're not fully utilizing the congestion window, and the algorithms aren't designed for that. So there's just a huge swath of space for optimizing congestion control.
I
I want to thank Ying and Luke for bringing this presentation and the next presentations forward, and I want to thank you both for your answers to Bernard's question. I think it's fair to say that, the last time I asked about such things, the answer was that we just don't know that much, in the congestion-control, ICCRG sense of the word.
I
We just don't know that much about the interaction of the various congestion control mechanisms that might be used from the same endpoint, and how they would play with each other.
I
The big issue for SCReAM and the other media congestion control things was that they would not self-congest, and that they could share bandwidth between themselves and other connections that were doing SCReAM; so there's the question about having two arbitrary congestion mechanisms.
L
It's kind of fundamental to the information theory of the space. So the question is really whether, if you have to drop quality, because latency will otherwise get increased too much, you can get it back up fast enough. I think that is a common problem with every solution.
G
Yeah, in terms of load balancing: because of the UDP protocol itself, it's not connection-based, so load balancing is definitely not a given. I'm not saying you can't do it; it's just more complicated.
G
That's true, yeah. I think for QUIC, though, because it's connection-based, load balancing is much better documented; there is a lot of documentation on how to do load balancing for QUIC.
E
Thanks. Stephan?
N
We may not even have a problem with the implementations themselves, but a problem with the way the implementations are configured; there are indeed a lot of knobs that can be played with. So I think, in that regard, the slides may be a little bit misleading, because, as Harald said, and I agree with that, the problem is a fundamental one which would occur in the same way with all the other protocols, right?
N
So that's point number one. Point number two is, and I've said it a thousand times, I'll say it again: congestion control is overrated here in this organization. The networks nowadays are sufficiently elastic that you can push out more for a while without everything melting down, certainly for a use case where you basically have only one stream, like that ingestion, right?
N
What happens in practice is people just ignore it, and things work just fine; and that's something which would probably require a lot of, or at least some, software work, both on the current WebRTC senders and also potentially in the QUIC stack. My third point, and it's independent of this draft and of this slide deck: I think it's time we put together a draft that defines our view of the terminology for those various latencies, right?
N
Basically, you start shipping packets before you have encoded a whole frame. On the other hand, some of the TV guys think ultra low latency is when you don't send a whole GOP, a group of pictures, right. So someone should write a draft, maybe I'll write it myself one day, and just put that terminology problem aside once and for good. Thank you.
E
I'll just jump in and say, with respect to that last point, that the draft James and Spencer have written does have a classification of different things, and tries to demarcate ultra low latency as lower than one second, because I think there's agreement that one second is not really it.
O
So I want to come back to the congestion control issue, because, you know, it's the IETF: there's nothing more important than congestion control.
O
I don't want to give the impression that people have not played deeply with congestion control and QUIC. There are a ton of different people, including us, who have done a lot of experiments with it. I've played with it, I've implemented alternative implementations, and we've been working closely with Christian on the QUIC implementation, some of these types of things.
O
What I would say, though, that's relevant for this BoF, is that I think there are things you could do to the QUIC congestion controller that would improve it for media, no matter what type of media or what version of low latency you think you're dealing with. We know some of those, and they seem like they can easily fit within the QUIC framework. I think that's something completely separate from this work that can be taken to the QUIC working group.
O
That gets done over there; but what this working group, I think, needs to focus on is: given QUIC, which is what it is today, how do we build the type of rate adaptation, how do we interact with that, how do we adapt what we're trying to accomplish at the application layer?
O
On top of what QUIC basically gives us, it's very clear that you can get a long way on QUIC as it stands today with no changes, and you could probably get slightly better if you made some changes to QUIC, and those can be proposed separately over to the QUIC working group. So that would be my view; I don't think we should be spending a lot of time here on the congestion control issues and how we might want to change BBR and all of those issues.
P
Okay, can you hear me?
P
Okay, so two things. The first thing is about congestion control, quickly: as a person who spent a lot of time working on congestion control for live and non-live media delivery at YouTube, it does matter a lot to user experience, and in non-trivial ways.
P
The second is more of a comment about WebRTC, and I'm having a little bit of deja vu here, because when WebTransport was first proposed, people assured me that the problems WebTransport solves can be solved with WebRTC data channels.
P
You just need to implement SDP, and then implement ICE and the things that run on top of it, and then implement SCTP, and then there are also dragons. And all of that is true, because WebRTC is, network-wise, Turing complete, in the sense that it can do anything; but that doesn't mean it is the best tool for the job.
P
That is to say, we went through that: there are people who tried to use it for delivering high-quality live media, and their experience was that it is not ready in the state it is in. And the question would be: do we really want to turn WebRTC into something that can do this, or should we start building from a different starting point, like QUIC? I think, at least for me personally, QUIC seems to be more promising, because DASH over HTTP/3 is a really mature, widely deployed solution.
K
Hello, so I've got some non-professional slides, just to mix it up. Yeah, I'm Luke; I've worked at Twitch for a while now, and Amazon. I want to talk about distribution, mostly focusing on the requirements and use cases, and...
K
Going a little bit into the flame war that is "this protocol can do x, y, z": the main difference with distribution versus ingest is this fan-out mechanism. It's no longer a one-to-one protocol; you have to make sure that whatever distribution protocol you design can be replicated and passed off over multiple hops.
K
The big one is that you just can't have feedback. You can't have the viewer send something to the encoder saying "hey, lower your bitrate"; it just doesn't work, because you have too many viewers trying to watch the same content. And so it does limit the solution space a little bit; that's why there are typically different protocols.
K
The other thing I kind of mentioned here is congestion: because we can't tell the encoder "hey, lower your bitrate", we need to drop something; we need to somehow lower the bitrate. This is typically done using ABR. The idea is that you create different renditions and, at keyframe boundaries, you can switch between them. It's very jerky, it's very slow and dramatic; you kind of have to plan in advance.
K
We don't really see frame dropping too much in distribution, but that's also an option; it would be another way to reduce the bitrate. Typically, instead, these protocols will buffer. With HLS and DASH you've seen it: if your buffer runs out because the ABR wasn't fast enough for congestion control, you pause and you wait till the buffer fills up.
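As a rough illustration of the rendition-switching idea Luke describes, here is a minimal throughput-based ABR sketch; the names and numbers are hypothetical, and real players (and certainly Twitch's) add buffer-level signals, hysteresis, and much more.

```go
package main

import "fmt"

// rendition is one pre-encoded quality level of the same content.
type rendition struct {
	name    string
	bitrate int // bits per second
}

// pickRendition is a deliberately simple ABR rule: choose the highest
// rendition whose bitrate fits under the measured throughput scaled by
// a safety factor, falling back to the lowest rendition otherwise.
func pickRendition(ladder []rendition, measuredBps float64) rendition {
	const safety = 0.8 // headroom for throughput variance
	best := ladder[0]  // assumes ladder[0] is the lowest rendition
	for _, r := range ladder {
		if float64(r.bitrate) <= measuredBps*safety && r.bitrate > best.bitrate {
			best = r
		}
	}
	return best
}

func main() {
	ladder := []rendition{
		{"360p", 1_000_000},
		{"720p", 3_000_000},
		{"1080p", 6_000_000},
	}
	fmt.Println(pickRendition(ladder, 4_500_000).name) // "720p"
}
```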
K
The other thing that's kind of unique to distribution is latency. This is, I imagine, what we've talked about a lot. I don't like talking numbers; I see people in the chat still arguing over whether 25 or 100 is the right threshold.
K
It really just depends on the broadcast, on what the presenter's, and even the viewer's, preferences are. So if I'm doing a presentation like this, the latency actually doesn't matter that much; it could be what I call interactive latency, where people in chat can say stuff, but they don't need to be able to talk back to me. Obviously, if you have a conversation, you want real time; you want to be able to talk to the other person. But somebody who is also watching from home...
K
...has no intention of interacting with chat, and they can watch it lossless. Maybe they don't want to drop any data at all; they want to see every pixel, you know, they're watching a soccer game. So, for the soccer example, you can have one viewer who cares about real time; maybe they're a coach or something, and they need to see the game ASAP.
K
You have another side where people are just, maybe, betting on the game, or chatting with their friends; they want lower latency there, but they won't be interactive. And then, further along the spectrum, you have even the same people watching the same game, but they want to see all the action; they don't want to miss anything.
K
Also, somebody in the US on fiber can watch at a lower latency than somebody in Brazil on a cellular network; the way you tweak the buffer size is very important for distribution, and hence the latency. And the other thing that doesn't get talked about too much is compatibility: we have a lot of viewers, and we need to make sure it works on every device.
K
In a broadcast scenario you can get away, a lot of the time, with saying "I have a studio" or "I have dedicated software I can run on the person's computer to create this stream", but for the distribution use case we just don't have that luxury all the time. It just needs to work on their TV, it needs to work in their browser, it needs to work on their phone, and, again, on their network; it needs to work wherever they are in the world.
K
You could be watching somebody streaming from the US; you could be watching from Korea, on a mobile device, on a desktop, and this has a lot of ramifications for the protocol, and of course that's what we run into a lot. Honestly, the biggest thing with distribution is that we're limited to only a handful of protocols, based on compatibility.
K
So, speaking of the protocols: Twitch is an HLS stack; specifically, we're using what we call LHLS, a chunked-transfer variant that we developed.
K
Also, if you want more details, I sent a Google Doc about this to the mailing list before the talk. But the main issue with all the different flavors of HLS is that they have head-of-line blocking. The idea is that all the data must arrive: you must download every segment, and every segment depends on every previous segment, which means every frame depends on every previous frame, which means you need a large buffer to handle any variations in the network.
K
There are obviously ways to reduce latency, hence the different flavors of HLS, but you're still fundamentally limited by the fact that you just can't drop data, and if there is a network issue you rely on ABR to switch to avoid it and to recover, and that's just too slow.
K
We also have some issues with client-side ABR that are just worth mentioning, because when you deliver and download frame by frame, it just breaks a lot of the ABR algorithms traditionally used for HLS. But it's not fundamental; it's something with a lot of different solutions, and we've done a lot of different things, including almost literally running a speed test while we're viewing the stream, just to try to figure out if we can switch up.
K
So I was tasked with trying to reduce latency on Twitch, and we ran a big project to try to use WebRTC. Again, the document has more details, but this was a hybrid approach where we took our existing ingest stack and just tried to put WebRTC in for distribution. The biggest issue with that was that we just didn't have any control over the quality; we wanted to support these use cases where we don't need real time.
K
In fact, we have an ingest stack with a ton of latency, and then we would try to use WebRTC distribution at the very end, and WebRTC is forcing us to make it real time. We had a lot of people bringing up "well, that's not a problem with the wire format", and that's true: with WebRTC as a wire protocol you can do a lot of stuff, there are a lot of extensions you could do. But again, for web support...
K
...we need the browser to work, and it's not a great way of conveying this information to the implementation. Because we need browser support, it means we have to petition Google to change things; we have to petition the IETF to add stuff to the protocol. We wanted something that we had more control over, again using WebTransport, and the idea is that we could send packets, we could do our own protocol. And you see this a lot.
K
Actually, you see a lot of people using WebRTC data channels to this extent, and we did too. I implemented basically RTP over WebRTC data channels, a crazy amount of complexity just to get UDP working in the browser, and it just wasn't a good experience. Yeah, I've implemented every WebRTC protocol at this point, and I've optimized them; I ended up throwing away about a year and a half of work, because we ended up abandoning this WebRTC edge project.
K
We don't really do real time very well, for a multitude of reasons, but it just works with our existing stack. It's CMAF, so we can fall back to HLS; it does server-side ABR, so we can fix some of the issues we had with low-latency HLS. That's really what we're settling on: it uses WebTransport, and each segment is sent as a QUIC stream.
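To make the "one segment per QUIC stream" idea concrete, here is a toy model, with hypothetical types rather than the actual Warp code: the sender keeps a stream open per segment and, under congestion, spends its bandwidth on the newest segment first, so stale video can simply expire instead of blocking newer frames the way a single ordered HLS/TCP byte stream does.

```go
package main

import "fmt"

// segment stands in for one open QUIC stream carrying one media segment.
type segment struct {
	seq   int
	bytes int // unsent payload remaining on this stream
}

// nextToSend picks which open segment stream gets the next packet:
// newest sequence number first (LIFO), the opposite of the FIFO
// scheduling a web server would normally apply to concurrent requests.
func nextToSend(open []segment) *segment {
	var pick *segment
	for i := range open {
		if open[i].bytes == 0 {
			continue // fully sent; nothing to schedule
		}
		if pick == nil || open[i].seq > pick.seq {
			pick = &open[i]
		}
	}
	return pick
}

func main() {
	open := []segment{{seq: 10, bytes: 4000}, {seq: 11, bytes: 9000}}
	fmt.Println(nextToSend(open).seq) // 11: the newest segment wins
}
```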
K
It's actually really simple, but that's kind of what we settled on. I don't want to spend my time preaching Warp; that's probably a future BoF or a future meeting. But any questions about distribution?
H
Mo Zanaty. Hey Luke, I asked you this earlier on the list, and I think you mentioned that you had thought about it before and were going to continue to think about it; I wonder if you've had some more thoughts on it now. It seems like, conceptually, Warp would be very similar to having an H3-capable server and just grabbing the segments as, you know, separate resources over it, and they would automatically come over separate QUIC streams like Warp does, and then the prioritization...
H
The mechanisms aren't there yet, but there are proposals to do something very similar to that prioritization. Do you think that there's more value in trying to pursue that route, or do you think there's something that would be fundamentally harder about making the changes you think you're eventually going to need in an H3 stack, versus what you're doing separately?
K
Fundamentally, we want to be able to download segments in parallel and prefer the newest data if there's congestion, and you can do that, like you mentioned, with H3. I think Lucas was working on it; a priority header would be a great standard way of saying this request should come before this one.
K
The reason we're doing WebTransport right now for distribution is just that Twitch runs our own CDN, so we don't need to use HTTP; we're fine pushing the last mile. But I think for standardization, especially, a solution that uses H3, or even H2, and looks more like DASH would probably be a lower barrier to entry.
Q
Hello, Lucas Pardue, Cloudflare. I hear my name mentioned and I'm summoned. No, thanks, Luke. Luke and I have talked a bit about this in the past. The HTTP Extensible Priorities draft entered AUTH48 a couple of weeks ago, or a few weeks ago now, maybe. This is just really about signaling, and some guidance on how servers, predominantly servers in this case, can use those signals to prioritize or schedule the sending of multiple resources that are concurrently active. That's just guidance, effectively.
Q
Servers can ignore it completely or do whatever they like, and it's optimized around the web use case, so it's kind of a FIFO general recommendation that could be ignored. Speaking to Luke some months back, it seemed more like this use case is really a LIFO, and that's completely possible: you might just need a slightly different signal on each independent priority message, or you might just want a signal for the entire session that this is a different kind of application than the web.
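For context, the published form of that signal (HTTP Extensible Priorities, RFC 9218) is a structured Priority request header carrying an urgency level and an incremental flag; the per-session "treat this application as LIFO" declaration discussed here would be an extension beyond the RFC. A minimal sketch of the per-request signal:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// RFC 9218: urgency u ranges 0 (highest) to 7 (lowest), default 3;
	// "i" marks the response as usable incrementally. A newer media
	// segment can ask to be scheduled ahead of older ones by carrying
	// a lower urgency value than the earlier requests did.
	req, _ := http.NewRequest("GET", "https://example.com/seg/42.m4s", nil)
	req.Header.Set("Priority", "u=1, i")

	// Note: a single session-wide LIFO signal, as floated in the
	// discussion above, is hypothetical and not part of RFC 9218.
	fmt.Println(req.Header.Get("Priority"))
}
```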
K
To elaborate a little bit: one of the issues with using HTTP/3 for Warp is that, if you talk to a server that doesn't support prioritization, it just ignores that header, and you're going to get a worse experience than with plain HLS. So this is something Lucas and I talked a lot about: you need to make sure the server supports prioritization.
R
Hey Luke, thanks for the presentation, and thanks for walking us through the various things that you tried out. I had a clarification question: the more I read about low-latency DASH and talk to the experts, it does feel like some of the things that you require...
K
...our own low-latency DASH solution; it's a custom HLS thing instead of DASH, but the same concept, as far as I can tell.
S
Hi, hey, one question from your presentation: regardless, would you be interested in media over QUIC, or specifically media over WebTransport? Because they are slightly different. I think that we need to first start thinking about whether the goal is to have media over QUIC, or whether we really are requiring WebTransport, specifically, as a building block, because I think that has some extra challenges that are not present if just using QUIC.
K
QUIC is mostly required to better eliminate head-of-line blocking, but there are actually some other benefits with WebTransport, like H2 fallback, which, thinking about it, would probably actually work; it wouldn't be ideal, you'd still have some head-of-line blocking, but it would be nice. But really we just need browser support, and you can't get QUIC natively in the browser without WebTransport.
S
Yeah, because we have the problem, with a specific kind of traffic, of how to plug the congestion control of the QUIC implementation in the browser into what the user expects. So I think that speaking about media over QUIC, without a specific statement that it is going to be used with WebTransport, may lead to some issues when we try to actually implement it.
K
To elaborate: we don't have that issue, because the sender is a server we control; we have a custom QUIC implementation. But, 100%, if you're doing ingest over WebTransport via the browser, you do not control the prioritization of streams and you do not control congestion control, and that would be an issue.
E
So the queue is closed, but there are a couple of comments to be relayed from the chat. Kirill says that refreshing the manifest for LL-DASH is not fun. And Lucas: needing per-request signals to express priority introduces an immediate latency cost, whereas my understanding of Warp's needs is that a declaration that the session is best served LIFO would avoid that entirely.
O
Okay, so next slide, please; you should have control at the bottom.
O
Oh yeah, thanks. I'm going to talk a little bit more about the various solutions that have been talked about, and try to abstract those out to the type of direction that a working group, if formed, could choose to go. But to set that up:
O
I want to talk a little bit more about a couple of use cases here. I hadn't seen, until it went into the MOQ Slack the other day, this report from DASH-IF, but it has a great set of use cases; I highly recommend people read it. And of course I think it's great that several use cases I'd put forward beforehand are in there, but it is a good summary; I'd encourage people to read the links at the bottom.
O
So this case I want to talk about right here is one that often comes up: you have a soccer game, or some sport like that, and you're watching, and there's a bunch of different things that people want to do that aren't really very easy to do with the protocols we have today. And that's one of the things that, in general, when I'm looking at building some new complicated thing that's difficult to deploy...
O
...I want to make sure that I have some features that are not just better versions of what is already deployed today, but actually solve problems that people want to solve and can't with the stuff that is widely deployed today; otherwise you end up with a very difficult deployment path. So, the soccer game example: when we start looking at this, there's a desire to be able to watch a camera over the goal post when you're right there in the same stadium watching the game.
O
It's very common now for people to be streaming those types of things. There's the issue somebody brought up of the timing and latency of everybody receiving this, so that the end viewers are getting a fairly low-latency version of the game and you don't hear all your neighbors cheer before you see it. Some people don't care about that problem...
O
...and some do, right. One variant of this that gets particularly into that is when there's betting going on. Anything that has regulated betting on it is highly likely to have very strict latency needs, and you don't want somebody who's just sending, you know, a WebRTC video of the game from their phone to somebody else remotely to be getting a substantially lower delay than whatever the betting delay is. So that's another one that comes up. The other one...
O
...that we sort of see is that you have a large number of people viewing, well beyond where WebRTC typically scales today. I mean, most of the large-scale WebRTC systems support single conferences of up to around a thousand people; obviously you could do more than that with WebRTC, but that tends to be the framework of what's cost effective.
O
But if you have, you know, 100,000 people viewing this, and you want to be able to bring in any one of them as a participant, as an interactive person on the call; maybe it's esports and you want to bring in one of the fans or spectators as part of the commentary that's going on live...
O
...at the moment you bring those people in, you don't want them to have five seconds of latency, so that that person doesn't hear what happens when that switch occurs, because that's exactly the context they need to comment on when they come in. So you have to keep that, which means driving the latency of everything down. So I think there are a bunch of edge cases like this where we switch back and forth.
O
Let me talk about a slightly different version of this: large company meetings. Right now, if you have a meeting even like this one (I see we've got 129 people remote right now), if we allowed all those people to just be active speakers at the same time, it wouldn't really work. That's why we request the queue and do these things; it's usually a signaling mechanism that allows people to move back and forth between interactive and non-interactive.
O
Now, we can usually do this with WebRTC today for small meetings, but it doesn't scale very well as it gets large. So that's certainly a use case that's coming not from the streaming direction but from what I'll call the interactive media direction, and what we see is more and more use cases, coming from both of these groups, that are converging to the same thing, where they want to act a little bit more, you know, like the streaming...
O
The other thing that we talk about a lot is the way that streaming uses relays and distribution points. On the distribution side it's very clear; we're all familiar with those types of things. But there is also a lot that can be gained from similar models on the ingress side, and the main place where you have gains is if you had a relay or a cache node, some type of thing, that's on the other side of a lossy network...
O
...such as the Wi-Fi network, the very last edge. If you can insert a relay there, you end up with a very low latency between the end client that's producing the video and this relay, and you can do retransmission-based recovery: you can run QUIC over a stream and get your data really quickly and reliably across that link and then pass it on. Basically, we all understand the issues of how RTT and congestion control relate to each other, and retransmissions.
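A back-of-envelope illustration of the RTT argument, with made-up numbers rather than anything measured in the session: retransmission-based repair costs roughly one round trip of the hop where the loss is detected and repaired, so repairing on a short edge hop to a nearby relay is far cheaper than repairing end to end.

```go
package main

import "fmt"

// recoveryDelayMs is a deliberately crude model: each repair attempt
// costs about one RTT of the hop doing the retransmission (learn of
// the loss, receive the repair). Real recovery also involves loss
// detection timers, but the RTT term dominates the comparison.
func recoveryDelayMs(hopRTTms float64, attempts int) float64 {
	return hopRTTms * float64(attempts)
}

func main() {
	fmt.Println(recoveryDelayMs(200, 1)) // end-to-end repair: ~200 ms
	fmt.Println(recoveryDelayMs(20, 1))  // repair on the Wi-Fi edge hop: ~20 ms
}
```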
O
But if you can reduce your RTT dramatically by inserting relays at key points, you can change this, and this really connects to some of the things that are changing in the network today. I mean, what's happening with 5G and the ability to have compute very near the edge, and what's happening with very flexible access points to push stuff down into them, allows us to have an ingest-side CDN, effectively, as well, and that dramatically improves quality. So a bunch of this is all changing.
O
So, with those sorts of use cases in mind, I want to jump over to looking at the design solution space. I'm not arguing for any of the particular drafts one way or another here, or anything; I don't really care about that. I'm sort of pointing out that a bunch of different people looked at this in different ways, and we're all coming up with very similar things. Okay: we all have some way of publishing and pushing the media over QUIC.
O
That's why this is called "over QUIC". They all tend to put dependent video frames, like a GOP sequence, in one stream; they tend to have some way of thinking about how to allow multiples of those to happen, maybe how to prioritize them, those types of issues. The most important thing for us to focus on initially is a little bit what this "pub" line looks like, and whether it's RUSH, which is aimed at ingress, or Warp; you know, can it do both?
O
Whatever direction we take on this, we need to have that video piece together; we need to think about how the congestion control relates to it. But I do not think that this work should, as I said earlier, take on any of the congestion control.
O
I mean, Luke got me onto this term of "latency strategy", and I think that's a very good way of thinking about what we need to do. And this is one of the things that comes up constantly: couldn't the ingress, what's coming in, and what's going out be completely unrelated to each other? And the answer is basically no, they can't; they are highly related to each other. They don't have to be the same protocol...
O
...we've proven that, and obviously we'll gateway to different protocols. But whatever your strategy is for dealing with latency on one side, if your strategy for dealing with the latency on the other side is totally mismatched with it, you have problems. Every time somebody says "oh no, they can be completely separate"...
O
...they inevitably have a set of assumptions and models about what the output is that they're trying to get their input to match up with, and those outputs could be different; Jake brought up some of those issues earlier. So I do think that we have to deeply consider both sides of this together, and they have to be designed to work together to meet the goals that we want, or it won't happen.
O
I've not heard any compelling reason at all why they can't actually be the same protocol for how video and audio media move, on the ingress side and the egress side. Let me see, I want to make sure I'm not missing too many things I want to say about this... okay, so jumping into the other side of this a little bit more:
O
The latency part of this is a little bit of a red herring in some ways. Constantly, people are trying to figure out a definition for what the different latencies are, but the reality is that we can get a certain latency over QUIC, and we can do it pretty reliably today; the various implementations have measured and tested this. Our stuff is currently sitting on top of picoquic, and we're open-sourcing all our implementations of it.
O
People can play with it, but I'm not arguing for that one. I'm just saying that everybody that was working in this space has implemented enough things that we've sort of figured out what we can do over QUIC: we can get the sort of latencies that we expect for interactive, WebRTC-type things over QUIC, over the types of protocols that all of these things are designing. So I don't think we should get too wrapped up in exactly what that number is.
O
I think that to improve beyond that, you have to change QUIC or use something other than QUIC, and that's really out of scope. We should just say: hey, we can get whatever latency we can over that, and serve those use cases.
O
The other thing that I think we sometimes fuzz over and don't talk about, and that we should very much take into account in the design, is the relay issue. All the large-scale existing deployments of these very things, streaming or interactive, have these. I mean, the CDNs have CDN nodes of some type; they might be very much HTTP caches, or they might be some sort of extended HTTP-type cache that's designed specifically to work with HLS.
O
They might be something completely internally designed; Twitch may have designed something that works just for them, right, those types of issues. On the interactive media side we call these cascaded SFUs, and if you look at how large-scale conferences that involve people across multiple geographies work, there are always multiple relays and distribution points and SFUs involved.
O
Also, you know, right now there are a lot of people figuring out how to remove their TLS private keys from their CDNs in Russia before they're seized. You know, just faking the origin in a relay may not have been the best design model for it.
O
So I think that we do need to think explicitly about how these work and what we need to do, and recognize that, for this to be a successful deployment, the people that run relays have to have an incentive to deploy and operate this. A lot of the stuff, too, that came up earlier about WebRTC or not: I think we could put anything in a browser, as long as there was an incentive for the browser to put it in there and it met the safety and security constraints that the browsers promise to their users as part of it.
O
So I think changes are possible there too, including doing a whole new protocol. I also think that everything we've been talking about here you can trivially map on top of WebTransport or on raw QUIC; that's the type of thing that could be worked out in the working group much later. What we need to agree on here is just the overall scope of these things kind of coming together.
O
So with that, I want to just sort of end with where I think, loosely, we should be going with this, and that is: I think we should have a sort of north star of an eventual solution and architecture that solves a lot of the hard use cases that we want to be able to deal with, and it's really important to have the incentives to make all of this and the incentives to deploy it.
O
Much of what we've really been over is the "pub" arrows in all of these diagrams, of how to push this stuff around. And I think that where we need to be there is: we need options around figuring out how we prioritize those, how we put them in streams, and we need to rely on the fact that we're sitting on top of QUIC's existing congestion control.
O
I think a lot of the discussions we had today are pretty lacking on how audio fits into some of that, and probably some of the proposals, you know, for how audio fits in there are going to require options for datagrams in some cases, or on streams; I think there will be some things where you need to be able to choose both of those. Content naming, and how we do that, we've sort of brushed off.
O
I definitely think that we need to design the ingest and the distribution to work together. I actually don't really see any reason their protocols would be different for pushing the video around, but maybe somebody can convince me. And there's the question about WebRTC: my view is not that we couldn't do this on WebRTC. I mean, WebRTC is a beautiful cathedral, and we could definitely add some more gargoyles to the edge of it to do whatever we want.
O
And if the answer to that is yes... but my view is that it's pretty complicated, and that RTP was not particularly well designed for large-scale distribution out in relayed content zones. I know that RTP was designed for multicast, and really what I'm proposing here is an application-level multicast, but I think RTP, in its practical usage, has moved a long way away from being a multicast, or having an application-layer multicast network over the top of it.
I
Spencer Dawkins. This may be a clarifying question or the beginning of discussion, but I turned my hand on to ask Luke about his comments on latency, and I appreciate Cullen putting up a slide about doing things with no relays, and the reasons why he named that. But Luke's categories included a thing called "international", and, you know, we don't have to have an answer here, but I think the question is worth asking: what does that mean?
I
Does that mean that it's a long way from our CDN, or that it's a long way over our CDN, and over a lossy network at that point, or something else? But, like I said, your comment about doing things without relays, I think, really speaks to that.
O
Thank you. I will say something about that. I think that it's very easy to design something that works well on good networks, and that comes back to some stuff somebody said earlier: you can just send stuff as fast as you want and just sort of ignore all the normal things about networks, and it works a lot of the time; it really does.
O
What I also think we need to be talking about is really a lot of countries. I'm very deeply familiar with deployments that work in most of the countries in the world, and with what it takes to provide reasonable-quality interactive media in all of those environments.
O
To do that, and this is very common across all of the major services and even different kinds of things, you see people today using at least about 100 points of presence for their CDNs or SFUs or whatever, to get the latency low enough to deliver the type of things they do today. And that's not just for web conferencing; across a wide range of things that are interactive, you end up at about that many points.
O
If you look at delivering something like Stadia, or that sort of interactive gaming thing, my belief is that you need a lot more points of presence than that, and that we're going to get them. So you're going to see people selecting a CDN, like which CDN they use, within a very small area, being able to choose between different ones, versus just "I've got us-west and that's close enough".
I
Right, so I've been thinking about this as a transport person, I'm...
V
O
You're 100% right. I totally believe in scalable codecs; I've believed in them for way too long and they keep not deploying. But I mean, I think, yeah, and I think this is exactly the work the working group should do: nail down how this really works. If you look at our actual proposals, both going back to RIPT and then coming into the QUICR stuff, I mean, yeah, we want to have different...
K
I wanted to answer Spencer a little bit and add maybe a little new requirement: sometimes relays are expensive. At Twitch we have a lot of broadcasts that have zero or one viewers; we don't want to pay to send the data over a backbone, over these relays. So one of our requirements is that, yes, literally, a viewer can download a stream over the Atlantic; right, the RTT is huge, there's a lot of congestion because they're using transit, but that's way cheaper for us. Pros and cons.
E
Thanks. Sanjay, quickly.
W
Hi
kalan,
this
is
sanjay
mishra,
verizon,
very
good
presentation,
and
I
think,
obviously,
it's
very
exciting
to
see
some
of
the
conversation
here,
but
looking
operationally
folks
that
are
actually
implementing
real-time
distribution
with
webrtc
today
and
doing
a
lot
of
investment
in
that.
So
what
would
be
really
for
them
to
sort
of
turn
around
and
say
that
you
know
this
really
offers
me
something
better
and
I
need
to
redo
my
distribution.
So
so,
what's
in
there,
you
know,
is
it
the
congestion
control?
O
So
you
know,
like
I
work
on
webex,
which
has
a
huge
investment
on
webrtc
and
being
able
to
use
that
to
bring
everyone
in.
It's
been
it's
amazing
good,
but
the
the
thing
that
really
moves
us
is
dramatically
lowering
the
cost
of
scaling
out
the
distribution
while
still
keeping
the
latencies
we
want.
It
would
be.
You
know
the
thing
that
would
really
drive
us
to
to
move
to
this.
I
think
there's
other
people
that
are
very
much
coming
more
from
the
streaming
direction
where
they
have.
O
You
know
a
great
distribution
protocol
right
now
and
they're
struggling
to
have
a
a
good
ingress
protocol,
and
some
of
them
are
finding
webrtc
too
complicated,
or
some
of
them
are
finding
the
other
options
that
they
have
as
an
ingress
protocol
are
not
really
driving
the
the
real-time
latencies
they
want
and
being
able
to
switch
back
and
forth
some
of
these
use
cases,
but
I
think
you
are
hitting
on
the
part
of
the
problem
here
which
is:
do
we
have
something
that
is
enough
different
and
enough
interesting
that
it
will
really
take
off
over
webrtc
over
just
using
webrtc
to
do
a
similar
thing?
O
And
that's
that's.
That's
a
hard
question
to
answer,
without
digging
into
a
fair
amount
of
the
work.
E
Okay,
so
thanks
colin
and
we're
going
to
move
into
the
sort
of
general
discussion,
we've
got
about
25
minutes
for
general
discussion.
There's
just
want
to
acknowledge.
There's
a
lot
of
people
here,
there's
a
lot
of
different
backgrounds
and
use
cases
that
are
important.
So
everyone
just
you
know,
and
some
people
very
passionate
about
their
favorite
protocol
and
whether
it
scales
so
try
to
keep
it
civil
and
you
know,
try
to
bring
you
know.
New
information.
I
That
guy
I'm
spencer,
dawkins
and
justif
just
to
follow
up
on
my
second
thing
with
cullen
was
that
the
iab
has
spent
a
certain
amount
of
time
thinking
about
centralization,
and
so
the
decisions
we
make
about
about
operating
without
relays
and
what
the
requirements
really
are,
and
things
like
that
may
have
pretty
profound
implications
for
how
centralized
the
internet
becomes
either
more
or
less.
J
What
I've
heard
is-
and
I've
heard
this
in
other
places
as
well-
is
that
there
is
a
real
interest
in
getting
a
new
ingestion
protocol
for
the
reasons
that
ying
described.
J
So
I
think,
there's
definitely
something
that
needs
to
be
done
there,
but
one
of
the
things
I've
noticed
in
reading
the
requirements
for
different
ingestion
protocols
is
they
are
subtly
different.
So
I'm
just
wondering
if,
for
example,
something
like
wrist
seems
to
have
slightly
different
requirements
than
what
we're
talking
about
here.
So
I'm
wondering
if
we
can
bring
some
of
those
folks
in
so
we
get
a
very
clear
problem
statement
there
thanks.
X
Hi
there,
tom
hill
bt,
I
was
actually
quite
interested
to
see
the
hierarchical,
unicast
delivery.
Sub-Pub
mechanism
in
the
the
architecture
draft
for
quick
r
has.
Has
there
been
any
discussion?
I
can't
see
any
at
present,
but
has
there
been
any
discussion
on
a
multicast
delivery
model
as
part
of
this
and
or
is
there
any
part
of
the
architecture
that
precludes
that
from
being
possible
in
the
future.
O
Yes, there has been a bunch of discussion about it. I don't think it has really been resolved, and it comes down to an argument about how deployable multicast is. People always say multicast is not possible to deploy while selecting their printer with mDNS and watching their video on demand over a multicast service run by cable companies. So, the way I have described QUICR, there are many ways to think of it.
O
I think that if you had the QUICR model working as an application-layer multicast, and you were in a constrained environment that actually supported real multicast, you could go to one of these relays, and the relays could reflect the data into true, proper multicast inside that environment and then pull it back out. One of the reasons that I haven't been pushing to build that as much is what multicast on Wi-Fi is like now.
O
The more we move to wireless, what multicast means on a Wi-Fi network or a cellular network is a pretty bad story, so I think it all gets complicated. But one of the things that I really like about this architecture is that it opens the door to having some of the segments use real multicast.
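(To make the relay-reflection idea above concrete, here is a minimal Python sketch, assuming plain UDP sockets and an invented, administratively scoped group address; nothing here is taken from a QUICR or MoQ draft.)

```python
import socket

# Toy relay: reflect a media object into a native multicast group when the
# local segment supports it, otherwise fall back to per-subscriber unicast.
MCAST_GRP = "239.1.2.3"   # assumption: locally scoped IPv4 multicast group
MCAST_PORT = 5004

def make_reflector(unicast_subscribers):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep multicast traffic within the local segment.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

    def reflect(payload: bytes, segment_supports_multicast: bool):
        if segment_supports_multicast:
            sock.sendto(payload, (MCAST_GRP, MCAST_PORT))  # one copy, all receivers
        else:
            for addr in unicast_subscribers:               # (host, port) tuples
                sock.sendto(payload, addr)                 # one copy per subscriber
    return reflect
```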
N
Stephan Wenger. So the question here was where we should focus, and my personal preference is to focus on the distribution side of the story, to get away from the three seconds or so of latency that we have at the moment when we switch channels on a TV, right?
N
That's just so ridiculous; if we could do something there, that's where the money is. On the ingest side, I know we are not in the good old times when the TV trucks were driving around and satellite links were used and so on, but it's still really small, and I also question a little bit whether real standards solutions are required there. I think the world has survived quite well so far on a handful of, let's call them industry standards, and we call them standards...
N
...because it sounds good, but in fact it's basically documenting implementations, right? And that was good enough, and I don't see a reason why an organization as big and cumbersome as us needs to get seriously involved there. So I would go for the distribution; that's where the money is. Thank you.
Y
I just want to point out that the engineers who have said this to me are the same engineers who write their own protocol that runs over UDP, and who separately report: yeah, sometimes it works and sometimes it doesn't; sometimes it just spectacularly fails and doesn't work at all, and nobody knows why, the network's just not good enough. And they do not see the connection between these two statements.
Y
So I am very optimistic about media and other things running over QUIC for people who, for whatever reason, feel like TCP doesn't meet their needs. QUIC has much richer semantics, with multiple streams and priorities, and it has a competent congestion control algorithm underlying it all.
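(A minimal sketch of the "multiple streams with priorities" idea: whenever the congestion controller grants sending budget, pick the next chunk from the most urgent stream that has data queued. The class and field names are invented for illustration; this is not a real QUIC scheduler.)

```python
class MediaStream:
    def __init__(self, stream_id, priority):
        self.stream_id = stream_id
        self.priority = priority    # lower value = more urgent (audio before video)
        self.pending = []           # chunks queued in order

def next_chunk(streams):
    """Pick the next chunk to transmit across all streams."""
    ready = [s for s in streams if s.pending]
    if not ready:
        return None
    s = min(ready, key=lambda s: s.priority)
    return s.stream_id, s.pending.pop(0)

audio = MediaStream(0, priority=0)
video = MediaStream(4, priority=1)
audio.pending.append(b"opus frame")
video.pending.append(b"h264 slice")
print(next_chunk([audio, video]))    # -> (0, b'opus frame'): audio wins
```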
Z
Pete Resnick. So I've been sitting here still trying to figure out what this intrinsically has to do with QUIC. It sounds like there's a new protocol to be written at upper layers, I know it's quaint to talk about layers anymore, and it sounds like QUIC has some stuff underneath it that will help those protocols work better. But beyond an applicability statement that says this is why you should use QUIC underneath, because these things in this higher-layer protocol are going to work better, or work at all...
E
I may just jump in and express a personal opinion, which is that QUIC brings a lot to the table, sort of off the shelf, that can be reused, and that it can lead to simplicity at higher layers, though that may come at a cost as well. I think that is one of the things that got us talking about media over QUIC, and I think that's one of the drivers for why QUIC is built into the name here.
Z
Right, and I don't want to dismiss that at all. It's just that I don't think this working group should limit itself, in the sense that if someone comes up with a cool way to use SCTP to do this and it's deployable, I don't know how that would work, but okay, God bless them, they can go ahead and do that. This working group shouldn't be thinking necessarily in terms of QUIC, but in terms of the underlying structure that it needs.
R
Hi, a couple of points. Just responding to Pete's point, I totally agree: when we proposed QUICR, one of the things it was called was a media delivery protocol, and the initial thinking was around UDP; there was no QUIC as such. But that goes to the point that it's a media-relevant protocol that this working group should be solving for.
R
And yes, if we can provide the benefits, we should definitely use QUIC, and as future versions of QUIC provide additional benefits, those get looped into the delivery protocol. I also feel it would be worth the group's time to see what problems and use cases we bring in that today's solutions do not solve, or where today's solutions do not scale out or in, whatever problems we see.
R
For example, a simple question is why something like low-latency DASH cannot work on the distribution side. At the same time, some other things I would also like to know about are how security comes into the picture, whether with something like traditional solutions it would be hard to get that working or not. And one of the things I am very curious and interested to solve is how we avoid multiple encapsulation and decapsulation of the media.
R
That is, going from the ingest to the distribution, all the way to all the viewers, without having multiple layers of encapsulation and decapsulation, and with efficient pipelining across these intermediate nodes. That way we can achieve really good quality and latency combinations as well. Thanks.
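(A sketch of that "encapsulate once, relay everywhere" idea: if ingest and distribution share one object framing, an intermediate node can read the header for scheduling and forward the bytes untouched. The field layout below is invented for illustration, not taken from any draft.)

```python
import struct

HEADER = struct.Struct("!QIB")    # group id (u64), object id (u32), priority (u8)

def encode_object(group_id, object_id, priority, payload: bytes) -> bytes:
    """Applied once at the original publisher."""
    return HEADER.pack(group_id, object_id, priority) + payload

def relay_forward(frame: bytes, downstream_links):
    """A relay peeks at the header for scheduling but never re-packs the body."""
    group_id, object_id, priority = HEADER.unpack_from(frame)  # peek only
    for link in downstream_links:
        link.send(frame)    # forwarded byte-for-byte: no decap/re-encap
```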
D
My response to Stephan earlier, around focusing on distribution and not on ingest, is that there are two things that should be considered. The first is that ingest protocols are becoming very diverse and very spread out, covering a whole bunch of different use cases, and coming up with something that can harmonize some of those problems would be very, very useful.
D
It would certainly be useful in my domain, satellite trucks permitting. The second thing to consider is that a lot of distribution protocols end up getting reused as ingest protocols: HLS, DASH, RTP, and RTMP all end up getting reused as some means of contributing into a distribution platform. There are many CDNs and other services that have that as a thing, and you've got to ask yourself why.
D
Why are they doing that? The answer is that if your input protocol and your output protocol are the same, then it's easier to reason about the architecture of the overall system and how things function. So if we don't focus on the ingest up front, then we certainly run the risk of it being bodged in later.
P
I want to comment on the question of why QUIC specifically and not another protocol. At least within Google, we have at least three different teams that are dedicated to running and maintaining a transport stack: there is a QUIC team; there is a team that works on improving TCP congestion control and other aspects in the Linux kernel; and there is also a team that works on RTP and some other protocols, like SCTP, which we also use in some cases. Supporting those is not free; it takes a lot of effort.
P
In that sense, being able to build upon all of this effort, rather than building something new from scratch, is a very important plus, and DASH over HTTP/3 at this point is an extremely mature technology.
P
And the reason I believe we should do this is not only from the standpoint that this is a good idea, but that this is also something many people in the industry have independently built. Facebook built RUSH, Twitch built WARP, and they work on fundamentally the same principles: they use QUIC's avoidance of head-of-line blocking to push media. I believe this means that this is the right direction, because multiple people independently came up with a solution, and it is worth standardizing.
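(A toy model of the head-of-line-blocking point: a lost packet stalls only the stream it belongs to. Everything here is simulated; no real QUIC API is being used.)

```python
def deliverable(frames, lost):
    """frames: list of (stream_id, seq, name); lost: set of (stream_id, seq)."""
    stalled = set()
    out = []
    for stream_id, seq, name in frames:
        if (stream_id, seq) in lost:
            stalled.add(stream_id)     # this stream waits on retransmission
        elif stream_id not in stalled:
            out.append(name)           # other streams keep flowing
    return out

# One ordered TCP-like stream: a single loss stalls everything behind it.
print(deliverable([(0, 1, "audio1"), (0, 2, "video1"), (0, 3, "audio2")], {(0, 2)}))
# -> ['audio1']

# One stream per media group: the same loss only stalls the video stream.
print(deliverable([(1, 1, "audio1"), (2, 1, "video1"), (1, 2, "audio2")], {(2, 1)}))
# -> ['audio1', 'audio2']
```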
I
Ted Hardie speaking. I came up to pick up on something that Magnus said in the chat a good while back, which is that, fundamentally, what this really is is a discussion about whether it's time for a new media delivery protocol. The point he made in the chat was that QUIC coming to the fore has kind of created the straw that broke the camel's back, or, I would say, broke open the dam of discussion about whether or not it's time for a new media delivery protocol. And I think most of the people I've heard, both in the mic line and on the chat, have basically said yes.
I
Multicast models are fundamentally publish-and-subscribe: somebody publishes a stream and you subscribe to it. And those publish/subscribe models, especially with the kind of fan-out that typically occurred in application-layer multicast, are much easier with QUIC because of the infrastructure that's been built up in the internet. It's simply the case that CDN infrastructure, and the rest of the infrastructure, is friendlier to QUIC, because it shares HTTP/3 as one of its use cases, than it would be to build something new on top of WebRTC.
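(A minimal publish/subscribe fan-out of the kind described above: the relay only needs to know who subscribed to which track name and forwards each published object to them. Names and shapes are illustrative assumptions, not anything from a draft.)

```python
from collections import defaultdict

class Relay:
    def __init__(self):
        self.subscribers = defaultdict(list)    # track name -> delivery callbacks

    def subscribe(self, track, deliver):
        self.subscribers[track].append(deliver)

    def publish(self, track, obj: bytes):
        for deliver in self.subscribers[track]:
            deliver(obj)                        # one copy per subscriber

relay = Relay()
relay.subscribe("stadium/cam1", lambda obj: print("viewer A:", obj))
relay.subscribe("stadium/cam1", lambda obj: print("viewer B:", obj))
relay.publish("stadium/cam1", b"keyframe")
```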
I
So I think the question we really have in front of us is: do we want a new media distribution protocol? And I think my answer is yes. Do we want it to be based on something where the fan-out possibilities of publish/subscribe have this scaling property of QUIC? I think my answer to that is yes. And then I think we have the question of, if we're going to do multicast and unicast for this, how many of these use cases are actually in scope, and I think my answer to that is, again, yes.
I
If we're building a new media distribution protocol, it should actually cover all of the use cases that have been described, because then we get the best network effect of building on that infrastructure and building on the code bases that are there. People have said we named this wrong; sorry, but bikeshedding the name of something is one of our favorite occupations at the IETF.
I
If you want to call it a "new media delivery protocol, yes or no" BoF, then I think, at the end of the day, chairs, your question will be answered very quickly, because I think you'll get a resounding "yes, it's time".
H
Mo Zanaty. I'd like to briefly touch on three main themes that I'm hearing from a lot of people: one, QUIC versus non-QUIC; two, ingest versus distribution; and three, latency trade-offs. So, on QUIC versus not QUIC, with all the discussion about WebRTC and, God forbid, SCTP directly, or SRT, or other things: I think a lot of people have already done a lot of soul-searching, and we arrived at QUIC for a reason. It wasn't that we ignored those; a lot of us have deployed them.
H
On ingest versus distribution, I don't think there is really that much difference when you look at all of the different needs at the lower layers, and even at the app layer, between those two cases. There are a few nuances: contribution may have more nuances about both live and pseudo-live, where I want to get this stream up reliably anyway, but give me the live edge as fast as you can, while also making sure that I can do a full contribution...
H
...at a delayed time. That's the only real nuance between contribution and distribution. The scale is obviously a difference, but when you look at the lower layers of the protocol required to deliver those things, there's really not that much difference. And on the latency trade-offs: I keep hearing about a trade-off between latency and quality. I think that's a false trade-off. I think Harald put it well, but it was kind of lost in the room: the channel is your limit; the channel governs your quality.
H
You can't get better quality by adding latency. If you're talking about a short-lived, five-second ad, maybe; but not for these long-lived flows that we're talking about. We're talking about distributing events, about live streaming, many minutes or hours worth of content. You can't get better quality than the channel by just increasing your latency, so that's a false dichotomy. What latency impacts is resilience.
H
How resilient is your delivery? So we need to take latency as a trade-off against resilience, not as a trade-off against quality, and the same applies for contribution and distribution. Ultra-low latency can be achieved with the best quality that the channel can provide, in both of those cases.
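(A back-of-the-envelope version of that latency-buys-resilience argument: a client buffer of D seconds rides out any throughput gap shorter than D, but the sustainable quality is still capped by the channel. All numbers below are invented for illustration.)

```python
channel_rate_mbps = 6.0      # sustained channel capacity
video_rate_mbps = 5.0        # encoded bitrate; must stay at or below channel rate

for buffer_s in (0.2, 2.0, 10.0):
    # Resilience grows with buffered latency; quality does not.
    print(f"buffer {buffer_s:>4}s -> survives a ~{buffer_s}s outage; "
          f"quality still capped at {min(video_rate_mbps, channel_rate_mbps)} Mbps")
```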
A
All right, I'll speak really quickly. David Schinazi, QUIC enthusiast. First off, I'm not going to beat the dead horse of "let's do this over QUIC". Of course QUIC is awesome.
A
Let's do this over QUIC; people have said why, and that's good. Second, speaking as an internet architecture enthusiast, I'd like to offer a suggestion to the proponents of this BoF. We've seen that, like, we're not going into the BoF questions, because this is not working-group-forming, but clearly there's interest, it sounds like it makes sense for this to happen in the IETF, and there are people who want to do the work. That's all great!
A
My concern is that I'm not entirely sure we can agree on a list of requirements. So what I would recommend to the proponents is to have side meetings and refine this list, and if you can agree on a list there, that's something we can then take to a working-group-forming BoF. Because if we agree on a set of requirements, that means we can tightly scope this for a working group and then start building a solution that meets those requirements. Right now, I'm not enough of an expert in this space to be able to tell whether all those use cases have a single set of requirements, and that worries me a little bit.
E
Thanks. Cullen?
O
I just want to speak to the ingress issues, and to the comment made earlier that the distribution is really where the money is, or whatever. Yeah, obviously that's the larger-scale side of it, but ingress is where we're failing the worst right now. If somebody is using a non-proprietary setup for their system and applications, probably the most common ingress is RTMP, and I mean, that's really old.
O
It has head-of-line blocking over TCP, it has all kinds of issues people have mentioned, and it's not like you can build a device that will work with anybody's ingress. When we do our RTMP ingress to Facebook versus YouTube, we have to do it differently, and you have to understand all of that with those things.
AA
On the "why QUIC" question: I think everyone here is very excited about the fact that we're now going to have a protocol that's extremely widely deployed, both across clients and in public cloud architectures, and that that protocol is capable of doing, you know, unreliable delivery. So I think we're all looking and saying, well, what can we do with this? And it's interesting to see both RUSH and WARP as ways of leveraging that sort of unreliable-delivery feature within QUIC to solve problems being faced in the streaming world.
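(A toy policy for that reliable-versus-unreliable split: data that must arrive, such as init segments and keyframes, goes on reliable streams, while per-frame deltas go as datagrams, where a late arrival is useless. FakeTransport stands in for a real QUIC library, whose actual APIs differ.)

```python
class FakeTransport:
    def send_stream(self, data):   print("reliable stream :", data)
    def send_datagram(self, data): print("datagram (lossy):", data)

def send_media_unit(transport, unit):
    # Critical units are retransmitted until delivered; deltas never are.
    if unit["kind"] in ("init", "keyframe"):
        transport.send_stream(unit["payload"])
    else:
        transport.send_datagram(unit["payload"])

t = FakeTransport()
send_media_unit(t, {"kind": "keyframe", "payload": b"I-frame"})
send_media_unit(t, {"kind": "delta",    "payload": b"P-frame"})
```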
AA
I think we're also interested in trying to use it for other types of use cases, like some of these 50-millisecond to 200-millisecond latency problems, but we haven't yet seen a solution put forth that basically says: here's a need, and here's how QUIC solves it. And I think we're all kind of really excited.
AA
As I said, because of the QUIC deployment, we know there will be all these things we can do. But I think we may want to focus on these concrete things that apply in live streaming, rather than some of those future possible gains of making WebRTC easier to deploy, just because we have these concrete things in front of us. And hopefully, as time goes forward, we'll also see ways in which QUIC makes WebRTC easier to deploy.
AB
We are currently one of the most popular protocols for live video contribution and distribution, mainly for enterprise. I just wanted to say that the topic of congestion control is very important, but I would propose that the media delivery protocol being developed here not take all the responsibility for congestion control and for latency. Because when we talk about an enterprise contributor, he really has some idea of what he's doing; he has some idea of the network.
AB
What we see, and that's one of the reasons why SRT is currently so popular and important among enterprise contributors, is that we trust them more; we let them decide what to do. Thank you.
C
Yes, so thank you, everyone; this has been very civil, with nicely contributed discussion. I think we'll start with these three questions, using the hand-raising tool to figure this out, so even those of you in the room need to use Meetecho, lite or full, to give your input. I will start the first question now. This is really just to get some indications, I think, to verify what the discussions have shown about the interest, etc.
C
I mean, for those who have input, it would be good if they could post why they don't believe it, etc. I interpret it, as my personal reading, that those 13 are saying they don't see that there's a need to do something; there's no use case they see that can't be met. That's my interpretation!
C
Sorry, yeah. Okay, thank you, I'm ending this one. So there is...
C
Okay, well, let's discuss the third question then. But just to summarize here: it's a slightly smaller number of responses, but even more in favor on whether there's a use case that's not met today with today's protocols. So, Ted.
C
Okay, I think I'll close it very soon, so get your responses in. The poll results are all in; thank you. This had 58 participants, 50 saying yes and 8 saying no. I think that gives a fairly good indication that there is interest in some joint work, a significant interest in both of these two sets of use cases.
C
So I think the next steps really are what I laid out there: okay, can we scope this? The proponents, et cetera, will start looking into it, discussing on the MOQ list, seeing what we can do, working towards maybe a charter or a suitable place for the work. But yeah. So, Murray, do you have any comments as AD for this BoF?
AC
No, I don't think so. I think you've got the next steps that I would be expecting; I'm good with what we have here.
E
We're just at time now. I really want to thank everybody for coming and participating; I thought it was a really helpful session. It was good to see so many people here. I want to, of course, thank the ADs, thank the presenters, and thank my co-chair. Everyone have a great rest of your week here.
E
Well, I think, hopefully, this will generate some discussion on the list, which has been a little bit sparse. And then I would like to see us start to put together a charter, and let people argue about what a charter should be, what it should look like, on the list.