From YouTube: IETF115-MOQ-20221110-0930
Description
MOQ meeting session at IETF115
2022/11/10 0930
https://datatracker.ietf.org/meeting/115/proceedings/
C
Okay, now I'll repeat myself entirely, because the poor people remote couldn't hear that at all. Welcome to MOQ, now in 3D, as a working group for the first time. This is our first meeting as a working group at a plenary IETF. As you know, we had an interim, but this is our first one, for those of you who are not used to the IETF by this point.
C
So please find your seats as quickly as you can. Remember, hopefully, by this point, that you do need to be wearing a mask if you're not actively eating or drinking. So if you're drinking, go ahead and take your mask off, but then put it right back on.
B
All right, good morning, everybody, we're gonna get started. So this is the Note Well. It's Thursday; I think everyone's read the Note Well. If you haven't, please do: it contains a lot of important information and guidelines about how we do work here at the IETF. Next one. Again, it's Thursday, I think most people have figured it out, but if you're here, make sure... can I turn it up? Can I just... I'll just get nice and close. Hello.
B
Thank you, all right. If you're in person, use the Meetecho client to get yourself into the queue, and make sure you're wearing your mask unless you're speaking, as Ted has said. And remote participants, make sure your AV is off unless you're presenting or talking, and use a headset, etc. Thanks.
B
Our agenda is here. I'm Alan, this is Ted. Gory has volunteered to take notes; Luke is backing him up. If anybody else wants to jump in the notes doc and help them out, I'm sure they'd appreciate it. Is there somebody who will relay comments from Zulip?
E
B
Okay, so what are we going to talk about? Administrivia and agenda bashing. We're going to give a report from our interim, which was just last month. I think some people might have missed it because it happened so quickly, so we'll just talk about some of the decisions we made.
B
Then Luke's going to give a presentation on the combined proposal from the authors of RUSH, Warp, and QUICR, and we'll spend some time discussing that. Cullen is going to talk about relays and how they work, and we'll have, I'm sure, a healthy discussion about that. And then we'll talk about possibly having an interim in January before Yokohama. Does anybody want to bash the agenda?
B
Okay, you may have seen this note on the list: Zahed has been appointed as a MOQ technical advisor, so if you need technical advice... sorry. Okay, things that we talked about at the interim: we're going to use GitHub to manage our documents, as all the cool kids do, and we're going to use issues, mostly, to track things, and PRs for changes. Please try to keep discussion off the PRs and just in the issues, and also on the list.
B
Of
course,
some
folks
were
concerned
about
the
way
retention
works
for
slack,
so
we're
in
the
process
of
setting
up
some
kind
of
bridge
for
slack
so
that
we
will
have
a
wage,
a
means
to
retain
the
messages.
But
until
that
time
please
don't
conduct
working
group
discussions
over
slack
stay
tuned
on
the
list
for
more
updates
there,
and
we
talked
about
what
we're
going
to
put
on
this
agenda
and
we
are
here
and
you
had
your
chance
to
bash
the
agenda
so
moving
on.
B
So, all right, with that we'll bring up Luke to talk about the combined proposal.
F
It looks good. Hello... media, real quick... good morning. I've got a lot of slides; this is going to be quick, but let's start going. I'm Luke, I work with Twitch, and I've been working on the combined RUSH and Warp and QUICR draft, just to try and find base common ground between them.
F
So before we begin: there's probably a lot of QUIC enthusiasts in the crowd, maybe not so many media enthusiasts, so I really wanted to cover some basics, starting off with just media encoding. It can get complicated, but at a really simple level, you have I-frames, which are effectively just static images, and then, as time goes on, you can have frames that will reference one or more frames in the past. So this is delta encoding.
F
The idea is, there are just too many bits if you try to encode every image separately, and every so often, you'll see, we'll reset the chain and make a new I-frame. This is a kind of common encoding, actually. This is really simple, but it's low latency, and you see this a lot with, like, WebRTC and conferencing.
F
We could have more complicated encodings. This is one possible example: we have B-frames. They're annoying because they can reference future frames as well.
F
This has a lot of ramifications for how you write these frames to the wire. But in this example, we have some B-frames that will reference the I's and the P's, but are themselves not referenced. You see this more commonly from, like, hardware encoders that have a very fixed structure. But the benefit of stuff like B-frames, and more references in general, is that you can enhance the compression ratio.
F
Effectively, the more references you use, the fewer bits you need to encode, typically. And then, finally, you can get some just really gnarly encodings, especially if you let a software encoder run and give it, you know, a few hours: it will produce the most jumbled mess, the most complicated possible encoding, every frame referencing every other frame, just creating a tangled web of dependencies, and it's really hard to know how you're supposed to send this over the network.
F
But one nice property worth mentioning is that when there's a new I-frame, you typically have what are called closed GOPs, where you do not reference the previous group of pictures. So that's a nice property: you cannot have a dependency go across this line. We abuse that, and a lot of protocols abuse that.
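The closed-GOP property Luke describes can be sketched in code. This is a minimal illustration; all the names here are invented for the example and are not from any draft:

```python
# Model a sequence of frames with explicit dependencies and check the
# "closed GOP" property: no frame may reference a frame from an earlier
# group of pictures (every I-frame opens a new group).

from dataclasses import dataclass, field

@dataclass
class Frame:
    index: int          # position in the sequence
    kind: str           # "I", "P", or "B"
    deps: list = field(default_factory=list)  # indices of referenced frames

def gop_starts(frames):
    """Every I-frame starts a new GOP; return those start indices."""
    return [f.index for f in frames if f.kind == "I"]

def is_closed_gop(frames):
    """True if no dependency crosses backwards over an I-frame boundary."""
    starts = gop_starts(frames)
    for f in frames:
        # the GOP this frame belongs to starts at the last I-frame <= f.index
        start = max(s for s in starts if s <= f.index)
        if any(d < start for d in f.deps):
            return False
    return True

# I0 <- P1 <- P2, then a new closed GOP: I3 <- P4
frames = [Frame(0, "I"), Frame(1, "P", [0]), Frame(2, "P", [1]),
          Frame(3, "I"), Frame(4, "P", [3])]
assert is_closed_gop(frames)

# An "open" GOP: frame 4 also references P2 from the previous group
frames[4] = Frame(4, "B", [2, 3])
assert not is_closed_gop(frames)
```

Relays and protocols can rely on this boundary: a whole GOP can be dropped or started fresh without breaking anything after it.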
F
So we're making a live media protocol, a live media transport, and there's a few high-level goals that I've been really trying to adhere to. Number one: make sure we respect the encoding over the network. The idea being, we want to send media as it needs to be decoded and as it is encoded; we don't want to try to do our own delivery mechanism. We want to make sure that if this frame depends on this frame, then over the network...
F
This frame also depends on this frame. I've been mostly focusing on number two: minimizing latency during congestion. I've been trying to improve our live video system to try and reduce latency, but the idea being that when networks suffer, you need to have a plan to decide how to keep latency low. And number three, of course: use QUIC. It's in the working group name, after all. But funnily enough, we discovered QUIC later in the process, and it just kind of had all these natural properties.
F
We did not set out to use QUIC initially, but yeah, it's in the charter. So, some more little history: we have this existing, you know, kind of annoying GOP structure with B-frames. How do you serialize this over TCP?
F
How do things like RTMP, HLS, and DASH work? More or less, they take this structure and they put it into one big chain. The idea being, you make sure that all dependencies are first in the chain, and this is what's called decode order. It screws a little bit with timestamps, if you see the very bottom, because of B-frames; don't worry too much about that, but this is one of the reasons why B-frames add latency, effectively.
F
Well, the solution really is, the congestion controller will come in, you know, Reno, CUBIC, BBR, whatever, and pick a point in the TCP stream. It will send everything before this point, and everything afterwards it will just queue. And as new frames arrive, we just append them, write them to the TCP socket; they get added to the end of the queue, and this continues and continues, where new frames are put further and deeper and deeper into the queue.
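A toy model of that TCP behaviour (names invented for illustration): frames append to one queue in decode order, the congestion controller picks a cut-off, and newer frames can only wait behind older ones:

```python
# One FIFO queue in decode order: everything before the congestion
# controller's cut-off is sent, everything after just waits.

from collections import deque

def tcp_send(queue, budget_bytes):
    """Send from the head of the queue until the byte budget runs out.
    Newer frames can never jump ahead: that's head-of-line blocking."""
    sent = []
    while queue and queue[0][1] <= budget_bytes:
        name, size = queue.popleft()
        budget_bytes -= size
        sent.append(name)
    return sent

queue = deque([("I0", 50), ("P1", 20), ("B2", 5), ("P3", 20)])
# Congestion: only 60 bytes may be sent this round, so only I0 goes out.
assert tcp_send(queue, 60) == ["I0"]

# New frames arriving mid-congestion just go deeper into the queue.
queue.append(("I4", 50))
assert list(queue) == [("P1", 20), ("B2", 5), ("P3", 20), ("I4", 50)]
```

Note how the fresh I4, which depends on nothing before it, still sits behind everything already queued.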
F
There are some ways you can get around this, some workarounds: you can avoid flushing frames to the TCP socket until the last possible minute, and you can do stuff like ABR with HLS and DASH. But more or less you have head-of-line blocking, and this is, you know, very similar to HTTP in terms of the analogies here, where we're introducing a lot of dependencies that shouldn't actually exist.
F
A lot of these new frames, these B's and the P's, don't necessarily depend on any of the previous frames that have been written to TCP, but we make them dependencies over the network. So, rule number one (these aren't hard and fast rules, but typically a goal, I would say): don't introduce dependencies over the network. The idea being, we should keep the media dependencies as they are.
F
This frame depends on this frame, and at a transport level we should not introduce new dependencies just because it's easier, because otherwise we just add latency during congestion. It's an oversimplification, but so are these slides.
F
So how does RTP solve this? The idea being: hey, we can't use TCP, so let's break frames into individual packets on MTU boundaries. We take our existing frames, and we slice them up and transmit them ourselves. The idea is that we do fragmentation, congestion control, reassembly, everything in user space, all by ourselves, but we get away from TCP by doing that. And now, with congestion, what do we do? What do we do when we can only send half the frames?
F
Well, the idea being: because we do transmissions ourselves, we can choose which packets and, more importantly, which frames to transmit. So we selectively decide which packets to transmit and retransmit, such that we try to avoid something like the B-frames. In this example they're spurious; they're effectively leaf nodes on our graph, and we try to make sure that the dependencies are delivered, so we really prefer those.
F
Unfortunately, there's also a worst case where, if you're not intelligent, or let's say you don't have congestion control that's rapid, and you just fire-hose the network with all your packets and they just drop randomly, you get this case where parts of frames arrive, or you don't deliver frames based on dependencies. Like, we lost a packet for our I-frame, and that just causes decode failures going forward. So we lost the same number of packets in this example, but we only decoded the first frame, and everything else was lost.
F
But rule number two, just to simplify, is: don't drop partial frames (technically slices). If a frame is split over multiple packets, they should all be delivered, or none of them should be delivered. Ideally, you don't want to waste bandwidth delivering partial frames; they're not decodable, for the most part.
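Rule two can be sketched as an all-or-nothing check (hypothetical names, purely illustrative): a frame split across packets only counts once every one of its packets has arrived:

```python
# A frame fragmented across several packets is decodable only if ALL of
# its packets were received; delivering some of them wastes bandwidth.

def decodable_frames(fragments, received):
    """fragments: frame name -> set of packet ids carrying that frame.
    received: set of packet ids that actually arrived."""
    return [f for f, pkts in fragments.items() if pkts <= received]

fragments = {"I0": {1, 2, 3}, "P1": {4}, "P2": {5, 6}}
received = {1, 2, 3, 4, 5}          # packet 6 was lost
# P2 is incomplete, so only I0 and P1 should be handed to the decoder.
assert decodable_frames(fragments, received) == ["I0", "P1"]
```

A sender applying this rule would retransmit packet 6 or drop P2's other packets entirely, rather than ship a half-frame.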
F
So how does RUSH solve this? This is what Meta's been using for like four or five years now, apparently: you split the media into QUIC streams, so every frame is a separate QUIC stream. And the idea being here is that QUIC makes sure that each frame is fragmented and transmitted and arrives at the other side in order. So frames can arrive out of order in relation to each other, but within a frame it will arrive reliably. So, same thing.
F
But what do we do when there's congestion, and we can only send half the frames? Well, just like RTP, there's a similar best case.
F
We have what we call RESET_STREAM: we effectively say, I want to drop this frame, or we don't even make a stream for it in the first place. Very, very similar to the RTP use case, where you can pick and choose which frames you should drop. Although, if you don't make the right decisions: in this case we dropped only two frames, but we've dropped the worst ones, or some really bad ones, and you cause decode errors again. Like, if you drop this I-frame, a great example: nothing after it in that GOP is decodable without causing artifacts all over the place.
F
So one of the things that RUSH does, and QUICR does, is we put the dependencies on the wire. The idea being that the sender already knows the dependencies, but relays also know the dependencies, so they can make these intelligent dropping decisions. And this is foreshadowing a little bit to Cullen's presentation.
F
So I won't cover it too much. So, rule three: don't drop dependencies. In graph theory terms, drop the leaf nodes first. You don't want to drop the root of the graph, because it just causes everything else to be lost as well. There is a caveat there: a lot of these leaf nodes aren't that big in terms of file size; like, B-frames really don't contribute much of the bitrate, but they're still the best to drop first.
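Rule three in graph terms, as a small sketch with invented names: the droppable frames are exactly the leaf nodes, the frames that nothing else references:

```python
# Under congestion, drop leaf frames first: frames that no other frame
# depends on. Dropping a referenced frame would break everything downstream.

def droppable(frames):
    """frames: frame name -> list of frame names it depends on.
    Returns the leaf nodes: frames that no other frame references."""
    referenced = {dep for deps in frames.values() for dep in deps}
    return [f for f in frames if f not in referenced]

# I0 <- P1 <- P2, with B-frames hanging off P1 and P2
frames = {"I0": [], "P1": ["I0"], "B1": ["I0", "P1"],
          "P2": ["P1"], "B2": ["P1", "P2"]}
# Only the B-frames are safe to drop; I0, P1, P2 are all load-bearing.
assert droppable(frames) == ["B1", "B2"]
```

In a real protocol a relay would repeat this after each drop: once the B-frames are gone, P2 becomes the next leaf.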
F
The idea being here, it's closer, actually, to TCP, where we just have things in decode order, but we make a QUIC stream per GOP. So the dashed line is missing for some reason, but at every I-frame, effectively, we make a brand new QUIC stream. And what we do is we mark them with a delivery order, a little bit of metadata that says this stream should be delivered before this stream, or this GOP segment should be delivered before this segment. So I color-coded them here, but in this case the green is on the right.
F
So what do we do with this congestion, where we can only deliver half these frames? Again, same congestion control: like in a lot of the other examples, we learn that there are issues on the network, and we need to figure out what to drop. And the way that Warp works is we deliver based on this order, so we actually kind of reshuffle the order we send packets: we send the newest packets first, in this case the I-frame.
F
Frame five is sent first, and then we'll do the P and then the B. So we've delivered the newest frames first, and then, with the remaining congestion window, we will actually go back and try to send the original I-frame, frame zero, because it's a requirement for all the frames that follow it.
F
The idea being, the congestion controller (in my case I'm just using BBR) just tells you what it can send: these are the packets I can send right now. And we go in descending order and send the streams until eventually we hit the bottleneck, where we just can't send anymore. And new frames are generated; so this is a new P and a B frame.
F
These are appended to the newest stream, which is delivery order one. And what's really cool is these kind of jump the queue: in a TCP world these are added to the very end, but in this prioritized world, or this delivery order, some of the frames might get sent. Like, the P frame is now in the congestion window, so it gets sent, whereas the B frame is falling behind; we will starve that, and we will starve the older frames as well.
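The prioritized sender described above can be sketched as follows (invented names; a real implementation hands the ordering to the QUIC library rather than sorting by hand):

```python
# Instead of FIFO, streams are sent by delivery order: with a small
# congestion window, the newest GOP goes out first and older streams starve.

def prioritized_send(streams, budget_bytes):
    """streams: list of (delivery_order, name, size).
    Lower delivery order wins; anything over budget simply starves."""
    sent = []
    for order, name, size in sorted(streams):
        if size <= budget_bytes:
            budget_bytes -= size
            sent.append(name)
    return sent

# The old GOP got delivery order 2, the fresh GOP got order 1.
streams = [(2, "old-I0", 50), (2, "old-P1", 20),
           (1, "new-I5", 40), (1, "new-P6", 20)]
# With only 70 bytes of window, the newest GOP jumps the queue entirely.
assert prioritized_send(streams, 70) == ["new-I5", "new-P6"]
```

Contrast with the earlier TCP sketch: the same budget there would have sent the old frames and left the live edge waiting.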
F
These frames are just sitting in RAM. Again, these are QUIC streams; there's two QUIC streams in this example. The yellow QUIC stream is sitting in RAM, using a tiny bit of flow control, like MAX_STREAMS, but it's not using anything over the network; it's not competing with other packets. And eventually what you can do is just reset the stream. It's not required, but at some point it's going to be.
F
You
know,
after
like
some
number
of
seconds,
you're
gonna
be
like
hey
you're,
not
you're,
just
not
going
to
get
sent
anytime
soon.
So
we
should
just
free
up
this
memory
and
stop
worrying
about
having
this.
You
know
you
don't
want
to
have
an
indefinite
chain
of
frames
that
need
to
be
sent
from
like
minutes
and
hours
ago,
just
sitting
in
Ram,
so
this
is
optional.
I
will
mention
and
I'll
explain
why?
F
Because one of the things that Warp does (I really, really hope; I claim it does) is it offers variable latency and relay support: the idea being that the receiver can choose how much latency they want, the latency-versus-quality trade-off, and this works across relays, which I think is really cool. So just to kind of visualize this (it's a hard concept, so I've done my best to try and visualize it): we've got a playback session right now, with two frames in the receive buffer.
F
We have the live playhead, that dashed line; that's what the encoder is spitting out, and this is the earliest we could possibly receive a frame over the network. You can think of it like min RTT. And then we have these other playheads on the left. The idea being, the receiver, in our case the video player, chooses when they want to start playback, and they choose the size of the jitter buffer, to get into networking terms.
F
So as we continue, we generated a little bit more audio, and we delivered some of that audio over the network. The encoder also generated frame three of video, but there's a little bit of congestion; it hasn't arrived yet. But either way, we started with this low-latency playhead for real-time conferencing. In this example, it's got two frames of latency, which is a little absurd, but it's just an example. The idea being, it starts playing out of the jitter buffer almost immediately. So as we continue, we've got some more congestion.
F
Actually
we're
not
seeing
frame.
Four
should
have
been
delivered
at
this
point
and
we
we
don't
even
have
audio
for
that
equivalent
of
frame
four.
So
at
this
point
the
low
latency
playhead
is
already
running
into
a
situation
where
it
doesn't
have
any
video
to
display,
and
it's
just
going
to
show
the
previous
frame
next
slide,
and
we
we
have
a
new
new
kid
in
the
block,
interactive,
latency,
interactive
versus
real
time.
It's
a
debatable
I,
say
interactive,
latency
being
just
just
higher
latency.
It's
like
twitch
style
agency.
F
That's what I would call it, but there's some debate over what the best name for that latency range is. But anyway, they start watching video, and now the low-latency, real-time conferencing playhead is just running out of the audio buffer; it's going to start having silence very soon. We still have massive congestion; none of the new frames have been delivered. Sorry, they haven't been delivered.
F
They
have
been
generated,
but
they're
sitting
blocked
based
on
congestion
control,
but
the
important
thing
is
because
we
prioritize
or
because
we
have
this
delivery
order
frame
five
and
a
little
bit
of
audio
that
that
corresponds
to
it
was
delivered
just
in
time.
It's
an
iframe,
so
we're
allowed
to
deliver
it,
whereas
all
the
previous
frames
we
weren't
allowed
to
skip.
F
There
were
peas,
but
just
in
time
for
our
real-time
latency
user
to
get
a
little
bit
of
something
as
a
medium
latency
viewer
catches
up
and
they're
about
to
hit
a
gap
of
missing
video.
F
We
add
one
more
viewer
to
the
mix.
This
is
I'm
just
calling
our
non-interactive
latency.
This
is
somebody
with
maybe
a
poor
Network.
Maybe
it's
somebody
in
Brazil
on
their
mobile
phone.
You
know
like
they
cannot
sustain
the
latencies,
that
the
other
users
need,
or
they
don't
want
it-
maybe
they're,
just
on
the
couch
or
something
like
they're.
Just
they
there's
no
reason
to
have
low
latency
they're
just
trying
to
watch
football,
so
they
they
joined
the
picture
too,
and
we're
recovering
a
bit
from
congestion
on
the
right
side.
F
You
see
that
we've
delivered
frame
six
we've
delivered
some
audio,
we're
catching
up,
there's
still
a
big
gaping
Gap
in
the
middle,
which
is
where
the
medium
latency
is
so
that
user
right
now
is
they're.
Listening
to
audio
you
see,
the
audio
is
continuous,
they
hear
it,
but
their
video
is
is
Frozen
or
stuttering.
Much
like
a
conference
call
if
you've
ever
used.
Everybody's
use
a
conference.
Call
at
this
point
where
that's
what
the
user
experience
is
like
and
then
finally,
we
keep
going
forward.
F
We
get
some
more
frames
popping
into
the
picture.
We've
now
caught
up
to
the
live,
playhead
everything's
being
delivered
and
we
can
go
back
and
we
can
start
back
filling
the
gaps
in
the
buffer,
so
we've
delivered
frame
three
and
just
in
time
for
the
high
latency
user
to
get
it
even
though
the
medium
and
low
latency
viewer
did
not
get
it,
and
then
we
deliver
frame
four.
So
we
we
finally
managed
to
save
the
day.
F
We
patched
all
the
holes
in
the
Gap
just
in
time
for
that
viewer
didn't
see
any
issues
at
all
and
then
there's
a
there's,
a
fourth
one
added.
This
is
a
archive
worker.
It
could
be
like
30
seconds
delayed.
The
idea
is
that
it's
sitting
there,
it
will
just
scoop
up
all
the
frames
eventually
and
then
upload
them
to
storage
for
a
Flawless
of
odd,
so
the
same
stream.
F
We can have these different user experiences. So the real-time-latency user had the lowest latency, but they dropped video and a little bit of audio, which is usually required if you're doing real-time conferencing, but it's not ideal, I would say. In higher-latency situations, like medium latency, they just dropped a little bit of video, but audio was flawless. That's what we're looking for from Twitch, for example; users don't need real time.
F
Maybe ABR kicked in at some point, and they would notice that, but in terms of frame losses and frame drops, they didn't notice anything. And then, finally, the archive worker: it's just whatever latency you want; you can wait as long as you want. Well, RAM persists pretty much until the streams are eventually reset. Either side could reset: the sender could send a RESET_STREAM, and the receiver could send a STOP_SENDING to say, please just don't send me any more, I've already skipped over this content.
F
That's
mostly
just
to
say
bandwidth,
though
so
anyway,
with
all
that
in
play,
we've
got
this
combined
draft
I've
been
working
on
we're,
trying
to
combine
warp
rush
and
a
little
bit
of
quick
R.
The
the
main
concept
is
any
number
of
frames
per
stream.
Is
a
segment
there's
a
lot
of
bike
shedding
on
this
name,
so
don't
don't
get
too
attached
to
it,
and
the
delivery
order
and
dependencies
are
written
on
the
wire,
and
this
is
to
support
relays.
F
The
idea
is
that
these
dropping
decisions
can
be
the
same,
no
matter
where
the
congestion
occurs.
If
the
congestion
occurs
on
the
first
top
versus
the
last,
hop
should
be
the
same
experience.
F
You can fragment a little bit more. You can say: what if we send reference frames and non-reference frames as separate segments, separate QUIC streams? We have a little marker at the front that says, here's the delivery order, and maybe a little marker that says, here are the dependencies between streams. But yeah, we could break it further, because in this case we have these B-frames at the top that depend on each other, even though in the actual encoding they don't; you could break them into individual segments too.
F
You
can
break
it
down
as
far
as
you
want.
In
this
case,
we've
got
a
segment
for
an
individual
frame,
so
it's
not
fate
sharing
with
the
other
frames
when
it
doesn't
need
to
be,
and
then
finally,
you
could
just
break
it
every
frame
into
its
own
segment.
This
is
effectively
Rush
the
numbers
here,
like
the
delivery
order,
one
two,
three
four,
five,
all
the
way
up,
those
are
arbitrary.
F
It's
based
on
your
application
in
the
use
case
in
in
this
case,
we
want
to
deliver
the
newer
frames
first,
but
there's
certainly
use
cases
where
you
want
to
deliver
the
older
frames.
First,
in
our
case
like,
for
example,
you're
serving
an
ad,
you
don't
want
to
skip
any
of
the
ad
video,
so
you
you
kind
of
flip
it
around.
F
You
turn
on
the
head
of
line
blocking
and
say
that
actually,
the
first
iframe,
which
is
delivery
order
for
make
a
delivery
order,
one
make
that
the
first
one
that
we
deliver
so
there's
no
skipping
allowed.
So
there's
a
lot
of
business
logic.
We'll
kind
of
talk
about
this
at
some
point
in
the
future,
about
delivery
order.
F
There's
no
right
answer
effectively
to
how
you
prioritize
frames.
Somebody
has
to
decide
which
frames
are
more
important.
Somebody
has
to
decide
it's
audio
more
important.
Somebody
has
to
decide
even
like
which
Renditions
and
whatnot
are
important.
F
So,
what's
next
I
I
forgot
to
draw
a
bike,
but
this
is
a
shed.
The
the
names
are
really
hard,
so
we're
not
sure
if
we
should
call
this
warp
or
rush
or
meaver
quick
or
some
made
up
name
and
especially
the
name
of
segments
like
all
these
segments.
Are
they
layers?
Are
they
fragments
chunks,
media,
quick,
prefix
things
fubars
like.
G
F
And then I've been working on this draft, mostly; there's a GitHub repo, kixelated/warp-draft. I've filed a lot of issues, but there's a lot of core things missing, like a wire format: I've defined how media should be sent, but we don't even have a way of saying how the bytes should be written to streams yet. We're also trying to figure out how CDN support works. Right now the protocol is push-based; it's very simple WebTransport, you push, but we want to figure that out.
F
So
it
doesn't
really
matter
if
it's
push
versus
pull
so
long
as
you
send
media
the
same
way
over
quick
streams,
but
we
want
to
ideally
have
one
protocol
for
both
contribution
and
distribution,
and
we
need
to
figure
out
where
the
middle
ground
is
there
and
just
all
the
media
stuff
like
how
do
you
send
tracks?
How
do
you
choose
tracks?
F
How
do
you
do
you
know
which
codecs
which
container
even
the
current
draft,
has
like
cmap,
but
we
still
haven't
even
decided,
so
everything
that's
actually
required
to
make
sure
media
playback
works
is
currently
missing.
Just
just
the
networking
stuff
is
what
I've
been
focusing
on.
F
I've
also
been
working
on
a
demo
quick.video
demo
I
will
say
it's
very
crude.
It's
just
hosted
on
a
single
host
tiny
instance
in
US
West
2.
F
I'm
working
on
it
too,
but
there's
this
code
also
on
GitHub
to
show
how
this
works
and
some
examples
and
then,
finally,
with
adoption,
I,
don't
know
the
process.
I,
don't
think
this
is
ready
for
adoption
yet,
but
I
really
want
to
get
the
idea
from
the
room.
Is
this
the
right
direction?
How
do
we
feel?
F
A lot of contributors. So I'm Luke again, representing the Twitch folks; most of them are online. And then from Cisco we've got Suhas and Cullen, and from Meta, Kirill, Alan, and Jordi have been helping a lot with this draft, among others. I've seen there's a few folks who have left a lot of issues lately too. And that's all I got.
C
Speaking of contributors, the queue is quite full at this point. Sam, you're up first.
H
Good morning, Sam Hurst, BBC R&D. Thanks very much for the presentation, that was really cool, and kudos on the handwritten slides, I love that. So I have a question about the delivery order number that you were showing up there: is that just a constantly incrementing number as it goes through, or is that a more sort of dynamic value that changes depending on one particular instance in time?
F
So we still need to work that out. Right now it's one continuous number space; it's just a uint64. How Warp works in production, or how I've coded it, is it's effectively just the presentation timestamp, plus three seconds if it's audio, just to give audio a little bit of a head start. But effectively it's an arbitrary number, so long as the lower number is delivered first; in my case, I'm doing the presentation timestamp. But anyway, it's arbitrary. Okay, okay.
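Luke's described scheme can be sketched like this. The three-second audio head start is his example; the sign convention (lower numbers deliver first, so the head start is applied as a subtraction) is an assumption of this sketch:

```python
# Delivery order derived from the presentation timestamp, with audio
# shifted three seconds so it wins against video under congestion.
# Any monotonic scheme works; the numbers themselves are arbitrary.

AUDIO_HEAD_START_MS = 3_000  # Luke's example figure

def delivery_order(pts_ms, is_audio):
    """Lower values are delivered first."""
    return pts_ms - AUDIO_HEAD_START_MS if is_audio else pts_ms

# Audio at t=5000ms outranks video at t=4000ms, despite being newer.
assert delivery_order(5_000, True) < delivery_order(4_000, False)
```

The point is that the wire format only cares about relative ordering; the application is free to derive the number however it likes.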
I
Great overview; a gross oversimplification, like you said, but I think it's the right oversimplification for transport folks. One important aspect that I'm not sure was conveyed, which I think matters a lot for the transport folks to understand, is that there's a difference between congestion control and rate control, and even within congestion control.
I
There's also, you know, when you talk about prioritization, there's different concepts of prioritization: whether you're talking about your local queues, or whether you're talking about marking things properly so that the network queues handle your congestion the way that you intended. So first, I think we want to make sure that people understand that over long time windows, we're not using congestion control to instantaneously decide which packets are sent; the congestion controller's overall average bandwidth estimate gets fed up slowly to the app, and that changes the media encoding rate.
I
So you're not just going to drop individual parts of your stream; you're going to select a totally different stream encoding in order to meet the budget that your congestion control is telling you over a long time. That's not useful for an instantaneous decision, where the congestion control says: you know what, I know I'm not going to send this packet because I don't have a window open for this packet. So that's where an immediate priority can help, on a marking from the app layer to the QUIC layer.
I
It can drop something immediately or hold something back immediately, but the longer time windows, those are going to be controlled by rate-controller feedback to change the media encoding itself. So I think it's important to keep those things in mind, and also the difference between congestion control decisions to do something for internal queues versus how they get marked for network queues, because, you know, wireless, you know, WMM and mobile wireless QCIs, and we know wired DSCPs and things like that.
F
Yeah, one thing I didn't mention, and you rightly pointed out, is that these don't have to be the same bitrate, like these segments. The encoder, as it sees congestion, can start decreasing the bitrate in response. This is meant to be a response when you can't change the encoder in time, especially for contribution, like, you have to switch renditions at I-frame boundaries. But yeah, and prioritization is only as effective as how quickly you can detect congestion.
I
I think this order that you have, I think it came across in the presentation as something baked into the protocol, but that's an application-level thing: it can decide whatever delivery priority it wants for things, because for some people the user experience may be terrible if it's stuttering. That order that you showed before would be, under congestion...
I
Your video is constantly stuttering, right? You cancel something that's being played right now, and you try to play the live edge, and so you always get, you know, half a second of the live edge instead of a full second of continuous video. So I think that user experience is up to the app, to figure out what the right, you know, order and priorities should be, not something baked into the protocol.
C
Thanks for that. I will note that some of the discussion that's currently happening on the chat has touched on ABR and other things like that. So, if you're watching this on YouTube, please do go seek out that chat stream as well, which should be linked from the IETF site, because there's certainly some working group discussion happening in that, I'm sure.
J
F
I think QUIC streams are more than capable of doing even real-time latency, so long as, again, you have some way of deciding which QUIC streams to send first and which fragments to send first. So I don't think datagrams are necessary, which is a controversial statement, but yeah: by having more tight control over what the QUIC library sends, at the end of the day it's the same media getting fragmented.
F
J
K
L
I guess, and I'll add a BBC R&D handle on this too, the assumption here was a sort of traditional representation kind of breakdown of the video, as opposed to, like, SVC-type. So then, if we're thinking that, then are you gonna...
L
Is
there
an
ABR
stage,
or
is
this
really
the
ABR,
because
I
guess
that's
happening
in
webrtc
already
with
the
SFU
approach,
where
you
just
pull
things
out,
I
sort
of
wondered
you
thought
about
that
because
it's
just
sort
of
seems
to
be
quite
sort
of
low
level,
ipb
based,
but.
F
So, first off, we're doing ABR using this approach. The idea being, the sender chooses which of these segments to send, so based on the congestion window it can switch down a rendition. You did mention SVC; let me very quickly switch to that, just in case somebody asks.
F
This is an example — don't overthink this — but the idea is you could have a base layer, like 360p, and then you can have segments that build on top of it. So for people who don't know SVC: you have a base layer, 360p in this case, and you could have enhancement layers on top. So pretty much just sending SVC layers as QUIC streams is more or less this proposal, but I don't know enough about SVC; this is just an idea.
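[Editor's note] The SVC-layers-over-prioritized-streams idea described above can be sketched roughly as follows. This is an illustrative sketch only, not any MoQ draft's actual mechanism; the layer names, bitrates, and the `send_plan` helper are all assumptions for illustration.

```python
# Hypothetical sketch: choosing which SVC layers to send, honoring
# priority order and inter-layer dependencies within a bandwidth budget.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Layer:
    name: str
    bitrate_kbps: int
    priority: int                       # lower value = delivered first
    depends_on: Optional[str] = None    # base layer this enhancement needs

def send_plan(layers, budget_kbps):
    """Pick layers to send this interval: walk in priority order and
    skip any enhancement layer whose dependency was not selected."""
    chosen, used = [], 0
    for layer in sorted(layers, key=lambda l: l.priority):
        if layer.depends_on and layer.depends_on not in [c.name for c in chosen]:
            continue  # an enhancement is useless without its base layer
        if used + layer.bitrate_kbps <= budget_kbps:
            chosen.append(layer)
            used += layer.bitrate_kbps
    return [c.name for c in chosen]

layers = [
    Layer("360p-base", 500, priority=0),
    Layer("720p-enh", 1500, priority=1, depends_on="360p-base"),
    Layer("1080p-enh", 3000, priority=2, depends_on="720p-enh"),
]
```

When the budget drops, the lower layers survive first, which is the end effect the proposal is after.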
L
Right, okay, yeah. It seemed like it would naturally extend to it in a way, but then in another sense, maybe you wouldn't necessarily need this — this would become the kind of ABR.
L
L
C
And just to note, we're about to lock the queue. So if you feel like you want to be following up on this, please get in the queue now. Christian.
M
Nice presentation, Luke. I mean, I like the simplification; it makes the thing easy to understand. One thing that is missed in the simplification is the effect of this segmentation on relays, because if you have a set of streams as you describe, then you are going to get head-of-line blocking on these streams.
F
M
F
C
K
K
N
Okay, is it better now? Cool. So thanks, Luke, for the presentation on trying to combine good things from all the protocols, and I agree this is a good start. We have quite a bit of ground to cover to make it more usable, especially thinking of the end-to-end protocol — not just the transport, but also the relays and the consumption side: how do you ask for things, what do they all depend on?
N
How do you break things down to start with, so that we can define the internal protocol? I think we need to cut some of those pieces in, and I want to talk about a couple of things that were raised by someone else, on delivery and on priority. I think there are two different concepts: priority is more like saying what —
N
— how important one thing is relative to another, and delivery order is more like how my decoder wants to get the data in, so that it can do some good work. And probably we need to spend some more time thinking about how relays would understand the data, without baking too much into the protocol — leaving it to the application space to define how it is done would be helpful. And another point on the datagrams: I agree, at some point —
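[Editor's note] The two concepts distinguished above — priority (relative importance across tracks) and delivery order (the sequence the decoder wants within a track) — can be sketched as two separate fields a relay sorts on. The field names and the `forwarding_order` helper are illustrative assumptions, not MoQ protocol fields.

```python
# Illustrative only: a relay drains higher-importance objects first,
# and within equal priority respects the decoder's delivery order.
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedObject:
    priority: int        # e.g. audio=0, video=1 — app-chosen importance
    delivery_order: int  # decoder-friendly sequence within the track
    payload: bytes = field(compare=False, default=b"")

def forwarding_order(queue):
    """Sort by (priority, delivery_order); payload is ignored."""
    return sorted(queue)

q = [
    QueuedObject(1, 2, b"video-frame-2"),
    QueuedObject(0, 0, b"audio-0"),
    QueuedObject(1, 1, b"video-frame-1"),
]
```

Keeping the two numbers separate is what lets "audio beats video" coexist with "P-frames after their I-frame" without conflating the two.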
N
— as a group, we have to make a decision on how we meet the use cases that MoQ is chartered for. HTTP does not have to do real time, so for it that's okay, but MoQ is not constrained by that; we have more use cases. We might not do datagrams — it's a working group decision at some point — but we need to ask: can whatever protocol we develop, on whatever transport, work for the use cases? Thanks.
D
P
Okay, first I'd like to say I think it's good that you're building on WebTransport; that solves a lot of multiplexing issues. I think you should allow for either streams or datagrams. I think streams are fine to use, but it makes sense to allow the client to do either, and I believe that's the direction RTP-over-QUIC is taking. During your presentation you mentioned that you haven't defined the media protocol at all — are you looking for contributions on that front?
F
Of course. If anybody wants to help, either on the protocol or on the demo, or just wants to talk to me, go for it. Looking for contributions.
D
D
D
Q
Okay, I notice you have the SVC example here, and for the case where you're not using SVC but just feeding multiple different resolutions or bitrates up, I'd be interested to see how that maps here. But I also notice that all the playheads you've defined in your other diagram appear to be at the same location — as in, all these things are arriving at that location and you have different playheads, which appear to be running off the same buffer, stream, whatever. Okay.
Q
Is that actually what's anticipated here? That would seem odd, except for maybe the recording stream's recording head. Or is there something more complex going on here? I think it is actually getting fed out to two different locations — which I imagine it is — which may be consuming different portions of a multi-layered, multi-bitrate stream.
F
Okay, sure. So first off, for simulcast — which is sending multiple renditions that don't depend on each other — it's effectively this SVC example; you just remove those arrows between layers. So in this case you'd send the 360p rendition at the highest priority, it will be delivered first, and then any extra bandwidth will be spent on the higher-quality ones. So sending multiple renditions works very similarly to SVC; there are just no dependencies between segments and layers. And then, finally, for the playheads: you would not be receiving all of those — that's just a visualization.
F
The idea is that different viewers will have different network conditions, so they all have different starvation and congestion. I just wanted to explain that even with the same network situation, you could have different experiences based on the size of that jitter buffer, and the size of that jitter buffer depends on the use case and the desired user experience — everybody would get a different experience. Yeah.
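[Editor's note] The jitter-buffer trade-off mentioned above — a larger target delay smooths playback at the cost of latency — can be sketched minimally. The class and its policy are illustrative assumptions, not part of any MoQ draft.

```python
# A minimal jitter-buffer sketch: hold frames until `target_delay`
# (same time unit as timestamps) has elapsed since capture, then
# release them in timestamp order.
import heapq

class JitterBuffer:
    def __init__(self, target_delay):
        self.target_delay = target_delay
        self.heap = []  # min-heap of (timestamp, frame)

    def push(self, timestamp, frame):
        heapq.heappush(self.heap, (timestamp, frame))

    def pop_ready(self, now):
        """Release every frame whose playout deadline has arrived."""
        out = []
        while self.heap and self.heap[0][0] + self.target_delay <= now:
            out.append(heapq.heappop(self.heap)[1])
        return out
```

Two viewers with identical networks but different `target_delay` values see different experiences, which is the point being made.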
D
C
Thank you. I think that brings us to the next-steps question here. The discussion has been great today; we've definitely kicked off a lot of different issues, both at the mic line and — again I'll point out — during the chat, almost a distinct set of issues from the ones we've gone through. Luke, I encourage you to go through the chat, and anybody who's watching us post facto to do the same. We're definitely not going to do a call for adoption —
C
— today. It's pretty clear there's a lot of stuff that people need to hash out first, but please do keep the discussion going on the list. Even though we will be talking about an interim later on in this meeting, we really want to make sure the pace of discussion isn't limited to these meetings. There's definitely a lot of this that you can do either in mailing-list discussion or on the issues in the GitHub repo. So thanks very much.
C
Our next thing on the agenda is a different presentation. Cullen, do you want to come up?
D
C
And by the way, if you have not actually signed in to the tool because —
C
— you were going to get up or something like that, we'd really appreciate you doing so, because this is a pretty tight fit for a room, and we'd love to have a little bit more spacious space when we get to Yokohama.
R
Whoa, I forgot I had to stand in the pink square here. Okay, awesome. Thank you. Is that working okay? When I step away from the mic, abuse me. So I'm going to talk a little bit about the relay stuff here and — whoa, huge lag.
R
The goal of this discussion is really around understanding the commonality of some of the questions that come up in every one of the relay drafts. A lot of these drafts had relays, or things that are a lot like them in some ways, so I want to try and hit some of the similarities and differences.
R
There are lots of more nuanced things — some even came up in the previous conversation — which unfortunately I don't really talk about at all in here. But this is a first working group meeting; we're trying to figure out what the big things are, get them moving along, and get us all talking on the same page and having a good discussion about what needs to be done. I loved the last discussion: lots of good ideas came up, and it wasn't about A versus B.
R
It was "this is a property that we need in the end system we're going to build" — and that's a great way to think about stuff at this stage of a working group.
O
R
Where do relays fit in here? In every one of these systems we're talking about, on the ingest side our producers are creating some content and pushing it over some protocol up to something — you can call it a server, a relay, a CDN, but it's going up to something; we'll talk a little bit more about that in a second. On the other side there's something, probably in the cloud, and again it may be a CDN, it may be a relay, it may be —
R
— however you think about it, an SFU. These types of things are bringing the data down to the consumers. And the key point here is we're not trying to define exactly how the CDN or relay internals work. What we're trying to make sure is that the protocol we're defining here — which goes from the A's to the X's, or from the Y's down to the C's — has enough information in it that the X and Y devices, whatever they are, have the information they need to achieve the —
R
— end-to-end user experience that we want to achieve here. Okay, so there are some things in there that you end up having to know. If consumer C has twice the bandwidth of consumer D, and producer A is making things in different resolutions or whatever, there's some information you need in here that helps this work. It's not much information — we're going to get to that later.
R
We'll talk about the security on this stuff later. So I wanted to hit a little bit of the terminology on this slide. I'm defining, just by definition, that the ends are the producers and the consumers — things that have the ability to encode or decode the video, decrypt it —
R
— and see the media, those types of things. The relays in the middle here are not ends, they're middles, so they cannot do things like that. And when we start talking about the strong requirement for end-to-end security that we have in our charter, we're talking about the producers and the consumers being able to do it. If there were a transcoder in the middle of this network, it has to access the media, obviously, which means it's an end.
R
A transcoder is probably both a producer and a consumer of different streams, and is an end. You know what, I think my audio is really messed up; I'm going to take my mask off here. So that's what I mean by ends and such.
R
Back a slide, maybe. Okay, so we do get the question of who's going to provide these relays — that was one of the questions raised a lot in the beginning: will anybody build this type of stuff? There will probably be a range of different people doing it. There are some companies that have a CDN or something like it today — Twitch is an example of this, right? They have this, they operate it themselves, they can deal with this stuff.
R
There are companies like WebEx, part of Cisco, that today run their own media nodes near the edge, pushing them out to all of these places, but would rather run that on somebody else's infrastructure and buy it. There are companies like Akamai, Cloudflare, and Fastly that have all talked to me about MoQ in various forms. And there are companies like InterDigital that have drafts here looking at relays tightly tied to 5G network optimizations, so they can provide a better experience for the media than you could otherwise. So there's a range of those types of players. Do I have a queue built up here?
R
Warn me if I do, because I will stop if we have questions. So what really drives the need for these relays? Why do all these existing networks exist? Well, there are a bunch of different reasons. The two biggest ones for me: one is that on the distribution side you need fan-out. You need some device that can take your content —
R
— and instead of just going to one person, spread it out to hundreds of people or millions of people, and scale that up. These are a convenient way to do that type of thing, and the closer you move that, you can have some bandwidth savings and some other benefits as well. Then there's another point: you're trying to get these relays closer to the consumers or the producers.
R
So let's talk about the consumer case for a second. The closer it is, the less round trip there is between the device consuming the media and the device that has a valid copy of the media you're trying to receive. The more you can squeeze the time it takes to get a message back and forth between those, the more you can use QUIC streams and things like that to retransmit data you lost and still not have it slide out, and still not have —
R
— it extend the latency of your overall delivery of the media from a glass-to-glass point of view. This is why you see so many CDNs — any of the major media networks today, even just doing DASH or HLS or something like that. I mean, you don't try to deliver video to users in Australia by using a CDN server in London; that's just not a very effective way to do it.
R
You try to get it close, so that when there's loss on a local LAN link, there's a much faster place from which you can say "I lost this packet, please retransmit it" and get it back. Now some people, like the 5G people, want to extend that much farther out to the edge than we've traditionally seen in today's networks, and that's one of the changes that allows more bandwidth.
R
So a few operational requirements come out of this. You want to minimize load and make relays easy to scale — the normal sorts of things in developing any cloud server. You need to make them configurable, but fundamentally they're making decisions about what to forward and what to drop. That's really what they do: they take some packets in, they send some packets out, and they decide which ones go in and out. A lot of the requirements drafts —
R
— showed some sort of requirement to cleanly hand off a client to a new relay, to reschedule relays or rebalance load, or take something out of service — the sort of "go away" message you see in several of the protocols, a relay-redirect type of idea. Low latency is obviously the whole point of this sort of thing, along with rapid recovery from failures. These are all operational requirements we're going to have to figure out how our protocol will support.
R
Again, I'm just defining this as terminology, by definition, not some logical sort of thing: I'm splitting the stuff that relays see — what goes across the protocol — into an envelope and a payload. Okay. The payload is the video content, the audio content, whatever the media is. It may be encrypted, it may be DRM-protected, or it may be totally unencrypted as well. But the key point is the relay can't depend on being able to look into the payload to see it.
R
The envelope data, on the other hand, is really what this presentation is mostly about, and this is the stuff the relay needs to see. In an IP sense, you need to see your source and destination IP addresses or a router can't route the packet — it's that type of stuff.
R
Obviously our goal here will be to put the minimal stuff we can in the envelope, for a lot of reasons. That's just the terminology I'm using to define those two different chunks of data that we need to deal with.
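[Editor's note] The envelope/payload split described above can be sketched as a small framing exercise: the relay parses only the envelope and treats the payload as opaque bytes. The field layout (three 32-bit fields) is an illustrative assumption, not the MoQ wire format.

```python
# Sketch of the envelope/payload split: a relay may read the envelope,
# but the payload stays opaque (possibly end-to-end encrypted).
import struct

def encode(track_id, object_id, priority, payload):
    # envelope: three network-order unsigned 32-bit fields, then payload
    return struct.pack("!III", track_id, object_id, priority) + payload

def decode_envelope(message):
    """What a relay is allowed to read: the envelope only."""
    track_id, object_id, priority = struct.unpack("!III", message[:12])
    return {"track": track_id, "object": object_id,
            "priority": priority, "payload": message[12:]}

msg = encode(7, 42, 1, b"\x00encrypted-media")
env = decode_envelope(msg)
```

The relay's forwarding decisions use `track`/`object`/`priority`; nothing requires it to understand (or be able to decrypt) `payload`.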
R
I don't want to delve deeply into what all the drafts do on this, but Warp had an ID for a stream and some delivery-priority stuff, perhaps a dependency list. The deploy draft had a similar set of stuff, with a bunch more data that might help it work in a 5G context. QuicR had some different things, with time-to-live or timestamps and a discardable flag. But the exact details of all those drafts don't really matter; what matters is what we start figuring out.
N
R
R
I was thinking about, you know, your 360p video being more important than an adaptation layer that adds 720p on top of it. But as Luke pointed out too, if you have advertising video and non-advertising video, your business logic may be that the ads are more important, right?
R
So there's some application-layer stuff in that type of idea, and then there's the stuff that has to do with delivery order — some sort of numeric counter for these things. That was a lot of what Luke was talking about in motivating and explaining his draft: look, if you have a P-frame that depends on an I-frame, but you can't deliver the I-frame, there's really no point in delivering the P-frame.
R
That data is useless; there's nothing you can do with it. So there's some sense of a natural ordering that way. There's also data that is so old it is useless — that's really what we mean by "real time" when we talk about real-time video: there's stuff that's too old to be of any use. And so the Warp drafts took the approach that the newest thing was always the most important in this sort of context.
R
That might be completely right, or mostly right with some fine print and hand-waving, and I'm sure the working group will dig into that deeper as we hit it. But we need some way of understanding the decoder order of these types of things — what's needed at the other end and what makes sense to be delivered. This is similar to RTP, where you had sequence numbers.
R
We have different stream or segment IDs — whatever we call them: streams, segments, layers, fragments. We need some way of ordering those, and we need to decide how that ordering number relates to the priority of the data being sent, within the broader priority scheme where audio is more important than video. So I see these as two different things. Conceptually, everybody clearly agrees there are two different things, and we need both of them here.
R
How they get mapped to bits — whether as one thing or two things — we can debate later, but I think the key thing for us is to understand what we want the protocol to deliver, and that we want relays to be able to look at this type of information and deal with it as they forward something on. And remember —
R
— if a video stream was sent up in multiple renditions, one at one megabit and one at two megabits, and there's a relay that has access to both of those, we will still be doing the normal thing of clients connecting and asking for the one that has the right bandwidth for what they need.
R
None of this takes away from that, but you still need to understand what's the priority to deliver inside of that. No questions? Okay, so this is actually what I think is the hardest thing to land in the envelope: the dependency indication.
R
Yes. So one way we could do this — one of the Warp drafts had a proposal for it — is a dependency list, where we could say these chunks or segments of data depend on these other chunks, by named ID, and relays could look at that to figure out what to do with them.
R
Some of the other drafts, QuicR being one of them, used the idea that there'd just be a priority: audio is priority one, 360p is priority two, 720p is priority three. The application would pick those priorities, and with a small set of numeric priorities they would be able to get the end effect they wanted on the relays.
R
So one of the questions the working group will have to figure out is: should we have an explicit dependency list, or can we get away with just having this sort of general priority idea for the relays — and what do we need to do there? That's one of the open areas.
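[Editor's note] The explicit-dependency-list alternative discussed above implies a simple relay rule: if anything a segment depends on (directly or transitively) was dropped, the segment itself is safe to drop. A minimal sketch, with illustrative segment IDs and a hypothetical `droppable` helper:

```python
# Hedged sketch of the "explicit dependency list" idea: a relay can
# drop any segment whose dependency chain includes a dropped segment.
def droppable(segment_id, deps, dropped):
    """deps maps segment -> list of segments it depends on;
    dropped is the set of segment IDs the relay already discarded."""
    return any(d in dropped or droppable(d, deps, dropped)
               for d in deps.get(segment_id, []))
```

The scalar-priority alternative avoids carrying `deps` on the wire at all, which is exactly the trade-off the working group is being asked to weigh.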
R
Caching is another area — the QuicR drafts assumed sort of this, but we could or could not have it. It's not very important on the ingress side; this is all about the distribution side, where it's probably more important. On that side you come to the question: is it useful for one of the relays to have a short-term amount of media data —
R
— available to it? This can help with tasks like: when a client first joins, the previous I-frame is already there in cache and they can get it, instead of waiting for the next I-frame to be transmitted and sent into the system. It could be pushed a little bit farther — some interactive applications, Skype long ago, even sort of did it — various applications have played with that idea.
R
R
We could or could not support those. If we did this at all, I'm sure it would be optional for the relays to ever cache more than zero bytes of data, but caching, and whether we want to deal with it, is one of the issues we'll have to touch on on that side. It could be done as a completely separate extension draft — it's those types of things.
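[Editor's note] The late-joiner use case described above — a client joining mid-stream and getting the previous I-frame from cache — can be sketched as a relay keeping only the objects since the last keyframe. The class and its eviction policy are illustrative assumptions.

```python
# Illustrative short-term relay cache: retain objects from the most
# recent keyframe onward, so a late joiner can start decoding at once
# instead of waiting for the next keyframe.
class RelayCache:
    def __init__(self):
        self.objects = []

    def add(self, obj, is_keyframe):
        if is_keyframe:
            self.objects = []   # start a fresh group at each keyframe
        self.objects.append(obj)

    def on_join(self):
        """What a newly joining client would receive from cache."""
        return list(self.objects)
```

A relay that caches "zero bytes" simply returns an empty list here — the optionality the speaker mentions.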
R
Let's see — security and privacy. So, as I said at the beginning, the relays can only read the envelope data, and other network elements — routers along the way — can't. This is all inside a normal QUIC TLS connection between the consumers or producers and the relays. So this doesn't change QUIC TLS in any way: everything's encrypted over the network with QUIC TLS, exactly as you would expect, and exactly like today's CDNs as well. On a classic CDN —
R
— today, every connection might be encrypted with HTTPS, and the media being moved around inside of that connection may be DRM-encrypted, right? So I think we're trying to very much follow that same model that's worked in that space. Now, one of the things about this that we do need to hash out is: do we need techniques for dealing with integrity of the envelope data?
R
Oh sorry, I took the slide out; I'll go back one. Okay, so my view is that the relays shouldn't ever change envelope information: they can read it, but they shouldn't change it. And if you were on a system that chose to use MLS group keying, then you could actually provide real integrity protection of the envelope data at the ends. It's not like the relays in the middle would do the computational work to check it, probably, but they could. So, Spencer.
R
A
A
Spencer Dawkins. The too-long-didn't-listen version is "yes," but the thing that seemed helpful for me to be asking about is what a clearer, more useful name for what are now called relays would be — probably one that describes the functionality. We talked recently about me asking: is this a general-purpose relay? I think not. And if it's not, what kind of a relay is it? After seeing that part of the discussion in your slides — which were incredibly helpful to me, thank you — I think what you were saying was that this would be a relay that relayed whatever: Rush, Warp —
R
R
R
R
It is an incredibly general thing. What this working group is really talking about building, from day one, whether you think about it that way or not: as soon as you talk about moving media over QUIC — or media over anything in some real-time way — we are talking about building a better set of ways to move real-time data around the network, and it will turn out to be incredibly generalized if we get this right.
A
C
So, from a chairs' perspective, thank you very much for the intervention, but I'm going to ask you not to actually spend a lot of time talking about names, because —
E
Right, what I —
C
I think what you were asking for is a description of relay functionality. That's absolutely cool, but we don't want to spend a lot of time on names, because it's an enormous bikeshed — and I will note that the queue ballooned the minute you said "names."
C
People no doubt have great ideas for their favorite name. If you're in the queue to talk about the name, please get out of the queue; if you're in it to talk about relay functionality, welcome — we look forward to your insights. But we can keep talking about this as the MoQ protocol; a good name can come in the future.
D
D
B
Or do you want to try to finish it up and then take all the questions? Please.
R
I'll do that — okay, let me get back to where I can actually do that. Okay, one more slide here. So this slide undersells the problem, but when the MoQ client connects over the MoQ protocol to the MoQ relay, we need some idea of what is used to authenticate and authorize, and to allow the relay to decide that it is willing to forward this traffic in a meaningful way.
R
Basically, what I had initially written on this slide was that there'll be an authorization token in some form, and really I think we'll come around to that: there's an authorization token that the application — the consumer or producer — got in some out-of-band way, which allows it to pass the token to the relays.
R
It probably also found, in the same out-of-band way, which relays it might want to use, and the relays accept that. Now, that is such a fluffy hand-wave at the whole problem that it's completely useless — okay, I recognize that — but I think we're going to end up with something along those lines. Now, option two — I forget who proposed that to the list right now, I'm just blanking, I apologize — oh, it was Will. So Will said: hey, look —
R
— actually, we've had this problem lots; we've thought about it lots in existing CDNs, and there are some really good approaches to steal from that have a bunch of advantages over what was being discussed. So as an individual I'm very much in favor of going down this sort of number-two option, where we take the same ideas that have proven very scalable in today's HTTP CDNs and apply them to these MoQ CDNs.
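[Editor's note] The CDN-style approach alluded to above typically means a signed, expiring token the relay can verify statelessly — no per-client shared secrets at the relay. A hedged sketch: the token layout, secret provisioning, and helper names are illustrative assumptions, not anything from the MoQ drafts.

```python
# Sketch of CDN-style stateless authorization: the origin issues an
# HMAC-signed token over (track, expiry); any relay holding the shared
# secret can verify it without per-client state.
import hmac, hashlib

SECRET = b"relay-shared-secret"   # provisioned out of band

def issue_token(track, expires):
    msg = f"{track}|{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest().encode()
    return msg + b"|" + sig

def verify_token(token, now):
    msg, _, sig = token.rpartition(b"|")
    track, expires = msg.decode().split("|")
    ok = hmac.compare_digest(
        sig, hmac.new(SECRET, msg, hashlib.sha256).hexdigest().encode())
    return ok and now < int(expires)
```

This is essentially the signed-URL pattern HTTP CDNs use today, transplanted to a MoQ-style relay.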
R
C
Okay, before we do, I just want to do a time check for people. It is 10:45; we close at 11:30, and we have a pretty hard stop because they need to set the room for the next session. So please make your interventions pithy and be ready with quick responses. First is Emma.
D
Nice presentation. A quick question on the last slide: you said to minimize the information the envelope should carry for the relay, but we should not leave it at that, because some developers love to have everything. If we leave it as it is, it will be vague, so I think we should specify what information should be available and what should not be there. Plus one. Okay.
S
R
Oh okay, let me paraphrase what you're saying and make it a little tighter than what you said — but I agree with you. Obviously, if we're doing an application like Meetecho, we can definitely make the relays not have access to any of the media content, for sure, and have them used to help scale out something like Meetecho. But Meetecho also probably requires an SFU somewhere, and these relays are not that, 100 percent.
R
S
Yeah, that works for me. There's got to be enough metadata to allow the switching to happen — got it. The other thing, on the authentication piece: if I've got a large fan-out, and I've got Fred who's been creating data, and we're using a MAC or whatever to authenticate, I don't want Fred to have to have a million pairwise shared secrets with all the endpoints.
S
R
Okay, I think that is a hot and deep area. You're not wrong, of course, and I think that'll be a great topic for the working group after we get past some of the easier stuff. You're totally right, but people aren't even ready for it — it's going to explode their heads, though.
T
Hello, Robin Marx, Akamai. I've seen Luke and you talking about priorities for streams and dependencies between streams, and doing that not just at the origin but also in between, and it starts to smell a lot like the HTTP/2 dependency tree, which is a notoriously bad idea that we're still suffering from today — especially people like me. I think this is too complex for the new extensible priorities that we have in HTTP/3.
I
Very quickly on the dependencies: I think if you look at what we do in RTP, a lot of our assumption is that sequence numbers are always implicit dependencies on the things before them. The difference here is that now we're talking about independent QUIC streams, and is there going to be some sort of signaling of the dependencies across those QUIC streams? Because they may not truly be independent — they may be, in the SVC case, different layers in each of the streams, or related for various other reasons.
I
You may have different encodings that split things up across streams, and you might be able to describe the dependencies of those streams. I think it's pretty simple to have a simple and flexible way to describe those without too much complexity — we have the frame-marking draft, we have AV1 dependency descriptors, and you're talking about a simple one-byte thing that gives you almost everything you need to know. But within a stream, I think it's simplest to keep the basic rule: everything is in sequence.
I
What came now depends on everything that came before in that stream — that's a simple assumption to start out with. And then secondly, I'd like to understand from the Meta and Twitch folks: you must already have some relays — I assume just your own in-house relays. If you had to replace those with Akamai, Cloudflare, Fastly, whatever, what would you do differently, or what would you have to encode in the protocol to work with a third party?
I
G
Yeah, it's interesting around this envelope information and the security. If you're actually going to sign it, etc., it needs to be stable across the whole system, which has been a classic case of what doesn't work in RTP, because you have leg-by-leg — basically endpoint-to-endpoint-dependent — signaling. So you're actually going to have to ensure that you have a global namespace for identification.
L
Piers O'Hanlon again. Just going back to the ABR stuff: you have this very fine-grained behavior happening per stream, at the representation level, but then the ABR is sitting in some kind of browser sandbox, trying to measure throughput through a JavaScript API and making an ABR decision there. There seems to be quite a big disconnect in the amount of information.
L
You've got this very fine-grained stuff happening per rep, but then for switching reps you've got sort of crappy information — quite messy timing and data-arrival stuff. The Streams API, for example, doesn't even give you a timestamp when you get a callback, so trying to measure that — I'm thinking of signals or something like that from relays that could then help with that.
R
A cool idea, getting some signals from the relay to help with that; I hadn't thought about that. I had thought a lot about the fact that this is all running in user space: QUIC and all of this is running in user space in the application. I was hoping that this working group at some point would be sending some requests over to the QUIC working group to expose some API information that we can use, for example the current bitrate estimate that the QUIC layer has, which would be really useful.
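To make the suggestion concrete, a bitrate estimate exposed by the QUIC layer could feed an application-level ABR loop roughly as sketched below. This is purely illustrative: the `TransportStats` field and the rendition ladder are invented assumptions, since no such QUIC API exists today.

```python
# Hypothetical sketch: an ABR loop consuming a bandwidth estimate that a
# QUIC stack might one day expose. Field names and the bitrate ladder are
# invented for illustration; they are not part of any real API.
from dataclasses import dataclass

@dataclass
class TransportStats:
    # Assumed to come from the QUIC congestion controller.
    estimated_bandwidth_bps: int

# Illustrative bitrate ladder, highest first (bits per second).
LADDER = [("1080p", 6_000_000), ("720p", 3_000_000),
          ("480p", 1_200_000), ("240p", 400_000)]

def pick_rendition(stats: TransportStats, headroom: float = 0.8) -> str:
    """Choose the highest rendition that fits within a safety margin
    of the transport's bandwidth estimate."""
    budget = stats.estimated_bandwidth_bps * headroom
    for name, bps in LADDER:
        if bps <= budget:
            return name
    return LADDER[-1][0]  # always fall back to the lowest rung
```

With a 4 Mbps estimate and 20% headroom, for instance, this sketch would select the 720p rung.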
L
Yeah, that's a constant kind of thing. The Network Information API has been sort of in and out; it's not really flying anywhere apart from Chrome, so that kind of thing is tricky, it seems.
R

R

L
Yeah, it's true, although in some ways there is actually information at the server, at the relay, that the client doesn't have, like, effectively, the cwnd. There has been transport info that basically tried to expose that, and we've got some stuff in CMSD that potentially does something like that.
R

C
Okay, so we started out with twelve in the queue; we've had seven people at the mic, and now we have ten. So I would like to say we're going to close the queue very quickly here: if you think you're going to have comments on this, probably go ahead and get into the queue now. Will, you're next.
U
Thank you. Will Law, Akamai. I'll be quick. I agree with the vast majority of the points presented here. I think Mark has the luxury of looking back at 12 years of DASH and HLS, which used relays in a very similar context, and asking what we should have added to those formats at the very beginning that would really help with operations today. There are two aspects of that which are not the attractive ones we like to talk about.
U
One is request tracing: being able to debug. As soon as you're going through components owned by different people, in multi-CDN setups, how do you know where a problem is occurring? So: request tracing, tracking where a request went. And then also logging. If you ask people today what a QUIC or WebTransport log looks like, you're going to get 50 different answers, so we might start thinking about standardizing that, so that if you are collecting logs from five different CDNs operating your relays, they're going to be consistent.
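One way to picture the tracing-plus-logging point is a trace ID minted at the first hop and carried unchanged through every relay, with each operator emitting records in one agreed shape. The field names below are invented for illustration (a qlog-style schema would be the realistic starting point); this is a sketch, not a proposed format.

```python
# Illustrative sketch of cross-CDN request tracing: every relay emits a
# structured log line sharing one trace_id, so records collected from
# different operators can be joined. All field names are assumptions.
import json
import time
import uuid

def new_trace_id() -> str:
    """Mint a trace ID at the first hop; later hops must reuse it."""
    return uuid.uuid4().hex

def log_record(trace_id: str, hop: str, event: str, **details) -> str:
    """Serialize one log event in a shared, operator-neutral shape."""
    return json.dumps({
        "trace_id": trace_id,   # identical at every hop
        "hop": hop,             # which relay/CDN emitted this line
        "ts": time.time(),
        "event": event,
        "details": details,
    })

# A request traverses two relays run by different operators; both log
# lines carry the same trace_id and can be correlated afterwards.
tid = new_trace_id()
line1 = log_record(tid, "cdn-a-edge", "request_received", track="video/hd")
line2 = log_record(tid, "cdn-b-mid", "cache_miss", track="video/hd")
```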
C
Corey, since he's our minute taker: you can ask him whether he got all of that.
V
I tried, yeah. Going first, I was just going to say: we'll talk about the intermediary being a relay, but there's probably some interaction with the network that might be possible. Can we take that to the transport area, TSVWG or something, and talk about it completely separately from talking about the relay? If you're exposing things in the relay, then maybe exposing something to the network might turn out to be the right thing to do.
W
Yeah, Stephan Wenger. Regarding the dependency indication question you posed: this is very, very tricky. One reason why previous dependency indications in the video coding standards never worked out (H.264 had two bits for some priority thing) was that there was no defined behavior for what to do with those two bits. Therefore people set them however they wished, and no one cared about them in the end, even if the encoder chose to set them.
W
This particular problem is even harder for this protocol, because we now have different media types, some of which we don't even fully understand yet, and the lifetime of this protocol is probably longer than our full understanding of what you need, say, for haptics-type priorities or whatever. So my recommendation...
W
There
is
to
have
a
single,
one-dimensional,
now
number,
not
any
for
complex
dependency,
graphs
with
B
frames
or
whatnot,
but
the
single
number,
with
a
limited
range
and
a
defined
behavior,
of
how
a
Gateway
reacts
to
this
number
and
then
the
encoders
and
the
the
sending
mechanisms
can
figure
out
how
to
set
this
accordingly.
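As a rough illustration of the "single number with defined behavior" idea, a relay drop policy might look like the sketch below. The zero-is-most-important convention and the byte-budget policy are assumptions chosen for illustration, not anything the speaker or a draft specified.

```python
# Sketch of a one-dimensional priority with a *defined* relay behavior:
# under congestion, forward highest-priority objects first and silently
# drop the rest. Priority 0 = most important (an assumed convention).
def relay_forward(queue, budget):
    """Given (priority, payload) pairs and a byte budget, forward as many
    objects as fit, most important first. Returns forwarded payloads."""
    forwarded = []
    for priority, payload in sorted(queue, key=lambda item: item[0]):
        if len(payload) <= budget:
            forwarded.append(payload)
            budget -= len(payload)
    return forwarded
```

The point of the defined behavior is that an encoder setting the number knows exactly what a congested gateway will do with it, which is what the H.264 priority bits lacked.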
F
Luke from Twitch. I was just up there, but my day job is on the CDN team, so I've thought about this quite a bit. One thing that I'm grappling with internally is just how successful HTTP and HTTP CDNs have been because they're stateless. The idea being: we have millions of viewers across thousands of streams, and we don't actually need to keep track of them.
F
It
just
kind
of
funnels
all
the
way
through
so
I'd
like
to
have
more
thought
about
how
much
state
do
we
really
need
in
the
system?
How
much
state
do
we
need
for
media
for
quick
and
how
much
can
we
defer
like,
for
example,
slide
one?
The
origin
is
the
final
decision
maker,
probably
wouldn't
work
at
scale?
R
Luke, before we walk away: I totally agree, but I constantly have people tell me it's stateless, and I'm like, oh, how do I request something? They're like, oh, you give us the URI, and from the URI we know, given the state we store, what to deliver to you. I mean, stateless is a wonderfully bizarre term to use in the context of a file server, right?
R
So
it
is
stateless,
but
I
think
that
the
thing
that
the
point
100
agree
with
you
on
is:
we
have
to
really
carefully
think
about
what
the
state
is.
There's,
TLS,
State,
there's
all
kinds
of
State
in
the
servers
we
need
to
really
think
about
the
state
and
how
we
make
high
reliability,
cheap,
scalable
systems.
You
know
how
we
engineer
that
and
get
it.
F

J
Yeah, I think another point is that a QUIC stream is one-to-one. I think there is a draft about multicast QUIC; you need to make QUIC become one-to-N. If we use that, we may be able to do end-to-end QUIC, and for the metadata, maybe we can put it outside of QUIC. Of course you need to encrypt it, yeah.
R

X

J
Yeah, and I think end-to-end QUIC may enable very nice scaling, because the relay doesn't have to decrypt all the QUIC packets; it only needs to look at the metadata. It's a very small amount of computation, maybe.
R

J

Q
Jesup, Mozilla. Plus one to what Stephan Wenger was saying; I was going to say something similar. I think what this does is help push the complexity of the choice of the priorities and so on out to the application, which will allow for future changes and improvements in what happens there. Whether it actually comes down to a single number or not, I wouldn't want to decide now, but I do think...
Q
...we want to simplify and tightly define what the relays are going to operate on, so that you can then innovate at the edges.
R

X
Hello, Ian Swett, Google, as a person who spent a ton of time trying to kill HTTP/2 priorities.
X
I actually would make the opposite argument here and say that I don't think we should immediately assume, just because HTTP/2 priorities were a complete dumpster fire, that the same applies here. My argument is as follows. One: in HTTP/2, the client is attempting to guess what the correct prioritization scheme is. Here, the producer of the content knows the structure of the GOP, and it understands the frame dependencies there.
G

X
It's unambiguous; it absolutely knows them. Another thing is that we had things like reprioritization. There's no need for reprioritization here: no re-parenting, none of the things that are very complex. For example, in HTTP/2, when you cancel a stream, everything under it got re-parented automatically. Here I would argue you either don't allow that at all, or, if you do allow it, you just cancel the whole stream: because they're all dependencies, you can just kill that entire tree.
X
You
don't
need
to
wait
and
like
trying
to
re-parent
it
and
all
those
things
so
because
we
understand
the
use
case,
I
think
it's
a
completely
tractable
problem
to
use
trees.
However,
I
also
will
say:
I'm
not
sure
it
actually
provides
a
huge
amount
of
value
over
like
the
stream
per
Gap
approach
and
so
I
think.
That's
something
for
the
working
group
to
decide
is
whether
the
marginal
complexity
is
actually
worth
it,
but
I,
don't
think
it's
I
think
it's
a
completely
different
situation
from
hp2,
because
the
problem
is
different.
X
Again, a huge problem with HTTP/2 was that Firefox had a prioritization scheme that it kind of invented, based on looking at the spec and trying to figure out how to use it, but there was no right answer: everyone did something different on the client side, none of them were right, and most of them didn't work particularly well. Here we have a fairly authoritative answer on what the right prioritization is, at least within a GOP. So those are my thoughts; thanks.
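The "kill the entire tree" point above can be sketched concretely: if frames within a GOP carry their dependency parent, cancelling one frame simply cancels every descendant, with no HTTP/2-style re-parenting step. The frame graph below is invented for illustration.

```python
# Sketch of subtree cancellation over a GOP dependency graph.
# deps maps frame -> parent frame (None for the keyframe).
def cancel_subtree(deps, cancelled_frame):
    """Return the set of frames to cancel: the frame plus everything
    that transitively depends on it. No re-parenting is needed."""
    to_cancel = {cancelled_frame}
    changed = True
    while changed:  # propagate until no new dependents are found
        changed = False
        for frame, parent in deps.items():
            if parent in to_cancel and frame not in to_cancel:
                to_cancel.add(frame)
                changed = True
    return to_cancel
```

For a graph I <- P1 <- {P2, B1}, P2 <- B2, cancelling P1 takes P2, B1, and B2 with it, while the keyframe I is untouched.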
C

C
I saw that you were trying to request screen sharing. Did you actually have something you meant to be showing, or was that accidental?
N
Thanks for bringing everyone onto the same page on what a relay is and on some of the requirements we need here. A couple of points. I agree with the state argument that you and Luke made: we need to make sure we keep the state as minimal as possible, as that helps with distribution; anything beyond that we need to discuss in the group and see whether we really need it or not. Second thing: I want to echo what Randell and Stephan said on the dependency or prioritization scheme.
N
Whatever
we
do
relay
should
not
be
made
to
aware
of
each
and
every
video
Codec
and
dependencies
they
bring
in.
They
should
be
asked
me
agnostic
as
possible,
but
should
be
able
to
give
minimal
information
to
make
a
forwarding
or
drop
decision,
and
if
you
can
come
up
with
something
that
fits
that
requirement,
that
would
be
easy
to
build
scalable
relays
last
part
is
that
I
don't
see
you're
talking
about
how
the
clients
would
pre-warned
relays
in
case
whether
they
wanted
multiple
qualities
so
that
they
can
easily
switch
between
different
qualities?
N
Maybe
have
you
talked
about
it
but
missed,
but
something
that
we
need
to
keep
in
mind
on
how
can
relays
kind
of
of
subscribe
to
various
qualities
so
that
when
our
client,
because
of
what
our
Network
conditions
goes
to
different
qualities,
it
would
be
quick
rather
than
going
through
all
the
way
through
chain
of
fillers
to
get
the
qualities
if
needed
at
some
point
right.
R
Yeah, I carefully avoided in my slides the whole topic of naming, or that evil "subscribe" word, or anything like that. But one way or another, we will need to be able to do all of these things. Imagine that a consumer could get a 4K stream, realize they didn't have the bandwidth, and switch down to, you know, a 720p stream or something.
R
Clearly,
we
need
to
deal
with
that
in
the
working
group.
I
just
didn't
feel
we
had
the
right
drafts
and
agreements
in
place
to
really
hit
that
problem.
Yet
so
we'll
need
to
figure
out
that
that
type
of
stuff
down
the
road
a
little
bit
but
I
just
sort
of
chose
for
that
for
the
limited
time
to
kick
that
can
down
the
road
a
little
bit
here.
P
On the topic of congestion control: there's nothing stopping a native or mobile app that's using a QUIC library from getting at information about the congestion control. However, for a web client, and I assume you will want some web clients that do ingest into a relay, you do need an API that can expose that information. Luckily, we are working on that in the WebTransport working group in the W3C. So if there are specific things you need beyond what we're already doing there...
P
That
might
be
a
good
thing
to
ask
for
similarly
on
priorities,
I
think
we've
mostly
been
talking
about
priorities
that
are
sent
to
the
relay,
but
there's
also
the
question
of
the
local
prioritization
of
streams
at
the
at
the
client,
and
that
also
relates
to
a
web
client
API,
and
that's
also
a
topic
in
discussion
in
the
web
transport
to
be
3C
working
group.
So
if
there
are
things
that
need
to
be
requested
of
another
working
group,
that
might
be
the
w3c
web
transport
region
group.
P
Lastly... oh, I forgot. Sorry, let me see if I can remember.
R
Peter, before you get to your last thing: 100% agree with what you said, and I might ask, since I know you're deeply involved with all that stuff, if you could just send a little email to the list with pointers to where people might be able to read a little bit more about what's going on there. I think that would be...
R
This
group's
gonna
have
to
have
a
Synergy
with
you
know:
transport,
web
transport,
quick
and
so
getting
people
aware
that
would
be
awesome.
P
I
could
do
that.
Oh
I
did
remember
the
third
thing.
The
third
thing
is
basically,
everybody
wants
implementations
of
quick
that
have
a
real-time,
friendly
congestion,
control,
algorithm
and
I.
Haven't
really
seen
one
yet
and
I
think
that,
in
order
to
write
one,
for
example,
if
I
were
to
Port
the
Weber
typical
webrtc
one
called
googcc
over
to
Quick,
it
would
require
some
more
time
stamp
for
information
in
the
feedback
messages
and
that
I've
seen
two
drafts
for
that
proposals
for
adding
that
as
a
quick
extension
but
I
don't
think
either.
P

R

R
I do think that media over QUIC will work reasonably well with today's congestion control, but it could work much better with slight changes to the congestion control, which we should try to get from the appropriate working groups. So that's another thing we should poke on moving forward. 100% agree, and, you know, maybe now that this is really moving and is real, it's an incentive for the people who have those drafts to resurrect them and get that going again.
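A minimal sketch of why receive-side timestamps in the feedback matter for a GoogCC-style controller: delay-based congestion control reacts to the trend in one-way delay, which cannot be computed from standard QUIC ACKs alone. The function and numbers below are illustrative, not a piece of GoogCC itself.

```python
# Sketch of the delay signal a GoogCC-style controller needs. It requires
# the receiver's timestamp for each packet, which is exactly the extra
# feedback information the proposed QUIC extensions would add.
def delay_gradient(send_times, recv_times):
    """Given parallel lists of send and receive timestamps (seconds),
    return the change in one-way delay between the first and last
    packet. A positive value suggests a growing queue, i.e. the sender
    should back off before losses occur."""
    owd = [r - s for s, r in zip(send_times, recv_times)]
    return owd[-1] - owd[0]
```

With only RTT samples (what plain QUIC ACKs give you), this per-direction queuing trend is conflated with delay on the reverse path, which is why the feedback-timestamp extension keeps coming up.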
H
Sam Hurst, BBC R&D. It's not specifically a comment on your presentation, which is great, by the way, but more something which keeps coming up in other comments people have been making, and I wanted to make sure it got captured somewhere. We need to make sure that we don't end up focusing on frames as the unit. I love the idea of MoQ being able to arbitrarily chop up GOPs and have different numbers of frames within one stream.
H
We've
got
to
be
careful
because,
when
you're
thinking
about
audio
what
what's
the
application
data
unit
there
and
is
that
a
sample
and
if
we
end
up
just
having
a
new
sample
per
stream,
we're
going
to
use
up
streams
like
there's
no
tomorrow
and
there's
also,
then
other
media
types
like
what
do
we
do
with
subtitles?
What
do
we
do
with
haptics,
which
is
a
new
one
coming
through?
H
We
just
need
to
make
sure
that
we
don't
specify
that
there's
one
for
every
Adu
and
give
the
ability
to
have
more
than
one
of
those
and
then
how
do
we
synchronize
them,
but
that's
an
entirely
different.
Not
that's
an
entirely
different
topic.
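The audio-ADU concern can be sketched simply: rather than one stream per audio frame, small ADUs can be batched so one stream carries a window's worth of them. The 100 ms window and millisecond timestamps below are invented assumptions for illustration, not anything from a MoQ draft.

```python
# Sketch of batching small ADUs (e.g. audio frames) into streams so that
# stream IDs are not consumed one per ADU. Window size is an assumption.
def batch_adus(adu_timestamps_ms, window_ms=100):
    """Group ADU timestamps into batches, one batch (one stream) per
    time window, preserving arrival order within each batch."""
    batches = {}
    for ts in adu_timestamps_ms:
        batches.setdefault(ts // window_ms, []).append(ts)
    return [batches[k] for k in sorted(batches)]
```

With 20 ms audio frames and a 100 ms window, this turns five streams into two, and the same grouping knob could apply to subtitles or haptics ADUs.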
R

O
Hi, Jonathan Lennox. Something that I was thinking about when the medical use case was mentioned, and generally the case of, if you're trying to...
O
Live
stream
a
conference
is
the
case
where
you
will
have
cases
where
we
have
in
where
we
have
media
from
Independent
Media
sources.
We
probably
want
to
send
those
over
the
same
mock
connection
for
prioritization
reasons,
and
we
need
to
make
sure
we
think
about
that.
Both
for
prioritization,
then
for
the
protocol
to
figure
out
how
that
works
and
I
just
want
to
make
sure
that's.
You
know
on
everybody's
mind
as
they
develop
this.
R
Yeah
and
I
think
that
that
comes
up
and,
as
you
said,
in
the
the
conferencing
scenarios,
but
it
also
comes
up
in
the
ad
insertion
scenarios
but
yeah
that
we
need
to
keep
that
in
mind.
R

C
Thanks to everybody who was at the mic line or commenting in the chat. I will say: if we were seeing this level of engagement on the mailing list, we would be having weekly interim calls. So please do bring these same issues up, now that we've kicked off the working group, on the mailing list or in the appropriate repos, and we can go from there. Let's pull the chair slides back up so we can go on to the next thing.
C
All
right
so
before
we
get
to
aob,
which
will
which
will
possibly
include
Spencer's
presentation,
which
he
gave
us
as
a
backup,
a
quick
question
on
possible
interim
and
I
think
we
can
run
this
as
a
poll
I'm
going
to
try
and
do
that
in
a
second.
C
We
think
there's
a
a
lot.
That's
that's
happened
here
today
in
a
lot
of
discussion
that
have
been
kicked
off
at
the
previous
interim
and
so
we'd
like
to
think
about
having
maybe
an
in-person
interim
over
a
couple
of
days.
What
Alan
and
I
have
been
discussing
is
late,
January
of
2023.
That's
that's!
Next
year
surprisingly,
and
the
U.S
West
Coast,
there
are
a
couple
of
people
already
in
queue,
so
you
have
clarifying
questions
or
comments
before
we
run
the
poll.
C
I
I,
don't
know
if
you
can
still
get
from
it,
so
mozamati
I
was
going
to
ask
in
the
prior
interim.
We
identified
two
major
points
of
discussion
that
we
wanted
to
try
to
cover
either
in
person
this
week,
or
you
know
in
the
in
the
near
term,
that
was
the
congestion,
control
and
prioritization
aspects
and
then
media
formats
we
haven't
started
the
media
formats
discussion.
Is
that
going
to
be
a
focus
of
this
in
interim
or
do
do?
We
want
to
start
that
just
on
list
and
not.
C
So
I
think
we
can
understand
it
a
little
bit
today
when
people
were
reminding
us
and
including
Stefan
and
and
others
that
we
need
to
consider
media
formats,
Beyond
video
and
not
not
only
video,
but
I,
definitely
agree
that
we
need
to
have
deeper
discussions
of
those
on
on
audio
video
and
potentially
upcoming.
It
might
or
might
not
be
at
this
interim.
We
if
we
get
agree
in
an
interim,
then
we'll
start
the
agenda
bashing.
So.
C

A
Spencer Dawkins. I made this comment in chat, but I don't know if the chairs have been able to keep up with everything that's happened in chat.

A
You might want to also poll on how many interim meetings; I was thinking of virtual interims between now and Yokohama. I had a couple of thoughts about possible topics, and neither one of those was the one most suggested. So, you all are talking about an in-person interim.
A
Yes,
please
that
would
be
lovely
but
I
I
would
I
would
suggest
that
you
wait
for
the
working
group
to
make
progress
on
the
mailing
list
as
opposed
to
when
slack
or
you
know
something
like
that
before
you
decide
that
we
only
need
one
I
could
see
multiple
reasons
why
you
might
need
two
or
three.
But
let's,
let's
ignore
that
for
now.
C

A

C
I have Suhas and Cullen behind you in line, and then we're going to run the poll. I think we can at least figure out whether we want zero or one, and if it grows beyond one, we can continue that discussion.
N
A
clarifying
thought
here,
Luke's
presentation,
had
some
of
the
next
things
that
we
want
to
do.
That
might
be
a
starting
list.
We
can
think
of
the
possible
topics
and
and
also
depending
upon
the
topic
we
might
get
a
different
audience
and,
and
we
need
to
make
sure
that
we
have
the
Right
audience
in
the
meeting.
If
not,
it
will
not
be
useful
thanks.
C

R
I'd love to have them, but I think the really important thing is that late January is incredibly close when you consider the holidays and everything, so I think we should nail down the dates soon if we're going to do this; like, before the end of November soon.
C

A

C
The way I'll put this is: if we have an in-person interim in late January 2023 (please assume I said U.S. West Coast here, because apparently I forgot to type it), can you attend, either in person or virtually, in that time zone? Please understand that, anywhere we have this, there will be support for remote attendees, but it will be in that time zone in late January. Raise your hand now if you can attend; do not raise your hand if you are not available; and we will see how it goes.
C

C

C
Okay, so certainly quite a few people were able to do it: the raised hands were 52, the "do not raise hands" 10. That's a significant number. So what we'll do is go ahead and come up with a concrete suggestion on the list; Alan and I have the token to do that.
V

C
So we do have a couple of minutes left, and there was an update on the MoQ use cases and requirements doc. Spencer, do you want to do a quick run through that?
A
So, I'm Spencer Dawkins. I want to call people's attention to the picture on the cover slide, maybe more than anything else: the idea that use cases would interact with requirements, and that those would drive protocol specifications. If that shocks anybody, please tell us now. Next slide, please.
A
So, what we said we were going to do was work on basically dragging out a lot of stuff that was in the individual draft for use cases and requirements that did not belong there. Some of it was talking about things that were not in the approved charter, and some of it was just history and observations and opinions. So what we did was update the use cases section.
A
Well,
we
updated
the
draft
this
way
and
went
to
update
the
we
were
thinking,
update
the
use
cases
to
reflect
working
group
discussion
on
the
slide
three
questions
from
the
from
the
for
October
interim.
There
wasn't
actually
any
discussion
of
those
questions
from
the
for
children,
so
we
submitted
Dash
zero.
A
Three
and
Forex
for
extra
credit
did
a
proposal
of
initial
structure
of
the
requirements
section
based
on
our
understanding
of
the
implicit
requirements
that
they're
into
approved
Charter,
which
was
based
on
uhas
understanding
all
right
next
slide,
please
yeah
so
yeah
that
so
and
like
I,
say
added
a
little
introductory
text
in
each
subsection
of
the
the
requirement
section
next
slide
please
so
this
is
I
did
talk
with
James
Michael
author,
since
we
got
the
comments
from
Cullen
that
were
that
appeared
on
the
mailing
list.
A
Thank
you
for
those
calling,
so
so
thinking
we
would
do
a-04
revision
before
requesting
that
the
chairs
would
adopt
anything
and
then,
basically
after
something
that
the
working
group
can
adopt,
we
would
be
drilling
down
to
the
next
level
of
detail
in
use
cases
index
level
structure
in
the
requirement
section
next
slide.
Please
so
I
did
have
some
questions
there,
like
I,
say
which
we
will
follow
up
on
the
Mike
mailing
list.
For
all
of
these,
we
were
trying
to.
A
We
were
trying
to
categorize
the
live
media
of
gaming
and
video
conferencing,
use
cases
and
then
think
about
that.
Some
more
again
we'll
follow
up
on
the
mock
mailing
list
anyway,
and
once
we
get
to
that
point,
talk
about
any
additional
use
cases
that
would
need
to
use
a
simple
low,
latency
media
delivery,
solution
for
ingest
and
distribution,
as
we
mentioned
earlier,
there's
those
are
out
there,
but
the
ones
we
would
be
we
would
need
to
know
about
is
our
use
cases
through
all
our
own
capabilities.
A
...we have not previously identified. Next slide. And so we had basically a high-level structure from our understanding of Suhas's high-level structure from the interim meeting, which I really appreciate; they basically agree on a starting line for the requirements, and then we proceed in the usual issue, PR, merge way on GitHub. Is that my last slide? I'm trying to remember.
A
No,
we
are
not
I
like
I,
said
Cullen
sent
comments
that
were
helpful
enough
to
where
we
would
need
to
you
know
James
and
I
both
agreed.
We
need
to
do
a
dash
04
before
asking
people
to
look
at
it
seriously
and
we're
committed
to
do
that
in
the
relatively
near
term
and
any
other
thoughts.
C
Thank
you
very
much,
Spencer
note
that
we
will
not
be
waiting
to
an
interim
meeting
before
issuing
a
call
for
adoption
on
this
once
we've
gotten
the
mailing
list
discussion
to
a
point
where
we
feel
like
the
working
group
is
ready,
we'll
issue
with
a
call
for
adoption
on
the
list.
So
please
do
continue
to
join
us
on
the
list
so
that
you
can
do
that.
We
have
reached
the
11
24
Mark
and
the
question
is:
is
there
anything
else
for
the
good
of
the
order.
E
Sorry, thanks. David Schinazi, process enthusiast, apparently. I just wanted to provide a small suggestion to the chairs, because this reminds me of when we did a requirements doc in MASQUE: it was helpful, but we ended up not publishing it, which was quite disappointing for some of the folks working on it. So just make it clear at the beginning whether you plan to just have it and not publish it, or whether you're going to publish it as an RFC.