From YouTube: IETF112-AVTCORE-20211112-1200
Description
AVTCORE meeting session at IETF112
2021/11/12 1200
https://datatracker.ietf.org/meeting/112/proceedings/
A
Terribly detailed notes aren't needed; just action items and decisions are really all we need.
A
I'm not positive we need one in this context, but if somebody wants to volunteer for that, it would probably be good too.
B
You do need a Datatracker login, and hopefully that all worked; I know it was problematic earlier in the week. You can join the session jabber room via the meeting agenda, although I'm not sure why you'd need to. Please use headphones or an echo-canceling speakerphone, and state your full name before speaking, so that we can get it into the notes.
B
You don't have to enable video if you don't want to. Here's the Note Well, a reminder of IETF policies; by participating, you agree to follow these policies, which are listed below, BCP 9 through 79. Please read them and understand them.
B
In addition to the IPR policies, we do have a policy on guidelines for conduct, and an anti-harassment policy and procedures.
A
Yeah, so JPEG XS has been published, so we're done there.
A
Let's see. Yeah, VP9 is in the RFC Editor queue; it's waiting on LRR, which is waiting on Frame Marking. We did a publication request on Cryptex, so that should go through the ADs soon. Frame Marking:
A
As
of
about
an
hour
ago.
This
noticed
incorrect.
We
needed
we
needed
an
updated
draft
because
we
decided
to
change
it
to
experimental
well
published
that
this
morning.
So
now
I
will
do
the
write-up
on
that
and
hopefully
that
should
get
done.
We
dropped
petra
a
while
ago.
He
probably
dropped
this
from
the
draft
status
too,
the
next
as
a
next
time.
A
So if anybody is interested in TETRA, let us know; in the meantime we can ignore that. And we adopted a few drafts, and I'd like to know which one 7983 is. I'm sorry.
B
That's QUIC multiplexing.
A
Oh, multiplexing, right, right! Yes, so I think all our other milestones are going to be discussed here today. So, yeah: VVC is both adopted and in first working group last call. There were a lot of comments there, mostly from the VVC community itself, so it's great that they're participating here, and we'll definitely be talking about that later.
B
All
right
so
now
I'm
going
to
talk
briefly,
hopefully
about
recent
liaison
statements.
We
have
a
request
from
the
w3c
web
transport
working
group
and
yanivar
is
here
to
talk
a
little
bit
about
that,
and
then
we
have
a
liaison
from
iso
iec
jtc
one
sc29
working
group,
three
about
green
metadata,
so
yanivar.
E
All right, can you hear me? (Yes, we can.) Hello, I'm Jan-Ivar Bruaroey from Mozilla. I co-chair the W3C WebTransport working group, where one of our use cases is bidirectional real-time audio and video communication over WebTransport, which is a JavaScript API on top of IETF WebTransport and QUIC. The problem we're having there is when a client is sending video to a server: the client does not have enough information to know when it can reapply a multiplicative increase in the media send rate to recover from prior congestion.
E
We
can
reduce,
send
rate
in
in
the
face
of
congestion,
but
we
don't
know
how
to
rapidly
reapply
ascend
rate
a
faster
send
rate
afterwards
when
the
congestion
goes
away.
So
the
request
is
to
know
if
so,
we're
glad
to
hear
that
this
working
group
is
considering
rtp
over
quick
and
we're
concerned.
E
We're
we
would
like
to
know
if
rtp
over
quick
can
satisfy
this
use
case
and
if
so,
what
measurements
could
a
browser
make
available
to
aj's
client
to
assist
this
problem,
and
would
it
perhaps
require
some
form
of
selectable
replaceable,
even
congestion
control
and,
if
so,
which
algorithms
thank
you.
B
Yeah,
I
would
I
would
caution
yanivar,
that
rtp
over
web
transport
or
is
a
little
bit
different
from
rtp
over
quick,
so
maybe
we'll
get
into
that
in
the
in
the
session
to
follow,
because
it's
not
as
tightly
coupled.
But
so
I
guess,
can
you
raise
some
of
these
issues
in
the
discussion
that
follows
and
we'll
try
to
see
where
to
go
from
there?
Does
that
make
sense.
B
Okay,
the
second
liaison
is
from
iso
iec
jtc,
one
sc29
working
compo3.
Do
we
have
a
representative
to
talk
about
this
liaison
request.
A
All right, let's see. I see you in the queue; I hope I'm pronouncing the name right. Are you there? Okay, go ahead.
G
Can you hear me? (Yes, we can.) Yes, I can speak on that; my colleagues worked on this, so if it's okay I can go through it. So this is from MPEG Systems, and they would like to give an update on the progress of the standard ISO/IEC 23001-11, which is energy-efficient media consumption, Green Metadata. There have been first and second editions, in 2015 and 2019, and the third edition is currently in Committee Draft status. Just as a general overview:
G
This
standard
provides
various
type
of
metadata
that
enable
management
of
there
are
these
four
bullets:
management
of
decoder,
power,
consumption
or
display
power,
consumption
and
for
offline
applications,
or
for
like
adaptive
streaming
or
dash
applications.
It
provides
media
selection,
metadata
and
also
quality
recovery
for
video
encoding
power
consumption,
so
different
kind
of
metadata
are
provided
in
this
specification
and
recently.
The
third
edition
of
this
standard,
which
is
which
is
pointed
to
in
this
ls,
provides
a
bunch
of
new
features.
G
For
example,
these
three
bullets
say
here:
there
is
now
interactive
signaling
for
remote
decoder,
power
reduction
that
gives
better
power
reductions
and
also
vvc
sci
messages
for
carrying
green
metadata
related
to
complexity
metrics.
This
has
been
added
and
also
we
will
see
seo
that
can
also
carry
metrics
for
quality
recovery
after
low
power
encoding.
So
a
bunch
of
new
features
have
been
added
and
hope
that
this
information
is
of
interest
to
this
group,
and
we
can
consider
using
this
in
this
group.
G
These
different
metadata
have
all
a
very
different
shape
and
form
some
are
for
live
applications.
Some
are
for
a
stream
adaptive
streaming
applications
and
some
are
forward
direction
from
center
to
receiver.
Some
are
from
receiver
site
to
the
sender
side,
so
my
imagination
is
that
the
the
metadata
that
is
to
be
transferred
from
receiver
to
the
center
side-
that's
something
especially
where
you
know
some
container
formats
and
so
on
might
be
of
interest.
C
G
This is the third edition, which now adds VVC, but there are already metrics available for older codecs; yes, they are already in the previous editions.
H
G
That is where I don't have complete information, if there's anybody else; I'm also fairly new to this.
A
Yeah, I mean, I think, you know, one of the elements is the receiver-to-sender feedback. So if somebody is interested in submitting a draft on that, I think we could certainly be interested; you can take it up with the working group, if the working group wants to do it. But certainly, I think any sort of receiver-to-sender feedback over RTCP is in scope for this group.
A
So if somebody wants to do that work, I think there would be interest in it, assuming there's interest in the community.
D
The next topic is RTP... hold on, Stefan and Colin are in the queue. Okay.
H
So, just one remark: if someone were interested in this type of stuff, then of course the forward direction is also something to look at. As far as I understand, a lot of this can get piggybacked into the codec bitstream, using SEI messages and such, but it could also be multiplexed by simply creating its own payload format, and that way it may be applicable also to non-MPEG standards.
H
Chances
to
work
on
that
are
not
limited
to
to
the
return
direction.
Yeah!
That's
a
lot
done
in
this
in
this
area,
whether
it's
worth
doing
it,
you
know,
but
we'll
see,
yeah.
Thank
you.
Veron.
I
Yeah, hi. I wanted to return briefly to the first liaison statement, about congestion control. It would seem that the issue of congestion control is coming up in a bunch of different groups; the Media over QUIC side meeting was talking about this, and it's come up in a bunch of different places, certainly for congestion control for RTP over UDP. We had the RMCAT group, which tried to do a bunch of algorithms, and I'm not sure those algorithms directly apply to QUIC, so I'm not suggesting we take the work from there. But I was wondering if we should maybe have a broader conversation about where we do media congestion control over QUIC, and whether that's RTP-specific or a more general conversation.
B
Yeah,
well,
I
think
the
presentation
to
follow
colin
will
directly
bear
on
that
question.
So
maybe
we
can.
We
can
talk
about
it
during
the
presentation
or
in
the
q.
A
show.
I
Sure. I think it's more general than RTP over QUIC, yeah.
B
All right, so we're now going to hand it over to the RTP over QUIC team.
J
Yeah, hi, can you hear me? (Yes.) Cool. Yeah, thanks for inviting us to present here again. Back in July we already presented the mapping of RTP onto QUIC draft, and today we would like to go a bit more into detail on congestion control for RTP over QUIC. On to the next slide.
J
Both of the drafts use the unreliable datagram extension with a flow identifier to demultiplex different RTP sessions; our draft focuses a bit more on congestion control and the interface requirements for QUIC implementations and congestion controllers. And, yeah, both drafts use SDP for signaling.
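The flow-identifier demultiplexing described here can be sketched as follows. This is an illustrative sketch, not code from either draft: it assumes the flow identifier is encoded as a QUIC variable-length integer (RFC 9000 style), and the session table is hypothetical.

```python
def decode_varint(buf: bytes) -> tuple[int, int]:
    """Decode a QUIC variable-length integer; return (value, bytes consumed)."""
    first = buf[0]
    length = 1 << (first >> 6)     # top two bits give the length: 1, 2, 4, or 8 bytes
    value = first & 0x3F           # remaining six bits of the first byte
    for b in buf[1:length]:
        value = (value << 8) | b
    return value, length

def demultiplex(datagram: bytes, sessions: dict[int, list[bytes]]) -> None:
    """Strip the flow identifier and hand the RTP packet to the matching session."""
    flow_id, consumed = decode_varint(datagram)
    sessions.setdefault(flow_id, []).append(datagram[consumed:])

# Two RTP packets for two different sessions, carried in one QUIC connection:
sessions: dict[int, list[bytes]] = {}
demultiplex(b"\x01" + b"\x80rtp-packet-A", sessions)   # flow 1
demultiplex(b"\x02" + b"\x80rtp-packet-B", sessions)   # flow 2
```

The receiver never looks at UDP ports; the per-datagram flow identifier alone selects the RTP session.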
K
J
Okay, yeah. That is because over UDP one would typically run different RTP sessions over different UDP ports and identify them by that, whereas in QUIC we can have the RTP sessions in one connection and then try to use all of the information we have available in that one connection for the different RTP sessions at the same time, like, for example, the congestion control information.
K
I
was
trying
to
find,
I
was
trying
to
figure
out
the
use
case
where
I
would
want
to
have
multiple
rtp
sessions
on
between
the
same
entities,
and
I
couldn't
find
one
so
so
I
was
wondering
whether
it's
it's
reasonable
to
drop
this
requirement
and
just
say
if
you,
if
you
need
more
rtp
sessions,
set
up
more
quickly,
quick
connections.
J
Yep,
maybe
I'll
take
that
and
we
can
expand
that
on
the
draft
and
explain
it
further.
I
Yeah, I mean, there was a comment in the chat about not understanding what "RTP session" meant in this context. "RTP session" has a very well-defined meaning in the RTP spec, and we need to make sure that what's meant here is consistent with that meaning, and that it's not just being used as an alternative to the SSRC, for example, for demultiplexing users.
I
So
I
think,
there's
a
perhaps
a
deeper
conversation
about
sessions
and
demultiplexing.
That
should
happen
at
some
point.
J
Okay,
then,
maybe
I
continue
with
the
presentation
on
congestion
controlling
first
and
we
keep
that
on
a
later
discussion
yeah.
So
today
we
want
to
focus
on
the
congestion
controlling,
and
we
identified
a
couple
of
questions
which
come
up
when
we
do
rtp
over
quick
and
first
is
that
we
have
congestion
control
in
quick
and
rtp
and
quick
suggestion
to
use
an
algorithm
similar
to
neurano.
J
It does not mandate any congestion control algorithm, and rather provides a set of connection statistics that can be used by any congestion controller; algorithms other than NewReno are currently under investigation. RTP also provides its own congestion control, using explicit congestion control signaling in RTCP. For example, there's RFC 8888, which provides a feedback format that can be used by algorithms like SCReAM and NADA, and also GCC, as proposed by the RMCAT group.
J
We will leave that out today, because we already know that it's probably better to have a congestion control algorithm which is specified for real-time media, and we would rather look at these today.
J
We would like to use the QUIC connection stats, which are already available in the transport protocol, to reduce the RTCP overhead. In QUIC we have the datagram draft, which allows an implementation to expose datagram acknowledgments to the application, and we're using this to identify which RTP packets have arrived at the receiver.
J
We
keep
track
of
the
send
rtp
packets
and
in
which
datagrams
they
were
sent
and
at
what
time
they
were
sent.
So
we
have
the
timestamp
and
as
soon
as
an
acknowledgement
arrives,
we
know
that
this
rtp
packet
has
been
received,
and
then
we
also
use
the
rtt
samples
provided
by
the
quick
connection
statistics
to
infer
a
receive
timestamp
at
which
the
packet
arrived
at
the
receiver
by
just
adding
half
of
the
rtt,
the
last
known
rtt
to
the
same
timestamp
we
kept
track
of
earlier.
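A minimal sketch of that bookkeeping, with names that are illustrative rather than taken from the quic-go/GStreamer implementation: record the send time per datagram, and on acknowledgement estimate the receive time as the send time plus half of the latest RTT sample.

```python
class AckTracker:
    """Map QUIC datagram acks to estimated receive times for the RTP packets inside."""

    def __init__(self) -> None:
        self.sent: dict[int, float] = {}      # datagram id -> send timestamp (seconds)
        self.received: dict[int, float] = {}  # datagram id -> estimated receive timestamp

    def on_send(self, datagram_id: int, now: float) -> None:
        self.sent[datagram_id] = now

    def on_ack(self, datagram_id: int, latest_rtt: float) -> None:
        send_ts = self.sent.pop(datagram_id)
        # One-way delay is approximated as half the round-trip time; on
        # asymmetric paths this estimate is off, which is exactly the
        # limitation raised in the question that follows.
        self.received[datagram_id] = send_ts + latest_rtt / 2.0

tracker = AckTracker()
tracker.on_send(0, now=10.000)
tracker.on_ack(0, latest_rtt=0.030)   # estimated receive time: 10.015
```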
J
On the next slide I have a short overview of our test implementation. We integrated quic-go with GStreamer and the SCReAM implementation.
J
The
application
transmit
video
data
in
rtp
packets
from
one
sender
to
a
receiver,
using
g
streamer
to
encode
and
packetize
rtp
packets,
which
are
then
prepended
with
the
flow
identifier.
We
talked
about
earlier
and
sent
to
the
receiver.
J
The
receiver
takes
the
rtp
packet
and
identifies
the
itp
session
and
forwards
the
data
from
the
package
to
the
corresponding
rg
streamer
pipeline,
and
then
we
use
a
slightly
modified
version
of
quick,
go
to
be
able
to
selectively
disable
new
venue
in
our
experiments
and
also
to
expose
the
connection,
statistics
and
the
datagram
acknowledgements.
J
We
build
a
testbed
setup
on
the
next
slide.
Please.
L
Yes, I had a question about the timestamp feedback. From what I understand, the purpose of that in RTCP is for congestion controllers to understand the one-way delay, and variance in one-way delay, and delay-sensitive algorithms can make inferences from that high-fidelity one-way delay. But if you're trying to synthesize that using round-trip times, it seems like you've lost the one-way-delay aspect, which may be important for some of those congestion control algorithms.
J
Yeah,
I
think
we
will
get
back
to
that
in
a
later
slide
on
the
results
for
this
inferring
and
there
are
drafts
or
quite
recent
drafts,
which
also
try
to
use
the
receive
time
stamps
directly
in
acknowledgements.
I
will
reference
that
later.
J
We
configured
a
bandwidth
limit
of
one
megabit
per
second
and
we
run
the
experiments
with
different
one-way
delays
and,
after
60
seconds
of
a
video
run
for
video
transmission.
We
reduce
the
available
link
capacity
to
500
kilobits
per
second
to
see
how
the
algorithms
and
the
transport
react
to
this,
and
then
our
application
locks
incoming
and
outgoing
rtp
and
rtcp
packets
for
analyzers
afterwards
and
will
to
calculate
psnr
and
sm
statistics
on
the
raw
video
files
next
slide,
please.
J
So.
The
first
results
for
screen
using
rtp
over
for
rtp
over
quick,
using
stream
screen
and
comparing
this
to
udp.
J
The
left
graph
shows
the
quick
implementation
and
the
right
one
udp,
and
we
can
see
that
the
results
are
quite
similar,
so
it
seems
to
be
possible
to
to
use
quick
data
grams
to
transmit
rtp,
and
we
can
also
see
that
the
target
bit
rate
of
the
screen
condition
control
are
slightly
below
the
one
in
udp.
B
M
B
J
So everything goes through both congestion controllers, and we see that in the first part of the video transmission it seems to behave very similarly to the previous slide, but that is probably due to application-limited transmission: as long as SCReAM is sending less data than NewReno would allow, we are in an application-limited state where NewReno doesn't really do much yet. But as soon as the link capacity is reduced to 500 kilobits per second for 30 seconds,
J
we see that both congestion controllers try to adapt to the new link capacity, and the SCReAM target bitrate drops to almost zero; the ramp-up also does not happen at all in the case of a one-millisecond delay. It may be that it happens much later if we watched for a longer time, but this is already not really usable, so we didn't investigate that further. In the second case our one-way delay was 15 milliseconds.
J
We can see that the top two experiments look pretty similar, so it seems to be possible in general to use the QUIC statistics for RTCP feedback. But we also see that in the later experiments, especially with 150 milliseconds of one-way delay, the ramp-up at the beginning works much more slowly, or starts much later.
J
We
generate
the
feedback
at
a
fixed
interval
and
we
think
that
the
instability
in
the
last
experiments
are
due
to
this
fixed
generation
of
feedback
which
may
lead
to
some
acknowledgements
from
quick
arriving
after
we
generate
the
next
feedback.
Even
though
the
rtp
packet
has
already
been
received
by
the
receiver
and
due
to
the
delayed
and
aggregate
x,
they
only
arrive
a
little
bit
too
late
at
our
sender,
so
that
some
feedback
is
not
included,
which
leads
to
feedback
which
would
be
less
precise
than
the
one
rtcp
could
provide.
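The interval effect described here can be illustrated with a toy model (all numbers are made up; this is not the experiment code): an acknowledgement that reaches the sender just after a report boundary only gets reported one interval later, even though the packet itself was received long before.

```python
def feedback_reports(ack_times: list[float], interval: float, horizon: float) -> list[list[float]]:
    """Group ack arrival times (seconds) into fixed-interval feedback reports."""
    reports = []
    t = interval
    while t <= horizon:
        # A report at time t covers only acks that have arrived by t.
        reports.append([a for a in ack_times if t - interval < a <= t])
        t += interval
    return reports

# The third ack arrives at 0.101 s, just after the report generated at 0.1 s,
# so it is only included in the report at 0.2 s: one full interval late.
acks = [0.040, 0.090, 0.101]
print(feedback_reports(acks, interval=0.1, horizon=0.2))
```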
J
There
are
probably
two
drafts
which
could
improve
the
situation,
because
there's
one
draft
for
quick
timestamps,
which
would
give
us
timestamps,
for
which
would
help
us
to
estimate
the
one-way
delay,
as
we
already
talked
about
earlier,
and
then
there's
a
very
recent
draft
which
gives
timestamps
for
received
packets,
and
that
would
also
help
to
improve
the
generated
feedback
because
we
then
didn't
have
wouldn't
have
to
use
the
latest
rtt
samples
anymore,
and
instead
we
could
just
use
the
feedback
which
is
directly
on
what
we
need
from
quick.
J
We
run
all
the
experiments
again
with
the
second
stream
opened
and
quick
this
time,
not
datagrams,
but
the
real,
quick
stream
which
sends
constant
data,
and
in
this
case
we
see
that
the
application
probably
needs
some
prioritization
between
real-time
data
in
quick
data
grams
and
background
traffic.
Within
the
same
connection
as
in
our
experiments,
we
see
that
the
target
bitrate
quickly
degrades
and
it
might
even
starve
because
of
the
background,
data
and
yeah.
There's
done
probably
some
better
scheduling
of
prioritization
necessary.
J
But
this
is
also
very
artificial
setup
in
our
experiments,
because
we
really
send
a
lot
of
background
data
and
there
are
probably
better
experiments
needed
with
more
natural
forms
of
background
traffic,
like
on-off
http
request,
or
something
like
that.
Instead
of
constant
sending
data.
J
So
the
first
one
is
that
it's
probably
problematic
to
run
two
separate
congestion
control
loops,
at
the
same
time
like
a
real-time
congestion,
control
algorithm
like
screen
for
the
real-time
data
and
something
like
neurino
and
quick
at
the
same
time
for
all
of
the
outgoing
data
on
the
connection,
and
the
second
is
that
it's
probably
possible
to
use
grid
state
information
to
reduce
rtcp
feedback,
but
that
could
probably
also
profit
from
some
optimizations,
which
would
be
provided
by
the
receive
timestamp
draft
or
something
similar,
which
would
give
more
detailed
information
than
just
the
rtt
and
using
this
rtt
for
the
debating
pro.
J
And
then.
The
last
point
is
that
some
prioritization
is
necessary.
As
we've
seen
on
the
last
slide
yeah
in
the
future.
We
plan
to
do
some
more
experiments
on
different
congestion,
control,
algorithms
and
different
forms
of
computing
traffic
and
different
network
topologies,
and
we
would
also
like
to
try
to
find
some
way
to
do
better,
prioritization
than
no
prioritization
at
all
to
see
how
we
can
use
one
quick
connection
for
real-time
data
and
non-return
data.
At
the
same
time,.
N
Yeah, this is really good work; I'm glad to see this moving forward. I guess I have three points where I think it would be good for us to try to limit scope. One would be: is there anything besides receive timestamps we need, to have the same information we had here over UDP? Because if we get there, then we can feel pretty confident that we have everything at a protocol level.
N
You
need
to
do
a
good,
real-time
congestion.
Control.
Second
thing
is
that
you
know
I
think,
trying
to
prioritize
real-time
and
non-real
time.
Right
now
might
be,
I
think,
a
lot
and
we
might
just
say,
open
a
different,
quick
connection
or
non-real-time
stuff,
because
that's
how
it
works
today
and
that's
that
that's
where
we
stop,
but
that
might
be
a
good
milestone
of
saying
that
we
just
basically
replaced
the
entire
congestion
controller.
N
If
we
do
rpp
flows
today
and
get
that
working
over
quick-
and
I
think
that
would
be
a
pretty
good
accomplishment
and
then
last
one
thing
we
haven't
really
talked
about
here
is
sort
of
overhead
of
how
many
more
bytes
are
we
putting
on
the
wire
for
each
rtb
packet,
you
know
and
where
are
we
sending
redundant
rtc
feedback
and
it'll
be
good
to
sort
of
dive
into
that
and
see.
You
know
what
those
numbers
are
and
then
we
start
thinking
about.
Is
there
anything
that
can
be
done
to
mitigate
that.
P
What's the main specific reason why RTP was considered for this investigation? Was it some sense of interoperability with existing RTP stacks, or was it just the protocol of choice? I just want to understand the context behind that.
A
Q
J
Q
Q
How can we make this work when the application that is using the QUIC stack is untrusted? For example, if we are going to replace the RTCP feedback with this QUIC feedback, how will that be exposed to the application? Because I think that will have quite a lot of impact.
R
J
Q
Yeah, because, for example, I doubt that we can just disable the QUIC congestion control and let an application running in JavaScript just behave properly, so we will have to find a way of having both working simultaneously. You know, I think it was at some point proposed to have some circuit breakers in case.
F
B
I guess the one thing that potentially wouldn't be exposable is the detailed ACK data, because in a browser you basically can't assume that the ACKs reflect what the application got; but maybe the timestamp and other info from the ACKs could be provided up, I guess. And originally in WebTransport we did have the concept of adding a congestion-control selector to the constructor, and, I guess, to Justin's point, that would be at the QUIC connection level.
B
And also, WebTransport does have prioritization built in, but it's probably of the kind that your research shows is going to be problematic: basically, right now the datagrams get absolute priority, so they will starve out the reliable streams.
J
Yeah, I think we could do more experiments on that in WebTransport, especially under prioritization, and I agree that it might be hard to expose all of the data we use in our experiments yet. Yep.
O
I
wanted
to
thank
you
for
bringing
this
work
forward
and
thank
the
chairs
for
bring
this
oregon
navy
avt
core.
O
The
question
I
probably
have-
or
the
suggest
I
have
is-
this
draft
did
not
actually
flag
avt
core
in
the
file
name
and
I've
been
having
some
conversations
on
the
media
of
a
quick
mailing
list
which
is
only
available
about
about
another
draft.
That's
relevant
here
that
doesn't
have
avt
core
in
the
name.
It
was
focused
on
the
svp
for
or
keeping
over
quick,
but
recognizing
that
that
was
going
to
have
know.
O
A
lot
of
a
lot
was
going
to
have
to
reflect
a
lot
of
discussions
about
what
rtp
over
quick
was
actually
going
to
look
like
or
we
for
the
chairs
are
you
are
we
to
the
point
where
we
should
be
putting
mvt
core
the
in
the
draft
beams?
We
get
more
visibility
within
the
group
here.
O
You all can think about that. And, yeah, sure: the discussions we've been having for my draft in this space, over on the moq mailing list, are, I think, definitely ready to move into AVTCORE or a group like that, rather than a kind of ad hoc overall media-over-QUIC discussion. Yeah, I mean.
I
Yeah, hi. So there's a bunch of discussion in the chat about short-term and long-term approaches, and I think that's important. I think we should make sure we have that broader architectural conversation about where we want to go, rather than necessarily just sleepwalking into reusing RTP here.
I
The
the
the
actual
reason
I
I
got
up
to
the
mic
was
you
know
the
comments
about
using
separate
connections
to
avoid
having
the
the
congestion
control.
Prioritization
discussion,
I
mean
separate
connections,
causes
quite
a
lot
of
pain
in
in
the
current
model,
and
we
we
should
have.
It
have
a
deliberate
discussion
about
whether
we,
we
think
that's
acceptable
pain
or
whether
we
want
to
make
conscious
effort
to
reduce
that
and
try
to
make
everything
run
over
a
single
connection,
even
if
it
means
more
congestion,
control
and
prioritization
work.
C
Thank
you.
Thank
you
for
doing
this
work.
The
question
that
I
have
is
you've
only
looked
at
doing
this
in
the
uni-directional
flow
right.
This
is
just
a
sender
of
one
sender,
one
with
the
one
flow
of
media,
correct.
J
C
Tagging on from Colin's last remark around connections, and various thoughts on how multiplexing might be dealt with in this space: bidirectional flows of media should also be considered as well.
C
We've
had
an
awful
lot
of
discussions
earlier
this
week
in
the
media
of
the
moq
side
meeting
and
in
other
sort
of
forums
around
those
use
cases,
and
I
think
that
there
may
be-
and
this
is
a
hand,
wave
potential
issues
when
it
comes
to
complex
competition
between
congestion
control,
depending
on
which
way
we
tackle
the
conductivity
and
multiplexing
of
the
sessions.
J
Yeah,
that's
probably
true.
We
were
planning
to
or
our
current
test
sub
setup
is
kind
of
inspired
by
the
test
cases,
evaluation
for
congestion
control
from
rmcat,
but
they
don't
completely
implement
all
of
it
and
yeah.
As
I
said,
we
would
like
to
do
more
experimentations
with
that,
and
then
we
should
probably
also
include
bi-directional
streams.
There.
N
I wanted to add on to Colin's point, where he sort of mentioned that maybe we should take on prioritization. I think for any specific work in this area we probably want to have a good sense of what our goals are; you know, what would count as a reasonable v1?
N
What
what
things
do
we
think
we
have
to
do?
I
think
this
author
has
shown
you
can
get.
You
know
some
results,
even
just
now,
just
by
sort
of
encapsulating
rtp
within
quite
datagram,
but
what
would
be
necessary
for
us
to
kind
of
consider
is
something
that
we
want
people
to
play
on,
whether
it's
overhead,
whether
it's
packet
rate
optimization.
N
You
know
this
is
still
an
early
draft,
so
this
is
fine
where
it
is
right
now,
but
as
this
moves
forward,
I
think
kind
of
lining
up
as
a
working
group.
What
do
we
think
that
you
know
they
should
be
trying
to
solve?
I
think
would
help
us
all
kind
of
get
to
a
shared
picture
of
where
we
want
to
end
up.
A
Okay, we're kind of over time, but this is an area with a lot of interest, so I figured we would let it go over. Thank you all. I guess discussion should probably go to the AVT list or the moq list, or both; if you're interested, you should probably be subscribed to both. But hopefully we can try to converge on where we're having the discussion.
Q
Okay, so I have put up a new draft for Cryptex that makes two clarifications. One was the authentication after encryption, and the other is what the meaning of the cryptex attribute is when it is present with BUNDLE. It does not change anything in the draft; it's just clarifications that were needed, because it was not clear before. Regarding the implementation status, we still have only two implementations currently; one is done by you, Jonathan.
H
Yeah, hi. No, this is the two payload formats; the EVC one will be a quick thing. Next, please.
H
Thank
you.
So
we
had
a
working
group
last
call
a
few
weeks
ago,
a
few
months
ago.
Actually-
and
we
got
comments
from
from
this
guy
from
henry,
and
they
were
all
in
all
the
the
the
the
activity
on
the
mailing
list
in
response
to
this
working
group
class
called
was
quite
active.
H
We
had
more
than
30
emails
there
and
the
comments
were
also
quite
extensive
and
but
over
the
time
I
think
we
got
to
a
good
understanding
of
what
the
commenters
were
asking,
and
I
think
we
addressed
most
of
them
in
this
version
number
12,
which
was
posted
just
before
the
deadline,
and
we
have
remembered
the
revision
number
13
in
the
in
the
works
that
is
currently
reviewed
by
the
authors,
where
miska
actually
put
in
some
additional
text
related
to
the
remaining
issue
somewhere
deep
down
in
the
alpha
answer-
and
that's
I
mean
I
I
I
can
promise
it
for
pretty
quickly.
H
What's
a
little
bit
concerning
to
me
is
that
our
usual
suspects
here
on
this
in
this
work
group
didn't
comment
at
all
yeah
except
belmont,
and
but
I
would
guess
that
that's
perhaps
because
all
these
payload
formats
for
for
null
unit
based
codecs
are
so
similar
nowadays
that
that
perhaps
most
of
the
bugs
have
been
ironed
out
much
earlier,
but
I
really
it
it
would
be
good
if
people
could
take
one
more
look
at
it
at
some
point
now.
H
I
I
think
the
real
test
will
come
when
the
thing
goes
out
to
publication
request
and
all
these
security
guys
come
out
of
the
woods
and
and
talk
about
the
newest
trend
that
they
want
to
see
in
the
security
sections
and
similar,
maybe
the
congestion
control
guys
I
don't
know
so
yeah
and
well.
So
what
we
will
be
doing
is
we'll
publish
one
more
revision
of
the
bbc
draft
pretty
soon
I
gave
you
a
target
date
mid
of
the
month,
but
we
will
do
that
earlier.
H
I
think,
probably
as
soon
as
later
today,
and
then
we
suggest
that
maybe
another
working
group
last
call
is
adequate.
Since
the
text,
changes
are
not
insubstantial,
although
they
are
deep
down
in
the
details
where,
where
the
bbc
codec
maps
to
the
offer
answer
mostly
and
then
I
think
we're
ready
for
after
that,
working
plus
call
for
publication
request
next
slide.
Please.
H
Thanks, Bernard. Now, this was an issue, more of a generic question, that was triggered by a comment from Bernard, where he said there are no complete examples of the SDP offer/answer negotiation in the draft. That was mostly in the context of the scalability-related, rather complex offer/answer scenarios that this draft would allow, and it raises a little bit the philosophical question of what good examples do.
H
One thing that I've seen rather consistently in implementations by people who are not, say, regular participants of this group is that they take code snippets, or at least the examples, verbatim from the drafts, hack around them a little bit for the special needs they may have, and think that they have something that ought to work with other people's implementations; and typically, very often, it does not.
H
I
think
that
people
simply
take
the
examples,
as
shortcuts
of
for
you
know,
for
for
the
way
how
things
are
to
be
done
and
don't
really
get
a
full
understanding
of.
What's
going
on,
we
had
this
discussion
before
a
few
years
ago
or
decades
ago,
or
so,
and
in
for
the
hvc
payload
there
we
later
on,
decided
not
to
go
for
full
examples,
and
I
just
want
to
kind
of
reconfirm
that
that's
the
right
thing
to
do.
H
I
mean
we,
we
can
add
more
examples
if
we
absolutely
need
to.
But
for
the
reasons
I
stated
we
are,
it's
not
laziness.
It's
really
it's
it's
more.
H
You
know,
I,
I
think
examples
teach
you
way
too
much
from
from
the
the
complexity
of
the
problems
that
are
hidden
in
those
in
those
of
our
answer.
Cap
exchange
options
that
we
have
in
those
drafts.
So
what
I
asked
this
question
on
the
mailing
list,
there
was
no
response.
B
Yeah, thanks, Stefan. I guess what I was responding to, in part, is some of the issues we've seen with implementations and figuring out what they do. There's a pretty widespread problem now with people inventing their own, very different implementations of HEVC, for example.
B
I understand your point, but translating these things into actual implementations has just been problematic, because people pick and choose things, and you never know what they're going to do until you actually look at it.
H
I completely agree. I think the disagreement — if there is any, I don't know — is over what the cure to this problem is, right? I think the cure is not to put oversimplified examples in a draft that will be copy-pasted — examples that may be known to be correct, but may not be known to be applicable to the particular problem that people have — and that people then run with.
H
Okay, let me put this as the proposal — or let me put it not as a question but as a statement: unless people speak up pretty darn soon (right now) that they want to see more detailed examples, we are not going to add more examples, and we will be—
L
Yeah, I don't think you need examples to spoon-feed people; I don't think that's necessary. But in your view, do you think the current examples give the high-order bit of what the most important parts of the draft are trying to convey in their SDP? I think that's the useful part, because there's usually a lot of text, and in drafts that have many parameters, and many rules on all the parameters, it's hard to follow.
L
H
So, okay — I think we're good enough. If you take this, plus a little bit of, say, common understanding of how offer/answers are supposed to work, we are good for the relatively simple use cases — say, no scalability. Once complex stuff comes into the picture—
H
People have to dig in; people have to understand. Because if you just try to recreate a scalability model without understanding the VVC scalability mechanisms — which are not that easy to grok — and that five percent of additional complexity that's being added by the payload format: if you don't have a good picture of that and just naively start implementing, then you will arrive at something that just doesn't work. But for the relatively simple use cases — say, no scalability, or temporal scalability only—
L
H
Good, thanks. I think that's all I had on the VVC thing, so that's on track. Next slide, please. So, jumping over to EVC — that's a really short one. Next slide.
H
We haven't revved the one working group draft. This tradition of keep-alive drafts — I consider it, you know, just not a good thing. So unless you chairs tell me that we should publish keep-alive drafts for bookkeeping purposes or something like that, we're not going to do it. However, the draft is still on our internal agenda, and we are committed to dealing with it.
H
However, the way we want to do that — and that was, I think, agreed last summer — is that we'll deal with it once we have learned the lessons, especially the lessons I'm expecting on the security side during the IETF last call. So, in other words, what we suggest is: we will produce version two after we get the VVC draft through last call, and then we'll be ready very quickly for working group last call of the EVC draft.
H
The alternative would be keep-alive drafts, if you choose to do that. — That sounds reasonable to me, I think.
A
H
No, no — if there were specific security issues that I thought would come up, we would have proactively addressed them. This is just from doing payload formats for the last 20 years:
H
Something always comes up from that crowd. I mean, I would be very pleasantly surprised if this time nothing came up, but something always does, and there's always a need to add a sentence or two to the security considerations section. And frankly, in the past, the same was true for congestion control.
H
I
C
A
I see Mike Faller in the queue. Unmute your mic — and which organization are you representing?
S
...with General Dynamics. I was going to have Mike do our presentation for us. — Okay, perfect. — And I'll certainly be here for questions as we go along. — Sounds good.
M
Hello, can you hear me? — Yes. — Okay. I've been listening to the conversations of the others, and I'll augment the presentation based on some of what I've heard. But as far as I know, the organization we are supporting is new, or somewhat new, to the IETF.
M
I was around for the birth of that, as a combined Department of Defense and vendor working group; the working group was called the Interoperability Control Working Group. While the SCIP protocol began in 1994, there were predecessors to it. We go back to the PSTN days of early communications that the DoD would do over PSTNs, then into ISDN, and then migrated into IP — all handled within a community of interest,
M
if you want to call it that. The community was mostly confined to the U.S. for a large part of this, but the goal of this group was to develop the next-generation interoperable security protocol supporting U.S. government and military interests — later, around 2001, expanded to include NATO and NATO partners — and the name changed to the SCIP working group.
M
The SCIP working group is a functional standards-making and test community that meets once or twice a year, depending upon need. We have separate action items to do; we have demonstrations to provide. We are a functioning body that's been around for a while. Next slide.
M
The SCIP standards are currently available to participating government and military communities, and to select OEMs of network equipment and call-management servers that support SCIP. Government and business entities must request access to relevant information, and access to SCIP standards is based upon a need to know. Now, devices that implement SCIP standards transparently operate over digital carriers; SCIP is an application-layer protocol — it doesn't function down in the network layer. Most commercial network and security community personnel are not aware of SCIP, and this can result in the SCIP
M
media subtype, scip, being removed from the SDP of a standard SIP message. So the lack of awareness among the network and security community has become a larger issue as the use of SCIP grows over more commercial carriers, and as network security devices become more restrictive of unknown media. As a side story to that: my first exposure to network security was way back in the early Cisco days, building a firewall based on IP addresses.
M
We have come across issues where the SDP doesn't contain the transport that we need. So the draft RFC submitted to the IETF is designed to provide information to network equipment OEMs, network administrators, and security personnel, and to help SCIP succeed over commercial networks — because SCIP relies on commercial equipment within the network infrastructure to operate. And as security policies have changed, devices have changed from being ones where you go in and decide what you're not going to let into your network.
M
So the SCIP RFC enables network equipment manufacturers to provide an equipment configuration that supports SCIP as a media subtype, so that network administrators and network security personnel can define and implement a compatible network policy which permits the SCIP media subtype to traverse the network.
M
While it is a gateway protocol, it is meant to take IP traffic onto an ISDN BRI network — a 64k UDI channel — and build an end-to-end bit-integrity path for the users to run whatever they want over it. RFC 4040 provides such a clear channel; it just happens to traverse from IP to ISDN 64K UDI. SCIP is to be treated like that on a pure IP network.
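For context, the clear-channel arrangement mentioned here is negotiated in SDP roughly as follows — a minimal sketch in the style of RFC 4040's clearmode registration; the port and payload type number are arbitrary examples:

```sdp
m=audio 12345 RTP/AVP 97
a=rtpmap:97 CLEARMODE/8000
```

Here payload type 97 carries 64 kbit/s data transparently, with no codec-specific interpretation by intermediaries — the "don't mess with it" behavior the speaker describes.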
M
This RFC is needed to provide additional information for those media subtypes: we include the media type description, the payload format, RTP header fields, payload format parameters, and the SDP declaration — the mapping to SDP — and we provide mapping examples, which you'll see on the next slide, I believe. Next slide.
M
This is the content of the SDP declaration to support a SCIP session. I kind of like Mo's comment about high-order bits.
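A sketch of what such an SDP declaration could look like, based on the audio/scip and video/scip media subtypes discussed for the draft (ports, payload type numbers, and clock rates shown are illustrative assumptions, not quoted from the slide):

```sdp
m=audio 50000 RTP/AVP 96
a=rtpmap:96 scip/8000
m=video 50002 RTP/AVP 97
a=rtpmap:97 scip/90000
```

The point of the draft is that middleboxes seeing these lines should pass the scip payload through untouched rather than strip the unknown subtype from the offer.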
M
That unawareness is partially tied to the fact that the community this serves is one that, as an already-standing community, already has an extensive set of standards within which to operate. Operating in that environment produces the situation that, while we're defining the end products, we don't have any way to let the people in the middle know what we do. So the draft RFC increases IETF awareness of the SCIP working group and its efforts to achieve international interoperability.
M
So we have to look at our problem as not just being the end products having interoperability. Everyone else who carries the traffic — who is responsible for defining what's on the network, and responsible for building equipment that implements security plans — needs the knowledge of how to do that. It even goes down as far as the procurement people.
M
If I, as a network designer, want to support this protocol on my network, how do I reference it to a procurement rep so that they can then write a request for quote to vendors saying, "this is what your product needs to support"? So, as in the last bullet: the RFC provides the information about SCIP and the SCIP working group community — for system and network architects, network administrators, security personnel, OEMs, risk analysts, and procurement personnel — necessary for SCIP to be included in the system security life cycle.
M
So that's pretty much it, in a nutshell. I can respond to questions.
M
A
M
It's basically building a channel by which the end devices will negotiate what they're going to do: they're going to negotiate applications; they're going to negotiate security parameters. So its format itself is not as clearly defined as if you were carrying movie traffic, or G.729, or something like that.
A
M
B
M
Here's — we have to express some ignorance of how the IETF works. For our purposes, we look at it this way: the adoption of this draft RFC is not the end of the game. We have much more work to do, because an RFC doesn't mean anyone's going to implement it.
M
I would ask: where would something like this go? Because, in the end, it is the standard that's needed for end products to communicate in the way they were designed — the recognition that the far-end product is a compatible one, and then, working over a clear channel, establishing what each of the users is capable of doing and wants to do.
B
R
I was just reading over the charter, because it's been a while since I've read it. As long as this is covered in it — I haven't finished reading it yet — and this working group doesn't have any objections, or rather the working group supports the idea of bringing it in, since it's informational...
B
M
Well, that can be a discussion we can have offline, as to the benefits of one versus the other — because that's also something that I don't know personally.
M
That is true, in a sense. They are participants in how network equipment is created, through the RFCs that they have endorsed, or the information that they have at their disposal.
B
K
M
Yes, it is. We've been building products to this since about — I want to say 2000... 2004, 2003.
S
Good. The question goes back to the beginning — the earlier slides for this. There was, as I mentioned, the TSVCIS specification; it's kind of in the same ballpark, if you will — similar, but not the same as what we're doing. I was kind of curious how the approval for that was processed through the group.
A
S
...of the presentation, yeah.
S
RFCs that were approved — I guess, since the last meeting or whatever.
A
T
I would actually suggest talking to the media-type reviewers about that — I know Murray is one now, but he wasn't one at that time — so it might be something where you need to ask them. But I actually got in the queue to ask a different question, which is actually on this slide, if I can point to it. From my understanding of your description, in addition to the two media types that you have in there, you do have data streams, which you describe in this slide as clear-channel data.
T
M
No — the reference in the last bullet on that slide is that the data present in the audio or video SCIP payload type will be treated as clear-channel data: don't mess with it.
T
I think you may want to take another pass through the draft, and through your description of that, because that's not the impression I got from a quick look at the draft. — Okay, thanks.
A
H
So, having just gone through this JPEG XS thing — where, frankly, a perfectly available ISO standard that just costs a little bit of money raised a big, big uproar —
H
I think you will have a hard time getting anything through the standard IETF process where there needs to be a reference to a document that's not available to the public. So my suggestion would be to target, from the outset, the Independent Stream, or whatever it's called — basically, not an IETF document, but still an RFC, and therefore something most network people pay some attention to.
H
I
Hi. We seem to be going a little bit backwards in terms of the way the IETF handles these sorts of documents. I was very surprised to see all the controversy about the spec that Stefan was just talking about, because we've done lots of payload formats which have had various degrees of difficulty in accessing the specifications over the years. So, process-wise —
I
So I would encourage the group to adopt this and take it on as a payload format specification in the usual way, and to work with the chairs, and with the area directors, to find a way of making sure this is acceptable. That may just mean that a small number of people are given access to check that the specification makes sense — so that it's accessible to the community which cares, and I think that's what matters.
M
May I speak for a minute? It is possible to request permission to look at this — it's SCIP standard 214.2 that's referenced in the draft RFC — and really, most of the contents of that document of any technical significance are in the draft RFC itself.
I
mean,
I
think
it
probably
does
need
someone
who
understands
rtp
from
this
from
the
ietf
to
look
at
look
at
the
the
the
kodak
specification
and
look
at
what's
being
proposed
and
say
yeah.
This
makes
sense,
but
I
don't
see
that
it
requires
it
to
be
made
available
to
the
entire
community.
I
It requires it to be available to the community who will be implementing it, and someone to look at the spec and say, "yeah, what's being proposed from the RTP point of view makes sense" — and I'm pretty sure we've done this sort of thing before in the IETF.
T
R
Yeah, just on that point: the IESG recently has — if you look, BCP 97 is under revision for exactly this reason. We had a number of documents come to us that we were not able to evaluate, because the specification to which they referred was not available to us. I think the proposal is exactly what Colin just said: as long as the reviewers — the people who need to review it and approve it — can get access to it, to make sure the wrapping specification is right.
R
M
The reviewers will just have to contact — there's a contact within the draft RFC, at the bottom — requesting access. If we know in advance who they are, we can make sure that they get approved. — Yeah, I think that's probably workable.
C
M
Sorry — my question is along the lines of the action items. I guess you're suggesting that we make the background document, SCIP 214.2, available to certain people.
B
So we will try to get back to you with feedback. Okay — I mean, I can think of some folks who might be appropriate, but we'll have to discuss it.
M
And we'll revisit the draft RFC to make it clear that the last bullet on this slide — "treated as clear-channel data" — is concise and exact enough to specify that it references just the data traveling within the audio/scip or video/scip payload.
U
So first, you might already be aware that the SFrame working group issued a call for adoption of the Omara SFrame draft, which mostly talks about the format itself and has some architecture points as well.
U
It was a successful call, with some feedback that maybe the draft should be split into a pure format-based spec and a separate architecture draft.
U
Also, it was noted that the SFrame format itself has some interest outside RTP. So, for instance, you could use WebTransport, or the WebRTC data channel with WebCodecs, and still use SFrame in between to do end-to-end encryption. That means we want integration with RTP, but we also want SFrame to be usable outside of RTP.
U
In addition to that, on the W3C side — at the API level — the WebRTC working group adopted the WebRTC Encoded Transform as a First Public Working Draft. The functionality has already shipped in Chrome for maybe a year; it's also enabled by default in Safari Technology Preview, and it might be in the queue for other browsers as well. So progress is being made on support allowing web pages to use SFrame, or to implement end-to-end encryption, in browsers — and the WebRTC Encoded Transform is definitely used for end-to-end encryption.
U
It has also been used for emulating RED when that's not available in the browser. So we're seeing that solutions in browsers are using it more and more, and existing WebRTC solutions — Google Duo, FaceTime, Webex — are all progressively adding support for end-to-end encryption, and they're all doing it with different SFrame-like formats and flavors.
U
So there's no interoperability. It's not the same kind of technology in each — it's closely related, but not exactly the same — and some workarounds are being used. That's not great, because this is a security and privacy technology, and usually having just one well-identified piece of technology to do the job is better. So, next slide.
U
Based on that, my understanding is that we really need to make progress. I think we already had a similar slide one year ago, saying: hey, things are starting to evolve, end-to-end encryption is being shipped, and we need to make progress — and it's even more the case now. It's particularly the case for SFrame within the WebRTC ecosystem; for WebTransport and the data channel we still have some time, but for the WebRTC ecosystem...
U
...it's really moving fast. So that means SFrame RTP integration and SFrame SDP integration. My understanding in the past was that AVTCORE, for instance, would be responsible for doing the SFrame RTP integration, but we're seeing email threads on the SFrame working group mailing list about that. So, the first question I have for the AVTCORE working group — no, previous slide —
My first question would be to be clear about where that work should go, or where it should be worked on.
U
Could it all be done in the SFrame working group — the SDP integration as well — or would it be better to do that in AVTCORE? Currently I see discussions happening in parallel, and maybe it would be good to be clear about exactly where this is going to be worked on, and in which working group.
A
U
A
I
Yeah, just on that point: obviously MMUSIC is the home for SDP. The payload formats are usually defined in AVTCORE, and those specify a lot of the SDP parameters, so it maybe needs review here as well.
U
Okay, so that's great — we're in AVTCORE and we're discussing it, so let's try to make progress. At the last meeting we discussed the packet-versus-frame issue a lot, and I think there are common requirements in both cases. So if we put the packet-versus-frame issue aside, we can try to focus on the common requirements, and that's what we try to do in the next few slides. So, yeah — the SFrame working group:
U
There were good discussions on the SFrame working group mailing list about various approaches. Basically, I think we know that middleboxes — middleboxes being SFUs, network intermediaries, or browsers — cannot inspect encrypted or transformed packet payloads, but they still need information — especially SFUs — to route and, potentially, transform packets separately. That's especially the case for SVC and simulcast, but it's already the case, for instance, in non-simulcast scenarios.
U
So one question from the mailing list was whether it should be possible, just from RTP inspection, to determine that content is encrypted or transformed — and if so, where should that information be located? Should it be just a payload type? Should it be a dedicated header extension?
U
I'm a little confused about what you mean by this question, because my understanding is that the payload type defines what is within the RTP payload. So I'm not sure what you mean by the separate points there.
U
So currently, in deployed environments, the payload type defines the codec — let's say H.264, VP8, VP9 — and we need that information at some point anyway; that's how it's being deployed. But we could also say that that's wrong, and that the payload type should say: this is encrypted content, or this is content that you do not need to care about — it's opaque. That's one possibility.
Q
Yeah — I mean, the only thing is that it was right there in some discussion on the mailing list: maybe signaling in the SDP that the content is encrypted. Even if it is — and we don't change the packetization of the codec, just wrap the payload of the RTP packet — it could go on the wire within the same payload type. But, for example—
Q
R
Q
It could be done in a different way. So yes, we were seeking guidance about exactly whether this could be an option — or whether, if we see, for example, a browser sending an RTP packet with the payload type negotiated for VP8, then the payload must be VP8, and an SFU or whatever can expect to find the correct payload in there.
I
So I think there are some separate issues here. One is how you signal it, and one is what the payload format is. The question on the slide — whether this is located in the payload type or within the RTP payload — I mean, they're the same thing. So are you asking about the way the data is included in the packet, or the way the contents of the packets are signaled?
U
So, yeah — I was trying to cover the fact that on the SFrame working group mailing list there was a proposal. Usually you would say: I want to negotiate VP8, because that's really what I want, and I also want to use SFrame. So then you need to provide that information.
U
The fact that you're using VP8, and the fact that you're using SFrame on top: this information — VP8 plus SFrame — could also go in the packet, or it could be left elsewhere, and so on. That's what I would like to hear about.
U
Because we think that getting both pieces of information — yes, it's VP8; and yes, it's encrypted, it's transformed — just by looking at RTP packets would be useful.
I
B
C
B
I think there are a couple of distinct issues, as Colin said. One is the negotiation of specific RTP header extensions — like the frame-marking header extension we've had, or the dependency descriptor. That's routing information being negotiated.
B
R
B
...to be encrypted, but it doesn't really tell you that explicitly. And the thing is that you could negotiate those RTP header extensions without encrypting the payload, right? Then there's the payload type — the negotiation of things like the codec — but again, that currently doesn't tell you whether the payload itself is encrypted.
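To illustrate the gap being described, a typical offer negotiates the codec and the header extension on separate lines, and neither says anything about whether the payload bytes are end-to-end encrypted. A sketch (the extension ID and payload type number are arbitrary; the URI shown is the one commonly used for the AV1 dependency descriptor in WebRTC offers):

```sdp
m=video 9 UDP/TLS/RTP/SAVPF 96
a=rtpmap:96 VP8/90000
a=extmap:4 https://aomediacodec.github.io/av1-rtp-spec/#dependency-descriptor-rtp-header-extension
```

Both lines could appear unchanged whether or not SFrame is applied to the payload — which is the "something missing" the discussion turns to next.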
B
So it seems to me like there's something missing there. And I know there have been discussions on the SFrame mailing list about whether you really even want to tell the SFU that you're encrypting, or whether it's none of the SFU's business.
B
So I know there's that issue, because really the encrypted payload is an end-to-end thing, and SDP really isn't about that — it's hop by hop. But I think it's a little bit dangerous to assume that, because you've negotiated a particular header extension, everything will work end to end with an encrypted payload. Because, as we just talked about with the B2BUAs, there could be somebody in the middle who's looking at this, and if the SDP doesn't give it any inkling that the payload is encrypted—
B
—it could become a problem at some point. So I think there is a piece of the information that is in none of the things you described here: it's not a payload type, it's not a header extension; it's probably something else.
A
Oh yes — I mean, it seems to me that, conceptually, insofar as everything in the RTP session will be SFrame — which I think is what people probably want, because I think having it mixed is a security nightmare — what this actually is, is a profile, much like SAVP is a profile.
A
This is a new transformation, like SAVPF. Whether you actually negotiate it that way, for backward-compatibility reasons, is a separate issue, but I feel like that might be, at least conceptually, the cleanest way to approach it — and if we have to hack something to get it to work with browsers or whatever, so be it.
A
But the problem with having it be a payload type is, (a) the payload-type multiplication issues, and also — I don't know — whether you want to be able to negotiate both.
P
They have to be combined together in any business logic that you apply on a per-packet basis, and SDP is what sets up some kind of stream-level indication. Speaking here just in terms of the architecture, it's a bit of not thinking about what happens at setup time. I did see some of the discussions happening on the SFrame mailing list about saying, at the media-stream level or for a particular payload type, an applicability: that if this is SFrame, or SPacket, then the SDP offer/answer can define explicitly what the behavior should be.
P
I think it's a hint to the SFU. It's saying: don't look into the packet further, because it's end-to-end encrypted — you won't get anything — versus, if it is not end-to-end encrypted, you can look into the packet to see, for example, the audio level or something like that. Hence we need to look at the picture of both the signaling and the media to make a decision, not just at RTP.
L
Yeah, I think conceptually I agree with Jonathan: the first thing I thought of, conceptually, was also profile-level things like SAVP. If I were an originalist, that made it the right precedent model to use. However, I'm not an originalist, and I don't believe that what was decided long ago should dictate everything today — and I would argue that SAVP has caused a lot more damage than good; there have been a lot of problems with things like that for security.
L
I would caution against doing what logically makes sense — making a new profile out of this. I think that's probably a mistake that rehashes a lot of the mistakes those older profiles caused. What makes more logical sense to me is that this is just an encapsulation, like RED or FEC or anything else — it's an encapsulation of some other payload.
L
So I think it makes more logical sense to declare all the real payloads, then also declare the encapsulated types, and then what you actually send on the wire is the encapsulated type. It would be a different payload type number, and it would just be an encapsulation of some other payload type that is also in the SDP but won't appear on the wire, because you're not actually sending that format on the wire.
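The declare-both-and-send-the-wrapper scheme described here could be sketched in SDP along the following lines. This is purely illustrative: the "sframe" subtype does not exist as a registered media type, and the apt-style parameter is borrowed from the RTX format (RFC 4588) as one plausible way to point the wrapper at the wrapped payload type; RED (RFC 2198) uses a similar pattern with its fmtp list:

```sdp
m=video 9 UDP/TLS/RTP/SAVPF 98 96
a=rtpmap:96 VP8/90000
a=rtpmap:98 sframe/90000
a=fmtp:98 apt=96
```

Payload type 96 (VP8) is declared but never appears on the wire; only the encapsulating type 98 is sent, carrying encrypted VP8 inside.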
I
It strikes me as a wrapper format — much more like RED, or the retransmission formats. I think what you are specifying here is a wrapper format which says that the content is, for example, VP8, but encrypted. So I don't think this is a codec-agnostic payload format; I think this is a codec-specific format which wraps a particular set of content.
I
So what you're signaling is that it's this wrapper format, and what is being wrapped is such-and-such. The existing payload format gets put into the wrapper, perhaps with an additional header added in front of it, and it's signaled as a payload type wrapping some existing payload type.
A
Just one response — I mean, I think that's probably a fine solution. I think the SDP part is going to be a little tricky, because you may want to say, "I'm willing to receive wrapped VP8; I do not want to receive unwrapped VP8" — but I think that might be more an MMUSIC consideration than an AVTCORE consideration per se.
U
Yeah, I think it's a good issue to tackle as well, and to keep track of.
I
U
Okay, we don't have a lot of time, but maybe — yeah, we could say this is my last slide. No, go up one — the previous slide.
U
Yeah. So I think, from what I hear, the consensus is that we need an ability to identify whether content is encrypted or transformed, and also the underlying codec type. I'm not sure whether we have consensus on a solution that is independent of the exact transformation format. It seems to me that that would be good to have, because what we're seeing deployed is various SFrame or transform variants, and so a wrapper format tied to one of them would not...
U
...cover every transform. And yes — is the scope directly related to audio and video codecs only? Is there a known set of codecs that we want to support, or is it a fully generic solution? In any case, the target is only audio and video content.
P
Going back to your first slide — having different systems implementing SFrame versus SPacket in different ways, like Google, FaceTime versus Webex and those kinds of things: if you go with an alternative payload format, or some kind of wrapping payload format, how would that level of interoperability be achieved between two different implementations of SFrame, or whatever SPacket is?
P
U
It's an open question — I don't think there's consensus on anything there yet. What is known now is that it's either at the packet level or at the frame level; we're not interested in other cases so much as—
R
That, since—
I
In the chat, about wanting to apply SFrame packetization versus VP8 packetization — this perhaps comes to the heart of what the issue is. My understanding was that what you wanted to do was take VP8 content — packetized the same way it usually is — encrypt it, and then put it into a new payload format, which seems to be the model I was talking about; and I thought Mo and Justin were talking about doing something different.
I
A
Yeah — I'm sorry, this discussion will be cut a little short by other things having run over time. I feel like this needs more discussion, and we should try to figure out the best place to have it. We tried to do an interim on this before and didn't really get the right take-up, but if we can get the right people in a room, an interim — or perhaps a more informal side meeting — might be helpful, just so we can get convergence and make sure everybody's talking about the same things.
A
The only other thing I wanted to raise: Spencer mentioned the RTP-over-QUIC discussions earlier. We think that having the specifically RTP-over-QUIC discussions on the AVT list would be best, whereas the more general media-over-QUIC goes on the MOQ list — just to keep things somewhat separable. That's probably the best organization for that.
A
And with that — thank you all for coming, and we'll see you at other sessions here, or next time, possibly in person.
A
And we'll arrange for an interim on the SFrame matters — I believe we can get some convergence on that. We'll decide on the list.