From YouTube: IETF110-RTCWEB-20210312-1200
Description: RTCWEB meeting session at IETF 110, 2021/03/12 12:00
https://datatracker.ietf.org/meeting/110/proceedings/
A: Just... if you're not interacting with the group at that particular moment... If you want to turn your video on when you're talking, that's certainly fine, but they're trying to make sure that in the larger groups, and some of them have hundreds of people, we're not sending hundreds of video streams around.

A: Now, since we are at the top of the hour, we will go ahead and start the usual pleading for a note taker, and I warn you, I think we have allocated the first half hour of the meeting to that.

A: So if there is anybody who wants to kick us off by saying "no, no, I've broken my hand" or otherwise giving a really excellent excuse, that'd be great. Even better, of course, would be somebody saying "yes, please, I'll do it."

A: Eric Rescorla has agreed to take notes on this meeting, so that's very kind of him. As I remind everybody, the only kind of notes we're taking for this are very simple: what did we decide? So there's definitely no need for "he said, she said"; we have recordings for that. And we do appreciate your taking this.
A: Okay, so this is RTCWEB at IETF 110. We are not in Prague, but since we're virtually in Prague, I think this is an echo back to our original BoF. As you folks will remember, the very first BoF for RTCWEB was in Prague, and here we are again, a resurrected working group, kind of not in Prague, kind of in Prague. As well, like then, we have a Note Well. This Note Well reminds you of the IETF policies which are in effect, especially on patent disclosure and codes of conduct.

A: If you have not already read them, please do; please feel free to consult a lawyer on exactly what they mean. If you have any concerns, you're also welcome to reach out as described here. Next one.

A: Your chairs are currently Ted Hardie and Sean Turner. We feel a little bit lonely, though; we always used to have three chairs, and only dwindled to two at the very end of RTCWEB. Hopefully we will not need to run this group for very long to resolve the question before us, but if you feel like adding yourself to the chair list, please reach out to an AD and see if the occasion strikes you. Next slide... hold on just a second.
A: I'm running a meeting right now. I'm running a meeting right now, bye. Hello; for those of you who just heard me say "running a meeting right now": that's it! So our goal, and the only goal, is resolving the contradictions regarding bundle-only between BUNDLE and JSEP. That is: what does "m=" mean?

A: "m=" maybe we'll see you in Prague. "m=" maybe we can work this out in a single meeting and relax again. Last slide, please.

A: We already have the virtual meeting tips, the Note Well, and the notetaker established. For Jabber scribe, your chairs will just do that, if there's anything that you want put into the...
B: ...he suggested that we would look at a slipstreaming attack if we have time at the end, so I figured... he sent slides.
A: If we have time at the end... I have taken a quick look at the slides. I don't think that they're entirely specific to RTCWEB, so you can certainly look at them, but I doubt that there will be an RTCWEB-specific action item here, although obviously sharing them out to the community is useful.
C: Okay, so I'm here to talk about how we align JSEP and BUNDLE handling, specifically for what we put in SDP offers and answers. The issue is that, unfortunately, late in the document process we realized that JSEP and BUNDLE specified contradictory ways of generating SDP offers and answers when we went to bundle, and there wasn't any easy way, late in the document stage, to reconcile these diverging approaches.

C: We also realized that the most common implementation of WebRTC, which is used in Chrome and elsewhere, actually has a behavior that differs from both of these specifications, and while its behavior is non-standard, it stands to reason that changing that behavior to match the specification may cause its own set of problems. So we figured we want to take a look at what the correct behavior should be at the same time that we're looking at how these two specifications diverge. Next slide.
C: All right, so a little recap for those of you for whom it's 4 a.m.; this may be kind of a jumble, but this is just recapping the notion of bundle policy, which is a thing that can be specified for JSEP to say: here's how you actually try to bundle in the upcoming offer. These policies control exactly how aggressive the application is about attempting it, you know, what happens if the other side can't do bundle. And these policies control, at the lowest level, the ICE gathering behavior, which is then the thing that has the most direct consequence on what happens when the remote endpoint doesn't support bundle.

C: When the default policy of "balanced" is used, what ends up happening is that we always gather a unique set of candidates for the first m= section of each media type. That means, if you have an audio section and two video sections, the implementation will gather ICE candidates for that audio section and it'll gather candidates for that first video section. But it's going to assume that the second video section is going to be bundled over that first one, over the bundle transport, and it won't gather candidates for that second video section.
C: What that means, then, is that if you're talking to an endpoint that doesn't support bundle, the expectation is that the second video section will not work. And the overall goal here is to kind of trade off the ability to interoperate with non-bundle endpoints against how many candidates actually need to be gathered, with the expectation that gathering more candidates for more m= lines is just going to take longer in call setup. So "balanced" gathers candidates for the first m= section of each media type.

C: "Max-bundle", the most aggressive policy, only gathers candidates for the first m= section of the entire SDP, and "max-compat", designed to be most compatible, gathers candidates for all m= sections. And m= sections that don't have candidates, for obvious reasons, can't be used with non-bundle endpoints, because it's presumed that those m= sections are actually going to ride over the bundled m= section, which will probably be the first one in the SDP.
C: These m= lines that do not have candidates are intended to be marked with the zero port in those initial offers, to ensure that the endpoint on the other side, when it doesn't understand the bundle grouping, can simply see that port equals zero and reject those m= sections, with the expectation that that's an okay consequence, because you'll still, in the balanced case, have that basic audio and video.

C: It turns out that sometimes this works with non-bundle endpoints. They can see that, even though they... oh, I'll get to that in a second. But the fact that they don't have candidates is not always an issue, because the remote side may generate their own candidates, and that may actually just turn out to work because of peer-reflexive candidates. But anyway, this is the intent of the bundle policies. Next slide.
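The gathering rules just described can be condensed into a small sketch. This is an illustrative helper of my own (the function name and shape are assumptions, not anything from JSEP or libwebrtc):

```javascript
// Sketch: given the media types of the m= sections in an initial offer and a
// bundlePolicy, return which sections gather their own ICE candidates.
// Sections returning false are expected to ride over the bundle transport.
function gathersOwnCandidates(mediaTypes, policy) {
  const seen = new Set();
  return mediaTypes.map((type, index) => {
    if (policy === 'max-compat') return true;        // every m= section gathers
    if (policy === 'max-bundle') return index === 0; // only the first m= section
    // 'balanced' (the default): first m= section of each media type
    if (seen.has(type)) return false;
    seen.add(type);
    return true;
  });
}

// One audio and two video sections: balanced gathers for the audio section
// and the first video section only.
console.log(gathersOwnCandidates(['audio', 'video', 'video'], 'balanced'));
// [ true, true, false ]
```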
C: So here is just a representation, in terms of SDP, of what actually happens depending on the bundle policy. You can see in the balanced offer we gather what end up being unique ports, and the ICE candidates. I haven't included all the ICE candidates, just to fit this all on the slide, but both the audio and the video sections have ICE candidates and unique ports in the balanced offer. In the max-bundle offer, only the audio section has a unique port and the other two are set to zero. And then, finally, in the max-compat offer, candidates are gathered, there are separate credentials for each m= section, and each one ends up with a non-zero port.

C: So this is just kind of setting the foundation here, explaining how these policies differ and in which cases you might end up with zero ports: for balanced and max-bundle.
C: Okay, now getting into the actual differences here. Port zero identifies these m= sections that require bundling. When I say "require": even though the implementation is going to try to bundle all the m= sections, the ones that can't be negotiated unless you negotiate bundle are the ones that are marked with the zero port and no ICE credentials.

C: On the zero-port behavior, the key difference between JSEP and BUNDLE is that JSEP says you mark with zero ports the things that are going to require bundle to be negotiated, but BUNDLE says that you should also mark these things with zero ports in answers and in re-offers.
C: Okay, so in a simple case we have one audio stream and one video stream, and we're using the default policy of balanced.

C: Both offers have ICE credentials and both offers have unique ports, but the BUNDLE answer has a zero port, even after bundling has been negotiated, whereas the JSEP answer does not. And so the thing that should be noted here is that the m=video with a zero port in the answer ends up being a different behavior, and implementations may respond to it differently than to the m=video with the shared port, which is what most implementations are seeing in answers right now.
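To make the divergence concrete, here is a toy sketch (my own illustration, not an SDP parser and not normative text) that rewrites the non-first bundled m= line of an answer the way each document describes:

```javascript
// Toy illustration of the two answer styles for bundled m= sections.
// mLines: the answer's m= lines, all assumed to be in one bundle group.
// 'jsep' repeats the shared bundle port on every section; 'bundle' uses
// port 0 plus a=bundle-only on every section after the first.
function rewriteAnswer(mLines, style) {
  const sharedPort = mLines[0].split(' ')[1]; // port of the first m= section
  return mLines.map((line, index) => {
    if (index === 0) return line;
    const parts = line.split(' ');
    parts[1] = style === 'bundle' ? '0' : sharedPort;
    const rewritten = parts.join(' ');
    return style === 'bundle' ? rewritten + '\na=bundle-only' : rewritten;
  });
}

const answer = [
  'm=audio 10000 UDP/TLS/RTP/SAVPF 111',
  'm=video 10000 UDP/TLS/RTP/SAVPF 96',
];
console.log(rewriteAnswer(answer, 'bundle')[1]);
// m=video 0 UDP/TLS/RTP/SAVPF 96
// a=bundle-only
```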
C: Next slide. So that was with the max-bundle policy; you get the same result when using the balanced policy, the default. Here you generate a different offer than on the previous slide, but both BUNDLE and JSEP generate the same SDP: they both mark that second m= line with the zero port and include the bundle-only attribute, which is a way of saying "hey, this port is set to zero, but this isn't a disabled m= section."

C: This is basically only expected to be used when you're negotiating the bundle framework, and in both cases the offer generated is identical. But again the answer is different, in that the JSEP answer uses that shared port for the second m= section, whereas the BUNDLE answer uses the zero port in the answer, just like it did in the offer, and marks it with bundle-only. Next slide.
C: Okay, and then finally, here's what JSEP does in that offer versus what libwebrtc actually does, and this kind of gets at the implementation concern here. We are using max-bundle. On the left we have the JSEP offer, which is just like it was on the previous slide: that second m= section is marked with the zero port and bundle-only. But when libwebrtc goes to do this, it'll fill in port nine rather than port zero, and the concern then becomes, okay...

C: How will endpoints that have been receiving nine suddenly react when they start receiving zero? You'll see that libwebrtc does not fill in ICE credentials for that second m= section, just like JSEP; it's just the difference in the port and the lack of a bundle-only attribute. On the answer side, the answers end up being identical, so there's no concern there; it's really just what happens in that initial offer.

C: Note here that libwebrtc has basically offered ICE credentials in the first m= section, just like JSEP specifies, and in the second section there are not meant to be ICE credentials. There are just no candidates, no ICE credentials, and you just get this port value of nine.
D: Sure, I've just got it. Thank you.
E: So, just ignoring the port nine-or-zero question here for a second (I'll come back to that in a bit): the a=bundle-only. It seems like we need that, so that it's valid for the answer to even come back indicating that we're doing bundle, right? Like, I'm sort of concerned about libwebrtc missing the a=bundle-only; that seems like a bug we need to fix. I guess... is that a question? Are you going to propose that we...?
G: Okay, yeah, I guess you can get to this when it comes up, Justin, but I have a very basic question, which is: why should we care at all about any of this? It's easy enough to write a little JavaScript to change these numbers around, so that what's sent is a nine instead of a zero or whatever, right? And people do much more complicated things in WebRTC.
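The kind of workaround being alluded to is ordinary SDP munging. A hypothetical example (my own, with the usual caveats that munging SDP strings is fragile and unsupported) might look like:

```javascript
// Hypothetical SDP munge: before handing an incoming offer to an application
// that chokes on zero ports, rewrite any zero-port m= line to the discard
// port 9. A real munge might also need to strip the a=bundle-only lines.
function mungePortZeroToNine(sdp) {
  // m= lines look like: "m=<media> <port> <proto> <fmts...>"
  return sdp.replace(/^m=(\S+) 0 /gm, 'm=$1 9 ');
}

const offer =
  'm=audio 10000 UDP/TLS/RTP/SAVPF 111\n' +
  'm=video 0 UDP/TLS/RTP/SAVPF 96\n' +
  'a=bundle-only\n';
console.log(mungePortZeroToNine(offer).includes('m=video 9 '));
// true
```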
C: I'll take a little aside here, because I think you get at what really is the fundamental problem, which is: why do we care? Why should we do anything? Why are we here in the middle of the night, for many folks, talking about this? I think ultimately the question comes back to: if we change some of these things, are some applications just going to break all of a sudden, and will that cause people to start complaining?

C: Yeah, exactly. So basically, in order to bring these specs into agreement, we need to change something, and some things that we might change might cause us to have more tomatoes thrown at us, and some might cause less. And so the trick is to figure out which of these things we should change to get to a place where the specs agree, and hopefully the implementation agrees with the specs, and we don't get the tomatoes thrown at us.
D: Yeah, I just want to echo, I think, what Justin said: the really most unsatisfactory outcome for me is to have the specifications be confusing, so that no one else can ever implement this and get it right, and it's all just locked in, like, my head and Justin's head. So, you know, in 2050 I would get defrosted to try to fix some emergency problem.
C: Okay, well, let's move on. There's a lot of technical detail here; we can come back to this once the problem is framed. You too, Colin.
E: One of the things that needs to happen: this was one of the incompatibility points, and I've been telling people for years, like, the WebRTC spec says you're going to be getting these bundle things that you have to deal with, and you'd better not crash when you have a zero port. And so people have been moving towards that and trying to get those things out, but that causes, you know, real harm for hundreds of millions of users per day, right? So, like...

E: I think that there is a need for us to get this to the point where it achieves the interop, the backwards-compatibility problems that we're trying to solve, and try and get those people to sort those out, and get rid of those specs for the interop stuff that doesn't necessarily nail down one of these solutions versus the other or whatever. But I think that that is...

E: I don't underestimate that, right? Like, there's stuff that's hard to change and there's stuff that's easy to change, and we've been telling people for however many years to go do something, and if we're going to yank the rug out from underneath them on that, it's not going to really improve the credibility story of why we should all do WebRTC and make it good.
A: Go ahead. I see Tim in the queue, but I'd like to ask, after Tim, if we could go ahead and finish the slide deck before continuing the discussion.
C: Just before Tim starts, I want to respond a little bit to that. The main thing is: we have one thing that we didn't have before, which is a lot of data that informs how often bundle is being used, how often we have non-bundle endpoints, and I'll get to that in a minute, which will hopefully inform this conversation.
J: We're coming to my question, so I'll shut up now.
C: Okay, in that case, carrying on: I think we're on to the next slide.
C: Okay. So this is basically a summary of the places where these specs and libwebrtc differ, summarized in that second column. For these m= sections that can only be used when bundle is negotiated, JSEP uses port zero and a=bundle-only for all m= lines that are not the default of the bundle group. Under BUNDLE you then have to use port zero and a=bundle-only in answers and subsequent offers as well. And libwebrtc, like JSEP, only does this behavior in initial offers, but it uses port nine, which is just a sort of implementation artifact: when there are no ICE candidates, it simply fills in port nine, because that's what the spec says to do when you have no candidates, so it doesn't use that port zero.

C: The JSEP behavior, as it was designed (and we talked about it at length in various IETF meetings), was designed to be the safest behavior for offers to non-bundle endpoints, and the answer behavior is what applications are already expecting: there is no port zero.

C: BUNDLE has the same behavior for offers, but it has what I think could be argued is a more consistent syntax, in that in both offers and answers, m= lines that are intended to be used only with bundle are marked with port zero. And that, I think, was the reason for the change in the BUNDLE spec, which occurred, I think, in BUNDLE revision 42, to use port zero in answers. And then, finally, libwebrtc: this is the default behavior.

C: We wouldn't be changing anything, so there wouldn't be any risk to apps from changing anything, but we wouldn't have the behavior that makes things better for non-bundle endpoints. And, as I mentioned, offering port nine with no candidates could be rejected with, like, an ICE mismatch; it could fail, or it might just succeed, because if the other side has its own candidates, then peer-reflexive candidates may be generated and it may just work.
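The three-way comparison in this part of the presentation can be paraphrased as a small lookup table (my own condensation of the slide, not normative text):

```javascript
// Paraphrase of the summary: for m= sections usable only when bundle is
// negotiated, what each document or implementation emits.
const bundleOnlyBehavior = {
  jsep: {
    initialOffer: 'port 0 + a=bundle-only',
    answer: 'shared bundle port',
  },
  bundle: {
    initialOffer: 'port 0 + a=bundle-only',
    answer: 'port 0 + a=bundle-only', // and in subsequent offers too
  },
  libwebrtc: {
    initialOffer: 'port 9, no a=bundle-only', // artifact: 9 = "no candidates"
    answer: 'shared bundle port',
  },
};

// The offers agree between JSEP and BUNDLE; the answers do not.
console.log(bundleOnlyBehavior.jsep.answer === bundleOnlyBehavior.bundle.answer);
// false
```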
C: So this is the dilemma that we face. If we go and change the initial offers to always use port zero, some existing WebRTC apps may choke on that. If we change the answers to have port zero, more WebRTC apps may choke on that. And if we don't change anything, then we may not have exactly the backward-compatibility behavior that we had originally designed.

C: All right, next slide. So there are a bunch of things to consider here, but I think it's helpful to try to break this problem down into multiple pieces. The first question before us is: what should we put into answers? This sort of sets aside the whole libwebrtc implementation question, and it really affects any mode.
C: Basically, when we have m= sections that are bundled, do they get marked with port zero or not? And then the second question, just considering initial offers from JSEP endpoints: since one of the popular existing implementations does not use port zero, should we start using port zero in the case where the application has explicitly said to do max-bundle, not the default? And then, what should we do when the implementation says it wants to use the default, balanced, and we have more than one audio or video stream, meaning that something should be marked with the zero port?

C: The reason I've broken these two out is that we might choose a different behavior for applications that are just using the default versus applications that have specifically said "we want to try to bundle as aggressively as possible", which have basically said "hey, I think I know what I'm doing". Next slide.
C: So the first issue, as I just summarized: if we make a change to start including port zero in answers, applications that are currently seeing a shared port today may incorrectly see that port zero and think "hey, this m= line has been disabled". The number may not be high, but just as an example: when we switched to requiring DTLS 1.2...

C: ...despite, you know, 99% usage of DTLS 1.2, we got a lot of tomatoes thrown at us, and we actually ended up having to go to something like an 18-month deprecation timeline (Harald can correct me if I'm wrong on the exact details) to actually wind down support for DTLS 1.0 and enforce support for 1.2. There was a small but quite vocal, and quite heavily used, minority of applications that did not support it.

C: We were basically saying we were going to break them, and they got quite upset with us. So we're trying to avoid being in that same position again: throwing the switch and suddenly finding applications that have added support for bundle, but maybe not entirely, being broken by setting this port in the answer to zero. Next slide.
C: Similarly, this is the case where we say: okay, if we're doing max-bundle, and right now we're sending this port nine for that second m= section, and we start sending port zero for that second m= section, are there applications out there that are going to break?

C: And again, for 2a we're talking about max-bundle, so it affects even cases where we have one audio and one video. Now, in 2b, since this is the balanced policy, the default, the only time we get a zero port is when we have more than one m= section of a given type, like in this example here: we've got two video sections, and the second video section is marked with port zero. Right now, applications are not seeing that port zero.
C: In fact, due to an implementation detail, again, within libwebrtc (it was for a long time presuming use of Plan B here), where there's actually a use of Unified Plan and a second video m= section, it actually gives that video m= section a unique port rather than port nine, and so switching from that to port zero would be its own difference.
C: Next slide. All right, so the data: what actually happens out in the real world. The top table talks about which bundle policies are in use. I will caveat that all these statistics are taken only from the dev channel, which has a pretty broad footprint, but not quite the same footprint as production Chrome, which would give probably 20x more data points. But it's being used by real applications; this is not Canary.

C: This is dev, which is being used by real people every day. About 85 percent of people are using balanced, and seven percent of people are using max-bundle and max-compat (not adding to 100 due to rounding). And then, finally, the second table shows how often bundle is actually negotiated, and this is actually pretty informative, because it says that if you have more than one audio or video stream of a given type...
C: ...basically, you have a 99.999% success rate of actually achieving bundle. I didn't really believe these stats, so I went and dug into them a little more closely. There are literally hundreds of thousands of cases where people were using more than one audio or video stream of a given type, and single-digit ones where bundle was not negotiated and there was more than one stream of a given type.

C: So one thing that has to be taken into account is that most applications right now are still using Plan B, at least in Chrome, and in that case there's never more than one m= section of a given type. But even then, the bottom line is that bundle is being used extremely heavily; it is an extremely successful specification.

C: The only place where we see bundle not used almost exclusively is in the case where there's just a single audio and video stream. Next slide.
C: And I will be able to pull some more data for this in a couple of weeks, when these stats reach the production channel. So, Colin, did you...? Okay, let me summarize the observations here and then I'll take any questions.
E: Can you say a little bit... I mean, given the Zoom number... can you give a feel for how many samples that data represents?

E: That surprises me, given the way that the major web conferencing applications do share when they're sharing screen and sharing video at the same time. It seems surprising that it's 99.9999; like, really surprising. It means no one's using WebRTC for it... it just doesn't match. Anyway, it'd be interesting to dig into that a little bit, because those numbers don't really match; they surprised me a lot.
C: I...
J: Yeah, I think there are apps which will open two peer connections if they want to do what's effectively unbundled behavior. So I think that might be skewing your figures: they're lighting up two peer connections, one for video and one for screen share.
C: Sure, that's entirely possible, I mean...
C: Thanks, Tim. So what takeaway should we have here? I think the number one takeaway is that bundle is extremely popular. I'll double-check these stats, but there are basically, essentially, zero applications that are negotiating more than one audio or video stream of a given type within a single peer connection and not using bundle.

C: There are definitely applications that are doing this, that are negotiating more than one audio or video stream of a given type, but they're using bundle. And, as mentioned, there's a meaningful fraction, like two percent, of applications that are using more than one audio or video stream of a given type with Unified Plan; there's also a meaningful fraction of applications that are using max-bundle; but the supermajority of applications are using the balanced bundle policy.

C: Overall, the key point is just that bundle is used extremely, extremely frequently, and only in the case where there's a single audio/video stream do we see bundle not negotiated.
C: Even then, it's still a 94% success rate. So the conclusion we can draw from this is that if we change this policy, where we start sending port zero in the answer when bundle is negotiated, almost every WebRTC app is going to be affected, because these are all cases where bundle is being negotiated, and suddenly port zero is going to come back in the answer. And I think it's reasonable, based on historically similar cases, that some of these apps are going to see this port zero, be surprised by it, and break.

C: So I think that's a high risk. The second thing, of what we should do in...
All
right,
so
we
talked
about
issue
one,
which
is
what
to
do
in
answers
issue.
Two
a
is
what
to
do
in
offers
when
we're
doing
max
bundle,
and
so
before
I
get
into
my
conclusions
here.
The
one
thing
I
guess
is
worth
noting
is
that
I
talked
to
folks
at
firefox
about
you
know
what
their
experience
was
because
they
actually
had
implemented.
C
You
know
the
support
for
a
good
bundle,
only
import
zero
a
couple
years
ago,
and
you
know
during
that
time.
You
know
there
were
you
know
they
ran
into
exactly
the
types
of
issues
that
I
was
concerned
about
that
you
know
some
things
would
see
port
zero
and
malfunction.
In
fact,
you
know,
according
to
the
bug,
that's
knight
here,
cisco
spark
malfunctioned
upon.
C: ...getting a port zero and a=bundle-only. But it has been rolled out in Firefox for a couple of years now, and so that allows us to maybe say that the risk is bounded: applications did malfunction here when getting port zero in the initial offer, but if it works with Firefox (and Firefox today is sending port zero in offers, but not answers), then most applications probably are dealing with that.
C: Okay. So, still, there are some applications that don't work with Firefox, and so there's definitely some risk in changing the behavior, either for max-bundle or for balanced, to go from using port nine to using port zero. Roman, you're next in the queue.
F: Yes, okay, sorry, can you hear me? I'm seeing the activity... yes, okay. There are still a fair number of applications which only generate offers from the browser and expect an answer from some sort of SFU. So, in the stats, the fact that there are essentially no answers ever generated by browsers might affect the policy as well.

F: It might affect the observations as well. Applications will be considerably more affected by how offers are generated than by how answers are generated; a lot of conferencing apps just generate offers and never generate answers in the browser.
C: The other point I would just make is that offers only affect part of the cases: for an application that has a single audio and a single video stream, it will not see a port zero in the offer, because it only has a single audio and a single video, and if it's not using max-bundle, it'll be using that balanced policy and won't see a port zero in the offer.

C: However, it will see port zero in the answer, because once bundle has been negotiated, that video stream is going to be bundled onto the actual audio m= section, and so even in that simple A/V offer, the answer will now have port zero. That's why I think there's a pretty significant risk: pretty much every app out there, unless it's just a single-audio-m=-section or data-channel application, is going to see a different behavior in answers if we snap to the BUNDLE specification.
C: Okay, so let's move on to... oh, one other point that's worth mentioning before we move on; just slide back. On 2b: changing the behavior for endpoints that have more than one audio or video stream of a given type will affect some apps. The risk is probably bounded, as we've seen with Firefox, but these apps are not talking to non-bundle endpoints.

C: 99.999 percent of them, as indicated by the stats, are negotiating bundle successfully. The whole benefit of using port zero and a=bundle-only in the first place was to improve interoperability with non-bundle endpoints, but what we're seeing is that these apps that are doing more than one audio or video stream are really exclusively talking to bundle-aware endpoints. So that sort of may affect our appetite to make changes, because the upside of making this change to use port zero may be somewhat limited. There may be other reasons, but just in terms of negotiating better interoperability, it's just not borne out by the data. Tim?
C: Yeah, I mean, you're saying that basically the only cases we're concerned about here are cases where there are two or more audio or two or more video streams, and whether there's a data channel or not doesn't have any impact on that. Those are the only places where you'd actually see that port zero with the balanced policy.
C: All right, so here's sort of the money slide. Here's what I think we should do; again, this is my opinion, but hopefully informed by the data from the previous slides. I think we should reconcile these specs, and basically we can either move...

C: The benefit of the BUNDLE SDP behavior seems largely to be a cleaner syntax, and I feel like the risk of now changing this, in SDP that's sent to pretty much every single user of WebRTC, is really too high for us to take at this point in time. This change was proposed...
C
...in fairness, by Christer a couple of years ago, and sort of did not receive a full evaluation at that point in time. But I think, probably, even at that point the ship had already sailed, in terms of the size of the WebRTC ecosystem at that point in time.
C
So
you
know,
if
we
do
this,
we
would
say
that
while
you
may
include
a
equals
bundle,
only
in
offers
and
or
sorry
in
answers,
but
then
no
new,
you
know
code
should
basically
generate
port
zero
and
bundle
only
in
in
answers
colin.
E
So, look, I love this; I think this is a pretty pragmatic recommendation. I will also point out this sort of matches Firefox behavior, for which we have an incredible amount of deployment experience to suggest that it works, and this isn't causing problems. So I think that that's there.
E
I would also say that the max-bundle thing, I think that's an interesting split, to split max-bundle out a little bit separately in the behavior. And I would be very much in favor of it, you know, if it helped a transition to get everybody onto a consistent code base, to update the max-bundle label to now be "the max-bundle we really mean at this time", you know, like, to use a new label.
E
D
Hey, so I think I'm pretty in favor of this. I think there's one bit I don't quite understand, which I want to just drill into. So, as far as I understand it, the argument for the bundle... but I'm also a JSEP author, so I have the same skin in the game as Justin, perhaps; or editor, rather. But as I understand it, the real argument for the current bundle...
D
...behavior is really aesthetic, and in the context of SDP that doesn't carry much weight with me. And given that we have to choose between one or another, the installed base seems like an auto-win in that case. The bit I don't quite... okay, so as I understand it, what you're proposing in 2a, although it doesn't have an "a"...
D
...the second bullet, is that we change... so we would create a new thing called max-bundle-safe, and that would have the current specified behavior, and max-bundle would then maybe have non-predictable behavior, depending which browser you're in, right? Is that correct?
C
In
yeah,
that's
pretty
much
what
I'm
saying:
we
have
a
few
options.
What
we
could
do
here,
we
could
just
basically
say
the
max
bundle
token.
The
behavior
there
is
is
unspecified
and
implementation
specific,
and
if
you're
using
max
bundle,
you
can
keep
your
max
bundle
or
we
could,
where
we
document
saying
that
max
bundle
essentially
presumes
that
there
will
be
no
port
zero
and
it
will
just
presume
that
bundle
is
being
used
from
the
very
beginning
as
if
you
had
already
done
offer
answer.
C
So
I
so
I
covered
then,
like
you
know,
sort
of
what
we
might
do
with
max
bundle,
and
then
you
know
adding
a
new
token
that
you
know
would
avoid
sort
of
messing
with
people
who
already
feel
like
they're,
getting
specific
max
bundle
behavior.
But
as
we
see
that
they're
still
cases-
and
since
I
don't
have
the
stats
yet
on
the
intersection
between
how
many
people
are
using
balanced
in
multiple
m
sections
of
a
given
type,
so
we
don't
know
quite
the
amount
of
risk
of
what
it
means
to
change
the
balanced
policy.
C
But
that
being
said,
you
know
the
fact
that
firefox
you
know
is
is
implementing
the
the
json
behavior
means
that
we
should
at
least
try.
C
You
know
I
I
did
mention
that
there's
limited
upside
for
doing
this,
but
in
terms
of
just
getting
to
you,
know
a
space
where
our
implementations
match
our
our
our
documents.
It
seems
worthy
to
at
least
try
and
see
if
there's
like
more
of
a
set
of
people
who
would
be
affected
this
month
than
we
thought.
I
Christopher,
yes,
hello,
I'd
just
like
to
point
out
that,
even
though
I'm
I'm
daughter
of
bundle,
I'm
actually
okay
with
this
the
recommendation
in
in
bullet
one-
I
I
guess
we
could
point
out
that
actually
in
case
people
were
wondering
how
could
bundle
and
jason,
you
know
specify
different
things
so
actually
bundle
and
json.
They
used
to
be
aligned
that
at
some
point
there
was
actually
a
suggestion.
I
It
was
not
by
me,
but
but
anyway,
to
actually
change
the
bundle
specification,
and
we
did
that,
unfortunately,
that
change
never
got
implemented
in
in
jsof.
So
that's
how
we
ended
up
with
this
misalignment,
so
just
a
little
bit
of
history,
but
I'm
actually
fine
to
to
do
this
change
bundle,
as
I
suggested
here
in
in
in
bullet
one
when
we
come
to
bullet
2.
I
I
I
I
agree
this.
This
is
an
pragmatic
approach.
I
I
think
we
we
can
in
in
jsep.
We
can
change
the
name
of
the
current
max
bundle.
I
I
I
suggest
to
do
it
must
bundle,
but
I
mean
it
could
be
anything
and
and
and
then
so
then
we
keep
the
the
the
current
behavior,
but
we
just
changed
the
name
of
the
token.
That's
how
my
understanding
of
this
suggestion.
C
So
and
I
feel
like
you
know,
the
only
things
that
are
required
of
this
working
group
really
are,
you
know,
point
one
and
then
you
know
two
a
as
I
should
have
done
the
slide,
but
first
of
all
on
on
two.
I
think
that
you
know
the
within
you
know
here
on
the
chrome
team.
We
can
figure
out
this
transition.
You
know
independently
of
needing
an
input
from
the
the
working
group.
C
So
yes,
I
think
that
is
you
know
really
the
the
key
bits.
There's
one
more
point
I
want
to
get
to,
if
we're
all
mostly
agreed
here.
C
All
right,
so
one
final
point
that
sort
of
came
up
during
the
discussion
of
all
this
is
that
there
is
no
guidance
from
bundle.
Well,
there's
no
explicit
guidance
for
the
situation
for
where
you
get
the
bundle
only
attribute
in
an
answer
for
m
line.
That's
not
in
a
bundle
group.
C
You
know
the
the
spec
does
say
it's
unspecified,
but
it
may
actually
be
worthwhile
to
specify
this
behavior,
because
what
may
happen
is,
if
you
send
an
offer
with
a
equals
bundle
only
to
a
non-bundle
aware
endpoint,
it's
possible
that
it
could
copy
the
attributes
from
the
offer
to
the
answer,
but
not
copy
the
actual,
a
equals
bundle,
grouping
mechanism,
and
so
you
might
get
an
answer
that
doesn't
have
a
equals
group
bundle
saying
it's
going
to
bundle,
but
you
still
have
this
bundle.
Only
you
know
attribute
in
the
answer.
C
I
think
the
simple
thing
would
just
be
to
say:
we
should
have
text
in
bundle
that
says
if
this
ever
happens
and
you
get
bundle
only
in
the
m
section.
That's
not
part
of
a
bundle
group,
then
you
must
ignore
a
equals
bundle
only
and
well.
Naturally,
we
cannot
specify
what
non-bundle
implementations
do
we
can
specify
what
bundle
implementations
do
when
they
get
this
sort
of
stp
in
their
their
answer.
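The rule Justin sketches here can be illustrated with a small check. This is a hypothetical sketch, not text from any draft; the `a=bundle-only` and `a=group:BUNDLE` syntax follows the bundle and JSEP specs, but the function names are illustrative only:

```python
# Hypothetical sketch: honor a=bundle-only only on m-sections whose mid
# appears in some a=group:BUNDLE line; otherwise ignore the attribute.
def bundled_mids(sdp_lines):
    """Collect the mids named in every a=group:BUNDLE line."""
    mids = set()
    for line in sdp_lines:
        if line.startswith("a=group:BUNDLE"):
            mids.update(line.split()[1:])
    return mids

def effective_bundle_only(sdp_lines):
    """Return, per mid, whether a=bundle-only should be honored."""
    bundled = bundled_mids(sdp_lines)
    result = {}
    current_mid, saw_bundle_only = None, False
    for line in sdp_lines:
        if line.startswith("m="):
            if current_mid is not None:
                result[current_mid] = saw_bundle_only and current_mid in bundled
            current_mid, saw_bundle_only = None, False
        elif line.startswith("a=mid:"):
            current_mid = line[len("a=mid:"):]
        elif line == "a=bundle-only":
            saw_bundle_only = True
    if current_mid is not None:
        result[current_mid] = saw_bundle_only and current_mid in bundled
    return result
```

Run against an answer where one m-section is in the bundle group and one is not, the attribute is honored only for the first, which is the behavior being proposed for the bundle text.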
C
F
So, just to add to this topic: general offer/answer doesn't specify what you're supposed to do with attributes when refusing an m-line.
F
So
traditionally,
if
you
put
the
port,
zero
or
based
on
the
spec,
you
can
literally
put
anything
in
the
attributes
in
the
answer
they
are
supposed
to
be
ignored
and
since
we
are
changing,
how
offer
answer
works
with
bundle
and
we
actually
assign
some
meanings
to
the
attributes
which
are
present
in
the
answer,
even
with
port
zero.
We
need
to
kind
of
put
in
this
additional
safeguard,
because
that
will
potentially
affect
bundle
implementations.
F
Because
again,
there
are
some
endpoints,
which
is,
as
I
mentioned,
we
just
put
port
zero,
copied
the
offer
line
and
just
put
the
port
zero
on
the
previous
slide,
when
we
said
that
the
how
we
implement
answers,
one
additional
benefit,
it's
shorter
and
anything
which
makes
the
current
sdp
shorter,
like
essentially
putting
an
answer
with
an
actual
port
and
without
a
bundle,
bundle
only
it's
again
welcome,
at
least
in
my
book,
because
it's
already
way
too
long
and
it's
it
makes
it
harder
to
process.
F
One
last
thing
like
with
this
whole
discussion:
there
are
a
number
of
sdp
parsers
that
I've
seen
which
actually
what
they
do
is
when
they
see
the
sdp
with
port
zero,
because
they
stop
parsing
this
m
line.
They
just
assume
that
this
whole
m
line
can
be
ignored
and
it's
it
might
be
more
effort
to
actually
implement
handling
of
port
zero.
F
Then
it
seems
it
might
require,
like
some
additional
pro
pre-processing
of
sdp,
not
just
using
the
normal
sdp
parser
because
sdp
parser,
because
it's
based
on
the
older
or
france
implementation
just
ignores
everything
which
is
in
the
m
line
zero.
So
there
is
no
easy
way
from
an
app
to
look
that
the
attribute
was
present
there.
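Roman's point about parsers can be made concrete. Many SDP libraries behave roughly like this sketch (hypothetical code, not modeled on any particular library): once a media description carries port zero, the whole section, including any a=bundle-only inside it, is discarded before the application ever sees it.

```python
# Hypothetical sketch of the naive parser behavior described above:
# an m-line with port zero causes the entire media section (and any
# a=bundle-only attribute within it) to be dropped.
def parse_media_sections(sdp_lines):
    sections, current = [], None
    for line in sdp_lines:
        if line.startswith("m="):
            port = int(line.split()[1])
            if port == 0:
                current = None          # rejected m-line: drop it entirely
            else:
                current = {"m": line, "attributes": []}
                sections.append(current)
        elif current is not None and line.startswith("a="):
            current["attributes"].append(line)
    return sections
```

With such a parser, an a=bundle-only attached to a port-zero m-line is invisible to the app, which is why handling port zero properly may require pre-processing before the normal parse.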
F
No,
it's
it.
It
doesn't
affect
what
I
recommend
for
answers
and
I'm
I'm
still
thinking
there
should
be
a
behavior,
and
this
is
actually
kind
of
a
different
point.
I
wanted
to
bring
in
with
bundle
and
with
rtc
web.
F
I
never
seen
a
use
case
like
I
see
a
use
case
of
kind
of
not
allocating
candidates
that
you
do
not
need
when
you're
generating
an
offer,
but
I've
never
seen
a
use
case
where
you
actually
want
to
actively
refuse
the
m
line
or
make
sure
that
an
m
line
will
be
refused
if
bundle
is
not
supported
by
the
remote
side
like
I've.
Never
seen
that
requirement
being
there.
F
So
one
possible
behavior,
especially
with
trickle,
is
put
port
nine,
put
ice
candidates
in
the
line
and
when
they
start
allocating
local
candidates,
if
you
get
the
either
the
connectivity
check
from
the
remote
site
or
even
if
you
or
if
you
get
an
answer,
and
you
see
that
the
this
line
is
not
bundled
and
that
way
you
can
so
you
can
basically
achieve
kind
of
the
same
benefit
without
doing
bundle.
F
Only
by
just
delaying
the
allocation
for
the
second
m
lines
or
for
the
m
lines
that
you
won't
potentially
want,
bundled
and
that
will
work
in
99.99
of
the
use
cases,
get
the
same
result
and
will
not
disrupt
any
of
the
existing
applications.
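Roman's alternative could look roughly like this in an offer (a hypothetical fragment; the payload types, candidate, and mid values are placeholders): both m-lines use the trickle-ICE default port 9 instead of port 0 with a=bundle-only, and candidate allocation for the second m-line is simply deferred until the answer shows whether it ends up bundled.

```
a=group:BUNDLE 0 1
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=mid:0
a=candidate:1 1 udp 2122252543 192.0.2.1 54321 typ host
m=video 9 UDP/TLS/RTP/SAVPF 96
a=mid:1
```

No candidates are listed for mid 1; they would be trickled only if the answer does not bundle that m-section.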
C
Okay,
I
mean,
I
think,
we'll
probably
want
to
discuss
the
details
of
that
on
the
list,
but
as
we
try
to
figure
out
exactly
what
text
to
be
changed
in
in
jsat,
for
you
know
handling
of
that
point
two
from
the
the
recommendation
slide.
You
know
we
we
can
consider
that.
F
And
for
the
handling
on
this
4.2,
I
would
still
prefer
the
max
bundle
to
be
specified
versus
being
just
leaving
it
as
application
specific,
because,
if
we're
leaving
it
for
the
future
and
potentially
fixing
this
issue
with
no,
I
with
no
ice
credentials
in
that
offer,
because
that
might
be
like
this.
This
again,
I
prefer
not
to
generate
anything
which
looks
broken
or
can
cause
an
additional
disruptions
which,
for
no
specific
reason
and
again
adding
ice
candidates
might
fix
that
quite
easily,
even
for
existing
apps,
without
disrupting.
I
Yeah, my comment was to this suggestion here, and slightly off-topic. I mean, in general, I think if you have endpoints that are simply copy-pasting attributes and whatever into answers, there are so many things not related to bundle that could go wrong. But having said that, if we're going to update bundle anyway now, due to the misalignment with JSEP, I'm fine to add some text also regarding this attribute, that it must be discarded if it's outside a bundle group. So I can do that.
C
E
Just to comment on this particular slide: I mean, this looks fine; I have no problem with that. I think at some level the alternative is that this should be treated as invalid SDP, right? Because what you want to do when you get those endpoints is have things fail quickly, because obviously what's happening is not valid at this point. So I'd prefer an SDP error, but, you know, whatever; I'm perfectly happy with this. Either way is fine with me.
C
That's
okay.
I
mean,
I
think,
that
you
know
we
will
need
to
come
down
on
one
side
or
the
other
of
like
we're.
If
we're
updating
bundle,
you
know,
should
we
you
know
just
ignore
those
things
or
or
should
we
generate
an
error,
and
if
you
think
that
you
know
generating
an
error,
you
know
something
that
should
be
entertained.
That's
probably
something
that
we
want
to
have
that
discussion
on
the
list.
E
I
think
it's
worth
commenting
on
what
happens
in
this
use
case
and
there's
two
possible
things
like
just
ignore
it
and
carry
on
do
your
best
to
carry
on
to
make
it
work,
which
is
effectively
the
proposal
here
or
there's
the
you
know.
This
is
an
error,
raise
an
error
early
like
it
doesn't
yeah,
let's
discuss
on
the
list
which
one
wants.
I
I
don't
either
would
be
fine
with
me,
but
I
think
it's
going
to
be
that
usual.
What
do
you
do
when
you're
dealing
with
something
that
clearly
is
doing
something?
I
I'll
allow
both
things
we
can
say
you
know,
discard
it
and
move
on
or
just
you
know,
raise
an
error
and
do
something
about
it.
I
I
think
that
I,
I
think
that's
going
to
generate
sdp
behavior.
I
mean
some
end
points
that
they
just
discard
things
and
other
may
you
know,
release
the
session
and
things
like
that.
So
so
I'm
fine
allowing
both.
I
I
think
the
the
the
key
issue
is
that
we
want.
C
C
F
Just to add to this: none of the specifications pre-bundle make this an error. So we would be retroactively saying that the endpoints which do not implement bundle are doing something which is incorrect SDP, even though it wasn't incorrect SDP up until bundle was created. So we're kind of affecting legacy stuff with a new spec and outlawing behavior which was perfectly legal up until this point. That's why I don't think that should be treated as an error.
I
I think, as is mentioned here, we do have this "unspecified" text, which we also use in other specifications. But I mean, I won't argue; I'm okay adding more explicit text. Normally this "unspecified" was, I believe, what we had said in our specification also, but sure, we can be more specific. And also remember that in the future, when we do new SDP attributes... or whatever, sorry, go on.
E
E
I just want to respond to Roman; I don't think what you said is quite right. I mean, there are SBCs that do this; I've seen it happen, I agree, right. But an SBC that copies attributes into an answer that claim it supports stuff that the SBC does not support is forbidden by the base SDP spec. So this has always been disallowed; it's not that we're changing behavior here. You know, if you don't support bundle, you can't put in the attributes that say "I support bundle".
F
You
can
argue
this
a
little
bit
more
on
the
list,
but
it's
essentially
the
text
for
if
using
an
m
line,
is
a
bit
confusing
because
it
says
you're
supposed
to
put
port
zero,
but
it
doesn't
actually
say
what
you
are
doing
with
and
that
this
m
line
will
be
ignored
because,
by
definition
by
including
an
m
line
with
zero
you're,
including
something
which
you
do
not
support,
meaning
like,
for
instance,
you
can
have
the
proto
line,
which
you
have
no
idea
how
to
handle,
and
you
have
no
idea
how
to
handle
anything
which
goes
into
the
format
on
that
line.
F
So
you're
already,
including
something
which
you
don't
know
how
to
handle,
and
you
put
port
zero
as
an
indication
of
that.
That's
like
that's
kind
of
why
this
where
this
behavior
is
coming
from
and
in
some
cases,
endpoints
refuse
to
parse
the
m
line.
If
it's
not
valid
for
the
proto,
which
was
offered,
even
though
the
port
is
zero,
and
then
we
you
need
to
you
need
to
gen
have
a
safe
way
to
generate
something
which
is
safe.
A
So
I
don't
see
anybody
else
in
queue,
so
perhaps
we
want
to
go
back
to
the
the
previous
question
a
bit
and
just
use
the
handy
dandy
hand
tool
just
to
make
sure
that
we
have
the
broadest
possible
agreement
on
the
way
forward.
So
can
I
ask
you
to
move
back
a
couple
of
slides
there
to
the
conclusions
you
drew
justin,
sorry,
sean
sure.
A
I...
C
You know, referring to the terminal case: if you see that SDP construct, we should probably... it makes sense to be able to just tolerate it, regardless of where it's seen.
A
A
A
Okay,
the
numbers
seem
to
have
stabilized
at
12
raised
hand
and
zero
do
not
raise
hand.
The
other
participants
are
still
welcome
to
add
their
voice,
but
it
does
not
look
like
anybody.
So
far
at
least
has
objected
to
the
way
forward,
being
updating
bundle
to
use
the
jsep
behavior.
A
So
we
will
confirm
that
on
the
list,
but
I
think
from
the
point
of
view
the
today's
meeting.
We
can
call
that
the
the
conclusion
of
the
discussion
at
the
meeting,
in
any
case
so
as
chairs,
then
our
next
question
is
who's
going
to
write
up
the
update
to
bundle.
A
So
now
we
are
looking
for
volunteers
to
make
this
proposal
into
internet
draft
and
eric
riscorla
has
joined
thecube.
D
Well,
I
wasn't
joining
the
volunteer,
but
but
are
we
gonna?
You
can
ask
the
second
question
too.
A
Certainly
I'll
be
happy
to
do
that,
but
I
think
that
this
sure,
let's
do
that
and
then
we'll
so
update
jsep
to
include
an
alternative
name
for
the
existing
behavior
e
eg
max
bundle
save.
Does
that
sound
like
a
reasonable
way
to
cast
the
question
echo.
C
So
I
think,
there's
there's
I
don't
know
if
you
can
need
to
get
into
these
details.
I
think
that's
a
decent
way
to
frame
the
question.
I
think
there's
one
question
or
some
question
there
of:
do
we
document
the
existing
max
bundle
or
do
we
just
rename
the
existing
max
bundle.
A
So
there
are
several
people
who
have
joined
the
queue
on
that.
So
I
will
wait
to
put
the
question
to
the
group
until
they've
spoken,
christo
you're.
First.
I
A
D
Yeah, it is decided, yeah. So I think I liked the resolution you approached, Justin, which is that we document max-bundle-safe, which behaves as max-bundle does now, and then basically say some reasonable words about how max-bundle has undefined behavior. And, you know, I think that actually matches the style of these specs: there are a number of places where we just openly admitted that things were like this, cf. DTLS 1.0.
C
D
I think you'd say some implementations do max-bundle-safe, and that's what they should do; some do some other thing, which I'm happy to write down, or have us write down. But, like... I mean, I guess I'm trying to distinguish "document" and "specify", right? I think we wouldn't specify; we would document, right?
C
D
What you're saying should happen, right? So what I suppose we'd do is take all the text in the specification now and replace max-bundle with max-bundle-safe, and then put a note somewhere that says max-bundle-safe was previously named max-bundle, and some implementations treat them the same, and some implementations do some other stuff.
C
E
I think what I'm sort of plus-one-ing is roughly what Ekr is saying here, which is a slight variant. I mean, the reason I don't want to get into a whole bunch of work of specifying what the existing max-bundle is, is because, well, first of all, there are two differing implementations of it, right? So I think we should just say, you know, everywhere...
E
...in our specs we say the new max-bundle, or whatever the new label is, and then somewhere we say there was a pre-standard label called max-bundle which has unspecified behavior and is not recommended for usage going forward, or something like that. Right? I mean, we have to reserve that label, max-bundle, in the IANA registry, and I think we should just comment in the draft that it was a pre-standard implementation with unspecified behavior.
E
That's
it
and,
and-
and
you
know
that
reserves
the
code
point
for
everyone
to
keep
implementing
it.
However,
they
do.
A
A
So 12 people have participated so far. If you're a participant and you have not participated, please do so as soon as you can.
A
A
Okay, I'm gonna go ahead and end the session, because we seem to have reached the same numbers as before. Once again, we have 14 who have raised their hand in support, and none who have not raised their hand, out of the total participants for this session of 40.
A
So
I
think,
while
we
will
of
course
confirm
this
on
the
list
at
least
the
sense
of
room
appears
to
be
in
favor
of
both
these
recommendations,
the
first
that
we
update
the
bundle
spec
to
use
jsep
behavior
and
the
second
that
we
changed
the
name
of
max
bundle.
Your
bike
shed
paint
welcome
here,
sent
it
to
christer,
as
our
volunteer
for
what
to
name
this,
I'm
going
to
plump
for
mox
bundle
myself.
A
But
then
the
next
question
I
think,
going
back
to
the
additional
things
that
are
desirable
for
changes
to
bundle.
Those
are
technically
not
within
scope
of
this
working
group,
so
we
will
caucus
with
the
area
directors
to
work
out
whether
that
update
will
simply
be
processed
in
music.
D
Offline,
hey
ted-
I
want
to
fly
that
we
do
need
changes
that
jsep,
because
this
second
change
requires
adjacent
change.
I
suspect
that
collectively
the
three
of
the
json
vendors
can
manage
that,
but
that
will
engage
the
change.
A
A fair point, and so we definitely would not re-close the working group until we had determined what changes were needed to JSEP, and probably not until they were complete. But the bigger question of where the bundle document goes, I think, does need a discussion with the area directors, on exactly where they would like to see that, unless they would like to weigh in.
A
I mean, I think, at the end of the day, it doesn't really matter from an author perspective, but since the RFC was done in MMUSIC, I think that would be the best place. Whatever the ADs decide, my suggestion would be to do it in MMUSIC, just for clarity. And also, to make sure: we also need a volunteer for the JSEP update. I think I can do the bundle one, but I think we should have, you know, someone else doing the JSEP one.
K
I
think
we
are
chartered
to
coordinate
with
the
music
at
least
I
think
as
long
as
we
do
that,
then
we're
complying
with
the
charter
and
I'm
comfortable
to
do
either
with
either
thing.
A
Okay,
we'll
continue
that
discussion
offline.
Then
I
think
I
think,
we're
all
pretty
comfortable
that
the
document
would
get
reviewed
in
both
of
them
before
it
went
forward,
and
that's
probably
the
critical
thing
exactly
who
holds
the
gavel
in
the
pen
is
something
we
can
still
work
out.
A
Do
you
want
to
go
to
your
last
slide
again?
There
justin.
C
Sure,
just
before
we
move
on
just
our
process,
question
you
know,
is
this
going
to
then
be
like
a
you
know,
spec
that
would
basically
just
define
these
new
policies
as
an
update,
or
would
we
do
this
as
like
abyss
for
for
json.
A
As
a
process
question,
it
would
be
simpler
to
do
as
an
update
if
you
wanted
to
do
a
abyss.
That
would
be
a
good
bit
more
work
simply
for
the
the
rfc
editor
group,
if
no
one
else
so
my
advice
would
be
to
do
it
as
an
update
and
then,
as
we
move
forward
with
the
documents.
If
they,
if
they
go
to
a
later
stage
of
standardization,
there
could
be
a
change
there.
Colony.
E
I
mean
I'd
100
agree
with
what
you
just
said,
but
given
we
have
the
current
xml
v3
like
if
we
took
the
rfc
editor
xml
as
as
it
exists,
and
we
just
did
it
on
that
and
the
isg
process
approval,
it
sure
would
be
nicer
for
implementers
to
have
it
as
the
real
spec.
Instead
of
everybody,
reading
a
spec,
that's
labeled,
the
webrtc
jsof,
which
actually
like
the
key
label
throughout.
It,
is
wrong
right,
I
mean
like
why?
E
Don't
we
just
like
fix
it
right
right
like
like
it's
just
I
I
don't
understand
why
it's
actually
or
let
me
put
it
this
way,
all
the
reasons
why
it
would
be
more
work
to
approve
the
patch
than
approve
the
actual
document
through
a
diff
chain
just
seem
like
problems
with
the
ietf
and
not
really
very
real,
and
it
would
be
better
for
implementers
to
actually
have
the
real
talk
like
it
seems
like
why?
Don't
we
do
the
right
thing
on
this
one?
K
D
Yeah, I agree with Colin. I mean, like, if we didn't have these processes, we clearly would just rev the document in place, and so, like, as Colin says: let's, for once, do the right thing. And we have all the XML; I mean, if we had just discovered this... you know, if things had been done in the opposite order, we should rev the XML in place. We have the XML to do it with.
A
I
certainly
have
no
objection
if
that's
the
the
will
of
the
community,
but
I
will
say
that
I
I
think
that,
when
you
sent
it
out
for
last
call
really
being
careful
in
the
in
the
write-ups
to
make
sure
that
the
the
community
as
a
whole
understands
exactly
why
it's
being
revved
in
place
and
how
small
the
the
update
is
intended
to
be
is
going
to
be
a
challenge.
Forever.
Writes
that.
A
So
sean
could
you
move
to
the
last
slide
here.
A
So
I
think
this.
The
discussion
here
didn't
strike
me
as
as
nearly
as
complete
as
the
as
the
discussion
on
what
to
do
about
the
m-line
question.
A
So
I
suggest
that
this
go
back
to
the
the
mailing
list
and
I'm
going
to
suggest
starts
on
the
music
mailing
list
in
particular
to
discuss
whether
or
not
this
would
be
an
appropriate
piece
to
to
a
later
updated
bundle,
whether
this
update
or
another,
because
I
think
this
sounded
like
there
was
some
discussion
about
whether
this
was
simply
wrong
behavior
and
should
not
be
called
out
in
any
more
detail
than
that
or
whether
specifications
of
exactly
what
to
do
with
that
wrong.
Behavior
were
appropriate.
A
I
think
that's
it
for
this
topic.
Is
there
anything
else
people
need
to
raise
on
either
of
these
topics
before
we
take
up
our
aob
krister.
I
A
Chairs, fair enough. Okay, thanks everybody! This was the major topic of the working group, and dispatched with great alacrity; much appreciated. I believe we do have one additional topic, which was raised by Harald. Harald, do you want to present now?
H
RFC 8826 is the giant security considerations section that we otherwise would have to have in all the documents. It outlines most of the threats that executing other people's code in a web page, and letting that code make connections, exposes. But it does not mention the idea that you could use the attempt to set up a connection as an attack against a middlebox.
H
So
possible
actions
we
can
take
on
this
as
rtc
web
or
as
individual
submission
supported
by
the
artist.
You
have
many
lists
or
whatever
I'm
not
all
that
key
enough
of
trying
to
resurrect
the
group
with
yet
another
charter.
So
it's
probably
an
inside
action,
some
kind.
C
Yeah,
how
if
you
could
go
back
to
the
first
slide?
I
guess
one
question
I
have
is:
how
unique
is
this
problem
to
webrtc?
You
know,
or
it's
just
webrtc.
C
You
know
a
thing
that
uses
some
slightly
different
protocols
and
then
it's
like
its
own
sort
of
analysis
here,
because
it's
using
its
own
protocols
like
in
particular
how
does
turn
I
forget
the
details
like
how
do
the
things
you
mentioned
about
turn
username
turn
into
this
particular
type
of
attack
and
things
that
we
should
be
conscious
about
and
think
about.
Perhaps
writing
in
the
in
this
document.
H
So
the
reason
why
it
turned
out
to
turn
the
username
was
that
sami
was
using
turn
username
to
make
the
packet
so
long.
It
got
broken
into
so
it
was-
and
I
think
he
put
the
attack
packet
in
in
the
latter
in
the
later
part
of
the
term
username,
so
anything
that
makes
the
packets
very
long,
it's
a
danger,
and
so
we,
when
I
put
in
the
safe
cars
against
over
long
usernames,
I
also
put
the
same
safeguard
again
in
against
against
passwords
and
a
few
other
attributes.
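Harald's safeguard amounts to rejecting attribute values long enough to push a STUN/TURN message past the path MTU and into IP fragmentation, where an attacker can smuggle payload into a later fragment. A minimal sketch follows; the numeric caps are illustrative assumptions, not values from any RFC:

```python
# Hypothetical sketch of the safeguard described above: refuse attribute
# values long enough to force the UDP datagram to fragment, since the
# attack rides payload bytes in a later fragment.
MAX_ATTR_LEN = 256          # illustrative cap, not taken from any spec
SAFE_DATAGRAM_LIMIT = 1200  # common conservative pre-MTU-discovery bound

def check_stun_attribute(name: str, value: bytes) -> None:
    """Reject over-long STUN/TURN attribute values before encoding them."""
    if len(value) > MAX_ATTR_LEN:
        raise ValueError(f"{name} too long ({len(value)} bytes): "
                         "would risk IP fragmentation")

def check_datagram(payload: bytes) -> None:
    """Reject whole datagrams that would exceed a safe unfragmented size."""
    if len(payload) > SAFE_DATAGRAM_LIMIT:
        raise ValueError("datagram exceeds safe unfragmented size")
```

The same check applied to usernames would also cover passwords and other variable-length attributes, which is the generalization Harald mentions.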
H
Poster
child
for
shipping
executable
content
into
other
people's
computers
and
be
and
telling
them
oh,
it's
safe
to
execute
this
content.
H
C
So the attacker would have to tell someone to go to a TURN server at a port where there was an ALG. Yeah, got it, thanks.
E
F
E
There are lots of ALGs in the NATs still, but they are ALGs which either defragment the packets, which makes this hard to do, or they ignore commands that show up in second fragments if they don't defragment the packets that control the stuff. So, I mean, can this still happen? Yeah, of course; somebody can go find something from long ago where it happens, but it's less prevalent than you might think. And it's just impossible to solve: the only ways you can really solve or mitigate risks on this involve dramatically breaking your network.
E
We're
already.
The
itf
specifications
already
recommend
these
alg,
like,
if
you
do.
If
you
write
your
nat
such
that
they
follow
the
itf
specifications
anyway
whatsoever,
you're
not
going
to
have
this
problem,
I
mean,
I
think
it's
worth
commenting
on
somewhere,
but
everybody
always
tries
to
blow
this
into
a
big
deal.
But
when
you
go
to
actually
survey
the
gnats
that
you
can
do
this
on,
it's
it's
not
that
great
of
an
attack
and
certainly
not
an
attack,
that's
going
to
work
on
any
enterprise
corporate
fire.
E
H
Well, what I gathered from Sami's original write-up, and the amount of reaction it garnered, was that pretty much all the Linux-based, default-installed Linux and iptables firewalls were vulnerable.
E
I
would
push
back
on
that.
I
would
I
would
I
mean
my
testing
would
suggest
that's
just
wrong.
So
I
noted
in
the
write-ups
of
this.
The
write-ups
usually
go
like
this.
There's
a
problem
like
this
and
they
describe
you
know
if
you
don't
do
fragmentation
and
then
they
say,
and
all
the
cheap
linux
based
things
do
something.
That's
that's
not
exactly
the
same
thing
like
they
comment
on
one
that
that
you
know
lots
of
linux,
use
things,
use
ip
tables
and
then
say
if
your
ip
tables
was
done
wrong.
You'd
have
this
problem.
H
E
A
Okay, there are four more people in queue, but I'd like to cut the queue very soon now. So if you're not in the queue and you feel like you have something desperately important to say, either get in now or be prepared for the list. Trust me, right.
C
C
You know, I think the blocked-port list seems like it's doing a lot of the work here in terms of keeping things safe, and the question is: can that blocked-port list be applied to ICE candidates? Because I think most of the attacks against TURN could probably also be carried out against ICE, with, like, ICE usernames. And if the blocked-port list is the thing that we're going to use to protect TURN, can we also use that for ICE, or is that going to cause connectivity problems?
H
A
Okay, it's still showing Colin in the queue, but I think, Colin, you're done, right? So Eric is next. Yes, Eric, thanks.
D
Yeah, so I think, you know, I'm treating this as like a dispatch question. So I suggest we dispatch this with "a draft would be welcome", and then maybe we could publish the individual submission. I don't think we should change the security document for this, just for mechanical reasons: the things you're proposing for JSEP are, like, pretty traditional, but this will be real effort if you want to rev that one, and so I think a separate document is better.
D
J
Yeah, I just wanted to kind of say that the reason it's kind of difficult for WebRTC is we're one of the few places where you can generate UDP, and this attack is significantly easier in UDP.
J
So
I
think
that's
that's
why
it
becomes
a
webrtc
problem,
I'll
rather
generate
it
from
html
javascript
html5.
I
think
one
of
the
things
that
we
could
maybe
do
is
talk
about
randomized
padding,
so
randomized
padding,
specifically
ice
and
turn
packets.
So
you
add
a
add,
a
piece
of
a
data
chunk
in
there.
That
is
a
random
length
and
that
would
make
the
attack
significantly
less
successful.
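The randomized-padding idea could be prototyped along these lines. This is a hypothetical sketch only: STUN does define a real PADDING attribute and 4-byte alignment rules, but the attribute type number and encoding below are simplified placeholders, not asserted to match any registry.

```python
import os
import struct

# Hypothetical sketch: append a random-length padding attribute to an
# ICE/TURN message so an attacker cannot predict where fragment
# boundaries will fall in the datagram.
PADDING_TYPE = 0x8026  # placeholder attribute type, illustrative only

def add_random_padding(message: bytes, max_pad: int = 64) -> bytes:
    """Append a padding attribute of random length (4-byte aligned)."""
    pad_len = os.urandom(1)[0] % (max_pad + 1)
    pad_len = (pad_len + 3) // 4 * 4          # keep 4-byte STUN alignment
    attr = struct.pack("!HH", PADDING_TYPE, pad_len) + bytes(pad_len)
    return message + attr
```

Because the padding length varies per message, an attacker crafting a username or password long enough to split at a chosen byte offset can no longer control which bytes land in the second fragment.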
J
I
I
question
whether
colin
was
saying
that
there's
a
downside
to
to
to
doing
this,
and
I
think,
particularly
in
terms
of
the
blocking
the
low
number
ports,
does
anyone
really
want
to
run
a
turn
server
on
on
a
report?
That's
less
than
1024
like?
Does
anyone
actually
do
that?
J
I
can't
see
it
and
I
do
think
that
it's
a
real
problem,
because
a
particular
and
saying
that
it's
not
an
enterprise
problem,
isn't
it
isn't
useful
because
we're
all
at
home,
behind
old
routers
like
nobody's,
been
able
to
replace
the
router
for
the
last
year
at
home
and
they're
all
running
whatever.
Was
there
that
you
put
in
three
five
years
ago
and
a
lot
of
those
are
antiques
frankly
and
definitely
have
really
dodgy
algs?
So
I
think
it
is
a
real
problem.
J
I've
got
no
statistics
to
prove
that,
but
I
really
do
think
it's
an
issue
and
I
wonder
if
it
would
be
possible
to
make
it
so
that
ice
packets
were
only
ever
single
like.
Is
there
any
reason
why
you
would
want
to
fragment
an
ice
packet.
A
So,
thank
you
for
that
roman.
We
did
actually
cut
the
cue,
so
you
can.
Thank
you.
I
think
my
sense
of
it
from
this
particular
set
of
conversations
is
that
many
of
the
issues
that
are
being
raised
with
this
extend
considerably
past
rtc
web,
even
if
we
might
be
the
poster
child
for
the
problem
and
the
result
of
the
problem
and
the
result
of
it
means
that
any
actions
that
might
be
taken
to
respond
to
it
would
have
impacts
not
just
to
rtc
web
but
beyond
it.
A
So
I
think
probably
the
best
thing
to
do
as
as
eric
hinted
was
to
treat
this
like
a
dispatch
problem
and
try
and
find
the
right
home
for
this
work.
So
I
suggest
that
the
chairs
work
with
harold
and
the
ads
to
figure
out
if
he
is
willing
to
write
such
a
short
rfc,
where
the
best
place
to
evaluate
it
would
be.
I
suspect
it
would
be
sag
or
someplace
like
that,
but
we
can
certainly
have
it
reviewed
inside
rtc
web
as
well.
A
Okay,
so
the
action
item
for
the
note
taker
is
to
the
chairs
and
to
harold
to
work
with
the
ads
to
find
an
appropriate
place
for
a
short
description
of
this
attack
and
for
that
place
to
be
the
place
where
mitigations
are
discussed.
A
I
think
sean
are
we
at
the
end
of
our
agenda?
I
think
we
may
be.
B
A
Thanks, everybody, and once again, really appreciate how fast the core issue has been resolved by the group. I look forward to confirming it on the list.