From YouTube: IETF101-NETVC-20180319-1550
Description
NETVC meeting session at IETF101
2018/03/19 1550
https://datatracker.ietf.org/meeting/101/proceedings/
A
How do we take all of you actually, and what a moment, JavaScript frameworks, and annoying me act; but I'm actually taking a bunch loaded, I'm the only here, w3, oh, w3, oh well, because I'm looking at web, or for the Megan, or not active, in that I'm on the AV. So I do my work from that. And I need to pay attention, five books, my ancestral home, for my students.
C
Okay, yeah, I've seen on the working group roster that it says "actual start time"; I mean, they're doing a nice little survey to see when sessions actually start. Welcome to NETVC, everyone. I'm Natasha, co-chair, with Mo. I don't know where Mo is; I guess we should give him a couple of minutes. Has anyone seen him? Okay, he's here, that's good! That's good! Probably a thing I should know, right, within three hours. That's good, that's good; I'm sure he's on his way, right? Okay, the agenda's on screen, and, where are we?
C
Yeah, I wouldn't rely on the London Underground. Oh right, okay, let me just send round this thingy. Okay, yes, welcome everyone to NETVC. Before we kick off, I'll just remind you that we have a number of people on Meetecho. If you are coming to the mic to speak, please say your name clearly, and keep your questions concise and clear, and speak slowly; do as I say, not as I do. Also, if you are presenting today, please stand in the pink box, our favorite pink box. Please do that.
C
This is the Note Well. If you have been here since this morning, you've probably seen it a few times already. If you have a problem with it, please let us know as soon as possible. Oh, is this the new Note Well? Great; yeah, I think I got it wrong last time, so, okay, cool! Thank you, right! Administrative tasks: I've sent around the blue sheets.
C
We have note takers, we have a Jabber scribe, and, yes, I mentioned the remote attendees and presenters. Also, before we begin: I'm an outgoing chair, unfortunately, because my role has changed recently. If you are interested in being a chair of the NETVC group, please let myself or Mo know; it's not a very big job, actually. So if you're, like, sort of taking your first steps in chairing in the IETF, please let us know. This is a good group to chair, and a very interesting topic. Cool. Alright, this is our agenda today.
J
That's the most important milestone, by far, in the working group, and so far we still don't have a single merged code base or candidate. So that's the milestone that, last meeting, we agreed to push out to July, and I've got to be honest, it seems very unlikely in four months. Adam just asked, you know, what's the likelihood of closing out this milestone in four months; I think it'll be a pretty tall order to get that closed out.
J
I'd love to hear some thoughts about what people would like to see as the direction to try to progress that. The other milestones: we'll finish up the requirements document. There are some few late changes to bring it in sync with other users of this document; there are other industry fora that are also using this document, and so synchronizing between those two requirements...
J
...those two sets of requirements is probably important for us, but hopefully that synchronization can happen within the next few weeks, and we'll get it published, hopefully, by April. And then the final milestone, for a storage format or container binding, hasn't had any work started on it, but that's really kind of a moot issue...
J
...if we don't address the codec spec and reference implementation. So we'll talk a little bit more about those, and about what course the working group would like to see to try to resolve that, after the presentations. Today will probably be a short session. We don't have a Daala update, so that was dropped off the agenda, if you were looking at the first version. So we'll finish up by five o'clock today at the latest, and maybe even a little bit earlier, because the first two topics should be light.
K
All right, hi, I'm Thomas, from Mozilla. I have a very short update on draft-ietf-netvc-testing. This draft has not had any update since the last meeting, so this is just really the current status. Next slide.
K
Yeah, no updates. This is relatively stable at this point. The only thing I thought I'd point out is that people have been continuously using the draft to do testing; I'm going to have an example result on the next slide.
K
The only issue that has ever come up in using it, just for AV1, is that the AV1 code base has gotten slower and slower and slower, so our current "objective-1-fast" test set is no longer very fast; it's very slow. This result actually does not use the very slowest setting, if you want to do tests with it. Other than that, there have been no major issues with it. I think last meeting we decided to hold this for changes; I don't really have a comment.
J
So, a question as an individual, first of all, before closing the draft topic: on VMAF here, I thought there was some discussion a while back about perhaps dropping MS-SSIM in favor of just SSIM. Was that resolved, or is that still open?
K
One thing that potentially changes that: Netflix has created an updated version of the VMAF model, which I think they call harmonic mean VMAF, which is a different way of combining the per-frame scores that I think resolves some issues the old VMAF had, in regard to the old VMAF saturating and producing unreasonable numbers at some of the rates we test. As the draft has not been updated to address that, there might be something we'd want to fix before closing it.
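The pooling change discussed here, combining per-frame VMAF scores with a harmonic rather than arithmetic mean, can be sketched as follows. This is an illustrative sketch only, not Netflix's actual implementation; the function name and the `eps` offset are assumptions.

```python
def harmonic_mean_pool(frame_scores, eps=1.0):
    """Harmonic-mean pooling of per-frame quality scores.

    Low-scoring frames dominate a harmonic mean, so a few very bad
    frames pull the clip score down instead of being averaged away,
    which is the saturation issue mentioned above.
    """
    n = len(frame_scores)
    return n / sum(1.0 / (s + eps) for s in frame_scores) - eps

scores = [95.0, 94.0, 96.0, 10.0]        # one very bad frame
arithmetic = sum(scores) / len(scores)    # ~73.75
harmonic = harmonic_mean_pool(scores)     # noticeably lower
```

The point of the comparison: the arithmetic mean hides the bad frame, while the harmonic pool reflects it.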
K
They produce very different results and measure different things. You know, it's not clear which one is better, but the testing draft basically says: if you see large discrepancies between the two, you should verify the results with subjective testing instead, to figure out which one is giving you a better answer for your particular tool. I think that remains the best advice.
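The draft's advice, falling back to subjective testing when objective metrics diverge, could be automated with a simple screen like this. The helper, the metric names, and the 5-point threshold are all illustrative assumptions, not from the draft:

```python
def flag_for_subjective(clip_results, threshold=5.0):
    """Return clips whose two metric deltas disagree by more than
    `threshold` (e.g. BD-rate percentage points); these are the
    candidates that should be verified with subjective testing."""
    return sorted(clip for clip, m in clip_results.items()
                  if abs(m["psnr_bd"] - m["vmaf_bd"]) > threshold)

results = {
    "crowd_run": {"psnr_bd": -3.0, "vmaf_bd": -12.0},  # metrics diverge
    "news_clip": {"psnr_bd": -4.0, "vmaf_bd": -5.5},   # metrics agree
}
suspect = flag_for_subjective(results)
```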
J
So, after the VMAF update, what does the group think about closing the document, instead of leaving it open? If there's little chance that it's going to change much more, and it's been pretty stable for a while, does anyone have an opinion on whether we should close it or leave it open? By "close" I mean publish it as a finished spec.
L
[Inaudible] from Mozilla. So, if I remember correctly, the one thing that we were potentially thinking of adding to this was some tests specifically for rate control, and while, you know, that might be nice, I don't think there's a reason to keep holding the document forever for that; nobody's going to do it. So I think we can go ahead and publish; you could even potentially publish without having a meeting.
J
Yeah, we certainly don't need to meet just to agree to publish a document. So, anyway, if anyone feels strongly that we should not publish it, I'd say speak up now in the room, or speak up on the list, because otherwise I think the recommendation from the editor is going to be to go ahead and close this out and publish it, after correcting the VMAF implementation.
J
Certainly, when all the metrics agree, it's a no-brainer; you know, something is consistent and better. But when you see, you know, divergence of the metrics, or not even divergence but significant differences between the metrics, is it useful to try to categorize that on specific content, so we can start getting some intuition about when some of the metrics should be ignored, or taken with a grain of salt?
J
Do we think it's possible to get some insight into that by looking at all the different test clips where they diverge and seeing if there's a pattern there? Could that help guide further testing; say, you know, "this class of content is most likely poor for SSIM", or, you know, "very good for VMAF"? Is that something worth trying to do?
K
The only way we could do that is that we would need to actually compare against subjective results, in order to get a good correlation there. We have done that, but only for, I think, a couple of loop filter tools that people have done subjective tests on, so we can definitely probably reach a conclusion in regard to, you know, the types of loop filters and how we test them; other tools would be harder, because we haven't done subjective tests.
J
Some kind of guidance like that may be nice for codec developers, because, you know, just looking at a smorgasbord of metrics can sometimes be daunting, and if they disagree, you know, it's a lot. But if we can get some concrete guidance about when they're likely to disagree, and which ones are likely to prevail for a certain class of content...
K
Yeah, I'm also concerned; I'd have to check what the current state of the document is with regard to this, but, you know, we should consider defining a subset of the metrics. We actually define them all in the document, but we could give priority to some, or eliminate some that are generally not useful. Like, in my picture here, the Cb and Cr scores are entirely redundant with CIEDE2000 and, in general, the APSNR...
P
So what's missing is really just a few nice-to-have features; for example, support for the Daala entropy coder has not yet been completed, and also, I think, the codec lacks some good tools for screen content. So what we have now is not a complete merge of Daala and Thor, but it's a fairly simple codec performing pretty similarly to H.265, I think, but it should be less complex.
P
And I also have, it doesn't fit the page quite right, but okay, "AV1 is frozen", in quotation marks. I think there are quite a lot of updates going on; really, now, small changes, but at least no new tools are allowed. As I said, there's a lot going on in the repository, but it's only small changes, and...
P
...still some bug fixes going on. Last time I gave a comparison; I showed the compression history of AV1 over the past few years, and also the complexity, and I'll go through that again with an update. I have some numbers showing the improvements over VP9, or actually over what AV1 was like in July 2016, which is roughly the same as VP9: we have at least a PSNR-Y BD-rate gain of around 29%, and the other metrics are roughly the same.
P
So this is the compression history, starting in July 2016 on the left. It's been a bit up and down; the latest bump, in February this year, is probably just because I sampled at a bad time, but it's almost 30 percent, and that's what the revised goal was. We started out with, I think, like a 40 or 50 percent gain, but it's become more realistic, and we've got 30 percent.
P
It's at least the same order of magnitude of complexity. So the main difference, perhaps, between AV1 and H.264, complexity-wise, is that if you throw more CPU at AV1, you get more compression; but if you throw a lot of CPU at H.264, the compression will stop, you won't get that much more. So that's everything I had, in, certainly, less than two minutes.
J
Thank you, Stefan, for the important question. I think let's have that question brought up again after the xvc discussion, because, Nick, we need to decide as a group what's the best way forward. To remind everyone: the intended output of this group was a single codec, and hopefully something that's practical and implementable by web browsers for WebRTC usage; so for real-time, interactive, two-way encoding and decoding, you know, as Steinar shows here.
J
You know, AV1 theoretically could probably fit that bill, although we don't have a proposal from someone representing the Alliance officially to come in here and do that; but right now the current practical implementations of AV1 are nowhere near that. What would you say is the current gap, Steinar, between making it practical, real-time speeds, and getting compression much, much better than H.264? What's the current gap?
R
Okay, thank you very much. So, I will be presenting about the xvc video codec. My name is Jonathan and I work for Divideon. So, this is a new proposal to this group, and this is the first time we're presenting it, and I kind of hope it's not too late to bring in new stuff. But what I want to emphasize here is that it's not just a new piece of technology that we're bringing in; it's also representing a slightly different mindset for developing, and I'll...
R
...go into that at the beginning of the presentation. So, what I will be presenting today: just briefly, what is xvc; the design philosophy; very briefly the technology; and then I'll go into more detail about something we call the restriction flags and how that helps us...
R
...to bring in new versions of the codec, and I'll go into more detail on that. We have also focused very much on making sure that our implementations, especially on the decoder side, are efficient and capable of being quickly brought into practical applications, to use commercially. We have a few demos of that, and there's a demo on our webpage where we're actually running the decoder in JavaScript, so you can see that the decoding process itself is very lightweight.
R
We only remove stuff, or design around stuff, if we really, really need to; if there is a request to not use a specific coding tool. That means we can use the very best compression technologies and get the best performance, and only if we run into problems with that will we step down and use some alternative, or rather, yeah, do some design-around.
R
So here's a very simple, extremely simplified, scale: if you want something which is really high performance, you also have to be prepared for a high risk that there are patents surrounding it, or patents on that technology; while at the other end of the scale, if you want to be really sure that you are using something that is not patent-encumbered, then you have to be prepared that the performance is probably not as good.
R
Much of the common machinery that you see in modern video codecs is very similar in xvc and in other codecs. Compared to HEVC, for example, we have a bit more advanced splitting of blocks for prediction and for transform, and non-square transforms, and so on; and in the current version that we are working on right now, which we haven't released...
R
...yet, we add quite a bit more of those advanced features, so to say: cross-component prediction, or affine motion prediction, or local illumination compensation. I've talked about coding tools as separate pieces of technology, separate processing steps that you can isolate from each other, and that's actually also how we have developed and implemented it in the reference software. We have this concept of restriction flags, so each of these different tools can be turned off via control information in the bitstream itself.
R
If that tool is turned off, yeah, then we go into this if-clause, and we'll just replace that prediction mode with DC prediction instead. And we have this for every one of these 76 tools; we have a fallback solution. This one is very simple, but some of them are a bit more advanced, because they relate to signalling and you have to do several modifications in the code to make them work well; but this is one of the simplest ones.
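The fallback mechanism described above, a per-tool restriction flag that drops the decoder into a simple substitute path, might look like this in miniature. Every name here is illustrative; this is not the actual xvc code (which is C++), just a sketch of the if-clause idea.

```python
def predict_block(mode, neighbors, restricted):
    """Toy intra predictor with a restriction-flag fallback.

    If `mode` has been restricted via the bitstream flags, it is
    replaced with plain DC prediction, mirroring the if-clause
    described in the talk."""
    if mode in restricted:
        mode = "dc"                       # fallback for a disabled tool
    if mode == "dc":
        return sum(neighbors) / len(neighbors)
    if mode == "fancy_intra":             # stand-in for an advanced tool
        return max(neighbors)
    raise ValueError(f"unknown mode: {mode}")

neighbors = [10, 20, 30, 40]
full = predict_block("fancy_intra", neighbors, restricted=set())
fallback = predict_block("fancy_intra", neighbors, restricted={"fancy_intra"})
```

The same bitstream syntax element can thus be decoded either way, depending on which tools the stream declares as restricted.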
J
So, now, from the floor mic, virtually, a question about the selection of the tools controlled by the flags: did you design those because of what you suspected might be areas that could be patented, or purely based on technology, based on what you thought was the right breakdown of the components of the codec?
R
What we've tried to do is isolate as small pieces as possible which are still practical to implement. So we want to have as many of these switches as you can have, while still not causing too much trouble in the code, so to say. That's the reasoning: we want to make every piece so that you can turn off every piece of the codec, in as small a portion as possible.
R
Some of these flags might not make very much sense. We have one flag which turns off bi-prediction altogether, and that is something which hurts performance, and maybe it's not a tool which is actually at risk from an IPR perspective, because that's very old technology; but most of them are for small pieces that you really want to be able to turn off.
R
So what I also wanted to talk about is how we use these restriction flags to enable evolution of the codec, and versioning of the codec. All xvc bitstreams contain an indication of an xvc major version and an xvc minor version, where the major version represents new pieces of technology.
R
If you increase the major version, that means you have added new tools to the codec, while the minor version represents a reduction in the number of tools: when you say "this tool should no longer be available in the codec", you disable it via the restriction flag and you make the codec smaller. So we have separated these two, which makes it possible to control how the codec evolves over time, and it's actually the reference software which defines which versions are valid at any point in time.
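The major/minor rule described here can be paraphrased as a compatibility predicate. This is a sketch of the rule as described in the talk, not the normative xvc definition: a higher major version means tools were added (older decoders must reject the stream), while a higher minor version means tools were removed (streams still relying on removed tools become invalid once support for the old minor is dropped).

```python
def can_decode(decoder_major, decoder_min_minor, bs_major, bs_minor):
    """Sketch of the versioning rule: reject bitstreams whose major
    version is newer than the decoder supports (unknown tools), and
    bitstreams whose minor version predates the oldest minor the
    decoder still accepts (they may use since-removed tools)."""
    if bs_major > decoder_major:
        return False
    if bs_minor < decoder_min_minor:
        return False
    return True

# A decoder supporting up to major 2, having dropped minors below 1:
ok = can_decode(2, 1, bs_major=1, bs_minor=1)        # still decodable
too_new = can_decode(2, 1, bs_major=3, bs_minor=0)   # unknown tools
too_old = can_decode(2, 1, bs_major=1, bs_minor=0)   # uses removed tools
```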
R
At some point, if your bitstreams are using a tool that you are no longer allowed to use, then you have to rewrite them or re-encode them, and that's something, if you are considering a large service or OTT applications, you might want to do: re-encode your bitstreams. Or, depending on what the tool is that you have to turn off, it might be just rewriting the syntax; or, if it's a more central piece, you might actually have to do a full re-encoding.
R
But, on the other hand, if you do that, you might be able to get better performance, because the codec has evolved since you first created the bitstream, so you might be able to compress it more efficiently. And then the other side of it is the client upgrade: if a new major version is released with new tools in xvc, then the clients need to be updated to support that new version. So, during a period of time, there will be support for two different versions.
R
So, this is one of the reasons why we want the codec to evolve over time: it's because of the patent situation, and because we have seen how difficult it can be for patent-encumbered codecs to become applied, due to patent trolls and unclear licensing situations and so on. So if there is a report of a problem related to a piece of technology, and we determine that, yeah, this tool can no longer be used in the codec...
R
...then there is a new minor version released, quickly after it has been determined that this tool cannot be used because there is a patent which is in conflict with the license of xvc. Then, after we have made that minor version release, we can start exploring what would be a better approach to do something similar.
R
After still some more time, you would say that, at this point in time, we expect all decoders to support this new version. What that means from a bitstream perspective: I've tried to draw these different colors to indicate during which period of time a specific version of a bitstream will be valid. You see that the version 1 bitstream is valid until the point where support for version 1.0 is removed from the reference decoder, but there is an overlap.
R
Looking at it from the perspective of encoders and decoders, you will have an overlapping period in which the encoder needs to be "upgraded", but I've written it within quotation marks, because this upgrade is a very simple thing: it's just disabling a tool which should no longer be used.
R
So this is typically just a configuration change that you would make on your encoder, to make sure that this specific feature is not used anymore. But then, over here, you have the upgrade of the decoder, and that requires a new installation of a new version of the decoder; so that's pushing out new software to the clients. But for both of these there's an overlapping period of time in which you can perform this action, and you can see that you might introduce more versions of the codec.
R
So that's about the version handling. I have one slide about xvc in the WebRTC scenario. This is not something we have been working on specifically, but having these restriction flags, and these isolated pieces of technology, opens up an opportunity to determine the tool set to use based on the capabilities, or the scenario, that you want to use them for. So you can...
R
...you can open up for a negotiation of the tool set, so it's different for each session, and you can also, in that scenario, take into account the patent situation, the IPR situation. If you want to do a royalty-free session, you can explore what the current situation is around these tools, and you can even do it based on your locality, where you are in the world, to use different tool sets for different sessions.
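The per-session negotiation idea sketched here could amount to intersecting each endpoint's supported tools and removing anything restricted for that session (for example, by policy or locality, for royalty-free operation). This is purely illustrative; neither xvc nor WebRTC defines such an API, and the tool names are made up.

```python
def negotiate_toolset(offer, answer, session_restricted):
    """Tools usable in a session: those supported by both endpoints
    and not restricted for this session (e.g. by policy or locality)."""
    return (set(offer) & set(answer)) - set(session_restricted)

offer = {"affine_mv", "cross_comp_pred", "lic", "dc_pred"}
answer = {"affine_mv", "lic", "dc_pred"}
tools = negotiate_toolset(offer, answer, session_restricted={"lic"})
```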
R
When it comes to results: we have been running xvc with the test conditions and the test sequences from the testing draft, and used the AreWeCompressedYet framework to generate results, and we think the results look really promising. We have quite significant gains relative to HM (HEVC), but also quite large gains compared to AV1, and I've included one figure here as an example; you'll see xvc being better than the other codecs.
R
It can be used in web applications; we have demonstrated that the decoder is really lightweight. What we haven't worked on is developing fast speed modes for fast encoding, so that's something that needs to be evaluated more, I would say. And then, for the third bullet here, we believe we have a good framework for the licensing and IPR situation, which would meet this objective. And that's it. Thank you.
Thank
you.
L
So your contention is that picking tools that are high performance, versus picking tools that are old and low performance, is one way to mitigate risk, and I agree with you, that's one way to mitigate risk, right? So there are other ways to mitigate risk, such as, for example, paying a bunch of money to lawyers and doing a bunch of work, or getting a bunch of other companies who hold IPR to agree to license it under...
L
That is another point that I'll get to in a little bit, but I think, at least from our perspective, like, I think that would be an OK strategy if I lived in a world where past damages did not exist. The problem is that I ship a few hundred million copies of Firefox every six weeks, and if I've been deploying your codec for a few months, that's an awful lot of past damages that I've racked up...
L
...if there's a problem that we just didn't know about and hadn't shut off yet. So I think, in order to actually be widely deployable, you need to do some of that other risk-mitigation stuff up front.
R
Yeah, well, I think one of the reasons why we bring this proposal here is because I think a group like this is a good place to actually explore the patent situation, and send out requests for patent declarations, to determine what the real patent situation is around this codec.
L
And I agree that that may turn up some problems, but if I can translate that for you: it sounds like what you're asking is for us to do all the work to guarantee that this is royalty-free. Which, you know, maybe, if this is compelling enough, then that might be worth investing in; but, you know, that's a decision I think the group has to make.
L
The answer is: yes, that'd be easy. Yeah, yeah. But, I mean, the question is, you know: if there are things that you don't want to contribute on a royalty-free basis, then we would have to take those out of the codec, at least in terms of what I think the working group would be willing to standardize. And essentially, like, the only thing...
L
...I think that we are interested in, under the current charter, would be that royalty-free baseline. Like, if you want to hold out those other things, okay, but then you have to go reevaluate what the performance is going to be if we take all of those things out, and I think having some kind of notion of how much that would affect the performance would be important for making a decision on how useful a candidate this is.
J
Yeah, to sort of clarify: the charter doesn't rule out anything that could be royalty-bearing, but there would have to be good evidence that there is good justification for including that technology. So, you know, if the compression gain were very significant and the royalties were very reasonable, maybe that would merit consideration by the group; but if the compression gain is minimal and the royalties are in line with, you know, say, for example, the situation with other codecs today, other leading-edge codecs...
L
If you actually go read the charter, there is a section towards the end where it says we should follow the preference specified in BCP 79 to prefer royalty-free technologies. We can't actually write in a charter that things must be royalty-free, because that's not a determination that the IETF can make; essentially, that's only a determination that can be made in a courtroom, under the current legal system in the U.S. at least, and probably many other jurisdictions as well, right?
L
So, but different members of this group probably have different preferences over what would be an acceptable level of licensing. You know, speaking as someone who distributes copies for free in an open-source manner, which means we want other people to be able to distribute copies of our software for free, I think the only acceptable licensing to us would be none. But, as Mo mentioned: well, no money. Yes, let me rephrase that more accurately.
K
My question: I was actually very happy with your results slides; I think they did a good job of following the testing draft procedure. I was actually curious whether you had any comments regarding the testing draft and how you felt it worked for xvc. And I was also curious, given your very good results, whether you had any ideas about what in xvc gave you the really good results; like, did you find...?
R
So, on our GitHub repository we have the master branch, which represents version 1, and then we have the dev branch, which is targeting version 2, but we haven't released version 2 yet, so that's still work in progress. I can't go into it on this slide; we haven't determined exactly which tools will be in there.
R
I would think that the IETF might be more agile than other standardization organisations in terms of revising and updating specifications, but I don't know; I mean, if it happens too often that you have to revise it, that wouldn't be good. But we don't envision that. I mean, we think it would be only as rarely as possible that you would have to remove something, and making these new additions would also be, I mean, on a reasonable time frame, so that you can get these updates out comfortably.
J
From the chairs: yeah, I don't think there would necessarily be a technical problem. I mean, we have plenty of working groups that have lived for, you know, probably too long, so I don't see a technical problem with having rolling technology coming in on a, you know, monthly or six-monthly basis.
N
Thanks. One of the most interesting things about this idea, to me, is the ability to turn tools on and off, and that you got that to work with a fairly, you know, fairly modern and obviously complex codec. And I do wonder if that's one of the big takeaways; whether this working group should take on that idea.
N
One of the reasons I like that type of idea is that, right now, I'm involved with lawyers from various companies in arguments about whether a patent is valid or not, or whether a patent is obvious or not. It's a very different thing: some people think, "oh yeah, that was obvious at that time", and many other people don't, and this would allow different companies to actually choose different settings based on what they think about certain things like that, in certain cases. So that's one comment.
N
You know, when somebody's trying to make a bunch of money from their IPR, when it's just a money case, not a sort of strategy case, then, typically, the optimal strategy seems to be, for the most part, to wait until, you know, within a few years of the expiry of the patent before any of the lawsuits start. So often the discovery that there's a problem in something may be many, many years after it was being used. That's it, thanks.
Q
Stephan Wenger. In many countries, let's say, patent complaints have to be, nowadays, fairly specific, right? So, to speak more broadly: I think the addition of a tool that, post standard-setting, reactively allows you to selectively throw away, in the standards domain, that one nasty tool which that stupid troll across the street is asserting against the world, that's a useful tool. That's, I think, what Holland was going after; I went after that in MPEG about two years ago or so, right? It's...
Q
It's a useful tool. Whether it is technically doable is another question; I mean, for a video codec, some people will tell you it's quite hard, because of the synergy effects between the various tools that are sitting there; it's not that easy to compartmentalize. However, if such a thing were viable, I think the standards groups would have a tool that would allow them to react, in a somewhat backward-compatible way, to the threat of certain trolls, and that, hopefully, would discourage those trolls. So that's the idea behind a proposal like this.
Q
Maybe you have a better choice: you make it mandatory to implement everything, except when there's kind of an industry consensus to switch something off, because there's this one troll with too much money who makes a nuisance out of himself. So, yeah, there's an option here; it may work, assuming this group would want something like that, and assuming we are not rubber-stamping everyone who doesn't have that and who has frozen bitstreams, and whatever, right.
M
Schwartz. I thought Cullen Jennings was going to say that, so I didn't bother; but Cullen Jennings has been speaking in other working groups recently about new architectures for video and real-time media, and has reinvigorated, I think, a discussion about using runtime-pluggable software codecs in a web context, or in contexts where it might be possible to...
C
Somebody said "living standards"; cool. I'd say, I guess, the best thing to do next is to take some of the comments which are in the room, and go through the minutes; maybe there's a draft, so if people want to continue the discussion, you can read the draft, reference it, and then go and develop from there.
J
So, talking from the floor mic again: I had a few more questions. First, you mentioned something in the draft about being in continuous dialogue with patent holders. Do you have a rough feel for whether or not you're going to have enough of those committing to the royalty-free baseline terms that the performance would be, you know, at least Thor-like? Do you have a feel for that yet?
J
From a chair point of view, I think we probably would not make a decision to hum on something that we thought would not meet most of the charter objectives. So unless we had some kind of, you know, rough view of what the likelihood of a performant RF baseline would be, it would be premature to try to adopt something that we thought was, you know, very unlikely to at least meet Thor once you had the RF picture of it figured out.
J
So, is it possible that you may be able to go off and make some headway on that question before the next meeting? Or, just in general, what timeframe do you think: is it months, years, before you figure out what the RF baseline would look like, or is it something you can do in a pretty short term, months, say?
R
By the next meeting, but the question is, would it really be royalty-free? I mean, that's the tricky question. And, I mean, we have made some analysis of our own, to see what would make a good tool set and what we believe is a good central piece of technology; but to really have something concrete, to say "this is royalty-free", that's very difficult for my company to bring, I mean.
J
Okay, and then the final thing; it's a small point, but maybe double-check with Steinar about the blip that happened in AV1. Someone was trying to make it faster, and by making it faster the compression, you know, hit a horrible cliff. So there was a blip that seemed to be right about the time when you were doing your evaluation; so maybe double-check on the current git hash, maybe from this week. Okay.
J
Thanks, thanks for verifying that. Okay, so, to Brent's point: I think this is interesting for different reasons, not just for the core technology part of it. Even if the core technology may prove to be, you know, patent-encumbered, by people that are unwilling to license it royalty-free, some ideas from this may be useful for whatever candidate NETVC pushes forward, and especially the cadence of regular releases may align very well with our charter to put it in WebRTC, because the browsers all have this dynamic cadence.
J
You know, roughly six weeks to, you know, twelve weeks. If that could be a cadence that it publishes on, and there's a reasonable overlap of versions that can be maintained in a browser; you know, not an infinite number of versions, and not one version back, but at least, you know, one year's worth of versions; if that can be well contained...
J
...that may be a good fit for the update models that the browsers use, and it would be very interesting to have a dynamic codec that revs that fast and is constantly improving. I think that'd be a pretty cool thing to push forward. But the RF question is the big elephant right now: what performance can you get when you get this down to RF?
C
Great, okay, any other questions? Okay, thank you so much; that's great, thank you. Okay, that is the end of our agenda for today. Does anyone else have anything else they would like to raise? Brilliant. Okay: if you have not signed the blue sheets, please come up to the front to sign the blue sheets. If you're interested in becoming a chair of this group, please let the current chairs know, myself and Mo, or Adam, our AD. I believe you are free to go. So, thank you, everyone. Oh, wait.
C
Cool, all right. Let's do this. Thanks, everybody.