From YouTube: IETF98-NETVC-20170328-1300
Description
NETVC meeting session at IETF98
2017/03/28 1300
D: Alright, so we're starting a few minutes late, but there were lots of important people that we were waiting for.
D: Welcome everyone to NETVC. A few things before we get started. We have quite a bit of change in personnel, so we'd like first of all to bid farewell to our AD, Alissa, who will now be stepping up as IETF chair. Congratulations on that, and thank you very much for everything you've done for NETVC.
G: Thanks. At the mic for the last time: thanks, and you're obviously in extremely capable hands, as your former co-chair is now your responsible AD, so I will be watching from over in the general area.
D: And that's the other bit of important news: replacing Alissa will be Adam Roach, our former co-chair, so there should be perfect continuity there, and now somebody to pester about the write-ups. And we want to welcome Natasha, who's stepped up to co-chair in place of Adam, and I'm going to stay on.
F: Anybody that could take notes and do Jabber scribe? Jabber scribe is an easy job: you just look at the Jabber room and stand up at the mic if there's a question to be asked. Thank you. Sorry, I don't know your name... Jonathan? Jonathan [surname unclear]. Great, thank you very much. Anyone for note-taking?
F: You get a star! You get a star! I think it's like really [unclear]. You can trade it in for nothing, but you can show how great you are.
D: The Note Well is particularly important for this working group. If you're not familiar with the IETF IPR policy, please make sure you are aware of it. This work, to remind you, is part and parcel of having an IPR-free standard as the output of this work, so make sure you review that. Any comments on the agenda before we get started?
D: Alright, so going on to the first item: reviewing our working group documents and milestones. We had a first milestone of July last year, which we've obviously exceeded, and that was the requirements and testing/evaluation criteria documents, if we choose to publish them as Informational. And we did decide to publish them, so that's not the question here, but obviously we'll update that milestone to May, which is kind of aggressive, but we think there's not much left to do.
D: We already did a working group last call on requirements last time, but there have been some changes, which Jose is going to review, that are probably a little bit more than editorial. So we will restart a working group last call for two weeks after this meeting. And I believe we're ready to start the working group last call on testing, but we'll wait until after the testing presentation to make that call.
D: But the goal is to get both of those done by the May milestone. Now, there are other milestones: the codec spec and the reference implementations of it were due in May. I don't think there's any reasonable way that can be accomplished, so we're going to take those milestones to December. And there's also a storage format spec, on which there has not been any...
D: Yeah, okay, so apologies to those remote: we'll have to remember to state the slide numbers, and people online will have to pull down the slides and flip them manually. Sorry about that, but those in the room are going to get a very high resolution version of these slides.
J: Hello, can you hear me? Yes? Thank you very much. So, good afternoon, everybody. This is a quick presentation on the NETVC requirements; this is version number five of the document. Next page, please. It's going to be pretty quick: there are changes to the applications, some editorial changes, some changes to the requirements, and nothing to the evaluation methodology.
J: So, in the applications we made some changes to the introduction, some editorial changes. The important change here is to the video monitoring and surveillance application, where we changed the rates that we had originally. For 1080p we originally had only 25 frames per second, and now we have both 25 and 30. For 5 megapixels we added a low frame rate as well, since for surveillance, in many cases, the quality of the pictures is what is important and not necessarily the frame rate.
J: So we added a 12 frame-per-second rate, in addition to 25 and 30, for the higher temporal resolutions, and for the 4K case as well we went from the low frame rate to adding 25 and 30. So this is just to cover a wider range of surveillance applications, and that's that for the application section. Next page, please.
J: So there were some changes to the general requirements, mostly for sections 3.1.1 and 3.1.3. For 3.1.1, what we wanted to do is to mention compression performance, so that the compression performance would be good both for what we call easy material and for difficult material, for natural content.
J: That is, we would like to see improvements not just in the easy scenes, but also in scenes that have a lot of detail and a lot of motion. That was the thinking in changing the wording for that section, and also to explicitly call out screen content sharing, both static and dynamic. So those were the changes to that requirement.

J: For 3.1.3, we had mentioned syntax that would allow extensibility, but we wanted to make clear that this is explicitly the bitstream syntax, and that we recommend that there is backward compatibility. In this case, backward compatibility means that changes will not affect legacy decoders, and by legacy we simply mean decoders that are working at a specific profile and level: they will not be affected by future changes to the encoder. So that's really the essence of the changes for 3.1.3: backward compatibility for a specific profile and level. Next, please.
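The backward-compatibility idea described here can be sketched as a capability gate. This is purely illustrative, not from the requirements draft: the profile and level names and the resolution caps below are invented. The point is that a profile fixes the tool set a decoder must implement, a level caps the resources, and future syntax extensions must not break a decoder built against a given profile and level.

```python
# Hypothetical decoder capability gate, illustrating what "backward
# compatibility for a specific profile and level" means in practice.
# Profile/level names and caps below are invented for this sketch.
SUPPORTED = {
    ("main", 0): {"max_width": 1920, "max_height": 1080},
    ("main", 1): {"max_width": 3840, "max_height": 2160},
}

def can_decode(profile, level, width, height):
    """True if this decoder handles the stream's profile/level and size.
    Future bitstream extensions are expected to leave streams at an
    existing profile/level decodable, i.e. this check keeps working."""
    caps = SUPPORTED.get((profile, level))
    if caps is None:
        return False  # unknown profile/level: refuse cleanly, don't crash
    return width <= caps["max_width"] and height <= caps["max_height"]
```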
D: Just before we go on: this is where I thought the changes would be really more than editorial, and I think this is why we need to reset the working group last call. So I'd ask the working group to please review this. I think this is more of a substantive change, and we should realize what it means to explicitly call out screen sharing performance as being part of the target.
D: Review it during the last call and make sure you weigh in with your views. Note that in other standards, for example in HEVC, the first version did not have any target for screen sharing performance, and it's not until version 4, which was just published a few months ago, that screen coding performance actually ended up being significantly better. And if we're setting our bar relative to industry standards now, this is an even more important thing to keep in mind: twenty-five percent better than the latest state-of-the-art is a pretty significant undertaking.
J: So, next slide, please. Okay, so for the basic requirements: we just went through the general requirements; now, for the basic requirements, there was a change to the support of efficient random access point encoding. We had that before, such as intra coding or resetting of context variables, as well as efficient switching between multiple quality representations. So this is something that we talked about before.
J: That would be a requirement in order to allow efficient random access encoding. For 3.2.3, which has to do with complexity, what we wanted to add is a specific sentence that would point towards reasonable complexity of hardware and software codec implementations compared to what we have today. As always, we still have in the document the requirement that, for high quality encoding, the encoder should not be ten times as complex as what we have today.

J: Okay, and that's mostly it.
D: While we're looking for the mic: I think we're doing the diffs for this change. I think there was also an added definition of profiles and levels, and that's probably important to call out and have people review. It's not defining what profiles and what levels the spec should have, but defining what constitutes a profile and what constitutes a level, which could be something important to review. Any comments?
D: So next up is Thomas, to talk about testing, and he's also going to do it remotely. So, Thomas, if you could come up to the mic via Meetecho... is he on?
M: Cool. And I can't... are my slides up? I can't see it... ah, there we go, now I can see. So I'm going to present the next update of the IETF NETVC testing draft, and I think it's actually version 05; I forgot to update the number on the first slide, but there's only one change this time. So, next slide. Okay, there are basically two new test sets in the testing draft. I didn't remove... yes, that's it.
M: There are two new test sets in there: objective-2-slow and objective-2-fast. Previously we had version one, and these basically obsolete the old test sets. I renamed them: instead of calling it objective-1.1, I renamed it to slow and fast, just for less confusion, which reflects what they are. They're basically structured just the same way as the previous test sets, where the slow one has the very high resolution 4K videos, and twice as many videos in total as the fast one. Next slide.
M: The biggest difference between these test sets and the previous ones is that these new ones have HDR material in them. The HDR material is at all resolutions, all the way up from 360p to 4K. It's encoded in an HDR10-compatible format, which is basically SMPTE ST 2084 with a 1000-nit peak brightness, and it's stored at ten-bit sample depth.
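For background on the HDR10 format mentioned here: ST 2084 defines the perceptual quantizer (PQ) transfer function that maps absolute luminance, up to 10000 cd/m², to signal values. A minimal sketch of the PQ inverse EOTF, using the constants from the specification (this is context, not part of the testing draft):

```python
def pq_encode(nits):
    """SMPTE ST 2084 inverse EOTF: map absolute luminance in cd/m^2
    (nits) to a normalized [0, 1] PQ signal value."""
    m1 = 2610.0 / 16384.0            # 0.1593017578125
    m2 = 2523.0 / 4096.0 * 128.0     # 78.84375
    c1 = 3424.0 / 4096.0             # 0.8359375
    c2 = 2413.0 / 4096.0 * 32.0      # 18.8515625
    c3 = 2392.0 / 4096.0 * 32.0      # 18.6875
    y = max(0.0, nits) / 10000.0     # normalize to the 10000-nit ceiling
    ym = y ** m1
    return ((c1 + c2 * ym) / (1.0 + c3 * ym)) ** m2
```

A 1000-nit peak, as in these clips, lands at roughly 75% of the PQ signal range, which is part of why PQ content needs the ten-bit sample depth discussed next.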
M: That means that all of these test sets, even the fast one, will now have 10-bit samples in them, which means that you can only use these test sets if your codec supports ten-bit. That's probably the biggest change here.
M: So the other difference is that we've added 240p content to both test sets. People were concerned we wouldn't have enough coverage of really low resolutions, for example very low bitrate cellphone streaming for YouTube and similar use cases, so we added six 240p clips.
M: Also, because these small resolutions are much, much faster to encode than the big resolutions, we decided to extend them from the previous limit of 60 frames to 120 frames in length, which is also the I-frame interval. The limitation on the clip length is mostly a CPU optimization, so we don't spend a whole lot of time encoding, because the longest video basically limits how long the test takes. For the small videos we're not limited by the time they take, so we increased the length to 120 frames.
M: Next slide. There are also some small improvements: the shields clip had a gray frame in it, so that bug has been fixed in the new version of the test set. The other thing is that a couple of videos have been switched around so that objective-2-slow is a complete superset of objective-2-fast, meaning that it only adds videos to objective-2-fast. Previously, with objective-1, there were some differences between the videos: objective-1-fast had some videos in 4:2:0 format and objective-1.1, the slow one, had those videos in 4:4:4 format. So now the formats are matched, and you can basically run objective-2-fast and then potentially get the slow version just by running the missing videos, which is a convenient feature. It's not implemented in our Are We Compressed Yet tooling or any implementations yet, but it is something we can do in the future.
M: Basically, what the weird aspect ratios give you is different border effects. For example, because your video codec is limited to, say, 64x64 blocks, with different aspect ratios you'll end up with blocks partially outside of the picture, and you'll have to do weird edge handling, special cases in the encoder. That's actually mostly covered by all the different resolutions, especially some of the 240p clips, whose resolutions give partial block sizes. So that effect is hopefully covered just by having different resolutions.
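The border effect being discussed can be made concrete with a little arithmetic. Assuming 64x64 superblocks, the size named above: a coded frame is padded up to a whole number of superblocks, so any resolution that is not a multiple of 64 leaves partial blocks at the right and bottom edges that the codec has to handle with edge extension. A toy sketch:

```python
def superblock_grid(width, height, sb=64):
    """Return (cols, rows, pad_right, pad_bottom) for a frame coded in
    sb x sb superblocks: how many blocks are needed, and how many
    padding pixels the partial edge blocks introduce."""
    cols = -(-width // sb)   # ceiling division
    rows = -(-height // sb)
    return cols, rows, cols * sb - width, rows * sb - height
```

A 426x240 clip, for example, needs a 7x4 superblock grid with 22 columns and 16 rows of padding, exactly the partial-block case the 240p clips exercise.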
N: Steve Otto. This may be jumping ahead a little, but as we look forward to a working group last call, I'm wondering if it makes sense on this document to hold off publication until we're getting close to finalizing the codec specification itself, since you might end up deciding you need to do more expensive tests than we now envision.
M: Yeah, I do think that's a concern. This document has been iteratively updated, but the larger concern I'd add: the test document does have a spot for a very expensive MOS-score subjective test. However, that hasn't really been used, so it's quite likely that there may need to be adjustments made in that part of the document when we actually run one. So that would be a concern for finalizing it right now.
H: Hi, Reynold [unclear], Mozilla. I think it's fine to keep it open; I agree with Tim. Also, just one thing on the previous discussion of the sizes: I'm wondering whether it makes any sense to have at least one case where the total width or total height is smaller than one large 64x64 macroblock. The 240p case probably does not cover that, and that's a question to the codec people; I don't know what the answer is.
O: If this working group last call is distracting attention... if there were fewer balls in the air, maybe you'd have more cycles to finish them. But if you think that the process of finishing the codec is going to discover things, then the counter-argument would be that having a moving target for what the products being tested are evaluated against might be harder on developers. But I think there's some benefit to getting it done, so that's one view.
D: It depends on whether we intend to continually update it, Thomas. If that's your intent, and you think there's going to be some subset of changes coming in over the next several months, then we're not going to violently try to push it for publication. On the other hand, if you're pretty sure that it's ninety-nine percent done, and that anything further would just be minor updates, editorial-level updates, or just adding a few more test sets or a few more test clips, that would be immaterial too.
M: I think, so far, for each of the last meetings there's been a fairly large update, but mostly it's been wording changes. I would actually consider the addition of test clips a very large update, though, because it directly changes the results you get from the testing draft.
M: I think the most likely large change would be for a new set of test clips to be added for some special case, and then also the subjective test section potentially changing. So if you don't consider adding test clips to be a large change, then we could publish it.
D: I don't think we want to publish stuff just to publish stuff, and it sounds like there's really no strong opinion on progressing it now; no strong opinion favoring progressing it now. I heard a pretty weak opinion from Jonathan, with him questioning whether or not he really believed it himself. So I think we'll keep this document open, and I don't think it hurts us to do a last call on it again if we intend to progress it later anyway.
I: The new things in Thor: I've added support for monochrome video, which was a trivial thing to add, and also added support for 4:2:2 chroma sampling. It's actually encoded as 4:4:4 internally, which means that it's very simple to implement: the code remains clean and simple, and we don't have to be concerned about the aspect ratio or rectangular blocks.
I: Now, perhaps some sensors will give you 4:2:2, but again, unless somebody can come up with a real use case for this, I would be reluctant to add a lot of complexity in the encoder and decoder for a corner case. So my suggestion is to just keep it as I did for now. I also made some changes to the CLPF, the constrained low-pass filter: some improvements there, which gave a 0.4 percent BD-rate gain in the high complexity setting. There's a question?
J: Yes, exactly. I was going to say that in the requirements we do have a mention of a 4:2:2 requirement, but I agree completely with Thomas that the reason it's there is for legacy broadcast applications. So I think it's something that we need to support, but it doesn't have to be optimally supported. I think most of the material will be 4:2:0, and for 4:4:4 and 4:2:2, as Thomas says, we should have support.
I: Okay, then I'll move on. There are various fixes: for instance, there was a bug in the chroma-from-luma code in the high bit depth case; we have fixes for increased portability; and we have code that has been taken from Thor and put into the AV1 codec, and whenever that code has been updated in AV1, we've tried to take that back into Thor, so I would like to keep the common code in sync.
I: I'll give some details on the change in CLPF. The CLPF has been presented before, so I'm not going to repeat that, but just as a reminder: CLPF is a loop filter that will modify the pixels by a delta which is calculated using the surrounding pixels. It used to be six pixels; that has been increased to eight pixels. Also, the filter has a clipping function, which restricts the amount of change that a pixel can undergo, and that has been modified as well.
I: On this slide, I try to illustrate the new function. It can have three different strengths, and that's what the plot shows. What has been added is a ramp-down. So in this example, if the difference is more than 32, we will not change the pixel, and if the values are between the strength and 32, it will ramp down gradually. The function now also takes another argument, which is the...
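The clipping-with-ramp behaviour being described might look like the following sketch. The exact arithmetic lives in the Thor and AV1 sources; the linear ramp and treating 32 as the cut-off beyond which a difference is assumed to be a real edge come from the description above, and everything else is illustrative:

```python
def constrain(diff, strength, limit=32):
    """Constrain a neighbour-minus-centre difference before it feeds the
    low-pass filter: small differences (up to `strength`) pass through,
    differences between `strength` and `limit` are ramped down linearly,
    and differences of `limit` or more are ignored as likely real edges."""
    mag = abs(diff)
    if mag >= limit:
        allowed = 0
    elif mag <= strength:
        allowed = mag
    else:
        # linear ramp: `strength` at mag == strength, 0 at mag == limit
        allowed = strength * (limit - mag) // (limit - strength)
    return allowed if diff >= 0 else -allowed
```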
I: So, moving on to the next topic: I have run some experiments using the Are We Compressed Yet setup.
I: I wanted to include x265 as well, but for some reason I didn't get that to work; maybe I can get it working later in the build. So I only have Thor compared to AV1, and I think it's useful to have an update on this, since we have switched, I have switched, the test sequences.
I: It shows much more than just intra performance, and that's very useful, but I guess for pragmatic reasons we decided to keep it short. It's using the objective-1 set, which, as Thomas said, is now outdated.
I: I tried to use the new one, but it turned out Thor doesn't support resolutions that are not a multiple of 16, and I hadn't time to fix that, so I had to use the old test set. And there has been quite a lot of activity in AV1 recently.
I: So it would be interesting to see how Thor compares now, and what I found was that AV1 now, generally, at least on average, has become better than Thor. It used to be somewhat worse at compression, but then again it was something that was very, very close to begin with. So AV1 has progressed; there has been a lot of improvement in AV1 with the new tools. I think Thor still seems to be slightly better at video conferencing in low-delay configurations, so on content like meeting rooms and talking heads Thor performs pretty well.
I: So we should probably focus on the compression and not so much on the speed in the tests that I did. You'll also see that resilience, which is the only mode that Thor supports, has a significant cost. The default mode of AV1 assumes there are not going to be errors, meaning that if you lose some data, then you can't really decode anything until you get an intra update.
I: Of course, AV1 is a moving target, and as we speak new tools are being added; in the coming months we'll probably see a ten percent improvement, maybe more. Next slide.
So this shows Thor compared to AV1, and as before I have the BD-rate on the x-axis and the frame rate on the y-axis, but you should probably focus on the x-axis. Thor is to the right here and AV1 is to the left, and the left side is the better side. So this is what we see with error resilience on.
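The BD-rate numbers on these slides are Bjøntegaard delta-rate figures: fit log-bitrate as a cubic polynomial of quality (e.g. PSNR) for each codec, integrate the gap between the two fits over the overlapping quality range, and report the average bitrate difference in percent. A sketch with made-up rate/quality points (the group's actual numbers come from the AWCY tooling):

```python
import numpy as np

def bd_rate(rates_a, psnr_a, rates_b, psnr_b):
    """Bjontegaard delta rate of codec B relative to codec A.
    Negative means B needs fewer bits for the same quality."""
    fit_a = np.polyfit(psnr_a, np.log(rates_a), 3)
    fit_b = np.polyfit(psnr_b, np.log(rates_b), 3)
    lo = max(min(psnr_a), min(psnr_b))   # overlapping quality range
    hi = min(max(psnr_a), max(psnr_b))
    int_a = np.polyval(np.polyint(fit_a), [lo, hi])
    int_b = np.polyval(np.polyint(fit_b), [lo, hi])
    avg_log_diff = ((int_b[1] - int_b[0]) - (int_a[1] - int_a[0])) / (hi - lo)
    return (np.exp(avg_log_diff) - 1.0) * 100.0

# Made-up anchor data: codec B spends 10% fewer bits at every quality.
rates_a = [100.0, 200.0, 400.0, 800.0]
psnr = [30.0, 34.0, 38.0, 42.0]
rates_b = [r * 0.9 for r in rates_a]
print(round(bd_rate(rates_a, psnr, rates_b, psnr), 2))  # ~ -10.0
```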
I: What happens if we... let's jump to what happens if we discard the resilience requirement. In that case, AV1 is almost ten percent better. And if we move on to the high-delay configuration, then again AV1 is almost ten percent better, and finally we switch off the resilience. Next slide.
I: AV1 is able to slash the bitrate by eighty percent compared to Thor on some clips, and I think that's mainly because of some specific tools. On the other hand, on sequences like [unclear], and also the vidyo sequences, which are the video conferencing material, Thor is performing very well. This is, by the way, low complexity and high delay, and for some reason Thor wins on some chroma and also on screen content, which is not the case in low delay.
D: Okay, one quick point on that final slide: since we're deciding to keep the testing draft open, it is a thing to consider that other standards have broken out screen content, or in general different classes of content that are expected to have wildly different results. So maybe it's worth considering, in the test spec, breaking out screen content; if the results really do skew things to make them almost incomparable, maybe that's worth considering for future versions.
I: So the question is: what next? Thor is still using variable-length coding, and I think the expectation for a modern codec is that it should have arithmetic decoding. So one possible solution to that is to use the Daala entropy coder, and in that case perhaps it is more correct to talk of a merge of Thor and Daala, since the entropy coder is the very core of a codec.
I: Much of that work is already done, and again this will take the work in the direction of a merge with Daala. And since those tools have already been adopted in AV1, this would take us close to a subset of AV1. Well, actually, it might be more correct to say that it's AV1 that has moved towards what we have done in NETVC: much of the work that we have done here has now been adopted in AV1. All the Thor tools are already in AV1.
I: The interpolation filters, or at least some of the coefficients from those filters, the constrained filter work, quantization matrices, delta-Q: everything is now adopted in AV1. So that's what I have. If somebody has any opinions on how we should progress with the codec work, please speak up. Does it make sense to have a merge? To me, I think it's probably a good idea to see things converge in some way or another.
D: I'll remind the workgroup that the output of this is intended to be a single specification for the codec, so we're not going to progress multiple codecs; we're not going to look for multiple candidates. We want a single, interoperable codec at the end of this work, so anything that can be done to help merge things would, I think, be a big benefit.
D: Yep. So there wasn't very much to do in terms of an update on Daala itself, so what I'm going to present here is the constrained directional enhancement filter, CDEF, that Steinar was talking about. Next slide, please. So, first: Steinar explained what CLPF was, so here is a brief reminder of what the Daala directional deringing filter was.
D: It's a filter that operates on 8x8 blocks. It works first by estimating the main direction in each 8x8 block; this is done only for luma. Then it uses a conditional replacement filter. That filter is kind of similar to the CLPF function, except that there is no ramp-down: it goes to zero abruptly. The filter is first applied along the direction that was found by the direction estimation, and it is followed by a second filter that filters across the lines filtered by the first filter, to remove remaining ringing artifacts. In Daala we used a global frame-level strength that was quality dependent, and then each superblock, each 64x64 superblock, would signal a strength adjustment compared to the global frame level. We were using four different values, including one value that would simply turn off the filter.
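The conditional replacement filter being contrasted with CLPF here can be sketched as follows. The tap weights, threshold, and normalization below are invented for illustration (the real taps are in the Daala and AV1 sources); the point is the hard cut-off: a neighbour that differs too much contributes nothing, with no ramp:

```python
def crf_tap(diff, threshold):
    """Conditional replacement: use the neighbour difference only if it
    is below the threshold; otherwise replace it with 0 (a hard cut-off,
    unlike CLPF's gradual ramp-down)."""
    return diff if abs(diff) < threshold else 0

def dering_pixel(center, neighbors, weights, threshold, shift=4):
    """One output pixel of a 1-D conditional replacement filter applied
    along the block's estimated direction (illustrative weights)."""
    acc = sum(w * crf_tap(n - center, threshold)
              for n, w in zip(neighbors, weights))
    # rounded shift back down; weights assumed to sum to < 2**shift
    return center + ((acc + (1 << (shift - 1))) >> shift)
```

Ringing near an edge (small oscillations around the centre value) gets pulled in, while a genuine edge neighbour, say 100 code values away, is left untouched.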
So that's how things worked in Daala and originally in AV1 when we were doing the experiment. Now we've actually merged the Daala deringing filter with CLPF.
D: The result is this CDEF proposal. It merges the two into a single filter, basically by replacing the second conditional replacement filter in the deringing with CLPF. The resulting complexity is pretty similar to the original deringing proposal, just slightly higher, not very much. The results, on the other hand, exceed both deringing and CLPF alone, and they also exceed a cascade of deringing and CLPF done independently.
D: There's more work to do on further enhancements, but even now, at very high complexity, we're still around two percent improvement in PSNR. Next slide. In terms of complexity right now: at the highest complexity settings, CDEF is adding less than 1% to the encoder complexity; at lower complexity settings it's still adding ten to thirty percent, depending on the exact settings, but that is not very hard to reduce.
D: The line buffer has been reduced now down to six lines, which is what the original deringing filter proposal was using. For the search strategy, right now we're using the first one, which is the whole-frame optimization, but it's possible to simply pre-select some strengths to look at for a frame and only search those. This is how it was implemented in Daala, and that worked fairly well, so it should also work with the CDEF proposal. Next slide. So, what we have left to do, mostly:
D: We have yet to apply entropy coding to the strengths, so that should help us reduce the signaling overhead slightly. And the last step is that we probably want to optimize the interaction with other tools, because if we're able to reduce ringing, then you can configure other tools to get better quality by allowing more ringing, which will then give you a gain. So there's still some work to do on integration there. Next slide. And here we have, I think it should show up on the projector, a comparison of the kind of deringing that can be applied here.
D: I'll just say it's great to see a merge of some of the ideas. I think that was the whole spirit of the work that we intended to have done here, so it's good to see that the Thor and Daala teams are working together and getting some of the common tools merged, and the results are pretty good too. And I don't know if people noticed; let me go back to a particular slide here.
D: This is what actually surprised me: the results... I think some clips even had up to a ten percent performance improvement when you use faster settings, when you're trying to do real time. I think that's a significant advantage for this filter: it can really help to improve the quality of the video, visually and objectively, when you're in a time crunch. It allows the coder to cut corners and...
D: ...still achieve real time. That's one of the main differences I see between a lot of the work in other bodies and the work we're doing here in the IETF ART area, which cares a lot about real-time encoding. Some of the other video standards are much more focused on offline streaming, so they only care about decoding speed, not real-time encoding. I think this is a really great tool for improving the cases where you're under real-time encoding constraints: you can really cut corners and still make up the difference with this filter.