From YouTube: IETF113-CDNI-20220322-1330
Description
CDNI meeting session at IETF113
2022/03/22 1330
https://datatracker.ietf.org/meeting/113/proceedings/
C: It's good to see folks in the room, and thanks everyone for coming. This is IETF 113, the CDNI session; we'll go ahead and get started. Sanjay, next slide, please.
C: This is the Note Well. Everyone should be familiar with it. Your participation is governed by the documents listed here, so by participating you agree to follow all of these guidelines. I'm sure everyone's seen it and read it. Next slide, please.
C: This is a hybrid meeting, so some reminders for folks: there are some folks in the room and some joining remotely. There are some tips on staying connected and making sure that you are on the blue sheets. When you're not presenting, please turn off your video and audio to help us save on bandwidth. Next slide, please. We will be using Meetecho for all of the queuing, even if you're in the room, so make sure that you are on the tool and logged in. Next slide, please.
C: For remote participants, you are here with us on Meetecho, so hopefully you know how to use the tool: use the queue, use your video and audio. Blue sheets are being tracked automatically through Meetecho, so no need to sign in if you're on site, but you do need to sign in to Meetecho. Next slide, please.
C: Sanjay, go for it.
C: Okay, back to business. This is CDNI. Do we have a volunteer to be a Jabber scribe? I'm going to take minutes, and everyone please log in to Meetecho to make sure that we have an accurate account for the blue sheets.
C: This is just a reminder of our milestones. Last time we re-adopted the triggers draft and we adopted the footprints draft, so there's a new milestone there for the footprint draft; we'll be talking about that. Still on our list is finishing up URI Signing, and Phil's going to give us an update on that.
C: I don't see Phil online yet, but hopefully he'll be here. ACME STAR, that's the HTTPS delegation; Fred is not feeling well, so Sanjay's going to take that update for us. Then Nir is here to talk about the triggers and the footprints, and we have a packed agenda beyond that. Next slide, please.
C: Chris Lemmons can talk to us about the CTA WAVE work. We also have the other half of the HTTPS delegation that we split out last time, so Christoph is going to talk about that, and then we'll have updates on the metadata and capacity advertisements from Alfonso and Andrew. Hopefully we'll get all that in in the next two hours and do a quick wrap-up, and I'm 30 seconds under my five minutes. So let's go ahead and get started. Is Phil here? I don't see him online.
C: Do we want to just go straight to Chris, or do we want to let Nir go? I'd like to keep the CTA WAVE and URI Signing pieces together, so, Nir, do you want to go ahead and go?
F: Yeah, you can, but I should be able to bring this up here.
F: Okay, so let's start. Hi everybody, I'm Nir Sopher from Qwilt. In the previous meeting we discussed an internet draft of additional footprint types, and we would like to get to working group last call for this internet draft. Can you move the slides?
C: I'll try loading this slide. Let's let Phil go ahead since he's here now, and we'll go back to our regularly scheduled agenda. Cool, Phil, you're up.
F: Okay, so a quick recap. RFC 8006 defines the footprint types and the footprint objects: IPv4 or IPv6 CIDRs, ASN, and country code. The footprint object is an object that allows you to define a footprint type and the values that the client can match against.
F: Thank you. And RFC 8008 allows you to use this footprint object to specify the footprint on which each capability is supported. So, in the previous meeting, we suggested two new footprint types.
C: Yeah, I think this one's pretty straightforward. It's just registering two new footprint types in the IANA registry. We've talked about it for a little while now, and I don't think there's been any contention about it. I went ahead and did my shepherd pre-review, as is our custom, and I think, assuming all those comments are addressed, we're probably in pretty good shape. I don't know if anyone has any objections to us moving this forward to a working group last call.
F: Okay, and the second draft we discussed last time was the second edition of the control/triggers interface. A quick reminder:
F: The control interface allows the uCDN to manage content and metadata.
F: Within the dCDN it triggers operations like pre-positioning or invalidation. Sanjay and myself suggested a new version of this RFC. In this new version, which we already discussed, we added functionality: additional information in the error propagation; triggers extensibility, for example a time policy that allows the uCDN to indicate that it wants the pre-positioning to be done at a specific time; and we also defined two additional content selection methods, which are content regexes and content playlists.
F: It has the CDN path, the list of CDNs that actually instructed the operation, which allows us to do some circuit breaking, and a set of properties that are basically used for target selection: the trigger should be applied on the metadata URLs, or those content patterns, etc. Next.
F: Next, please. Thanks. So the point we got to is this: we asked what would happen if we want another method for content selection. Would it be a separate property we would need to define in the trigger each time? So we said, okay, why not do the same trick: use a list of generic objects for the metadata and content selection as well. So, next slide.
F: Next. Okay, so we suggest a generic spec object, which specifies targets to execute the trigger on. It's a generic object, and we may have several types. For example, we already defined types matching all the types from RFC 8007, which are URL specs and URI patterns, as well as content playlists, and in the future we may add more and more types. A spec is specified for a specific trigger subject.
F: So if we look at the spec object structure, it is the spec subject, the type of the spec, and the values. This specific example says: match metadata with the following URL. We have metadata that is chosen using a URL spec, with this specific URL. Do I have a pointer here somehow? Okay, never mind.
F: We'll see an example in a minute. Okay, one last change before I show the example. Next, please. In the original RFC there was a trigger "type" field, which is pre-position, invalidate, or purge; it actually specifies the operation the trigger stands for, and I find it more appropriate to call it "action".
F: Okay, so I would like agreement to rename this field and call it "action" instead of "type". So let's look at the example. Yes, so this is the trigger v2 object. First of all, it says the action is pre-position, and we would like to pre-position the following list: we would like to pre-position metadata with this single URL, as well as content that matches those two URLs.
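As a rough illustration of the trigger v2 shape Nir describes (a sketch only; the field names below are guesses based on the discussion, not the draft's normative names), the pre-position request might look like this in Python:

```python
# Hypothetical CI/T v2 trigger: pre-position one metadata URL plus
# content matching two URLs. Field names are illustrative guesses.
trigger_v2 = {
    "action": "preposition",
    "specs": [
        {
            "spec-subject": "metadata",
            "spec-type": "url-spec",
            "values": ["https://metadata.example.com/video1/config"],
        },
        {
            "spec-subject": "content",
            "spec-type": "url-spec",
            "values": [
                "https://cdn.example.com/video1/manifest.mpd",
                "https://cdn.example.com/video1/segment1.m4s",
            ],
        },
    ],
}

def spec_urls(trigger, subject):
    """Collect all target values for a given spec subject."""
    return [v for spec in trigger["specs"]
            if spec["spec-subject"] == subject
            for v in spec["values"]]
```

The point of the generic spec list is that adding a new selection method only adds a new `spec-type` value, rather than a new top-level property and a version bump.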
F: You know, it was a single request before too: there were metadata URLs and content URLs, so it was still a single request. It was just that you had to define the properties over and over again. If we add a method, you need to define a metadata URL property and a content URL property, and change the trigger version every time you define it.
I: Hi guys, hope you can hear me. (Yes, yes.)
I: I have a question here. You're talking about metadata and content pre-positioning. Typically, when we do a content pre-position, we are also pre-positioning the metadata with it. But in this particular case, you have a case where you're pre-positioning metadata for a parent path and content for child paths.
I: So I'm just wondering: when we are doing a content pre-position, we are typically doing both the content and the metadata, right? When we do a metadata pre-position, we are only doing the metadata and not the content itself. But in this particular example, you have used a parent path for the metadata and child paths for the content.
I: Yeah, so if there are pointers in the metadata, would the downstream CDN be expected to recurse through that and get all the metadata for the objects that it's pointing to?
F: This remark, as I understand it, is not related to the structure of the trigger; it's related to the specific example.
I: Yeah, the example, and it's kind of related to the expected behavior of the downstream CDN when dealing with a pre-position request.
F: Okay, so let's proceed. Wait, let's do the example first. Sorry. If you keep this example in mind, we will relate to it later on. We have here a trigger v2 for pre-positioning of metadata with a single URL and content with two different URLs. Okay, let's move on. Similarly to the trigger v2, we needed a new error object.
F
We
needed
to
redefine
the
error,
object
actually
practically
defined
a
lv2
object
without
all
properties
related
to
content
selection
to
metadata,
like
metadata,
urls
et
cetera,
and
we
added
a
generic
twig
spec
list
and
we
took
when
we
defined
the
status
v2
object,
which
holds
the
arrow
v2
object
as
well.
So
if
we
go
to
the
next
example,
this
is
an
example
for
an
aov2
object
which
in
this
case
says
that
the
air
of
type
project.
F
Sorry
yeah,
the
error
the
error
is
related
is
relating
the
spec
listed
here.
Okay,
so
it
there,
the
only
the
spec
relevant
to
the
error
are
listed
in
the
spec
list.
Okay,
so,
if
you
remember
the
example
we
showed
before
of
the
trigger,
we
had
a
a
metadata
aspect
and
a
content
spec.
Here
we
have
only
the
content,
spec,
so
the
metadata.
F
We
had
no
no
error
for
the
metadata
and,
if
you
remember
the
url
list
in
the
twitter
add
two
urls,
and
here
we
have
only
a
single
url,
because
the
error
only
relates
to
this
url,
the
other
with
the
other
url.
Everything
went
fine,
okay,
so
this
is
the
semantic
of
the
error.
It's
similar.
It
is
similar
to
the
semantics
of
the
l,
the
original
error
with
the
previous
structure
and
one
point.
F
However,
I
find
a
bit
of
an
issue
here
is:
it
seems,
like
a
bit
of
a
of
a
a
bit
of
a
hassle,
to
pass
this
arrow
by
the
absence
of
the
end.
Okay
to
understand
which
object
exactly
was
no.
They
relates
to
it's
it's.
Maybe
some
swing
comparison
or
something
like
that,
and
I
would
like
to
suggest
next
a
a
bit
of
an
adjustment
to
the
structure.
F
Okay,
so
if
we
add
some
placeholders
in
the
list,
it
may
allow
us
to
to
to
keep
the
a
position
or
to
to
take
the
trigger
and
the
spec
list
from
the
trigger
and
add
placeholders
for
saying,
okay,
for
this
object,
everything
went
ahead
is
not
relating
to
everything
went
well.
So
in
this
example,
they
there
is
a
spec
list
and
the
first
video
is
an
empty
spec,
which
we
respectfully
means.
Okay,
they
met
the
data
spec
we
had
back.
Then
everything
was
okay
with
it.
There
is
nothing.
F
It
does
not
relate
to
this
aspect
as
well
as
if
you,
if
you
are
going
into
the
spec
itself,
that
the
failing
specs
there
is
a
again
an
empty
object
in
the
url
list,
meaning
okay,
this
url
was
okay,
the
o
does
not
apply
to
it,
and
this
may
help
the
and
better
understand
the
error.
F
So
this
is
just
a
suggestion.
The
does
not
contain
this
change
already,
and
I
would
be
happy
to
your
opinion
about
it.
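Nir's positional-placeholder idea can be sketched roughly like this (an illustration of the mechanism only, not text from the draft; object shapes are hypothetical):

```python
# The error's spec list mirrors the request's spec list positionally.
# An empty dict {} is a placeholder meaning "this entry succeeded".
request_specs = [
    {"spec-subject": "metadata", "values": ["https://md.example.com/cfg"]},
    {"spec-subject": "content",
     "values": ["https://cdn.example.com/a.m4s", "https://cdn.example.com/b.m4s"]},
]

error_specs = [
    {},  # metadata spec: no error
    {"spec-subject": "content",
     "values": [{}, "https://cdn.example.com/b.m4s"]},  # only b.m4s failed
]

def failed_targets(request, errors):
    """Walk both lists in lockstep; placeholders mark successes."""
    failures = []
    for req, err in zip(request, errors):
        if not err:
            continue  # whole spec succeeded
        for req_val, err_val in zip(req["values"], err["values"]):
            if err_val:  # non-empty entry means this target failed
                failures.append(req_val)
    return failures
```

The correlation then becomes a simple positional walk rather than string comparisons, which is exactly the hassle the suggestion tries to remove.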
A: So we have a queue here. Rajiv, do you want to go first, and then Kevin?
I: There are two levels of ambiguity here. One, obviously, which you've already identified: the fact that you are only listing those specific specs where you have an error, so in this case you're nesting at the level of the spec to show the specs that have an error. But what you're actually doing is populating it with a partial spec when compared to the pre-position request, because the request spec had two URLs, and here you have a single URL.
I
So
that
is
one
question.
The
second
one
is
you
know?
How
do
you
expect?
You
know
this
error
to
be
dealt
with
in
cases
where
the
error
has
happened
at
a
url.
I
That
may
not
be
part
of
the
original
stick
so
like,
so
this
obviously
would
not
work
with
the
spec
type
of
urls
but
say
there's
a
spec
type
of
playlist,
which
you
have
added
okay,
so
part
of
that
would
need
would
mean
that
downstream
cdn
passes
that
playlist
and
brings
all
the
dependent
resources
of
that
playlist
into
you
know
the
pre-position
queue
and
populates
them
and
say,
for
example,
on
a
playlist
there's
a
subtitle
file
which
is
giving
me
a
404.,
the
rest
of
the
cop,
the
playlist
is
fine,
I'm
getting
a
200
on
the
playlist,
I'm
getting
at
200
on
the
av
segments.
I: I got a 404 on the subtitle segment. Now, that subtitle URL: where would I put that in this structure? Because that is something which is not explicitly mentioned in my incoming request, right? So we'd have to be a little more intelligent in how we write this structure in order to allow for these kinds of errors to also be surfaced. And if we come to the next slide, my point would basically be that you already have a status object, right?
I: Mapping between everything, what succeeded, what failed, without us having to put in hacks like trying to maintain position, and without necessarily having the upstream maintain a lot of state about what it sent so that it can match up when the response comes in. Just a couple of thoughts there.
A: Rajiv, maybe if you can turn your video off and see if that helps.
F: I was able to hear the first issue you presented, and I think it's something we need to think about: the ability to say, okay, for this playlist, not the entire playlist failed, but only part of it. It adds significant complexity to the definition of this API, I believe, and I think we need to think about it. And the second point: you were breaking up, at my place at least.
I: I was basically trying to say that, instead of having a separate error object which only tries to communicate the specific errors, why don't we roll the functionality of reporting the error into a generic status object? The status object is designed to mirror, as closely as possible, the structure of the incoming pre-position object (the request), so that every spec that's in the pre-position object is available in the status object, and each spec, and each component of that spec, has either a status okay or a status error.
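Rajiv's mirrored-status idea might look something like this (a hypothetical sketch of the shape he describes, not an object from any draft):

```python
# Hypothetical status object mirroring the request structure one-to-one:
# every target from the request reappears with its own per-target status.
status = {
    "specs": [
        {
            "spec-subject": "metadata",
            "targets": [
                {"url": "https://md.example.com/cfg", "status": "ok"},
            ],
        },
        {
            "spec-subject": "content",
            "targets": [
                {"url": "https://cdn.example.com/a.m4s", "status": "ok"},
                {"url": "https://cdn.example.com/b.m4s",
                 "status": "error", "detail": {"http-status": 404}},
            ],
        },
    ],
}

def errors(status_obj):
    """The uCDN reads failures straight out of the response,
    without correlating against any state it kept."""
    return [(t["url"], t["detail"])
            for spec in status_obj["specs"]
            for t in spec["targets"]
            if t["status"] == "error"]
```

Because the response carries the full request structure back, the upstream CDN needs no positional bookkeeping of its own, which is the state-saving property Rajiv argues for.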
I: And if there's a status error, we have sub-fields and sub-objects inside that which more granularly describe what error was encountered. The advantage of doing it that way is that the upstream CDN doesn't really need to maintain a lot of state, especially because some of these transactions may be quite long-lived.
I: If an upstream CDN is saying, hey, here's a set of ten episode playlists that I want you to pre-warm or pre-position, it may take 20 or 30 minutes for the downstream CDN to complete that operation before it can send the status back saying that this particular trigger has now been completed. So do you really want the upstream CDN trying to hold on to state for that long, so that it can match up when an error message or the complete status message comes back?
F: Yeah, first of all, the status object itself also holds the trigger v2 object.
F: It's already there: the trigger object is part of the status object; it resides in the status object, and both appear in the status object. So this is covered. What is not covered is what you suggested: that we would have an error in a different object per spec.
I: My point here is: even with a single spec, what happens if it's a single playlist? It may be a playlist, say a movie with a quality ladder of six different qualities, so that's still going to take some amount of time for the downstream CDN to process before it can respond. So that's still a certain amount of state that the upstream CDN needs to maintain to be able to correlate the errors.
I: So by joining the error object, as a part of the status, into the trigger object itself, the response from the downstream CDN is basically the trigger object with the status embedded inside it. So the upstream does not have to depend on any state; the status response has everything in it.
C: I think we should take it offline, yeah. I'm going to inject myself as chair and say that, Rajiv, I think it's a good discussion; if you actually have a proposal for an object, that might also help move the thing forward. But if we can take this to the list and have that discussion, that would be great. We're running a little behind schedule, so, Nir, if we could try and push through the rest of this.
F: Okay, let's proceed and skip this one. Next. Yeah, thanks. The last important thing I want to discuss here is the capability objects related to this control interface. The downstream CDN may support only some of the actions: it may support pre-position and invalidation but not support purge. And it may support those actions but for different subjects: for example, pre-position, invalidation, and purge are supported for metadata, but only some of them for content.
F
For
example,
a
a
pre-position
of
content
support
time
policy,
but
purge
does
not
support
time
policy
and
also
what
is
the
target
selection
methods
that
are
allowed?
Okay,
so
for
metadata
for
people's
version
of
metadata,
you
can
you
may
use
playlist,
but
for
purge
you
cannot
use
playlist,
why?
I
don't
know
okay,
but
next.
F
Okay,
so
what
we
try
to
do
is
define
a
structure
that
says
that
for
a
specific
action
on
a
specific
subject,
the
list
of
support
supported
specs
are
this
list
and
the
list
of
supported
supporters.
Extensions
is
another
list
and,
of
course,
everything
is
subject
to
the
footprint
that
the
capability
is
specified
to.
So
let's
look
at
the
object
and
an
example.
F
So
in
in
this
example,
we
support
only
preposition
of
content
and
invalidation
of
content.
Okay,
no
operation
of
metadata
is
available,
no
purge
is
available.
Okay
and
the
specific
preposition
of
content
can
be
done
using
your
specs
or
ccid
specs,
and
it
supports
time
policy.
However,
invalidation
does
not
support
a
time
policy.
It
doesn't
support
any
extension
because
the
extensions
list
is
empty
or
not
appear.
It
doesn't
appear.
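The capability structure described here might be sketched like this (illustrative field names only, assuming the shape Nir describes; not the draft's actual encoding):

```python
# Hypothetical CI/T capability advertisement: per (action, subject),
# the supported spec types and the supported extensions.
capabilities = [
    {"action": "preposition", "subject": "content",
     "specs": ["url-spec", "ccid-spec"], "extensions": ["time-policy"]},
    {"action": "invalidate", "subject": "content",
     "specs": ["url-spec", "ccid-spec"], "extensions": []},
    # No metadata operations and no purge advertised at all.
]

def supports(caps, action, subject, extension=None):
    """Check whether the dCDN advertised support for an operation."""
    for cap in caps:
        if cap["action"] == action and cap["subject"] == subject:
            return extension is None or extension in cap["extensions"]
    return False
```

An absent (action, subject) pair simply means the operation is unsupported, matching the "no purge is available" case in the example.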
C: I think there's been good discussion here. Everyone, please go and read the draft, and we'll continue the discussion on the list. There have been a lot of good updates to triggers, and I think we're making a lot of progress on making it better, so I'm excited by the discussion.
A: Kevin, do you want me to see if I can bring the slides up in the meantime, while we wait for Phil to get back on?
A: Okay, so I'm just voicing Fred here; he could not make the meeting today, as he was not feeling well. The revision of the draft has been put out on the mailing list, and the main change in this version eight is really that it has been pretty much cleaned up and very much simplified.
A
The
the
only
only
part
that
is
required
in
this
rfc
is
to
be
able
to
identify
that
what
is
the
location
from
where
the
downstream
cdn
will
go
and
fetch
the
certificate
information,
and
all
of
the
conversation
has
already
happened
before
it,
so
so
there's
no
exchange
of
information
other
than
that.
A
The
the
https
draft.
This
draft
will
basically
essentially
only
go
out
and
make
the
request
to
pull
the
certificate
from
the
upstream
cdn
and
that's
what
the
the
change
is
really
for.
This
particular
draft.
A
So,
as
you
see
here,
the
example
acme
delegations
and,
and
then
the
the
url
that
has
been
already
identified,
on
which
it
will
go
and
retrieve
the
request
from
so
that
really
is
the
change
you
want
to
go
to
the
next
slide
and
then
we
still
have
to
talk
through
about
security
and
privacy.
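As a purely hypothetical illustration of the kind of configuration being described (field names invented for the sketch, not taken from the draft), the simplified delegation metadata reduces to little more than a URL:

```python
# Hypothetical sketch: the simplified draft boils down to telling the
# dCDN where to fetch delegated certificate material from the uCDN.
delegation_metadata = {
    "acme-delegations": [
        {"delegation-url": "https://ucdn.example.com/acme/delegations/1"},
    ],
}

def fetch_targets(metadata):
    """Return the URLs a dCDN would dereference to retrieve certificates."""
    return [d["delegation-url"] for d in metadata["acme-delegations"]]
```

Everything else (agreeing on the delegation itself, per RFC 9115) happens out of band before this metadata is ever exchanged.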
A: There are security concerns: ensuring that the metadata is preserved per RFC 9115 and that there's no other security leak. I think this document by itself may not have anything that exposes security and privacy concerns, but I think we still want to just talk through that, preferably on the mailing list. If you can review the draft, see if there are any questions or concerns that this draft should address with respect to security and with respect to privacy.
C: I think that it's much simplified now, which is good. I will go ahead and do the shepherd pre-review on the new version, and hopefully everyone else can also go out and review it. It should be pretty straightforward now to move forward with it; I think we took out a lot of the other stuff about metadata and the harder stuff.
C: Okay, all right. Does anyone have any thoughts or questions? Otherwise, please review the draft; I will review the draft officially, and we'll try and move this forward by next IETF.
H: Okay, so I'm Christoph, from Broadpeak. I'm going to present an update on this draft here on delegated credentials (subcerts).
H: This is ongoing work in the TLS working group, and this split has already been discussed in this group here in CDNI. I would like to ask for adoption of this draft within the CDNI working group. Next slide.
H
So
currently
what
is
in
the
draft?
It's
that
two
objects
which
are
defined
two
mi
objects.
One
is
a
conf
delegated
credential.
H
So
that's
yeah.
So
it's
quite
simple.
Those
two
objects,
one
which
allows
to
just
provide
an
url
in
the
other
which
transports
all
the
cryptographic
material,
let's
say
so
next
line.
H
So
there
are
a
couple
of
things
to
do
in
the
for
this
draft,
so
the
first
one
is
so
is
about
aligning
those
two
drafts,
so
the
one
on
dedicated
certificates
and
one
on
dedicated
credential
just
about
the
advertisement
so
that
we
have
an
fci
object,
which
is
a
bit
common
in
which
would
allow
to
announce
the
what
is
supported
by
the
downstream
cdni,
and
it
could
also
allow
to
advertise
additional
parameters.
H
There
are
sections
about
privacy
and
security
that
needs
to
be
added,
and
then
but
the
main
thing
is
about
the
there's,
still
an
issue
about
this
delegated
credential
object,
which
is
not
really
an
mi
object,
as
as
as
in
the
spirit
of
cdni.
So
I
have
a
slide
on
this
in
the
next
slide
and
and
then
there
are
also
things
like
currently
in
the
current
proposal.
The
public
private
key
is
generated
by
the
upstream
cdn,
and
maybe
this
is
not
something
that
we
always
want
to
do.
H
We
should
also
support
the
case
where
the
downstream
cdn
is
generating
this
key
and
then
asks
for
the
delegated
credential
for
to
the
upstream
cdn.
H
Okay
next
slide
so
yeah.
The
open
issue
is
that
well,
this
delegated
credential
objects
is
not
really
an
mi
object
in
the
spirit
of
the
rfc
8006.
H
Because
it's
it's
not
well,
it's
not
fetched
as
a
normal
mi
object.
This
is
fetched
via
this
url,
which
is
defined
in
the
configuration
delegated
credential
objects.
H
But
if
we
remove
delegated
credential
from
this
draft,
well,
there's
no
point
of
of
just
keeping
the
dedicated
credential
of
the
conf
delegated
credential
objects,
and
so
they're
different
object
options
on
how
to
solve
this
issue.
I
mean
the
issue
is
really
about.
How
can
we
fetch
the
delegated
credential?
How
the
downstream
cdn
can
fetch
the
delegated
credentials?
H
So
there
are,
one
could
be
to
rely
on
an
fci
object
which
allows
for
the
downstream
cdn
to
announce
the
number
of
delegated
credential
it
needs,
and
then
the
upstream
cdn
pushes
the
delegated
credential
vsmi
object.
H
But
the
problem
here
is
that
it's
not
really
dynamic
that
we
have
to
keep
in
mind
that
dedicated
credentials
have
a
very
short
validity
times,
so
we
have
to
renew
them
regularly
and
it's
kind
of
the
downstream
cd
and
each
time
it
sees
that
a
delegated
credential
is
expiring.
It
has
to
fetch
a
new
one.
So
we
need
a
really
dynamic
mechanism
and
the
other
option
would
be
to
well
specify
a
dedicated
interface
which
allowed
to
fetch
delegated
credentials
so
that
either
it's
an
interface
somewhere
in
cdni
or
sva.
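The renewal dynamic Christoph describes (short-lived credentials that the dCDN must refresh before expiry) can be sketched as a simple check-and-refresh loop; everything here is illustrative, including the renewal margin, and stands in for whichever fetch mechanism is eventually chosen:

```python
import time

# Illustrative only: a dCDN-side check that refreshes a short-lived
# delegated credential some margin before it expires.
RENEW_MARGIN = 3600  # renew one hour before expiry (assumed policy)

def needs_renewal(credential, now=None):
    """True once the credential is within the renewal margin of expiry."""
    now = time.time() if now is None else now
    return credential["not_after"] - now <= RENEW_MARGIN

def refresh(credential, fetch_new):
    """fetch_new stands in for whatever mechanism is chosen
    (dedicated interface, trigger interface, or an ACME extension)."""
    if needs_renewal(credential):
        return fetch_new()
    return credential
```

The point of the sketch is that renewal is driven continuously by the dCDN, which is why a one-shot push of MI objects is a poor fit.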
H: That interface needs to be defined, maybe the trigger interface, I don't know, or something dedicated to that. Or, option B, by proposing an extension in the ACME working group which details how to fetch the delegated credentials.
H
So
this
is
something
which
is
mentioned
in
the
subserts
draft,
but
I
don't
know
if
the
acme
walking
group
would
exact
accept
that,
knowing
that
the
idea,
the
interest
of
this
delegated
credential
is
that
you
don't
need
any
ca
anymore.
So
it's
not
really
in
the
scope
of
the
acme
protocol
so
and
yeah
I
mean
I
wanted
to
really
present
all
the
options
and
the
this
open
issue
and
have
feedback,
maybe
on
the
mailing
list.
Maybe
today
on
the
discussion
and
to
see
how
to
move
forward.
C: Thank you, Christoph. I know I owe you a response on the mailing list as well; I haven't responded to that email yet, and I apologize for that. But Sanjay is in the queue. Sanjay?
A: Yeah, one quick question, Christoph. I'm not sure that this draft will really go into the ACME working group, because it really refers to subcerts, which is work done in the TLS working group. So I think it would be better, if you're making use of subcerts, to keep the focus clear that any reliance it has is focused solely on that RFC.
A: It may not materially matter, but I just wanted to call that out. I'm not sure if you paid attention to that part.
C
Okay,
I'm
gonna
put
myself
in
the
list
or
in
the
queue
I
think
you
know
I
I
brought
up
the
this.
It's
not
really
a
metadata
thing.
I
still
need
to
think
about
it
more.
I
don't
know
that
it
makes
sense
to
create
an
interface
just
for
this.
Maybe
we
can
make
an
exception,
but
it
is
something
separate.
C: If we try and specify something like that in here, we can specify the format. Excuse me. All you're really doing is specifying the format, and not who's going to set up the server to provide it. But it's a fine line: now we're going to have to say what version of TLS you're going to use to pull this, and is that good enough? I don't know that we want to go down that road. But I think everyone should take a look at the draft, and if you have thoughts or comments on where we should take it, please respond to the list, send your comments there, and we can try and take this forward. Now, as chair: we wanted to have a quick discussion on whether or not we can go ahead and adopt this draft. This was split out from an existing working group draft.
C: I don't see a problem with us taking on this work, since we already had the work, and we probably should move this work forward. But as a matter of protocol we wanted to bring it up and ask: is the working group still interested in going down this road? I assume we are, but if anyone has any objections, please feel free to voice them now.
C: We will, of course, go to the list and send out an actual adoption email and solicit feedback, but if there are no objections here, we will go ahead and start that process.
L: Fantastic, and video too. All right, and you got my slides up, excellent. Yes, I'm going to try to move a little bit quickly through this so that we have time for at least a couple of questions. This is going to be the Common Access Token. Next slide.
L: We're going to start out with who's doing this, what we're doing, how that relates to the URI Signing that is being worked on in this group, and why this group might care. Next slide.
L
So
this
work
is
coming
out
of
the
cta
wave
project
and
the
primary
use
case
is
for
streaming
media,
although
the
token
itself
is
a
little
bit
more
general
and
the
goal
is
to
find
a
single
token
that
covers
all
of
the
existing
usages
in
the
industry
from
all
of
the
all
of
the
cdn
specific
tokens
that
are
in
use
today.
Next
slide.
L
So
talk
a
little
bit
about
how
this
is
different
from
the
uri
signing
that
this
group
is
doing
this,
the
common
access
token
is
cwt
based,
so
it's
it's
going
to
be
smaller
terser
and
is
a
little
bit
faster
to
parse
for
cdns.
L
Obviously,
there's
going
to
be
no
built-in
support
for
delegation,
it's
not
part
of
a
larger
cdni
or
other
interconnect
effort.
It
is
just
a
token,
but
it
does
generally
have
more
claims
with
greater
complex
complexity.
Next
slide.
L
All
right
we
have
just
like
the
just
like
the
uri
setting
token
it
has
encrypted
claims
in
order
to
protect
the
privacy
of
the
end
user,
but
unlike
it,
instead
of
putting
it
as
a
base64
string,
it
uses
the
cozy
object
directly.
This
saves
processing
time
and
is
and
saves
a
lot
of
space
avoids
repeated
base64y.
L: The first few claims are really trying to draw a box around any request information that intermediaries or providers are likely to use to determine what content is returned. So the HTTP method has a claim on it, so that you can specify that your token is valid for GET but not PUT and not POST, for example. And the ALPN claim is actually an extension of the fact that you can already adjust based on scheme: there are tokens that claim https but not http, using the regular expression on the URI.
L
We
wanted
to
extend
that
to
the
alpn
more
generally,
and
then
there
are
a
number
of
headers,
including
an
arbitrary
number
of
unspecified
headers.
That
servers
can
use
to
return
different,
differing
information.
They
are,
of
course
well
advised
to
use
the
very
header
appropriately
here,
but
yes,
having
a
regular
expression
on
arbitrary
headers.
L
We
have
some
geography
claims
that
we're
still
working
out
the
details
on.
We
have
a
claim
for
the
tls
public
key.
This
allows
you
to
use
for
the
tokens
something
similar
to
what
oauth
does
with
mtls
self-signed
certs,
wherein
you
use.
L: If you use a certificate, even a self-signed certificate, to acquire an authorization token (by whatever means you provide credentials to get authorization tokens), the issuer can bind your public key to the token it gives you, and the intermediary can guarantee that the certificate you presented matches the token, so that if your token is stolen, it can't be used without your private key as well.
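The binding check described here can be sketched as follows (a simplified illustration of the mechanism, using a bare hash comparison in place of real COSE/TLS machinery; the claim name is invented):

```python
import hashlib

# Simplified sketch of proof-of-possession binding: the issuer puts a
# hash of the client's public key into the token; the intermediary
# recomputes the hash from the key presented during the TLS handshake.
def key_thumbprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

def token_matches_presented_key(token, presented_public_key):
    """Reject stolen tokens: they fail unless accompanied by the
    key pair matching the bound public key."""
    return token.get("cnf_thumbprint") == key_thumbprint(presented_public_key)
```

A stolen token is useless on its own, because the thief cannot complete a TLS handshake with the bound key.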
L
And,
lastly,
we
have
actions
that
modify
the
rejections,
so
there
are
situations
where,
when
a
token
is
rejected
you,
the
issuer
wants
a
very
specific
status
code
and
some
headers
to
be
returned
and
we're
still
workshopping
exactly
what
this
is
going
to
look
like.
But
the
idea
is
that
this
will
allow
issuers
to
define
these
rejections
directly
next
slide.
L
Okay,
another
difference
is
that
we
have
some
strong
types
on
these
claims,
so,
for
example,
the
critical
claim
here
is
an
array
of
claim
numbers
claim,
numbers
and
strings
encrypted
claims
are
of
a
cozy
encrypt
type
directly
network
claims
are
going
to
be
it's
going
to
be
a
seabor
array
of
rfc
9164
tags.
Other
types
other
claims
are
going
to
have
appropriate
types.
This
will
reduce
the
amount
of
parsing
of
strings
and
error
handling
that
has
to
happen
post,
well-formed,
validation
next
slide.
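To illustrate the idea of strongly typed claims (the claim numbers and shapes below are made up for the sketch; the actual registrations are still being defined in CTA WAVE), a CWT-style claims map keyed by integers might look like:

```python
import ipaddress

# Illustrative CWT-style claims map: integer claim keys, with values
# carried as native typed objects rather than strings to be re-parsed.
CLAIM_CRITICAL = -100  # hypothetical private-use claim number
CLAIM_NETWORKS = -101  # hypothetical private-use claim number

claims = {
    1: "https://issuer.example.com",        # iss: text string
    4: 1700003600,                          # exp: integer timestamp
    CLAIM_CRITICAL: [CLAIM_NETWORKS, "x"],  # array of numbers and strings
    CLAIM_NETWORKS: [ipaddress.ip_network("192.0.2.0/24")],  # typed networks
}

def client_allowed(claims_map, client_ip):
    """Typed values can be used directly; no string parsing needed."""
    nets = claims_map.get(CLAIM_NETWORKS, [])
    return any(ipaddress.ip_address(client_ip) in net for net in nets)
```

The validator compares addresses against real network objects instead of re-parsing CIDR strings, which is the processing saving the typed encoding is after.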
L
So,
let's
talk
about
why
we
in
this
group
might
care
about
it
or
we
might
not
so
like
the
alpn
method
and
headers.
Those
are
just
general
uri
signing
claims,
they're
applicable
to
any
sort
of
a
any
sort
of
a
token
like
this
compositions
encrypted.
Subject:
critical
claim
those
aren't
even
specific
to
uri
signing.
They
are
literally
just
generic
claims.
L
These
are
potentially
generally
useful.
Maybe
it's
useful
to
try
to
define
them
generally
in
a
broader
sense.
Next
slide.
L
I don't see a whole lot of energy for any sort of successor token to the wonderful and delightful token that this group has, which satisfies the needs. But if there ever were one, it's likely that such a token would overlap heavily with at least some of these claims. Next slide.
L
So there's no real takeaway here; there are no action items. This is noodles for your noodle machine, and some public-domain cats. And I wanted to leave at least a few minutes of time for questions about what this token is and what it's doing.
C
I guess: how far along is the development of this, and do you see adoption of it? Do you think that it will go anywhere, and whether the URI Signing token will also go somewhere or fall by the wayside because this thing is more... I guess: does this add a whole bunch of stuff that people want, or does it just add a whole bunch of stuff?
L
It definitely adds a bunch of stuff that people want, and it adds some stuff; which category each thing is in depends on the use cases. There are some folks with zero interest in any of the TLS stuff, and there are folks that say: no, no, that enables my very specific use case.
L
And so there is a lot of drive towards adoption on this, and that's why this work is happening in the CTA: there's a group of people there that represent a variety of intermediaries and providers, that have a lot of energy towards this, and they're really pushing for something that will be genuinely implemented across a wide variety of intermediaries.
C
And do you think that this is something we should liaise with, or is it just: be aware? If we choose to take forward URI Signing, or if we don't ever do anything else with URI Signing, then they can go and do what they want.
L
Yeah, I think that some sort of a liaison probably makes sense. There's a lot of overlap in participation already, which is a valuable thing.
L
I think, if the CAT takes off and the URI Signing token does not, for whatever reason, it may at some point be valuable to define an interface layer between the URI Signing token and the CAT that allows people to use the CAT for CDNI delegation, right?
L
We may want metadata, at least, or something to support it, right. And if we ever want to take on that work, there are other questions we have to ask. A lot of the MUSTs around this were intentionally left out of the URI Signing draft, because there was not a desire to force all of the intermediaries to implement all of these features in order to do CDNI, right. And the goal here is to create genuine interoperability, and MAY is the bane of an interoperable system, right?
L
So if we ever do that, there are definitely going to be some questions. An issuer can express anything that they need to express in a CAT, but there are more requirements on an intermediary supporting a CAT than there would be on one supporting a URI Signing token. Got it.
A
Sanjay here. Yeah, so Chris, before you were put on the agenda I did see that there was some support on the mailing list about you talking about it. So I'm glad you did that, and you clarified some of the questions that Kevin was asking. Moving forward, you also mentioned implementation, so that's an important thing; I'm wondering if you want to keep us sort of apprised.
L
Yeah, I can certainly keep the group updated, and there are no surprises. The structure is very similar; they're CWTs versus JWTs. It's the same roots at the core; there are only a few obvious ways to express these sorts of things.
M
Thanks for the chance. I just wanted to say, to the earlier question, that there is definitely very heavy interest from content providers and the CDNs in the content-delivery ecosystem to support a common scheme. Right now the ecosystem is extremely fragmented, and the CAT has the benefits of flexibility, with these logical claims that can be put together with ANDs and ORs, but also conciseness, which helps performance, which is extremely critical for the CDN vendors.
D
Meetecho thinks it has something to do with the UDP session initiation. But oh, this is amazing, I can see all kinds of little wave things going on over there. All right, sorry. So, my thoughts on this: I love standards, things that are generic, that can be used by more people.
D
I think that's why we changed URI Signing to be JWT instead of all the query-params stuff that it was in the beginning. If something like this comes along that fills that gap better than URI Signing, I'm all for it. I think URI Signing was something that came along because there was nothing else that did it, but if there's a more generic solution for it, I like it. And I think the puns are going to work out.
D
So, the issues that Ben had with it: there was one minor clarification he wanted to see, which I added, and he was happy with. The other point had to do with delegated shared keys going from CSP to uCDN, which he was fine with; it was from uCDN to dCDN that he did not like. I wrote to the mailing list, nobody really replied, and we decided to just go ahead and correct it.
D
So what I did was remove the explanation of how to do it; there were a couple of places where it said, oh, and you can do this here, and I just removed that part. But I did not remove the concept entirely, because I did not want somebody to rediscover this and say, oh, this is a great and clever idea I've come up with, let me implement it. Instead, I put in that they should not do this.
D
Ben is still concerned that it's in there at all, and that it's SHOULD NOT versus MUST NOT. I was torn on this; I thought about it for a while before I went with SHOULD NOT, mostly because I felt like we had no recourse. If we say "you must not do this", or else what? They can do it in the background; it's already out of band, there's nothing that it's breaking, and there's no way to detect it or stop it.
B
Hi Phil, about the SHOULD NOT: something that should be noted is that it's not enough to say SHOULD NOT and let the reader decide at what level this is, how required or recommended it is to not do this. There needs to be text around it. I haven't reviewed the text, so I'm not sure at the moment.
B
I'm not sure if there is enough text about that, but it needs to be clear what the consequences are of implementing this. Ideally, SHOULD NOT means that the writer foresees cases where this is allowed; these could be corner cases, or situations in which you cannot do otherwise than actually use this mechanism. That's what SHOULD and SHOULD NOT are for: allowing these corner cases and describing the context, and maybe giving examples in which this is acceptable.
B
We see this a lot in IESG evaluation, where we consider: is it really a MUST, or a SHOULD? And I think it applies here as well. So maybe that could clarify a bit Ben's point about why the SHOULD maybe is not enough for him. I don't know if it helps at all, but I see the point of not wanting to remove it completely and then having someone reinvent it. It really depends on the actual text that is around it.
D
Okay, yeah. I don't see any valid use cases, other than implementers saying: no, I really want to use a shared key, and I really want to use this. And I think we've said a ton of stuff about not doing it and why not to do it; there are things in the Security Considerations. I can always add more; I'm always open to that. I really was on the fence. Somebody jokingly sent me, I forget which RFC it is now, but it was an April 1st RFC that had REALLY SHOULD.
D
And yeah, REALLY SHOULD NOT is kind of what I wanted to put there, but I was like: oh no, this is obviously a joke. I was really on the fence, and I guess maybe I'm just misinterpreting the SHOULD versus MUST more as plain English versus what the RFCs state. So maybe MUST NOT is the right thing, and I can make that change.
D
I discussed it with some experienced IETFers and they didn't even have a definitive answer. One of them said start with MUST and see where it goes, and the other one said start with SHOULD and see where it goes. So I can just change it to MUST NOT, rather, just to be clear.
B
Yes, and if the working group has an opinion, that's very valid: have this conversation, decide, and have a consensus call by the chairs that motivates putting in there MUST NOT or SHOULD NOT. That would help. I will also have to look at the text again to have a more definitive opinion.
C
I think our original stance, the whole time, has been: there are people who do this. We know that they shouldn't, but there are people who do, and we didn't want to completely lock them out. But maybe it's okay to say MUST NOT; they can still go and do it, there's nothing preventing them from doing it, right? But I guess the stance of the IETF should be to do something secure and not allow them to do something that's unsafe, right?
B
That could be described in the document without normative language, saying: we are aware that this exists, and it is not recommended, or something like that.
B
Francesca has one more comment, I think. Yeah, sorry. So, from a process point of view, I just wanted to say that Ben will be stepping down from the IESG tomorrow, so his blocking DISCUSS will disappear. But I hope that we can still get his input, and even though it's not going to be actually blocking the document, I hope that we can still get his informal approval of some sort. But yeah, we should get to a consensus in the working group and then move forward.
C
I'm okay with going MUST NOT. Phil, I'll leave it to you to decide. I'll go and review the text again. Everyone else, please also, if you have some cycles, review the text again and share any thoughts or feelings on it, and we can really try to get this one finished up and out the door once and for all.
A
Sorry, we lost the order here; so capacity is up next.
E
Okay, thanks. Yeah, I will try to go quickly on the first part of the slide deck, because it's a recap of something that was already explained; it was Glenn who presented this in the last meeting. So please, next slide. Next, please. So, just on why we are presenting this draft to this group: as all of you probably know, in the Streaming Video Alliance there is the Open Caching Working Group.
E
It gathers different perspectives from the streaming-video business: you have the content providers, you have CDNs, and you have service providers, and they all have the common interest of bringing the best quality of experience when delivering these video services to end users. So we are working together in this effort of having the best quality of experience.
E
This is why we work together: to try to make integrations more efficient and to get some standardization of how the different processes of video delivery go from the content provider to the end users, across all these different roles, techniques, technologies, et cetera.
E
So please, next. At first, the SVA Open Caching Working Group was paying attention to the CDNI interfaces and metadata definitions of the different RFCs, like RFC 8006, RFC 8007, and RFC 8008, but we found that it was probably necessary to cover some gaps when trying to put all this together for these video service deliveries.
E
This is why we were working on extending the metadata that comes from this group: to try to cover all the needs of the different members in the different roles of video delivery, defining useful methods to automate the configurations, putting this in place with our current implementations and deployments in real life, and extending it with more advanced configuration-publishing capabilities that are required by content providers, things like publishing, versioning of configurations, and so on. Please, next.
E
This is a representation of the metadata model of RFC 8006, and I will be very quick. We haven't touched any of the main structure of the metadata model; we place our proposed extensions in generic metadata objects, taking advantage of the infinite extensibility that the interface permits on the metadata. Next.
E
So, as a summary, we can come back to this later, Sanjay. One of the things this draft includes is what we call processing stages. This is a way to permit intervening in the different processes that happen when the user agent requests content from a downstream CDN: how that request goes to an upstream CDN or origin, how the response comes back from the upstream, and how that response goes to the user agent via the downstream CDN.
E
For these processes that we identified, the steps being client request, origin request, origin response, and client response, we define a way to intervene, so the downstream CDN is able to make transformations on different parts of the request, for instance modifying the headers that go to the origin.
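The intervention idea can be sketched as a tiny pipeline. The stage names and the transform record shape here are hypothetical, not the draft's actual JSON schema; the sketch only illustrates a dCDN applying uCDN-supplied header transformations at the origin-request stage.

```python
# Hypothetical stage names, one per step in the delivery flow.
STAGES = ("client-request", "origin-request", "origin-response", "client-response")


def apply_stage(stage: str, message: dict, transforms: dict) -> dict:
    """Apply every transform registered for one processing stage.

    `message` is a simplified HTTP message: {"headers": {...}}.
    `transforms` maps a stage name to a list of transform records.
    """
    for t in transforms.get(stage, []):
        if t["action"] == "set-header":
            message["headers"][t["name"]] = t["value"]
        elif t["action"] == "remove-header":
            message["headers"].pop(t["name"], None)
    return message
```

In this model, the uCDN publishes the `transforms` table as metadata, and the dCDN runs `apply_stage` at each of the four points in the request/response flow.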
E
I'm going to save some time here; this is just a representation of the processing stages model. All of this lives in a generic metadata object that is defined in the draft.
E
In terms of the FCI, the capabilities interface, these drafts define a few new objects that are required for the downstream CDN to be able to announce to the upstream the different capabilities regarding these new extensions to the metadata model, which the upstream may need to know about. For instance, it could be important for the upstream to know whether processing stages are available or not, or even which subset of the processing stages the downstream is capable of.
E
For instance, it could be important for the upstream to know whether the expression language is supported in the downstream CDN or not; maybe it is not, but the downstream is still able to do some processing-stage replacements without the expression language. So there are some conditions here where the FCI interface could announce not only the capability of a complete object, but of a part of an object.
E
I will not stop here; this is something that will probably come next, so it's not changing in this draft version. But just for you to know: the SVA Open Caching Working Group is working on extending the metadata model with new features regarding an advanced publishing method that will give the content provider better management of the configurations. So probably this is something that will come in the following meetings. Please, next.
E
Yeah, so now that we are at version two: what changed from version one to version two? One of the comments that came from the last meeting is that it could be good to have a categorization of the different generic metadata objects that we are including with this extension.
E
That was a very fair point, so we have made this categorization. We have separated the different metadata objects into sections: cache control, that is, metadata related to cache-properties management (here is the list of the different objects in this category); origin access, metadata related to the acquisition of the content; and client access control. In this version of the draft there is no generic metadata object for that category yet, because this is something the Open Caching Working Group is still working on, with different options.
E
There are different objects for this client access control, but they're not finished yet, so we haven't included them in the draft. We think it is good to have it as a category here, even if it is empty now, and we will have different objects in the next version of the draft. Then edge control: generic metadata to inform processing of responses downstream, so a way for the upstream to condition the way the downstream CDN responds to a user-agent request at the edge.
E
This could include something like the cross-origin policy; we will see that in a moment. For instance, there were some doubts in the last meeting: the upstream is able to somehow set the downstream CDN to enforce some functionality on the end-user request, independently of information that came from the upstream CDN or origin. Then processing stages, as a complete category by itself; and general metadata, where we include other objects that don't fit elsewhere.
E
We have included some minor changes in the metadata expression language. There is no change in the built-in operators or variables, just in the syntax of the JSON object. There is also a more detailed description of processing stages.
E
We have removed from the draft the MI requested capacity limits; that is something that goes in the other draft that Andrew will be presenting in a moment. So we decided to take it out of this draft, as the capacity draft has its own definition of the MI object.
E
That draft will be extending RFC 8006, and we added corrections from the previous version that were detected in the review by the members. Please, next. For the sake of time, I will go directly to some of the main doubts from the previous meetings, and I'm open to discussing any other questions.
E
In the presentation I have listed, in the first column, the section numbers of version one and the corresponding sections of version two, because, since we categorized all the objects, the section numbering has completely changed. This way you can find the references for the previous questions from the last meeting.
E
I will go to section 2.3.2, the third line there, about AllowCompress. There was a concern about this object from Kevin, and it was a very fair point. The thing is that we have found that this AllowCompress object, which we still have in version 02 with that name, has a name that does not correctly reflect the function that is expected when a downstream CDN uses this object.
E
The idea of this object, the functionality, is that the upstream CDN forces the downstream CDN to compress a response to the end user if the user has sent the Accept-Encoding header correctly, but independently of whether the origin has sent the response to the downstream uncompressed. So it's like delegating the compression of an object to the downstream CDN in the response to the end user, independently of what the origin is doing.
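The delegated-compression behavior can be sketched like this. The "edge-compress" metadata key is a placeholder (the object's final name is still being decided, per the discussion); the logic just shows the dCDN compressing when the metadata enables it, the client advertised gzip support, and the origin responded uncompressed.

```python
import gzip


def maybe_compress(metadata: dict, request_headers: dict,
                   response_headers: dict, body: bytes):
    # Compress at the edge only when: the uCDN metadata enables it, the
    # client sent Accept-Encoding with gzip, and the origin's response
    # carries no Content-Encoding (i.e. arrived uncompressed).
    wants_gzip = "gzip" in request_headers.get("Accept-Encoding", "")
    uncompressed = "Content-Encoding" not in response_headers
    if metadata.get("edge-compress") and wants_gzip and uncompressed:
        response_headers["Content-Encoding"] = "gzip"
        body = gzip.compress(body)
    return response_headers, body
```

The origin's behavior is untouched; only the dCDN-to-client leg changes, which is exactly the delegation the renamed object is meant to express.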
E
We are working on changing the name; we still don't have a final name, but something like "edge compress" or "force compress" would be more suitable for the use case this object is for, so it will probably change in the next version of the draft. Then the next section, 2.1.1, the cache policy; and there is another one on the next slide, the negative cache policy.
E
This is something similar, related to what the origin is doing upstream and what the upstream CDN wants the downstream CDN to do. All these cache policies are related to Cache-Control headers in the responses. Typically, the origin could send a Cache-Control that is the same one that should be sent to the end user, right?
E
But the thing is that there are use cases where the upstream wants the downstream to have a different behavior regarding the cache; in this case, regarding the time-to-live of the objects in the downstream CDN's cache. So, independently of what is sent to the end user in the response to the request, the upstream needs to tell the downstream to change the way it handles the cache.
E
There are some examples that we have found in real life, in our deployed production services. For instance, an upstream wants a Cache-Control of zero, or no-cache, no-store, for the local caching at the end user, but they don't want to receive a huge amount of requests from the downstream, so they prefer that the downstream maintain a TTL of, let's say, one minute, while the end user receives a no-cache, no-store.
E
They don't need the downstream CDN to keep revalidating the object, because if they need to change the object they can use content management to do that. That way the downstream blocks or stops the huge amount of requests from the end users and does not disturb the origin server. So the cache-control policies, or the negative cache-control policies, are intended for that: a way to change the behavior of the downstream in terms of the TTL of the objects, without changing what is intended for the end-user request.
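The split between the client-facing Cache-Control and the dCDN's internal TTL can be sketched as follows; the metadata key names ("cache-policy", "ttl-seconds") are hypothetical stand-ins for whatever the draft finally specifies.

```python
def split_cache_policy(origin_headers: dict, metadata: dict):
    """Return (client_cache_control, internal_ttl_seconds).

    The client still sees the origin's Cache-Control (e.g. "no-store"),
    while the dCDN caches internally for the uCDN-supplied TTL,
    shielding the origin from a storm of revalidation requests.
    """
    client_cache_control = origin_headers.get("Cache-Control", "")
    internal_ttl = metadata.get("cache-policy", {}).get("ttl-seconds", 0)
    return client_cache_control, internal_ttl
```

With the example from the discussion, the end user would keep receiving `no-cache, no-store` while the dCDN holds the object for 60 seconds.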
E
So please, next, Sanjay. This section, 2.3.1.1, is regarding the cross-origin policy. The doubt here from Kevin was that the description was not very clear.
E
The Origin header is included in the request, and it is not the service URL; it's not what is defined in the host, as in a host match, because the upstream, when delegating via a host match, is delegating the serving of the request for a video content, for instance, or for another object. The Origin header, instead, is related to the URL of how the client accesses that content, in a video platform or in a web page.
A
Can we wrap up quickly?
E
I will try to finish, yes. Next. I'm trying to find the most important items; the cache policies I explained before, plus some corrections. Yes, please, next.
E
There were some questions about including something in the registry for the different options. One was for the private features. My point here is that private features are something more point-to-point, from one specific upstream to one specific downstream CDN; our private features probably come from the downstream. So I don't think it's needed to have a public registry for those private features, because I think they are not generic, by definition. There was another comment regarding the traffic type; that one is generic.
E
And that's it, very quickly; I'm sorry, for the sake of time. But please, you're more than welcome to use the mailing list for whatever you require. And the last slide, please, Sanjay: this is where we ask whether this draft version is good enough to plan to have it adopted as a working-group draft, once the remaining concerns are addressed.
A
What I would suggest is that you put those questions that Kevin had, and the answers you have, on the mailing list. I think that would at least create a record there, and people that have similar questions may get them answered in that manner, and then...
A
...we can take up your question of adopting this on the mailing list. Kevin, what do you think?
C
Thank you for the updates. I haven't had a chance to go over the full draft; the groupings are great. I think the other question, then, is: do we want to break it up into multiple drafts? Right now it's a 90-page draft, and there's a lot of stuff in there, and there are things we could probably accelerate. Some of the metadata are probably easier to push through if we do them as separate pieces, apart from the stuff that's more fleshed out.
C
We should consider that, I think, versus the mono-draft, which is very hard, especially with the stages, which I think are interesting. I think that's good stuff, but that is going to take a lot longer, I think, than just pushing through some of the metadata that's helpful for you guys.
K
Saving the worst, I guess.
K
As part of this, the draft has actually been simplified a bit. We've cut out one of the main concepts, which was allowing an upstream to ask the downstream to reconsider capacity limits. We weren't necessarily confident in the approach we were taking, so we decided to strike it out, just to make it a little more clear and concise. Next slide, please.
K
So the component related to that which was struck out is the MI requested capacity limits object. Once again, we just didn't feel we had a good enough grasp on how it would be used effectively to communicate, or to allow the upstream to ask the downstream to reconsider adjustments. Next slide, please.
K
This is the delta of the JSON structure from the last draft. On the left you'll see that there were some hard-coded sections of total limits and host limits. We've decided to try to generalize that slightly by moving those over into generic limits objects, and then adopting a new sub-scope for those limits. You can see here on the right-hand side there is a scope object, which is of type published-host, with several values.
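A sketch of how a dCDN might select which advertised limits apply to a given request follows. The field names ("scope", "type", "values", "published-host") follow the slide's description but are assumptions about the draft's exact JSON.

```python
def applicable_limits(limits: list, host: str) -> list:
    """Select the capacity limits that apply to one published host.

    Each limit may carry a "scope" narrowing it to specific published
    hosts within the CDNI footprint; an unscoped limit applies everywhere.
    """
    selected = []
    for limit in limits:
        scope = limit.get("scope")
        if scope is None:
            selected.append(limit)  # footprint-wide limit
        elif scope["type"] == "published-host" and host in scope["values"]:
            selected.append(limit)  # sub-scoped to this host
    return selected
```

This is the "more granular scoping within a footprint" the speaker describes: the same generic limit object either applies footprint-wide or only to the hosts listed in its scope.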
K
This allows a more granular scoping of a capacity within a CDNI footprint. Sometimes we found that the footprint wasn't necessarily granular enough to meet certain needs, so we decided to allow a sub-scoping within that footprint, and this is the vehicle that we've identified for it. Next slide, please. This object hasn't changed since the last version of the draft; it is an object meant to advertise telemetry-source capabilities, and it is relevant to the capacity limits we're advertising.
K
We felt that there was a strong correlation in tying a capacity limit to a telemetry source, so that there is no ambiguity between what the capacity advertisements specify and how to measure the utilization against that limit. So this is really a foundation for further work, which we have yet to put forward, but we felt it was very relevant in this case and wanted to lay the foundation for it here. Next slide, please.
K
The upstream would then also be looking to consume telemetry data related to the specific capacity limits, and combine those two data points to make sure that there is sufficient capacity before delegating requests to the downstream. Next slide, please. Once again, this is the diagram which shows how the downstream would reflect changes to capabilities to the upstream; it relies upon a callback mechanism and a subscription service handled by the upstream and the downstream. Next slide.
K
Please. Once again, this is just a call-out to the fact that we have removed the MI requested capacity limits object from this version of the draft. And next slide, please. And that is it. All right, I made it in five minutes.
C
Good job, Andrew, thank you. Any comments? So I encourage everyone, since I haven't had a chance to look at the updated draft myself either: everyone please go out and read the updated metadata and capacity drafts and send comments to the list.
C
We should see if we can get some more comments out there and have more people discuss the draft, and then we can talk about whether there's appetite in the working group to adopt this work and take it on. Any other final comments, Andrew?
C
We have three minutes left to wrap up. Sanjay, anything you want to say? My comment is: I hope to see everybody in person at IETF 114. It's good to see people back in the rooms, and I am happy. Thank you to everyone who came and presented, and thank you for bearing with us through some technical difficulty.
C
We got through it all, and I appreciate all the hard work from all of our authors. Sanjay?
A
Yeah, I echo your comments, and I think there's good progress. I would like to see Andrew sharpen up this draft so that we can bring it back here and take a poll on it, and likewise Glenn and team; everybody has done a lot of work on the metadata. So, Alfonso, you have the action item here to try to move that draft forward.
A
Maybe, as suggested, break it up into smaller pieces; that way we might be able to look at it more easily and move forward with those drafts. So yeah, I hope to see everybody in person. The next one is in July, the third week of July, in Philadelphia, so maybe easier for some, not so easy for others. But I know Francesca will not be able to join us there.
A
She is going to be on maternity leave then; we wish her all the best. But keep up the work here, because there's a lot of stuff going on, so a lot more to come, I guess, at the next IETF. But before we end, we've got about a minute left: Francesca, anything you want to add?