From YouTube: IETF112-CDNI-20211109-1600
Description
CDNI meeting session at IETF 112
2021/11/09 16:00
https://datatracker.ietf.org/meeting/112/proceedings/
A
Hello, everyone. This is the CDNI working group. Thanks for coming. It looks like people are still trickling in, but we'll go ahead and get started. Welcome back to IETF. If you are not here for the CDNI working group, you might be in the wrong room; otherwise, we will move ahead.
A
This is the Note Well; everyone should be familiar with it. All of your contributions and participation are governed by the rules set forth in these BCPs. If you haven't read them, you should. Otherwise, you should be aware of the rules of the IETF, and you agree to them by being here.
A
Sanjay and I are here to conduct our CDNI meeting for IETF 112. If we could have a volunteer to monitor the Jabber room, that would be awesome. If someone wants to volunteer for that... I will be taking minutes; if someone else also wants to take minutes, that would be great as well. Blue sheets are taken care of for us by Meetecho. So if someone could just step up and say yes, they'll just need to take a peek at Jabber; otherwise Sanjay will have to do it.
A
Thanks, Chris, awesome. All right, then we will move on. These are the existing milestones for the working group. We're going to be talking about URI Signing, which is with the IESG and still has an update for us. We have two existing milestone items: HTTPS delegation, which Frederic is going to give an update on (I know there's been some discussion on the list, and I'm happy to see that we're making some progress there), and then Nir is going to talk about the Triggers interface.
A
Here is our agenda. We got two hours this time, because we did run out of time last time, so hopefully everyone will have plenty of opportunity to ask questions and discuss; but we do have a full agenda. So if anyone has any changes they'd like to make or voice, please do so now. Otherwise we will move on.
A
Seeing none, okay. Then the first thing on the agenda is: we had a call for adoption on two drafts that went out after the last meeting, and we just wanted to go over that here. Here they are. The first one is the CDNI triggers extension draft.
A
Sorry. So, there were no objections on the list, and there were some people who expressed their approval of this. We are ready to move forward with adoption; we're just giving folks a last chance here to object. Otherwise we will move forward with the draft.
A
I
don't
see
anyone
stepping
up
to
object,
so
I
I
think
this
one's
pretty
straightforward.
The
existing
milestone
is
set
for
december
of
this
year.
I
don't
know
if
that's
still
a
reasonable
deadline
here.
If
you
think
that
we
should
extend
that,
and
I
think
maybe
updating
that
milestone
to
you
know
what
is
a
reasonable
time
frame
to
get
that
finished
up.
A
Okay,
I'd
like
to
set
a
deadline
just
because
deadlines
are
good
for
us,
it's
good
for
us
to
have
deadlines.
I
I
don't
want
us
to
rush
to
get
it
in
by
obviously
next
month,
but
I
think
you
know
if
we
feel
good
about
trying
to
shoot
for
march
of
2022.
I
would
be
good.
A
All
right
and
then
the
other
draft
was
the
footprint
extension
again.
This
is
just
additions
to
the
metadata
footprint
registry
by
design
we
built
a
registry
so
that
we
could
add
stuff
as
we
move
forward
and
found
things
that
we
needed,
and
I
think
this
is
a
great
example
of
that
the
changes
are
fairly
straightforward.
There
were
no
objections
on
the
list
and
there
were
you
know
there
was
support
board
on
the
list.
If
anyone
has
any
objections
to
it.
This
is
your
last
chance
before
we
call
the
adoption.
A
I'd
like
to
go
ahead
and
just
add
a
milestone
for
this,
and
I
think
this
one's
pretty
straightforward.
I
think
we
should
be
able
to
get
it
done
by
the
next
ietf.
Do
you
see
any
issues
with
that
setting
a
deadline
of
march
for
this
as
well.
A
For
let's
call
or
for
for
trying
to
finish
it
up,
yeah
and
trying
to
get
to
a
last
call
by
my
next
idea.
Okay,
I
don't
think
there's
a
lot
more
to
add
to
this
one,
this
one's
pretty
straightforward.
C
Thank you. So, as was already explained in the preview at IETF 111, we proposed to have this call for adoption, and the first one is this draft. Can you please advance the slide?
C
Yeah, okay, so I'll cover both of them, give a quick recap, and open it for questions. The first draft extends the CDNI footprint and capabilities interface with additional footprint types. Next, please. So, a quick recap of the current state: currently, RFC 8006 defines the footprint types, which can be IPv4, IPv6, ASN, and country code. Country code, specifically, is an ISO 3166 code of two alphanumeric characters, and it is used further to define the footprint object, as you can see.
C
Next, please. RFC 8008, which defines the FCI, the CDNI footprint and capabilities interface, uses these footprint objects in order to specify the availability of capabilities for specific clients. So in this example, the specified capability is available for clients within ASN 64496 that reside in the U.S.
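For illustration, a minimal sketch of an FCI capabilities object scoped by footprints, in the style of RFC 8008 (the capability type shown here is just a placeholder):

    {
      "capabilities": [
        {
          "capability-type": "FCI.DeliveryProtocol",
          "capability-value": {
            "delivery-protocols": ["http/1.1"]
          },
          "footprints": [
            { "footprint-type": "asn", "footprint-value": ["as64496"] },
            { "footprint-type": "countrycode", "footprint-value": ["us"] }
          ]
        }
      ]
    }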
C
I think this is all we added in this draft. There was only one change since the previous IETF meeting, which renames the footprint type from 'ISO 3166-2 code' to 'subdivision code', following a remark from Alfonso. Any questions, concerns, or suggestions regarding this?
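As a sketch, the renamed footprint type would presumably appear in a footprint object like this (the value, an ISO 3166-2 code for New Jersey, is purely illustrative):

    { "footprint-type": "subdivisioncode", "footprint-value": ["us-nj"] }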
C
So, as a reminder, RFC 8007 defines the CDNI control interface, which allows an upstream CDN (uCDN) to manage the content and metadata held by the downstream CDN (dCDN): for example, preposition, invalidation, or purge. As Kevin already mentioned, Ori and Sanjay worked on a draft that extends this RFC, and we decided that it would make sense to merge all the extensions into the original RFC and create a new, current RFC, which will also allow us to obsolete RFC 8007. Currently there is no new content in the draft beyond the content already approved by this working group.
C
To do that, we revise the trigger object: version 2 of the trigger object can have a list of generic extensions, similar to the generic metadata from RFC 8006. An example of such a generic extension is the time-window extension, allowing the uCDN to indicate that a content preposition should happen at 3 a.m.
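A rough sketch of the idea: a version 2 trigger carrying a generic time-window extension. The property names and epoch values below are illustrative, not quoted from the draft (the window shown is 03:00 to 04:00 UTC on the meeting day, as epoch seconds):

    {
      "trigger": {
        "type": "preposition",
        "content.urls": ["https://origin.example.com/video/seg-0001.ts"]
      },
      "generic-extensions": [
        {
          "generic-extension-type": "MI.TimeWindowExtension",
          "generic-extension-value": {
            "windows": [ { "start": 1636426800, "end": 1636430400 } ]
          }
        }
      ]
    }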
C
Okay, let's proceed. The second functionality is error propagation, for the case where we have multiple levels of CDNs, so that the trigger from the uCDN propagates through the dCDN to a further-down dCDN. We address the situation where a failure in the trigger execution happens within that further-down dCDN, and the draft provides a mechanism for propagating it.
C
Okay, next. The last, and actually simplest, addition in the draft is the additional content selection methods. The original RFC has a set of properties that allow the selection of metadata or content via URLs or patterns, and the draft adds the selection of content via regexes, as well as via playlists, for example HLS or DASH. In order to support that, we need to extend the trigger object, so we have a version 2 of it, as well as the error object.
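Illustratively, the new selection properties in a version 2 trigger might look something like this (names and URLs are indicative only; note the JSON-escaped regex):

    {
      "trigger": {
        "type": "invalidate",
        "content.regexs": ["https://origin\\.example\\.com/videos/.*\\.ts"],
        "content.playlists": ["https://origin.example.com/videos/master.m3u8"]
      }
    }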
C
Okay, so that's all for the adoption stage, so it would be a good time to stop and take any questions.
C
I would like to suggest an additional change. As I already said, and as you can see on the left side of the slide, a trigger object has a closed list of properties allowing the content or metadata selection, and we have now added new content selection methods, the content regexes and content playlists. In order to do that, we had to redefine the trigger object and create a version 2 of it, as well as the error object.
C
I
think
that
it
would
be
better
to
go
to
a
a
generic
mechanism,
that
a
new
method
for
selection,
for
selecting
a
new
content
for
for
prepositioning,
for
example,
would
be
done
when
we
would
like
to
define
a
new
method
for
content
selection.
We
just
do
it
by
registration,
as
we
do
now,
with
the
footprint
object
that
we
just
registered.
C
That
can
be
any
one
of
those
of
this
of
this
option:
metadata
urls
or
content,
urls,
etc,
and
we
will
maintain
a
registry
for
this
content
selection
methods.
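A sketch of what this registry-based alternative might look like, with each selection method registered as a type/value pair (again, purely illustrative names):

    {
      "trigger": {
        "type": "preposition",
        "content-selection": [
          { "selection-type": "content.urls",
            "selection-value": ["https://origin.example.com/videos/seg-0001.ts"] },
          { "selection-type": "content.playlists",
            "selection-value": ["https://origin.example.com/videos/master.m3u8"] }
        ]
      }
    }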
A
There was an email on the list about this, if I recall, and you and I had a discussion about it. It makes sense to me; as an author of the metadata draft, the use of a generic trigger extension object makes sense to me. I think it's cleaner than having to keep enumerating things as properties inside the trigger object.
A
I think it does make for a cleaner object. Do we have a draft with proposed changes for that out, or...?
A
Okay, I think it's an interesting idea; I think I like it. If anyone else has thoughts on it, please speak up. Or we can, I think, move forward with the adoption piece, so go ahead and push the updated drafts as working group drafts, and then we can push out an additional update with proposals and start discussing this additional change on top of that. Is that what you were thinking?
A
Okay, I think that's a good plan. I'll stop here again and pause to see if anybody else wants to voice an opinion.
A
There are no objections. I think it's easier to discuss once we have some text in front of us, so I'm happy to... let's put out the working group draft, then propose an update and discuss it on the list.
G
Agreed.
A
Excellent, all right. Thanks, Nir. Anything else, or are you good?
H
How's it going, everybody? All right, so, a quick rehash of some things that happened, that we went over in the summer IETF session. It went into last call in February.
H
I addressed all the easy things, but there were a couple of things that I don't think were so easy. So, client IP: there were usefulness and privacy concerns. Honestly, I think they were mostly usefulness, like: how does this operate when you're dealing with dual stack? How does this operate when you're dealing with MPTCP?
H
How
does
this
operate
when
you're
dealing
with
switching
from
wi-fi
to
mobile?
All
the
all
the
cases
that
that
were
aware
of
it
wasn't
like
any
shockers
were
in
there,
but
it
was
just
they
felt
that
either
it
wasn't
talked
about
enough,
or
maybe
its
usefulness
is
so
small
that
we
should
just
remove
it.
Shared
keys
that
one
was
a
really
big
one.
H
We
didn't,
I
added
some
more
text
to
say:
hey.
This
is
a
really
bad
idea.
You
really
shouldn't
do
this,
but
if
you
want
to
here's,
how
you
do
it
and
it's
supported
there
were
some
things
that
I
changed
from
should
to
must,
because
I
had
no
good
reason
why
it
it
shouldn't
be,
must
and
then
there's
there's
always
been
an
agging
question
of
more
advice
for
designated
experts.
H
So
I
have
a
couple
questions
that
I
want
to
put
forth
and
I
I
was
thinking
about
this
as
as
the
meeting
was
starting
up
that
I
probably
should
have
done
this
on
the
mailing
list
first
too,
but
I
will
follow
up
with
that.
Should
we
remove
client
ip?
The
reason
why
it
was
put
in
there
was
because
it
was
traditionally
in
there
in
the
proprietary
implementations,
so
people
wanted
it.
People
used
it.
People
like
to
be
able
to
say
hey
here.
Let
me
give
you
a
quick
preview
link.
H
What's
your
ip
and
you
know,
people
would
know
that
it
needed
to
be
their
cable
modem
ip
and
if
they
changed
anything,
it
would
fail.
That
sort
of
thing
like
that
was
the
use
case.
It
wasn't
really
meant
to
be
a
robust
option
that
would
be
used
for
the
general
case,
but
maybe
it's
just
not
useful
and
we
should
remove
it
rather
than
trying
to
write
a
bunch
of
text
around
why
people
shouldn't
use
it.
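For context, CDNI URI signing carries its restrictions as JWT claims; a claim set using the client IP claim looks roughly like this (values are illustrative):

    {
      "iss": "uCDN-issuer",
      "exp": 1636430400,
      "cdniip": "198.51.100.7/32",
      "cdniuc": "uri-regex:https://cdn\\.example\\.com/content/.*"
    }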
H
Second
question
is
removing
shared
keys
should
do
we
want
to
stop
there's
people
in
the
queue
I
can
answer.
If
people
want
to
comment
about
that.
First
question.
I
To your point about the client IPs: yeah, there are a lot of caveats, a lot of risks. I mean, I've personally been bitten by clients on dual stack switching between v4 and v6 with certain vendors. But unless there's a large overhead in keeping it in, I feel like keeping it in is probably the best bet. It's going to add flexibility in use cases where, like you mentioned, there could be very niche URI signing that folks are going to want to do, maybe even internally within organizations, and so on and so forth. Unless there's a risk in keeping it, I think keeping it is probably the better way to go, with the understanding, as you mentioned, that there are caveats; and, I think, just a general disclaimer that there are caveats with how this works, and it's up to you to accept all those caveats if you implement this.
H
Okay-
that's
that's
very
helpful.
I
so
real
quick
before
kevin
speaks.
I
I
so
I'm
going
to
send
these
to
the
mailing
list.
It
would
be
awesome,
andrew
if
you
could
kind
of
reiterate
that
on
the
mailing
list,
because
I
want
to
use
that
to
point
the
the
ads
to
about
their
concerns
and
I'll
I'll.
Make
sure
that
I
make
the
the
text
robust
enough.
H
If
it's
not
already,
I
already
added
some
stuff
to
it,
but
I
might
want
to
just
polish
it
up
a
little
bit
and
really
kind
of
drive
home.
That
point
that
that
these
are,
like
you
said
niche
use
cases
that
it's
there
to
support.
A
Yeah, I'll throw in my two cents as an individual. I think that we had always talked about the internal-use, private-network kind of case: I'm an ISP, I own it, I want to use an IP address for a specific purpose; and that's the only place it's really good for. I mean, URI signing has a lot of questionable security things in there, right? We all know that. We could certainly beef up the disclaimers about it; I saw that you added one, and we can make it even more...
A
You
know
obvious
to
folks
how
bad
it
is,
but
I
think
if
there
are
people
who
need
it-
and
that
was
the
original
rationale
right-
there
is
a
vendor
who
wanted
this,
and
so
I
don't
necessarily
see
a
reason
to
pull
it
unless
there's
a
lot
of
pushback,
I
don't
know
how
bad
the
pushback
was,
but
we
could
probably
put
bigger
folder
things
in
there
about
why
you
shouldn't
use
it
or,
and
also
where
the
one
or
two
good
use
cases
are
for
using
this.
You
know
which
helps
the
the
reader
understand.
J
I won't rehash all of that; I agree with the previous commenters, there's lots of desire for this. The one thing I wanted to add was: there are...
H
Yeah, so I was actually thinking about that when I was going through this again: we could do a lot of things; we could probably add, like, a v4 and a v6 client IP, and do some other stuff. I'm also wary of doing changes this late in the game; unless people really felt like, hey, I have a use case for that, I would kind of want to say no.
J
There actually is a real use case around wanting to put in both an IPv4 and an IPv6, and there are some cases where both addresses will be known at the time the token is generated, and that can be reasonably reliably produced.
A
Okay. I think that we have been in IETF last call for a very long time. If we were going to make those types of changes, this is something we would have to go and pull the document back for, and do a new round. I'm all for it if somebody really needs this, right: if people are going to say, this draft is not useful to me because I need that feature, then we should put in the feature.
H
Okay, so, yeah: the comment on the client IP is that it should stay, with a big warning. That's fair. Coordinate the URI signing specification with the Common Access Token project? I'm not familiar with that, so de facto no, but...
A
Yeah, I don't know if it's close. If it's close, it's something we could think about; if it's not close, then... Glenn, I know you had AWS Signature v4 in your draft, and I've been thinking about that as well. It's not close enough for...
H
Yes, so, very similar to removing client IP is removing shared keys, and I kind of have the same feelings on it: it was created to do niche tests and stuff like that. I did add some text about: hey, really don't do this, this is not a good architecture, there are a lot of problems with it, especially when you're dealing with tiered CDNs and such. But I kind of feel...
A
Remember why we put in the shared keys: because, at the time we started writing this, all of the implementations with public CDNs had shared keys. So, right, it's a thing that we wanted to support, because that's what everybody was doing.
A
I
don't
know
if
that's
still
the
case,
I
imagine
it
probably
is
still
supported
and
there
are
probably
people
who
would
like
to
have
it
because
you
know
their
legacy
systems
use
it.
But
it's
it's
a
yeah.
It's
not
a
good
idea.
We
could
put
you
know
big
disclaimer
text
again,
but.
A
Yeah, but they're going to...
J
I think that if you own the network and you don't have to interoperate with anybody, you don't need a standard to tell you how to interoperate with people.
J
I mean, you could reasonably just remove it entirely. We don't have a police force; we can't stop people from using shared keys, and it's pretty obvious how you would use shared keys if you wanted to.
H
Last question: this was only brought up by one AD, but he brought it up multiple times. [The issuer claim] is not mandatory, and my understanding of why we chose to do that is essentially (and this is probably going to be even more scary than shared keys) that we wanted to be able to give somebody a skeleton [token] that doesn't have a client ID, or that has a very short expiry time, or something else to compensate.
D
I'm having some line issues. Is anybody else hearing some of the noise? There's some static; I don't know if it's coming from the phone or...
H
So, Chris, do you want to comment?
J
Yeah, so there is a use case with no [issuer]. If your CDN is accepting keys from multiple issuers, you are likely to, in fact you are going to, have to have a limitation where you have an association between which systems you accept a given key for. If you're willing to accept that an issuer can authorize a given URI in general, then you have to be willing to allow an issuer to authorize any URI.
H
Okay, yeah, I'm fine with that. So, like I said before, I'm going to send these all out as questions to the mailing list too, and if you can reply there, because it'll be much easier to send them links to a mailing list than links to a video somewhere. Again, apologies for not doing that before this call. Okay, and if there are no more comments, I'll move on.
H
I guess my first thought on it is: it seems okay. So, also, thanks to Kevin and Chris for the other review stuff; I've already merged that into the repo. So, whenever... after I'm done talking to the ADs about their concerns and I make the updates for the shared keys, I'll do another draft. So, does anybody have any comments on Chris's PR, or had a chance to look at it? I think he submitted it at, like, 1 a.m. my time, so I'm doubting it, but I figured I'd throw it out there before...
H
Okay, all right. So, yeah, I guess I'm all done then. I'll make sure to send those emails out to the mailing list so people can reply. Thank you very much.
A
I think the other thing is, and I don't know if Francesca is here: we put the review of URI signing on the telechat. If we have to make these updates, is that something that we need to do ahead of that, or...?
M
So that's why I put it up directly, because otherwise the telechat would get filled up and we would have to wait six weeks instead of four weeks. So if the working group, and you, feel you can make any additional modifications, do it at the latest, let's say, three or four days before the telechat; but yeah, as soon as possible.
M
But let's say one week before the telechat; that would give the ADs enough time to review, let's say, the stable version. And yeah, it might not be needed: if they remove their DISCUSS before the telechat, we don't actually need to go through it. But I just wanted to put it up there so that the ADs have a deadline, and by that time they will have to have reviewed this version.
A
Okay, so we have some open questions; Phil's going to put them out on the list, everybody's going to respond, and we'll have that discussion. If we can close those out over the next couple of weeks, then we can leave it on the agenda for the December telechat; if it starts dragging out, then we may need to push it out.
M
Yeah, and that's fine as well; just let me know, and I will move it to another one, I don't know, two weeks later, or more if needed. But ideally it would be good to have one week or two to know whether we need to move it: one week before the telechat, so November 25th or something like that, because that's when...
H
Okay. My goal is to do work on this... you said the 25th, which is Thanksgiving. I have that week off, and so I was planning to do work on it during that week, so hopefully, I think, that aligns well with what my intention was.
M
Okay, great. And again, if not, just let me know and I can move it; no big pressure, just try to make it work in the most optimized way. But thank you for the update, Phil.
A
All right, so next we have up Frederic, to talk about the HTTPS delegation.
N
Can you hear me? Yes? Okay, good. So today I will give you a quick update on the draft; we are on version 7 now. Next slide, please. So, on the mailing list we had quite many comments on the draft.
N
I see him, he's there, okay. So, feel free to comment if necessary. So then: we fixed a bunch of things in the draft.
N
We still have a bunch of things to do, especially on the security and privacy parts. So, Kevin, you asked quite many questions, and we still maybe have to add some hints about those; I don't know yet. Maybe we will discuss that later, anyway. So, regarding security and privacy, we need to study a bit.
N
We also need to remove some of the STAR delegation method properties. As mentioned, today we have the ACME STAR server and credentials location URI and the CSR template; these might not be necessary to carry between CDNs, although maybe the CSR template could be in some cases.
N
And finally, we also need to sync, maybe, with the SVA working group that is working on the CDNI interfaces, notably on whether we need to add some more properties in the CDNI metadata interface.
A
So, I sent some comments to the list this weekend; I read the updated draft. I think there's still some more work we need to do: we do need to beef up the privacy and security sections, and I still have a couple of questions about the metadata structure and whether we need the FCI type, but you can take a look at the comments that I posted.
A
And then, I think we're a little premature for a working group last call; we can look at the next version of the draft. Hopefully we can iterate over it, and then, between now and next March, if we can get to a good place, we could probably think about that before the next IETF. But we'll see where we can get with that. Okay.
D
As an individual, I concur with Kevin on that: we need to wait on the working group last call on this draft, because of the changes that are needed.
D
Specifically,
the
rfc
9115
does
a
lot
of
work
that
aligns
with
what
this
draft
is
trying
to
do.
So
I
think
there
needs
to
be
a
pretty
good
alignment
between
how
the
interface,
how
the
downstream
cdn
will
establish
requests
with
the
upstream
cdn
to
exchange
information
for
identifying
itself
as
the
acme
client
and
and
talking
to
the
upstream
cdn
as
the
acme
server.
So
I
think
all
of
that
interaction
has
to
be
really
captured.
D
Well,
so
I
think
once
we
have
that
in
the
next
draft
it
should
require
it
should
be.
It
should
have
a
you
know,
good
review
and
and
based
on
that,
we
should
decide.
You
know
how
the
draft
moves
forward
thanks.
A
I'll
look
forward
to
seeing
the
updated
draft.
It
sounds
like
the
the
rfc
9v115
changes
could
be
significant,
so
once
that
update
is
out,
everyone
should
go
and
reread
and
make
sure
it
looks
good
to
everyone.
And
then
we
can
take
that
one
to
the
list.
D
Yeah, why don't we do that.
A
Andrew, let me pull up Andrew's slides, and we'll let him go first.
I
Sorry... yes, hi, my name's Andrew Ryan. I'm here today to discuss some potential extensions, mainly to the FCI, to allow signaling of capacity capabilities and limits. Next slide, please.
I
So, we discussed this topic briefly at the last IETF, 111, but the general highlight of what we're trying to accomplish here, and the reason we've proposed this draft, is that we wanted to accommodate the ability for upstream CDNs, downstream CDNs, or content providers to make informed decisions about how much traffic should be delegated.
I
We wanted to provide a vehicle, using CDNI and particularly the FCI interface, to handle this signaling. The goals were to be able to signal capacity limits that are specific to the delegation relationship between an upstream and a downstream; to make sure that the signaling being provided is unambiguous and very clearly and mutually understood; and to have the ability for bidirectional communication between the upstream and the downstream. And just a quick call-out that the actual transport mechanism itself is somewhat out of scope of this.
I
We're merely making the proposal about the vehicle in which the data is going to be encapsulated. I know there's been some discussion about using ALTO. For the specific use case that we hear about in the Streaming Video Alliance, for the Open Caching initiative, there's another API interface that we'd be leveraging, but the data model is going to rely heavily upon CDNI. Next slide, please.
I
So, how are we going to accomplish everything we've laid out there? What we came up with, during discussion, is that we wanted to leverage FCI as much as possible; we felt it was very appropriate for this to be conducted as a capability: how much capacity a downstream is advertising to an upstream.
I
So, as such, we proposed: okay, let's come up with a payload that a downstream could use to advertise capacity limit capabilities. The limits would be in units such as bits per second or requests per second, and the way we were going to tackle the goal that I had mentioned, unambiguous and mutually understood limits, is by coupling each one of these capacity limits with a corresponding telemetry source, which gives near-real-time aggregated metrics about the usage of an upstream against a downstream. In that way, if the downstream says, 'this is the limit you have: you can only send me 100 gigabits per second,' there's no ambiguity in what that means, because the upstream can poll the resource provided by the downstream to see the current delegation utilization, and then adjust accordingly.
I
The second component here, the FCI telemetry, is kind of a result of what we just mentioned, how we want to couple a telemetry source with a capacity limit. This FCI telemetry mechanism is the ability for a downstream to advertise support for specific telemetry sources or types.
I
Eventually, we would like to propose, and work towards putting together, a draft to define a formal telemetry interface, in which there's a well-defined transport and a well-defined format for the data that would be available for near-real-time aggregated metrics, such as bits per second, requests per second, etc. But in lieu of that, what we're basically going to be doing in the near term, since a formal telemetry interface would be a very large effort to scope and define, is leverage just a stub of a generic telemetry source. The goal of that is twofold. One is the level of effort in defining a formal telemetry interface and the integration. But two: in most cases right now, folks who are actively working on delegating requests between different entities already have existing telemetry sources, so being able to leverage those already-existing sources, and all of the work that's been done there, will help ease adoption if folks start moving towards this model. Instead of adding additional work to integrate new telemetry, let's take advantage of work that's already been done. So those two are tightly coupled together in how the upstream and the downstream can talk about the specific limits the downstream is allowing the upstream. Then there's this next one, the MI requested capacity limits element.
I
In this model, the upstream would be able to use this metadata object to send a signal to the downstream, which would then trigger an asynchronous process in the back end, which is left up to the downstream to implement and work out all the details of. This is mainly just a vehicle, once again, to facilitate the bidirectional communication goal that we've been looking for. Next slide, please.
A
Sure, yeah: you mentioned existing sources. Are the majority of these sources proprietary in nature, or are there standard protocols that people are using today to distribute or emit the data?
I
That's
a
very
good
question:
it's
it's.
I
guess
a
little
bit
of
column
a
little
bit
of
column
b
right.
Typically,
it's
going
to
be
a
you
know,
some
kind
of
an
http
https
api
endpoint,
in
which
you
know,
content
providers
or
other
cdns,
are
pulling
a
bespoke
interface
provided
by
a
downstream
cdn,
which
has
you
know,
provides
metrics
and
the
the
payload
is
in
a
bespoke
form.
I
You
know,
typically,
it's
going
to
be
json
or
something
of
that
nature,
but
there
there
is
no
standardization
at
all
there
and
that's
actually
going
back.
It's
to
your
point.
We
do
eventually
want
to
go
and
make
the
proposal
to
define
a
telemetry
interface
because
of
that
exact
reason,
there's
no
standardization!
There's
no
communication
channel
well
defined
it's
all
kind
of
just
ad
hoc
based
on
which
provider
is
involved.
I
Yes,
that
was
the
that
was
kind
of
the
decision
point
we
were
looking
at
and
to
I
think
one
of
your
other
feedback
points
was
the
concept
of
a
are
we
going
to
be
defining
a
registry,
and
the
answer
to
that
point
was
yes
for
specific
types
of
telemetry,
such
as
bits
per
second
request
per.
Second,
I
feel
you
know,
there's
a
well,
I
guess
a
generally
well
understood
concept
of
what
those
generally
mean,
but
even
still
they're
kind
of
getting
back
to
the
point
of
tying
it
to
a
telemetry
source.
I
There
could
still
be
some
ambiguity
in
that
right
if
the
upstream
is
calculating
bits
per
second
in
a
different
manner,
particularly
if
we
look
at
the
use
case
of
a
content
provider
who
may
be
collecting
telemetry
from
clients
directly
via
rum,
they
may
get
a
certain
calculation
of
usage
from
that
versus
tracking
internally
from
another
system,
server
side
style
metric
there
was
there
was
no
clear
way
at
that
point
to
have
the
upstream
calculate
on
its
own,
its
usage
against
the
downstream,
without
incurring
some
form
of
ambiguity.
I
As
a
result,
therefore,
we
felt
that
it
was
more
appropriate
for
the
downstream
to
provide
a
telemetry
source
of
the
usage
that
it
is
seeing
based
on
the
delegation
and
in
that
manner,
when
the
upstream
and
the
downstream
are
using
the
same
telemetry
source
to
you
to
calculate
or
look
at
utilization.
I
You,
okay,
any
other
questions
at
that
point,
all
right
so
right
here.
What
we're
looking
at
is
the
what
we're
proposing
for
the
format
of
the
fci
capacity
limits,
payload
type
kevin
you
had
given
some
good
feedback
too,
particularly
about
the
first
two
bullet
points:
the
total
limits
versus
the
host
limits.
I
The
current
intention
up
here,
with
the
way
it's
currently
structured,
was
that
the
total
limits
object
would
represent
all
traffic
delegated
between
an
upstream
and
a
downstream.
So
if
the
downstream
says
upstream,
you
can
delegate-
or
you
can
send
me-
100
gigabits
per
second-
that's
ubiquitous
across
all
cdn
domains.
I
This
second
section,
though,
was
meant
to
allow
the
downstream
to
have
some
way
shape
or
form,
to
tell
the
upstream
that
certain
type
of
traffic
is
different
from
other
types
of
traffic
low
latency,
you
know
streaming
is
going
to
have
a
much
different
request
profile
than
general
game,
download
or
other
style
of
bulk
traffic
and
may
have
different
utilization
impacts
on
their
infrastructure.
I
So
being
able
to
tie
both
of
these
elements
together
to
say,
here's
a
general
upstream,
you
can
delegate
all
of
you
know.
All
of
the
total
traffic
you
can
delegate
is
governed
by
this,
along
with,
if
you're,
going
to
send
me
traffic
on
this
host
name,
which
we
know
is
low,
latency
or
some
other
high
rps
or
whatever.
I
So
that's
really
the
intention
and
the
interaction
points
between
these
total
limits
and
host
limits.
Payloads
now
kevin,
like
you
mentioned
earlier,
there
may
be
room
for
improvement
here
on
simplifying
the
object
structure
by
perhaps
pulling
the
host
out
of
the
the
host
limits
and
specifying
it
within
an
object,
type
of
the
total
limits
and
just
making
the
assumption
that
a
lack
of
a
host
declaration
assumes
a
total
a
total
limit.
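A minimal sketch of how such a payload might be shaped: total limits plus per-host limits, each tied to a telemetry source. All property names here are illustrative guesses at the draft's intent, not quotations from it:

    {
      "capability-type": "FCI.CapacityLimits",
      "capability-value": {
        "total-limits": [
          { "limit-type": "egress-bits-per-second",
            "maximum": 100000000000,
            "telemetry-source": "capacity-metrics" }
        ],
        "host-limits": [
          { "host": "lowlatency.ucdn.example",
            "limit-type": "requests-per-second",
            "maximum": 50000,
            "telemetry-source": "capacity-metrics" }
        ]
      },
      "footprints": []
    }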
I
So this, once again, drew some very good chatter and very good feedback on the list: what is the FCI telemetry here? This object is, once again, meant to represent capabilities of supported telemetry: what the downstream is capable of supporting in terms of telemetry sources. As we can see here, there's a type 'generic', and this is once again a call-out to what we wanted to highlight in that.
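And a companion sketch of the telemetry capability being described: a downstream advertising a 'generic' source that the upstream can poll (illustrative names and URL):

    {
      "capability-type": "FCI.Telemetry",
      "capability-value": {
        "sources": [
          { "id": "capacity-metrics",
            "type": "generic",
            "configuration": { "url": "https://dcdn.example/telemetry/capacity" },
            "metrics": [
              { "name": "egress-5m", "type": "egress-bits-per-second" },
              { "name": "requests-5m", "type": "requests-per-second" }
            ] }
        ]
      }
    }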
I
All right, next slide, please. So here is a very high-level workflow of how we anticipate all of this coming together. In this workflow, we're really describing the upstream periodically polling the downstream. The upstream has a couple of different responsibilities: it would be polling the downstream periodically to get the capabilities, the capacity limits, from the downstream. That would be the top-left diagram here.
I
Currently, our thought process is that we would use HTTP cache-control headers to govern that TTL; that fits nicely into the framework of communication that the SVA Open Caching project is specifying. But in lieu of that, the upstream is expected to periodically poll the downstream to get the advertised capability limits for capacity.
I
The
upstream
is
then
also
expected
to
periodically
pull
the
telemetry
source
that
the
downstream
is
providing
to
gather
and
understand
the
current
utilization
that
is
represented
by
the
upper
right
hand.
Component
of
this
then,
in
the
bottom
it
the
upstream
is
then
to
be,
is
then
supposed
to
compare
its
current
utilization
towards
the
advertised
capacity
limits
that
the
downstream
is
provided
and
adjust
traffic
routing
decisions
accordingly
to
fit
within
the
advertised
limits.
I
And
here
this
is
a
call
out
again
to
the
fact
that
we
want
to
be
able
to
make
the
we
want
to
allow
the
downstream
to
potentially
signal
to
the
upstream
that
a
change
has
been
made
to
the
fci
capabilities
for
capacity
limits.
This
this
model
here
was
really
governed
or
really
more
specific
towards
this.
The
open,
caching,
the
sv
open
caching
workflow
such
that
we'd
be
using
callback
hooks
to
for
subscriptions
to
sva
or
the
fci
capabilities
updates.
I
But
the
the
idea
here
is
that
we
want
the
there
is
a
mechanism
for
the
downstream
to
signal.
Back
to
the
upstream.
There
was
a
change
in
fci
capabilities.
Come
pick
up
your
net,
your
your
your
newest
version
of
your
capacity
limits,
the
the
the
main
impetus
for
this
is
that
we
want
to
assume
that
the
limits
that
were
provided
are
expected
to
be.
I
You
know
valid
unless
you
hear
otherwise,
you
know
there
shouldn't
necessarily
have
to
be
a
constant
pulling
from
the
upstream
every
five
minutes
and
say
hey
if
something
changed,
hey
something
changed.
The
intention
here
is
that
the
downstream
would
give
a
kind
of
a
long-standing
order
of
capacity
limits
to
the
upstream,
which
they
would
try
to
adhere
to
unless
something
changed
on
the
downstream.
I
This
here
is
once
again
the
the
mechanism
to
allow
the
other
side
of
the
communication,
where
the
upstream
would
want
to
ask
the
downstream
to
reconsider
limits.
Up
until
now,
it's
really
been
more
of
a
one-sided
conversation.
It's
the
downstream
providing
data
to
the
upstream
about.
This
is
what
these
are
the
limits
we
want
you
to
hear
to,
and
the
limits
in
this
point
are
really
considered
like
not
to
exceed
style
limits
like
please.
Don't
don't
go
past
x,
amount
of
this
telemetry
source
described
value
of
bits
per
second
requests
per
second
etc.
I
But
let's
just
say
the
upstream
has
some
kind
of
a
demand
coming
up
that
they're,
aware
of
and
they'd
like
to
ask
their
various
downstream
partners.
Would
you
be
willing
to
consider
an
update
to
support
this?
This
is
the
the
vehicle
that
we
came
up
with
to
allow
to
facilitate
that
conversation.
I
The
the
scoping
here
is
really
bound
to
how
the
metadata
object
model
is
set
up
in
this
particular
case.
We're
stating
that
this
signal
of
mi,
requested
capacity
limits
is
bound
specifically
to
this
cdn
domain
of
serviceaid.cdnexample.com,
that
we
really
weren't
able
to
come
up
with
a
clean
way
to
allow
the
upstreams
to
just
ask
for
generic
total
delegation
relationship.
I
Advertisement
updates
right
now.
The
vehicle
it's
described
using
this
metadata
model
is
very
specific
to
a
particular
host.
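Sketched out, the upstream's request might be a generic metadata object attached to that one host. The shape below is a guess at the intent rather than the draft's exact schema:

    {
      "host": "serviceaid.cdnexample.com",
      "metadata": [
        { "generic-metadata-type": "MI.RequestedCapacityLimits",
          "generic-metadata-value": {
            "requested-limits": [
              { "limit-type": "egress-bits-per-second",
                "requested-maximum": 150000000000 }
            ]
          } }
      ]
    }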
I
This was proposed by one of the co-authors, Ben, as a very clever solution, I feel, to use what we have available in order to accommodate this. But, like we mentioned, there are some caveats we wish we didn't have, i.e., the scoping to a particular host in this example. We would welcome feedback on this part especially, because this one, we feel, is the least straightforward piece of our proposal.
I
And
once
again,
this
just
kind
of
goes
through
how
we
would
imagine
using
this
mi
requested
capacity
limits
object.
You
know
the
upstream
would
post
that
to
the
the
downstream
the
downstream
would
then
follow
its
particular.
It's
already
existing
workflow
on
calculating
current
utilization
based
on
a
policy
engine
that
is
outside
the
scope
of
this
document.
The
downstream
would
then
decide
whether
or
not
it
wants
to
update
its
current
capacity
limits
and
then
advertise
that
back
up
to
the
cdn
or
the
upstream
cdn
next
slide,
please.
I
So
that
is
the
the
proposal
that
we're
coming
forth
today
with
like,
I
said,
we
definitely
welcome
lots
of
feedback
kevin.
I
thank
you
very
much
for
your
feedback.
It's
it's.
It
was
very
good
and
we're
certainly
going
to
incorporate
that
in.
We
would
certainly
love
more
feedback
as
well.
A
If
anyone
wants
to,
I
know
the
draft
just
came
out
before
the
deadline,
and
so
I
don't
know
if
everyone's
had
chances
to
read
it,
but
it
is
a
good
read.
I
encourage
everyone
to
go
and
take
a
look,
and
I
post
comments
to
the
list
that
please
everyone
else
read
and
post
your
comments
as
well.
There's
you
know
some
open
questions
there,
some
great
ideas,
so
if
other
folks
have
thoughts,
that'd
be
really
helpful.
A
Excellent. Thank you, Andrew: thank you for the presentation, and thank you for the draft. Again, everybody, go out and read the draft, please, and send comments back to Andrew.
D
Yeah, I'll second that. I think, Andrew, you covered a little bit of the detail here, so I think that's helpful, and hopefully folks will now have a better sense as they review the document. So please go ahead and review it and post your comments on the mailing list.
O
Okay, okay, good; I assume you've got me now. Yes, so I'll start with some background here on the configuration metadata project, and then we'll kind of move into the current state of the draft. So go to the next slide. Yeah, background; next slide, perfect. Yes, there's just some background on why the SVA got into this. By the way, my co-authors and contributors are on the line here, so they can chime in as well on this, but you can go ahead and skip ahead.
O
You
need
to
focus
on
this
much
go
ahead,
perfect
yeah,
so
you
know
we
started
looking
at
cdni
metadata
for
this
and
realized
that
the
object
model
needed
to
be
extended
to
handle
some
typical
use
cases
for
commercial
cdns,
and
I
won't
get
too
much
into
the
triggering
and
stuff,
but
mostly
we
we
were
looking
at
adding
extensions
to
define
cache
policies
and
such
so
go
into
the
next
slide,
we'll
kind
of
get
right
into
it.
O
One of the metadata objects we've introduced is a service ID. The idea is that most commercial CDNs and open caching systems have some sort of a SKU, a CP code, a service ID, and we've set it up so that those can be metadata. But it's almost backwards in this model: you define hostnames and then apply metadata to the hostname. So now we can say: great, if the hostname is www.video.example.com, then assign this service ID. But, in fact, CDN configurations are usually almost inverted.
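As a sketch, the hostname-first model being described looks something like this (the MI.ServiceID name is illustrative):

    {
      "hosts": [
        { "host": "www.video.example.com",
          "metadata": [
            { "generic-metadata-type": "MI.ServiceID",
              "generic-metadata-value": { "service-id": "svc-1234" } }
          ] }
      ]
    }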
O
The enhancements fall into these general categories. Enhanced source and origin definitions: the main motivators there were to introduce load balancing and failover, plus some authentication methods to authenticate to origins; the AWS method was mentioned earlier, and there's simple header auth. There's a lot of work done on increased cache control policies, and I'll address some of the questions that Kevin had brought up about a lot of the motivation behind those.
O
Yes,
the
origin
or
the
content
provider
can
specify
cache
control
in
headers
coming
out
of
the
responses
from
the
origin,
but
typically
we
need
to
define
caching
rules
not
only
for
the
end
client,
the
user
agent,
but
caching
rules
for
the
cdn
and
they're
often
different
parameters.
You
may
want
different
internal
caching
rules
or
times
in
the
cdn
than
you
would
want
downstream.
So
that's
the
motivation
from
any
of
these
cash
policy.
Extensions,
there's
a
whole
set
of
extensions
on
dynamic
cores,
headers
traffic
type
I'll
get
into
that
service.
Ids.
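For illustration, a cache policy that distinguishes the CDN's internal TTL from what is sent downstream might be sketched like this (property names are illustrative):

    {
      "generic-metadata-type": "MI.CachePolicy",
      "generic-metadata-value": {
        "internal": { "max-age": 5 },
        "client-response": { "cache-control": "max-age=60" }
      }
    }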
O
I
mentioned
open
caching
configurations
as
part
of
defining
open
caching
request
routing
rules.
We
had
the
need
for
some
metadata
there,
some
configuration
metadata,
some
objects
went
in
there.
O
Private
features
came
up
and
there
were
some
questions
kevin
had
about
that
as
well.
You
could
keep
adding
private
features
by
having
more
and
more
generic
metadata
objects.
This
is
true,
but
we
wanted
to
have
a
structure
around
it
so
that
organizations
like
the
sva
could
have
their
a
registry
of
their
own
private
features.
That
would
really
sit
internal
into
into
this
one
object
without
additional
generic
metadata
objects
and,
lastly,
the
processing
stage
model
which
I'll
dive
into
deeper.
D
Alfonso, you have a question here, so do you want to go?
L
Yeah, it's just to add, about the cache control policies, dynamic CORS and all that Glenn has explained, that another of the motivations, I think, is that many CDNs act as surrogate origins ('surrogates', I think, is the name; it's from an old IETF draft). So they present to the users more like an origin and not like a cache system; they are acting as an origin from the content provider's perspective, sorry, and for the user.
L
So
it's
very
common.
These
use
cases
where
the
audience
of
the
of
a
content
provider
of
an
absence
at
the
end
doesn't
wants
to
make
all
the
things
that
could
be
required
for
a
user
making
that
request,
but
let
the
super
project
origin
to
do,
though,
that
that
job
one
example
could
be
the
dynamic
headers
where
maybe
they
don't
want
to
handle
that
in
the
origin,
but
let
the
cdn
to
do
that
job
and
to
select
the
adequate
core
header
to
present
to
the
user.
This.
L
This
is
the
kind
of
motivation
on
some
of
these
new
generic
metadata
objects.
O
Good. Next slide. Oh, any other comments?
O
Typically you may, for example, have a cached response, the client response portion of it, labeled D in the diagram, and you may want to adjust the response for each client as it comes out of cache; this gives you the ability to do that sort of thing. Most processing generally happens on getting a request from the client and on processing the response from the origin, but we've seen use cases over the years that call for all four of these stages. Next slide, please.
O
On the request side, it may be a match on some pattern of the request URL or on a header; on the response side, you might typically be matching on some element of the response or on a status code. Sanjay, did you have something?
O
Okay, I have to keep the volume low, because I've got this crazy echo here. So, beyond the expression matching: once you do apply a rule at a certain stage, you're essentially either transforming the request, transforming the response, or generating a complete synthetic response. Those are generally the kinds of things that go on. There's probably enough detail here for now; there are structures here that allow you to specify lists of headers that can be added, lists of headers that can be replaced, etc.
O
Good. So here's a typical full example of a processing stage, a very simple one. In this example, for edge.example.com, a standard HostIndex, we introduce a processing stage at the origin response, so dealing with responses coming out of the origin, and we're looking for a match. Here's a simple example of the metadata expression language, which I'll get into next: basically, if the response status code is a 200, then go ahead and apply some metadata, and in this example the metadata is a cache policy that tells the CDN to cache internally for five seconds. So that's a very simple type of rule that one might write.
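A sketch of the processing-stage rule being described: match a 200 from the origin, then apply a five-second internal cache policy. The names follow the flavor of the draft rather than its exact schema:

    {
      "generic-metadata-type": "MI.ProcessingStages",
      "generic-metadata-value": {
        "stages": [
          { "stage": "origin-response",
            "rules": [
              { "expression": "res.status == 200",
                "metadata": [
                  { "generic-metadata-type": "MI.CachePolicy",
                    "generic-metadata-value": { "internal": { "max-age": 5 } } }
                ] }
            ] }
        ]
      }
    }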
O
Next slide. So, in the original draft we proposed that there would be this expression language, and the expression language really does two things: it allows you to define criteria for matching, to apply rules conditionally, and there are also certain areas where we actually have to synthesize a response; there's an example of one here. In the new version of the draft, the metadata expression language is called out, with the syntax for the various variables and expressions.
O
So
the
first
expression
here
is
a
simple
expression
match.
This
may
be.
A
typical
thing
you
would
do
to
apply
metadata
on
in
this
example
is
a
request
coming
from
safari
for
host
example.com,
and
so
you
could
therefore
apply
metadata
just
based
on
a
match
on
a
user
agent
or
something
like
that.
The
second
example
is
using
the
expression
language
to
dynamically
or
to
synthesize
a
value.
O
So
in
this
example,
we
are
transforming
a
response
that
may
have
coming
out
from
an
origin
and
we're
adding
a
cookie,
and
the
cookie
we
want
to
add
specifically
is
concatenation
of
the
user
agent
string
and
the
host
name.
So
that's
the
value
and
then
there's
other
thing.
The
value
is
expression
is
true.
That's
just
a
signal
to
the
engine
processing
this.
That
value
is
not
a
string
literal
that
it
needs
to
be
evaluated.
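Roughly, the two examples being walked through, a boolean match and a synthesized header value, might read like this (the variable syntax is indicative only):

    req.h.user-agent ~ "Safari" and req.h.host == "example.com"

    {
      "headers-add": [
        { "name": "Set-Cookie",
          "value": "req.h.user-agent + req.h.host",
          "value-is-expression": true }
      ]
    }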
O
Next slide. Yeah... no, actually, sorry, go back. Oh, okay, that's it, never mind. Go to capacity... the capabilities interface, sorry; I'm looking at two screens at once.
O
So, as we add new metadata and all these new capabilities, of course FCI needs to come along for the ride, so we needed to add some FCI objects, which are also called out in this document, so that a downstream CDN can declare its ability to support these new metadata constructs. Next: this is really out of scope for this discussion, but just to put it all in context...
O
We
also
within
the
sva
have
extended
the
metadata
interface
originally
proposed
in
rfc8006
and
there's
some
sva
documents
being
pushed
through.
This
would
really
work
led
by
guillaume
and
alfonso,
and
that's
probably
not
no
need
to
discuss
it
more
now,
but
just
to
for
reference
that
it's
there.
B
On the ability to publish: in RFC 8007, metadata is pulled via a trigger call, or invalidated. Was there something specific to that interaction that was undesirable, or any other things? And are you actually bringing these to the working group?
O
Yeah, I mean, the original spec was exactly that: the upstream triggers the downstream to pull metadata, and that's great. But really, what we're looking at are use cases where the upstream pushes metadata into the downstream, effectively publishing configuration metadata. That was probably one of the biggest changes we've introduced.
O
We
we
we
can
discuss
that
with
an
sva
within
sva.
We
have
a
whole
set
of
apis
that
implement
the
suite
of
open
caching.
Capabilities
of
this
is
just
one
of
many
and
that
api
implements
our
metadata
publishing
interface,
which
effectively
is
an
extension
of
the
cdni
metadata
interface
but
sanji.
I
think,
let's
talk
about
that
within
the
sva
and
see
how
much
of
that
api
we
want
to
put
in.
L
Yes, yes. Note that these changes are compatible with the trigger interface, so it is more than possible to use these extensions with the trigger interface, even if we are defining a new API for some use cases and are implementing it for that API; you could still use the trigger interface.
O
Yeah
and
we
can
start
a
discussion
in
the
mailing
list
about
the
sva
apis
and
whether
we
feel
there
would
be
interest
in
pushing
those
into
the
iutf.
O
Well, while we're on the subject: in the API that we've currently done, really the big thing it adds is a push. But we're going to be looking at a next round of API work that really addresses the whole workflow around managing CDN configurations, and that's where we plan on handling versions of metadata, publishing from staging to production environments, rollbacks, and lists of generic metadata objects that can be used across configurations. So all of that might be work the IETF is interested in as well.
A
I
had
one
other
quick
comment:
it's
impressive
that
the
the
processing
stages
stuff
was
all
able
to
be
implemented
in
just
generic
metadata.
I
was
wondering
if
there
was
if
that
was
done,
just
because
it
was
a
good
way
to
get
it
into
the
metadata
interface
or
if
that
was
the
preferred
method
or
was
there.
You
know
also
considerations
for
changes
to
the
metadata.
O
Well,
that
was
creative
data
modeling
on
my
part,
to
make
it
fit
within
the
generic
metadata
rules.
As
I
was
told,
the
best
way
to
move
this
through
ietf
is
not
to
change
the
structural
amount
of
data,
but
in
fact
I
think
it
fit
pretty
well.
If
you
want
to
go
back
to
that
scheme,
a
few
slides
back,
I
think
it
worked.
A
Yeah, I thought so, and I liked the ASCII drawings too; those were awesome. It is impressive, and I think the whole processing stage issue, of where you apply it (on ingress, going to the origin, coming back, or coming out), is a useful piece that we never considered. I just really wanted to figure out, you know...
O
I
feel
pretty
good
about
it
and
I
think
it's
a
nice
clean
structure
of
organizing
it
with
rules
applied
based
on
the
match
data
and
then
each
rule
is
a
header
or
response
transform.
That's
pretty
straightforward
stuff
for
anybody.
Who's
worked
with
web
server
configurations,
so
I
think
it's
all
right.
O
Excellent,
okay,
cool
all
right,
you'll
move
on
down
to
the
revisions
we're
almost
on
here,
so
draft
status
next
yeah.
So
this
is
basically
a
summary
of
what
changed
from
revision.
Zero
version.
Zero
was
a
very
thin
shell.
Apologies
for
that,
but
now
there's
enough
information
in
here
to
sink
your
teeth
into
there's
a
couple
diagrams.
O
So
each
proposed
new
generic
metadata
object
will
have
an
example
in
there
with
it,
which
is
what
you
want
to
see
and
then
the
metadata
expression
language
portion
was
added
as
well.
Here
I
think
the
next
couple
slides
were
just
some
answers
I
had
put
in
line
of
some
original
questions.
Kevin
opposed
on
revision,
zero-
probably
no
need
to
go
through
those
now
I'll,
post
kevin.
The
best
thing
for
me
to
do
is
to
put
post
all
that
in
the
sva
into
the
cdni
mailing
list.
In
response
to
your
questions,
right.
O
Yeah
and
I'll
do
that
for
the
same
for
your
new
set
of
questions
as
you
develop
them,
or
questions
from
anyone
else
on
this
revision
and
we'll
plan
on
revision,
two
probably
in
a
month
or
two,
like
I
said,
coordinated
with
sva
review
work,
that's
going
on
on
the
sva
version
of
this
document
and
we'll
just
wrap
it
up
on
the
final
slide.
A
I think this draft was big; it definitely had a lot more detail than the one we saw earlier in the summer. Thank you for that. It was dense; I haven't made it all the way through completely yet.
A
I need to give it another read myself, and I know it just came out, so I encourage everyone out there to please go and read the draft. There's a lot of cool stuff in it, and we should possibly divide it up into separate drafts. Some of it is more straightforward, like the cache control policies; there are probably fewer discussion points for those than there are for the actual expression language, right? There's a lot to parse through there.
A
That's
a
pun,
sorry,
but
I
I
think
that
you
know
we
could
probably
first
see
how
folks
reacted
on
the
list.
I'd
like
to
have
give
some
give
folks
some
time
to
read
it
because
it
is
fairly
big
but
and
then
we
can
start
picking
and
choosing
deciding.
You
know
how
we
want
to
pursue
each
other
think
that's
in
there.
D
Good, and that's just my two cents. No, I think that makes sense: it makes it easier to read the relevant information together, and then easier to respond that, yes, this makes sense, let's go with this draft. So I think, Glenn, that's a good idea: if there's an effort to break it down based on, you know, similar metadata types, and have drafts that will stand on their own, I think that may be easier.
O
And I think, also, just for the SVA folks on the call, I mentioned this: rather than going through the pain of maintaining two parallel docs, we may start to cut down the SVA Part 2 doc, the one that this is derived from, and just have the SVA doc start to wrap and reference this one. That may make life a lot easier, but we'll figure that out for ourselves in the SVA.
A
Excellent. Does anyone else have any questions or comments for Glenn? If there's nothing else, that is the end of the agenda items we had for today. It looks like we finished a little bit early, which is great, but we had a lot of great discussions. If no one has any other comments, I think we can close the session. We'll look forward to seeing everybody again at the next IETF in March. Please go read all the drafts, and please post your comments to the list; we have follow-ups on a number of these drafts that we want to move forward on, especially the URI signing one. Otherwise, thank you very much for attending. Sanjay, any last thoughts?