From YouTube: IETF106-WPACK-20191120-1520
Description
WPACK meeting session at IETF106
2019/11/20 1520
https://datatracker.ietf.org/meeting/106/proceedings/
B
Alright, everyone, welcome to the Web Packaging BoF here at IETF 106. I appreciate everyone showing up. So, did we start the blue sheets? We did start the blue sheets. That's excellent, and we already have note takers; thank you to Lucas Pardue and Martin Duke. That's really great, and Eric Kinnear is going to do Jabber relay, so we don't have to beg everyone for those jobs. So you should thank these fine gentlemen, if you see them in the hallway, for sparing you the awkward question.
B
Ok, so why are we here? I mean, excellent existential questions are important. It's a BoF, after all, right. Sean and I are here as your guides: we are neither proponents nor opponents of this proposed working group, and this is a potentially working-group-forming BoF. The proponents have presented a proposed charter and will be sharing their ideas and their technology about the items in their scope with you directly, as they've already done on the mailing list, and thank you for the discussion that's happened on the mailing list.
B
A high-level purpose of this in-person get-together, our little confab, is to determine whether the BoF participants recommend the formation of a working group based on the web packaging problem we'll discuss today, and, if they do, whether they have any recommendations regarding a potential charter for said working group. This will be sort of a long and twisty road to get there, but everything we do is in service of these two questions for the next 90 minutes.
B
All right, here are some of the twists. We'll first be discussing several aspects of the problem space; you see, there are four really short five-minute presentations. I'm gonna ask that we really hold questions until the end of those, when we have a 10-minute period set up for discussion of that; I'm concerned that otherwise there might be a fair bit of, you know, redundancy in the questions, and hopefully that'll help avoid some of that afterwards. Jeffrey will talk about some of the technology that's being proposed, followed by discussion time specifically about that, then.
B
We'll move on to the proposed charter and gather the group's insights about that proposal. It's been circulated on the mailing list and it's in the meeting materials, if you'd like to take a look at it now; we'll also, of course, put it up on the screen when we get to this part. It envisions five specific deliverables and proposes a number of specific internet drafts as starting points for a potential working group. If the BoF would like to try and refine that in real time, we can try and do the edit-a-proposal thing on the screen.
B
So my advice is: get in line early, and keep your presentations on schedule. Additionally, we're gonna try and make sure that, in particular, the voices of the proponents are heard whenever there's any lack of clarity, because the most important thing here is that you understand their proposal, so we can decide the key questions that we'll ask at the end. And, of course, you know, lastly, we will finish with some polls about forming a potential working group.
D
All right, thanks. Hello everyone, my name is Matt, and I have just a few brief minutes to talk to you about our use case for web packaging in remote edge networks, or community networks. For some background, I am a researcher at the University of Washington, and I'm here with some of my colleagues. Our research group focuses on internet access in rural, remote and developing regions; most typically, we focus on community networks in these regions. So what is a community network?
D
A community network is a network that is owned and operated by members of the community, serving members of the local community. Community networks are generally small scale; they're typically owned collectively or as a small business, and sometimes even run informally. There are two key points to take away from this. The first point is that these internet providers are typically too small to have business relationships with the typical large content distribution networks or content platforms.
D
The second key point is that these internet providers have local infrastructure that's close to their end users, because they're coming from the communities that they're serving. So I want to talk a little bit about the characteristics of these networks at a technical level. One is that, as you'll remember, they're extremely remote, and part of what comes with being extremely remote, in many cases, is very constrained connectivity.
D
So in these networks, what does the traffic look like? We note that in a community, oftentimes the media that people want to share and view is driven by social connections, and these are tightly knit local communities. But we observe that what the network is actually seeing is mostly HTTPS sessions to the large content platforms that you would expect in 2019, so this isn't super surprising.
D
So why are we interested in WPACK? Our goal is to optimize local traffic and keep it off this constrained backhaul connection whenever possible, and this is really complicated to do in the current regime. These networks, again, are too small to locally deploy hardware from the major CDNs, and there are some sort of hacky solutions, but they're really unsatisfying and they're all bad in different ways, right.
D
So, three concrete use cases to leave you with from the community networks that we see. The first is that in these networks many users disable security updates for their devices, because the updates are large and bandwidth is precious; you can imagine a network administrator's frustration when the same binary has to be downloaded hundreds of times over a backhaul bottleneck. All right, so we see WPACK as a way to cache these updates. The second is local media.
D
Again, I talked about how media sharing is often defined by these local social networks, and we see demand for the same media across many users. I won't go into this too much, because I think there's a presentation about this in more detail later. The last case I also want to emphasize is breaking out local applications from the content that they're delivering. So imagine applications like Trello or Google Docs that are very useful, but that are relatively heavy.
D
They don't operate well in these remote communities, and the members of these communities are cut off from these important parts of the modern Internet. So we argue that local caching, even independent of the content, could benefit the applications themselves. All right, so our major asks are that we, as a group, consider these small ISPs and drive towards an open standard; we believe that this is probably the best way forward for us right now.
B
So, Brian is going to present remotely over Meetecho. All right, so Brian, you can just put yourself in the Meetecho queue. Or, Brian, I'm sorry, you can't hear me. There you go, sorry about that; I will admit you and get your slides up. Oh, it's easier said than done. Can you hear me? Yes. Can you see the slides? I can see the slides. Awesome. So if you want me to advance the slides, just say 'next'. Okay.
E
So this is gonna be really simple, and I'm gonna be saying 'next' a lot. Next. So, web technologies are increasingly used for embedded systems: this is everything from digital signage, to cable boxes, to smart appliances, televisions. There are already many hundreds of millions of devices using web browsers for this purpose. And go to next.
E
But you almost always want to update over the network when it's available, and it wants to be part of the web, just like everything else. So I'd like to ship you an e-reader with some books already pre-installed, or a cookbook with some recipes. These ultimately exist with real URLs at real endpoints on the real web, and we would like to update them as content changes, and things like that.
E
So this is a problem, because the web has always assumed your first interaction with a site would come from fetching it from a domain across the internet, and increasingly, many features that are very useful for the sorts of things you get on embedded devices are cut off, because they're designed around concepts that really only work based on the network model. Next.
E
So the question is: how can we bridge those two worlds? Next. Today, the embedded use cases wind up involving a lot of complexity to bridge this gap, but what we really want is to simply bootstrap this content into a service worker as standard-fare offline content, and then let all the web technologies work just as fluidly as they would otherwise.
E
What we need is a uniform way to configure a specific device, or a browser at startup, to say: trust this package as this origin. It's not important that other browsers and devices also share the same level of trust, because with an embedded device we're giving you an entire operating system, and we can reasonably say what you trust. I believe that's the end of my slides. Oh, it's not. So, if we solve this, it means no local web servers are necessary, and there's no discrepancy of 'is it the web, or isn't it the web?'
F
Okay, hello, I'm Devin from Google. So, what led to AMP in the first place? This is a bit of prehistory for me, because I joined AMP just last year to work on signed exchanges, but as far as I can tell, people saw the mobile web was suffering, and the motivations to heal that come from a variety of places.
F
There are many possible avenues for improvement of the mobile web, and AMP is one of many that Google has pursued over the years. As for why AMP in particular: as far as I can tell, there were kind of two main goals here. One is to enable good UX at scale through opinionated DX, and the other is to enable static verification of a variety of properties.
F
A consequence of this is bad URLs, which leads to bad branding, wasted screen real estate, having to trust caches, and difficulty or impossibility of using a variety of web APIs, because the bad origin means that the AMP cache has access to state it doesn't want, and pages have to do extra work to send that back to the publisher, for instance using CORS. Signed exchanges fix all that, while still allowing AMP to guarantee the sort of UX, privacy and security guarantees that were set by the initial unsigned AMP deployment.
F
This is my attempt to sort of distill our minimal needs. One is that the content should provide the URL and origin in a way that the browser can trust for display to the user, as well as for those web APIs. Another is that the publisher should be able to specify a subset of the page, up to and including the whole page, in a way that can render concurrently with additional network requests for resources.
F
Third, the user experience should be relatively uneventful. And lastly, because this is kind of based on the signed exchange spelling, you know, I may have over- or under-specified in ways that we'll discover as we explore the design space in the future; there's room for Google's deployment of the current spec, and we're working on all of these. So: caches cannot distribute modified versions of publisher-signed exchanges to users, but they can serve previously fetched signed exchanges within the expires window.
F
This is mostly an improvement on the unsigned AMP model. The mitigations here are not full solutions, because they are subject to aggregator approval, as well as possible interference from high-level actors that can selectively block traffic. AMP signed exchanges need to vary by distributor; this is the AMP-Cache-Transform header that some people have heard of.
F
This is mostly due to the fact that these were deployed before the multi-exchange solutions were designed and built; our non-AMP signed exchange solution shouldn't have this problem, and hopefully we can fix this for AMP as well. Lastly, various companies, including Google, require AMP for certain experiences. This should be generalizable to a much wider subset of the web, and, outside of this BoF, Google is working on evolving various standards to make that possible, and welcomes participation in that space. Thank you.
B
G
Hello, I'm Vista, working on Chrome, and I have been implementing bundles. Next, please. So I'd like to talk about use cases for sharing unsigned bundles. Unsigned bundles are basically content without signatures: a bundle can contain multiple resources and can represent a single page or multiple pages, and users can just open the bundle in the browser. But loading a bundle is not the same as loading the original site, because it doesn't have the signatures.
G
It is not possible to verify who published the content, or that the content is the same as the original. Next, please. And here's a potential use case: imagine that browsers provide a feature like 'save as a bundle', where users can explicitly download the current page as a bundle. When a user asks the browser to share a page with a friend, the browser can automatically generate a bundle for the page; or a site can also provide an unsigned bundle itself, for better UX, and provide
G
a link that the browser can discover and download instead. Next. A feature like this could quickly enable use cases in limited-connectivity situations: on an airplane, your friend might have downloaded a nice web game as a bundle, and, because it's a bundle, it just works offline; and then, if you want to try it on your own device, you could simply get the bundle from your friend, for example using peer-to-peer transfer.
G
If you enable a particular flag, there is a prototype of unsigned bundles, and a web game you can actually try, if you're interested, and share with friends. Next. One thing I didn't expressly mention: of course, because a bundle can contain multiple resources, and those resources can be loaded from the bundle itself, users can use it for offline browsing; and because the resources are just there, it is very fast, without the latency of a network connection. So, offline or nearly offline,
G
flaky network situations are still very common in my experience. This was actually also mentioned in previous BoFs, for example in-flight or in some transit systems: you have probably experienced Wi-Fi that is available but very flaky, or, with some mobile carriers, very heavy throttling once you exceed your quota. Once that happens,
G
getting anything from the server, from the network, becomes very, very painful, and this can actually be very commonly observed in Japan. Next, please. Also, for example, the Alliance for Affordable Internet publishes an internet affordability report every year, and in the latest report billions of people are still suffering from unaffordable connectivity; among those people, public Wi-Fi and peer-to-peer transfer are very, very common experiences. If there are open-source tools available, application developers can easily create their bundles for distribution like this.
B
Thank you so much. Okay, so thank you to all the presenters for summarizing the use cases and problems that they anticipate would be solved by the efforts in this working group. We have 10 minutes set aside now to talk about those use cases, keeping in mind that afterwards, you know, Jeffrey will make a presentation about the proposed approaches and technologies he has for the potential working group. If any of the speakers would like to come forward, Jeffrey as well, to help facilitate this discussion, that's fine. Points of clarification: Mark, go, yes.
H
Mark Nottingham. I was just a little surprised: so, it's being asserted that the use cases we just heard about are all in scope for the proposed work and will be delivered by it.
H
I see a very clear path from what has been proposed to the AMP use case; that's fine. I don't see a clear path, or a natural reason why this work is the right solution, for the other use cases. You know, for the unsigned bundling:
H
why is a standard format for this necessary? Browsers already do this to some degree, but I'd expect to hear about interoperability challenges in that area. For the remote networks and the poorly connected networks: this is a problem we've talked about a lot in the HTTP working group. We've had proposals in this space in the past, and they didn't go forward, for a lot of different reasons. But part of that is that, when you do that, you lose confidentiality, and so it makes me a little nervous to say we're
H
gonna address these use cases when we don't have an answer to the confidentiality problem, and we don't have a trusted mechanism for finding those caches. There's a lot of undone work there. So I'd almost want to say: don't include that in scope for the charter; but maybe, if we find a path towards that, then do it later.
H
J
That actually turns out to be something that Daniel Kahn Gillmor raised as an issue with me and Brian some time ago, and we actually went out and did an analysis. I think you're right: it is much more complicated. As it turns out, the first use case, where you're basically replacing the connection to a distributor that replaces the connection to an origin, ends up looking a lot, in our mental models, like a connection to a CDN replacing the connection to an origin. In peer-to-peer networks,
J
it's somewhat different, because you have two different ways of trying to manage the confidentiality property, and I'll give you the simplest way of thinking about it. In a flooding network, this isn't a problem, because every node sees every question and every response, and so the size of the flooding network turns out to be the equivalent of the anonymity set size of the group you have. Now,
J
there's a mechanism you can use for this that basically creates pseudo-clients, and says that the client that is sending the request is claiming to be on path to the client which is making the request, rather than being the requester itself. The result of that is that, as long as you're able to manufacture those kinds of pseudo-client mechanisms (and this is actually something bundles allow you to do very easily, but some other peer-to-peer technologies, like mesh, don't allow you to do easily), you can get there. Now,
J
is this worked out enough to put into a standard right now? Absolutely not. But I think there are paths along which we can take this analysis, and there's a reasonable sense in which one or the other would allow us to move forward with the peer-to-peer cases. So that's, frankly, the one that interested me.
H
We're going to cut the queue after Brian. Really quick, okay: that sounds exciting, but I'd like to see the proposal, so we can analyze it before we consider it in scope for this work; and especially not only the security analysis, but the analysis of whether this is the right framework to fit that approach within. Right. So.
I
So, in particular, the work on the peer-to-peer protocol is not in the charter; it's only the thing that you transfer. And the way the community networks, I believe, would use it is that they would have a central node that users explicitly go to to fetch the applications, so the privacy is explicit.
B
K
Phillip Hallam-Baker. I'm really puzzled by the set of use cases because, as I was watching them, I kept seeing: well, of course you want encryption. I just can't understand why you're proposing a packaging format without encryption in the first place. I can't see why, with those use cases, you don't want encryption, because if you're caching any data in the network, cached data should be encrypted.
K
I don't think that today we should be talking about deploying a new bunch of web technology that assumes that the data sits, even on a web server, unencrypted; and now you're wanting to throw this data all over, into caches everywhere, and not think about encryption. I think that encryption should be a requirement.
M
There's the original publisher, there's the end user, and then there's the intermediary; and right now all these technologies are coming from an intermediary's perspective. I feel like we could get a very different result if we start with community networks and their current reality, so I just, I guess, hope that that's the focus.
N
Daniel Kahn Gillmor, ACLU. So, I continue to be nervous about this. I appreciate Ted's comment in response to Mark Nottingham, but I wanted to point out a couple of weird things that happened in the exchange just between Mark and Ted. Mark came up here and said we really need to think about confidentiality, and Ted came up and said: well, we did this analysis thinking about, basically, metadata privacy. Those two are not the same thing.
N
So I'm just going to repeat that particular concern, because I don't think those questions have been answered in a robust way. I don't even know how to answer them, because the web is so big; so I'm just trying to put those concerns back on the table. I also want to add one quick thing, which is: when we're thinking about this stuff,
J
I appreciate the comment. I was actually gonna ask you a clarifying question; okay, so, quickly: you got to the end before I did, and that is, when you're asking, in particular, what can the intermediary see of what the user does, can you unpack that slightly?
N
Sure. In the transport model that we know and love on the web, once the user has navigated away from the referrer (well, I suppose there is link-tracking JavaScript and things like that), the referrer basically doesn't get to see anything else, in terms of what the user is concretely doing when they interact with the origin. In the model where the intermediary just keeps themselves there, and acts as a proxy for the content, they see all of the transactions between the client and the origin. So,
N
right, or bundles, and they may be able to extract piecemeal, right. So if the only thing they get is one bundle from the origin, then presumably they ship the entire bundle. That will probably make the small networks sad, because, you know, the bundle is large and they're now forced to pay that cost, even if the user is only looking at one part of it.
J
It depends how the origin packs it: if the bundle is split up into pieces, the user requests the different bundles separately.
It's actually fairly large as a bundle of information, but the peer-to-peer handing of it around is very, very cheap; so the generation and the backhaul of it is the expensive part, both from a true-cost perspective and, frankly, in the past, a political-risk perspective, right. So if you're handing that off as a whole, in fact, the intermediary doesn't know anything about what you're interested in within that bundle, right.
J
It's like a DNS query provider: you can hide your query stream in everybody else's query stream if you go to a big one, but now you're talking to somebody who's a big query provider. You have the same sort of thing when you're talking about the size of the object you, as a content creator, create; and so one of the things that content creators have to think about is what they're putting together in a bundle.
J
Is it enough to give the user experience that their likely users are going to want to have, in the context that they want? Frankly, I think AMP is the simple version of this set of considerations, right; it's much harder in the peer-to-peer cases. But I think, again, it's both worth doing, and should be the motivating use case for some of the security and privacy considerations that we end up writing. Okay, yeah.
N
The person distributing the bundle is the origin, and they are making decisions, in terms of how they pack up the bundle, which will have an impact on the privacy characteristics for the client with respect to the intermediary; so the incentives, in terms of who's making these decisions and who's getting affected by them, are pretty weird.
I
I'll start this by saying all of this is preliminary. We expect this to change, possibly in significant ways, as the working group does its work, and there's a piece of the design that's intended to facilitate that. This is in two sections: one is the package, or bundle, format; the other is talking about origin trust.
I
So, a package is a collection of URLs, possibly from multiple origins, with content negotiation information. This example has one URL that's negotiated on language, one that's negotiated on content type, and one that's not negotiated and comes from a different origin than the other two. The overall format, in the current design, is a subset of CBOR: without tags, without complicated map keys, and without floating point.
I
There's a chunk at the beginning of the format that we hope to be invariant, which has the version number and a fallback URL, so that a client that doesn't understand the version it gets can just redirect to that URL. There's a list of sections, so that the format can be used in a random-access way, and there's an index of those sections, with pointers directly to where they start, so that it can also be read in a streaming way.
I
We put that index at the beginning of the format, unlike zip, which puts it at the end. This sacrifices the ability to append resources, and the ability to write the format in a streaming way. And then, at the very end, we stick the total length of the format, so that we can stuff it into self-extracting executables.
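A rough sketch of that layout, with the section index up front and the total length in a trailer. This is a toy encoding for illustration only (the real draft uses CBOR, and the magic string and field widths here are invented):

```python
import struct

MAGIC = b"TOYBUNDLE1"  # hypothetical magic/version marker; the real format differs

def write_bundle(fallback_url: str, sections: dict) -> bytes:
    """Toy serializer: [magic][fallback URL][section index][section bodies][total length]."""
    url = fallback_url.encode()
    header = MAGIC + struct.pack(">H", len(url)) + url
    # Build the index first, so readers can seek straight to any section.
    index = struct.pack(">H", len(sections))
    index_size = 2 + sum(2 + len(name) + 8 for name in sections)
    offset = len(header) + index_size
    bodies = b""
    for name, body in sections.items():
        index += struct.pack(">H", len(name)) + name + struct.pack(">II", offset, len(body))
        bodies += body
        offset += len(body)
    out = header + index + bodies
    # Trailing total length lets a self-extracting executable find the bundle's start.
    return out + struct.pack(">Q", len(out) + 8)

def read_section(bundle: bytes, wanted: bytes) -> bytes:
    """Random access: follow the front index and read only the wanted section."""
    pos = len(MAGIC)
    (url_len,) = struct.unpack_from(">H", bundle, pos); pos += 2 + url_len
    (count,) = struct.unpack_from(">H", bundle, pos); pos += 2
    for _ in range(count):
        (name_len,) = struct.unpack_from(">H", bundle, pos); pos += 2
        name = bundle[pos:pos + name_len]; pos += name_len
        off, size = struct.unpack_from(">II", bundle, pos); pos += 8
        if name == wanted:
            return bundle[off:off + size]
    raise KeyError(wanted)
```

Because the index is at the front, a reader can stream the file and dispatch sections as they arrive, at the cost of not being able to append sections later, which is the trade-off described above.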
I
The responses section is just a big chunk of HTTP responses. The index points at an individual response, so you never have to parse the whole section at once; you just parse the thing you want to look up. A response is represented as the pair of header and body; the header is a map from names to values. We don't represent trailers right now, and we assume that header fields have been combined, which means that we can't represent Set-Cookie, or multiple Set-Cookies.
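The Set-Cookie caveat follows from how HTTP header combining works: repeated fields are joined with commas, but Set-Cookie values can themselves contain commas (in an Expires date, for example), so the combined form can't be split back apart. A small illustration:

```python
# Combining repeated header fields in the usual HTTP way: join values with ", ".
def combine(values):
    return ", ".join(values)

accept = combine(["text/html", "application/xml"])
# Splitting on "," recovers the original Accept values exactly.
assert [v.strip() for v in accept.split(",")] == ["text/html", "application/xml"]

cookies = combine([
    "a=1; Expires=Wed, 21 Oct 2015 07:28:00 GMT",  # comma inside the value itself
    "b=2",
])
# Naive splitting now yields 3 pieces instead of the original 2 Set-Cookie fields,
# which is why a name-to-single-value header map can't carry Set-Cookie.
assert len(cookies.split(",")) == 3
```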
I
As far as I know, that's the only header that breaks in this model, and we could change it if people think it's important to bundle Set-Cookie. The manifest gives you a pointer, a URL, to sort of metadata about the package, and that URL is expected to be found within the package. For a web app, it's probably the app manifest defined by the W3C; for a book, it might be the EPUB index or manifest format.
I
It's not specified what it needs to be; that'll be on the kind of W3C side of the specifications. There's a list of critical sections: the section names are an extensibility point, and so, if we define a new section and it's important that clients read it or else not use the package, then it gets listed in critical sections. And there are a couple of other ways to represent this,
I
if we think this isn't the right detailed way. And finally, there's a list of signatures for the stuff in the bundle. The signatures section has a list of authorities; an authority is an X.509 certificate, but it could be a raw public key, and we could define other things that are authorities. Raw public keys are useful for signature-based subresource integrity,
I
where a webpage just says: here's the public key I expect to have signed my resource. An X.509 certificate could represent a domain owner, or something else: for instance, a transparency log asserting that a resource appeared in it, or, kind of semantically, a publisher of books that vouches for the content but doesn't say that it lives at a particular origin. And the list needs to include any certificates used to build a chain to a trusted root, because these are meant to be used offline.
I
Each signature picks out a signing authority; the type of the public key determines the signature algorithm. It defines when in time the signature is valid, start and end. It points at a URL where you can update the signature, so once it expires you can get a new version, assuming the content hasn't changed. And it covers a particular subset of resources.
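A hedged sketch of what a client-side check of those signature fields might look like. The field names here are illustrative, not the draft's actual CBOR keys, and this omits the cryptographic verification itself:

```python
from dataclasses import dataclass
import time

@dataclass
class Signature:            # illustrative field names, not the draft's exact keys
    authority_index: int    # which entry in the authorities list signed this
    valid_from: int         # start of validity window (Unix seconds)
    valid_until: int        # end of validity window
    validity_url: str       # where to fetch a refreshed signature after expiry
    covered_urls: list      # subset of bundle URLs this signature vouches for

def check_signature(sig: Signature, url: str, now: int) -> bool:
    """A resource is only trusted if the window is current and the URL is covered."""
    in_window = sig.valid_from <= now <= sig.valid_until
    return in_window and url in sig.covered_urls

sig = Signature(
    authority_index=0,
    valid_from=1_000,
    valid_until=2_000,
    validity_url="https://publisher.example/resign",  # hypothetical endpoint
    covered_urls=["https://publisher.example/app.js"],
)
assert check_signature(sig, "https://publisher.example/app.js", now=1_500)
# Expired: the client would refetch a fresh signature from validity_url.
assert not check_signature(sig, "https://publisher.example/app.js", now=3_000)
# Not in this signature's covered subset: some other signature must vouch for it.
assert not check_signature(sig, "https://other.example/x", now=1_500)
```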
I
So you can have resources from multiple origins, which need different signers, and so you pick out the particular subset that this signature is vouching for. It signs the hashes of the resources; you've seen Martin Thomson's MICE algorithm again, to help with streaming.
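MICE (Merkle Integrity Content Encoding, draft-thomson-http-mice) hashes a payload in records from the end, chaining each record with the proof of everything after it, so a consumer can verify records as they stream in. A sketch from my reading of the draft; verify the type bytes and record framing against the draft itself before relying on this:

```python
import hashlib

def mice_root(payload: bytes, record_size: int = 4096) -> bytes:
    """Compute a top-level MICE-style digest by hashing records back to front."""
    records = [payload[i:i + record_size]
               for i in range(0, len(payload), record_size)] or [b""]
    # Final record is hashed with a 0x00 type byte.
    proof = hashlib.sha256(records[-1] + b"\x00").digest()
    # Each earlier record is chained with the proof of everything after it (0x01).
    for rec in reversed(records[:-1]):
        proof = hashlib.sha256(rec + proof + b"\x01").digest()
    return proof
```

The top digest commits to the whole payload, but a verifier only ever needs the current record plus the next proof, which is why it suits streaming delivery.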
We don't have a design yet for countersignatures, but there are a couple of use cases that seem like they need them, so that's something that I would suggest the working group think about. When you have a package that has untrusted stuff in it, so stuff that's not signed,
I
each resource has a package URL and an origin that it's claiming. We need that to be cross-origin with stuff that is actually from the origin that's claimed; otherwise you get trivial cross-site scripting. Resources with a different claimed origin in the same package need to be cross-origin, so that storage works: if you bundle two different websites into one package, like El Paquete Semanal does in Cuba, you don't want their storage to stomp on each other.
I
Things on the server that served the package should be cross-origin with the package; this supports the web archive case, which bundles a bunch of websites that it doesn't trust and doesn't want to be able to mess with the main origin. And then we think that you should be able to package the same resource, or two versions of a resource, into two packages, and have them not stomp on each other's storage.
I
So, origin trust is the controversial part of all this, as far as I can tell. Fundamentally, you get origin trust by signing the content with the certificates that the origin has been issued, the same way as a server's TLS certificate. All of the complicated parts come from trying to prevent a group of dangers that come from that signing. Some of those dangers are intrinsic to any object-based security model;
I
some we can avoid by designing things carefully. For the intrinsic dangers, we can kind of make them less dangerous; we can't completely prevent them, and so we make servers opt into the danger. They do that by having the certificate carry a particular X.509 extension, and then CAs only grant that if DNS says so. We also limit the length of time that the signature can be valid, so that if someone signs a vulnerability, and thinks that they don't have a vulnerability, they still can't shoot themselves in the foot.
I
You can cause problems if you sign personalized data: if you sign someone's bank statement, you can allow them to attack other people. We have some advice in the specifications about how servers can prevent themselves from doing that in a kind of systematic way. And then clients also do not allow stateful headers, so you can't do session-fixation stuff; that comes from some comments that Martin Thomson sent.
I
It's possible, if you just have a bundle of signed things, that an attacker could mismatch versions. That's why the bundles have signatures that cover multiple resources at once. If an attacker removes one resource from the signed group, requests for it from the stuff in the bundle will fail; they won't go to the network. But if you try to fetch something that's not mentioned at all, it will go to the network, which helps a couple of use cases.
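A rough sketch of that fetch behavior, with invented names throughout: resources the signature covers must come from the bundle and fail closed if missing, while URLs the signature never mentioned may still fall through to the network.

```python
def fetch(url, signed_index, bundle_contents, network_fetch):
    """signed_index: set of URLs covered by the bundle's signature.
    bundle_contents: dict of URL -> response body actually present
    in the bundle. network_fetch: ordinary network fetcher."""
    if url in signed_index:
        if url in bundle_contents:
            return bundle_contents[url]
        # An attacker stripped a signed resource: fail rather than
        # silently falling back to the network.
        raise LookupError(url + " is signed but missing from the bundle")
    # Unmentioned URLs may go to the network, which keeps partial
    # bundles useful for some use cases.
    return network_fetch(url)
```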
I
So I've talked a bunch about packages; you heard stuff about signed exchanges earlier. Signed exchanges are basically an optimized format for a one-resource bundle that's signed. We're not sure if we will actually need that optimization in the long run; if we don't, all of the tools that generate signed exchanges will just migrate to generating bundles, and that's it.
I
It's not currently in the working group charter, but I'm not opposed to saying that the working group should think about that. I think it's a secondary consideration. We should possibly think about it in designing the primary format, to make it possible, but I don't think we need to have the patch format fully figured out before we get use out of bundles.
L
I think that it's a question. So, stepping back, the question I'm trying to ask is: if we formed a working group, would this be in scope for that working group? Because it might be an important tool for solving some of the use cases that we had presentations about at the beginning of the meeting. Good, thanks.
O
All these problems were discussed there, and we went through a bunch more discussion about the use cases, and I just wanted to make sure that everyone understood this. I think the unsigned-bundles aspect of this work is interesting. Tying it so completely to HTTP may be a decision that the working group could weigh in on.
O
It seems a little bit odd in some ways, the way that it's structured at the moment. I sort of understand how it got there, and the form of it, but the fact that you don't have request header fields is curious, and it requires that you understand the Variants thing very much in depth before you can satisfy yourself that this is an okay thing to do. Yeah.
O
But most of my reservations about this work, and I think many others', are about this notion of taking content that you have acquired from any distribution channel (and we identified one problem with the use of those distribution channels) and then substituting it in for content that would be considered equivalent to, and can share the same state as, content that you would acquire from a direct connection to the origin; I haven't seen that addressed, to echo DKG's points earlier.
P
The interesting question here is, again, this shift from transport to object security. I see in the encoding parts of this presentation a lot of evidence of, essentially: hey, we have this, we have some implementation experience with it, we took it to a larger forum, we got feedback, and you're essentially playing a sort of vulnerability golf with the format. DKG and Ted and I sat down in Montreal after this and tried to start playing that game with the transport-versus-object-security stuff, and I thought we got a lot farther than I expected us to get; the space kind of got bigger, but it didn't get exponentially bigger in a way that was terrifying, at least from my standpoint. DKG is kind of making a face over there that makes me think that maybe we were having a slightly different conversation, I think.
P
The approach that was applied to the encoding and the interactions is probably one that we should apply to the transport-versus-object-security thing. We can continue having these discussions, because I don't think that the solution to this should be that Ted, DKG and I sit down and have a beer and talk about it; we need a bigger ring for that. And I know that we're not supposed to be talking about the charter part yet, so I'm gonna shut up.
K
Phillip Hallam-Baker: yeah, I do have some very firm views about encoding, because I have an alternative proposal here, but that aside, I think one of the things we need to do here is this: when you choose use cases, it's not just about choosing the use cases that you want to solve; it's about finding the paradigmatic use case that encapsulates as much of the problem as possible. And what you're doing here is moving from transport, which HTTP is, to the object.
K
What you're really doing here is that this package is really standing in for the web server that you're not talking to, and so you're trying to map it onto HTTP, and you know I'm a big fan of HTTP; I did some work on it myself at one point. I think what you need to do is think in terms of having a format that would allow a website manager to upload their content into the cloud and serve it from just that format.
I
You made the really interesting point that you can think of this package format as acting as the server; I think it actually acts as the cache, which is why Variants happens to work really well for it. And if you think of the package as a cache, it actually probably does work for CDN upload. But the paradigmatic use case that I've been coming from in designing the whole thing is in fact the community networking use case.
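The cache analogy can be made concrete with a toy variant-selection routine. This is a sketch of the idea behind the HTTP Variants mechanism, not its actual algorithm, and all names here are invented: the bundle stores several responses for one URL, each tagged with a variant key, and the client picks one the way a cache would.

```python
def select_variant(preferred_languages, stored_responses):
    """Pick a response from a bundle that stores several variants of
    one URL. stored_responses is a list of (variant_key, body) pairs,
    e.g. [("en", ...), ("fr", ...)]; preferred_languages is the
    client's preference order."""
    for lang in preferred_languages:
        for key, body in stored_responses:
            if key == lang:
                return body
    # Nothing matched: fall back to the first stored variant.
    return stored_responses[0][1]
```

Because selection happens against stored responses rather than against a live server, the package behaves like a cache rather than like the origin, which is the distinction being drawn above.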
H
Mark Nottingham: since Martin brought it up, I helped instigate the ESCAPE workshop, and from my perspective that was largely because we had heard concerns from parts of the community about the power dynamics around, you know, whether this would enable certain imbalances. I think my takeaway from that was that, while those are concerning dynamics, it's not something that we should really concern ourselves with here, because it's an effect at at least a tertiary distance; it's not a direct effect of this proposal.
H
To my mind, those criticisms aren't valid for this proposal; it's not something where we should say, well, we can't do that work here. If they were a direct effect, of course, that's a different story. So I'm not against this work going forward, but I do have concerns. I'm gonna piss off Patrick: can we go back to the slides? I know you've got the charter up now. Which slides? Jeffrey's? Yes, and go back a couple. Not quite that fast; ease forward a few, please. Keep on going.
H
You know, we have an analogous situation in HTTP, where we have a lot of people who want to do shared compression for state across different representations. It's very exciting for a lot of people, but we've consistently told them no, because the security properties are too slippery and it's a footgun. We've created a requirement that the community interested in that produce a record, you know, a quite complex document, around the security trade-offs there; we need to evaluate that, and then we'll think about starting some work.
H
Where you do a huge chunky download, you don't get good cache efficiency and you don't get good integration into the stack. So, you know, people say this is an alternative to HTTP, but it also needs to work with HTTP; you say it's part of the caching model, so it needs to work with the HTTP caching model, and we need to have those discussions too.
Q
Last word: Chris Lemmon says on Jabber that this work seems designed directly for the Google AMP presentation; the other use cases appear to be trying to pick up on a solution that seemed to solve 80% of their use case. The question is really whether we want to take on that extra 20% of work. He thinks we should start with the basic work, and if we discover that we want to and can expand the scope, we would consider rechartering at that time.
J
So, simply by having these other use cases in mind, not only do we get to serve these important community network applications; if we get them right, it's going to make the building block better, even for the AMP use case. So I strongly believe that the charter should include more than one use case here, and I'd prefer to include the AMP one and the community network one as the very basics.
C
Okay, so background: web pages sometimes group multiple sub-resources into a single combined resource, to get cross-resource compression and to avoid the overhead of HTTP/1 requests. The W3C TAG proposed a web packaging format, based on multipart, to give web browsers visibility into the structure of these combined resources; that has not seen deployment. HTTP/2 did not make these bundles unnecessary, as was once expected. These bundles are still needed in countries with expensive and/or unreliable mobile data, where there's an established practice of sharing content in native applications.
O
There's an implicit assumption here that the web application is identified by an HTTP URI or somesuch, I think, and so it might need a little bit of expansion if you want to keep that one. I just realized that now; it's like: oh, hang on a second, web applications can come from other places as well.
T
Mike Bishop: there's a statement here that a previous attempt to solve this problem was not deployed, and we don't go into why it was not adopted or how we intend to do better. That needs to at least be understood; maybe research on that is part of the working group's charter, but I think I would rather have an understanding of that before we go.
I
Okay. So, answering Martin: browsers require a secure context for a lot of web APIs now, and you don't get that with files. And replying to Mark: there's a program called webpack, which we're trying not to collide names with, and it is super heavily used, sometimes too much; if you listen to Alex Russell, bundle sizes should be smaller. But we do not know how to send the individual resources efficiently; you have to bundle them to get reasonable performance.
C
Right, can I sit on that? We did; Patrick took notes on the slides, and we can try to resolve these on the list. So, WPACK: the WPACK working group will develop a specification for a web packaging format that efficiently bundles multiple HTTP resources. It will also specify a way to optionally sign resources such that a user agent can trust that they came from their claimed web origins. Key goals for WPACK are efficient storage across a range of resource combinations.
C
The next one is being extensible, including to avoid cryptography that becomes obsolete, and security and privacy properties of using bundles as close as practical to TLS 1.3 transport of the same resources. Where properties do change, the group will document exactly what changed and how content authors can compensate.
P
Rimmel, Google: I understand where this came from, and I understand that a lot of the anxiety around this is about that last part right there. I would like anyone affected by any change in the security model to be able to compensate, and I'd like us to consider all of them. So either I can suggest text, or we can look at later expanding it or contracting it, but don't leave it right there.
J
Ted Hardie: I think "the security and privacy properties of using bundles", after Daniel Kahn Gillmor's previous discussion, is probably not granular enough. We need to have something here that talks about both the retrieval of the bundles and the confidentiality properties of using the bundle-derived resources, and since I see him behind me in line, I'll let him finish this, basically.
T
Mike Bishop: I want to revisit one of the notes back further up; it was very brief and I think it misses the point that was made. The suggestion was not to drop the third example; I interpreted it as: drop that particular embodiment and just refer to the use case.
N
In addition to maybe being too vague, it seems basically impossible to me: the TLS 1.3 transport is going away if we do this, right? The confidentiality and privacy properties that you get from TLS 1.3 are simply not present when someone else is doing the delivering of the data, and so the idea that we're trying to keep it as close as practical just sounds like a flight of fancy to me, and I can't imagine how we would do it.
M
Nodal, ARTICLE 19: so, we keep referring to content authors, and I don't think that's what you mean in all cases; actually, the better word would be publishers. We're talking about online versions of newspapers and other places that are, for example, no longer going to get their own ad revenue because of something like this, and that would probably be a little bit more accurate. And then I think there also might need to be a little bit of thinking around the goals for that third stakeholder.
M
If you will: how this will affect those original content publishers. I have another point. I think the last one, around centralization and power imbalances, maybe also doesn't completely encapsulate how this will look for a content publisher, because I think it is also about monetization, and "power" and "centralization" don't quite capture that explicitly. That would be my suggestion. Thank you.
O
Gonna cut the queue after this one. Okay, so one of the things that's missing here, I think, is a very clear articulation of the constraints under which people choosing to use this would have to operate in order to gain the guarantees being promised. I kind of agree with DKG here that, because we're making such a fundamental change to the way this operates, setting the bar here is unrealistic.
O
What we're doing here is setting a new bar, and in order to meet that new bar, you have to meet an entirely new set of requirements. That goes to Mark's question about how you operationalize this thing; it goes to the question that Jeffrey raised in his discussion about what sort of confidentiality guarantees we need to expect; and it goes to things like the personalization issues.
O
You can't put personalized information in one of those things and get anywhere near the guarantees that we're talking about here, so I'd like to see that somewhere in this charter explicitly, because there's going to be some work to set the bar, to do the analysis to prove that we can make that bar, and to find what constraints need to be met in order to reach that bar. I realize that's a bit of a mouthful, but I think that's really central to all of this.
O
I don't think safe web app installation is necessarily a goal that I'm interested in pursuing; I know that others are, but I think there's a lot wrapped up in that that could lead to some very interesting consequences for a working group, and disagreements, unless we unpack it a lot more. I suspect that we're going to put ourselves into a tailspin, rather than just addressing something like the AMP use case, or distributing content to someone who has narrow backhaul, as we heard earlier.
S
Hi, Ben Schwartz. I wanted to second Martin's point, and in particular I wanted to point out that there's this line here about out of scope: it's out of scope to define the details of how web browsers load the formats. I think it would be worth rephrasing that, because I think it should be in scope to define constraints on how web browsers load the format. In particular, I want to point out that there are some constraints that would make this a clear privacy victory over existing HTTPS.
S
In particular, if there's a constraint that web browsers can't load a bundle in response to a user action, that it can only be, essentially, pushed down by the server preemptively, that actually, I personally expect, has better privacy properties than ordinary HTTP. So I think the charter should make it clear that it's at least allowable for the working group to create a constraint of that kind.
V
Felix Handte, Facebook: I think one of the things that stands out to me here, something that is not discussed in the scope and that sort of unites the use cases that were discussed, is a sort of punting on the discovery mechanism. All of these cases, to my knowledge, described sort of manual user discovery of this, and maybe that's a hard problem, but I think this is the best place to look at solving it.
C
All right, I'm gonna go on. So: the packaging format will also aim to achieve the secondary goals described in the use-cases draft, as long as they don't compromise or delay the above properties. The following potential goals are out of scope for this charter: DRM, and a way to distribute the private portions of a website. For example, WPACK might define a way to distribute Gmail's application, but won't define a way to distribute individual emails without a direct connection to Gmail's origin.
C
Note that consensus is required both for changes to the current protocol mechanisms and for retention of current mechanisms. In particular, the fact that something is listed in the initial document set does not imply that there is consensus around its features or how to specify them. That's basically just saying it's a starting point and we're going to take control and do whatever we want to the document, which is pretty standard.
I
So, in order to distribute private information, you do need to encrypt it; it's very dangerous to sign it in a way that anyone will trust as coming from the origin server, because there are spoofing attacks. So making this for public information constrains the scope to make it tractable.
B
This question could go on for a long time, so, unfortunately, given the constraints of the format, we're gonna move forward, I think, to the polls, based on what we know and understand about how the group feels about the changes to the charter. We obviously can't present a totally revised charter in real time, so we'll do our best based upon these questions.