From YouTube: IETF 111: Oblivious HTTP BOF
Description
The Oblivious HTTP (OHTTP) Birds of a Feather session at IETF 111 will be held on 27 July 2021 at 23:30 UTC. The Oblivious HTTP (OHTTP) BOF seeks to charter a working group to define a method of bundling HTTP requests and responses that provides protected, low-latency exchanges.
A
Good afternoon, good morning, or whatever time of the day it is where you are: this is the Oblivious HTTP BOF. Let's get started.
A
This is the Note Well; hopefully you've seen it by this point in the week. It summarizes your responsibilities and rights around IPR and contributions.
B
Yeah, so we have notetakers.

B
People are aware of the Note Well. This session is being recorded on Meetecho.
B
I think we settled that. So, the agenda is roughly going to be half an hour.
B
Martin will talk through use cases and a technology recap; then we'll have a walk through the charter, and any comments and discussion on the currently proposed charter. If we find that people are happy for this work to be chartered, and the charter looks good or there are only minor changes that we can take offline, then there might be some time at the end to work on specific issues on the documents, but obviously that's conditional on the rest of the BOF being successful.
A
But before we move on: are there any objections to, or proposed changes to, this agenda?
A
All right, hearing none: just a quick review of the process history here. Martin published the Oblivious HTTP draft back in January, and we had a discussion in the SecDispatch working group in March this year, at the last IETF meeting.
A
It went out for comments on the OHTTP mailing list, and there were some concerns expressed there that led to concerns being adopted by a few of the different ADs, who filed blocking positions. So the point of our BOF today is to work through some of these concerns, see if we can convince those ADs to unblock us and move forward, and get clearer, better agreement across the community.
A
So with that, I believe we are over to Martin for the technical overview, to get us all back up to speed on what we're talking about here.
C
Mark, I've requested access to the slide thing.
C
So what is this? It is the ability to make unlinkable HTTP requests. That involves having a proxy to hide source address information and mix requests together, which will give us a little bit of traffic analysis resistance, and of course encryption, so that the proxy can't see or modify the things that are being sent via it.
C
It's a pretty straightforward arrangement: the client encapsulates a request and sends it to the proxy, which forwards it on to the server. The server just removes the encryption layer, handles the request, and sends a response, which is also encapsulated in a very similar fashion. The crypto is a little bit uneven, but the basic principles are pretty straightforward.
C
So this really gets to a lot of the questions that we're talking about here, and people will want to know more about what we have here; I think we'll probably want to discuss that a little bit later. But the basic scenario that we have in mind, at least, is: we control clients and, to some extent, we control servers, but we don't really want the servers to have the information that clients might be providing.
C
One of the ideas behind this is that it has less overhead than some of the alternatives. You can achieve similar properties by using a regular HTTP proxy and just making a new connection for every request, but that costs a lot. You'll also know that Tor has similar sorts of properties to a proxy, but with higher overheads. And then, once we get into things that are truly sensitive, and we're talking about some of those things in the advertising space right now, you want to go to things that are far more sophisticated and far more privacy-preserving, with k-anonymity goals that are mathematically verifiable and things like that.
C
And so then we go to things like Prio and the new heavy-hitters protocols that we might be talking about in the near future, I hope.
C
So, there are lots of reasons that you might not want to do this. I don't think these are reasons not to standardize (enough negatives there that I hope people understand), just reasons that you wouldn't want to use this sort of mechanism for your particular use case. This doesn't work for general-purpose HTTP; it's obviously more expensive than a direct request; there's a bunch of crypto; and really, it's stitching two requests together.
C
The most natural comparison here is to one request per connection, and this design trades some things away, primarily replay protection, post-compromise security, and the ability to make protocol changes, for some amount of performance. You can see the comparison here shows that there's no signing and verification involved, which is a modest improvement, and a lot less hashing, which is basically nothing. But probably, from our perspective, the main thing is that we're talking about shaving this down to one round trip rather than two round trips, and that makes a huge performance difference.
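As a rough back-of-the-envelope illustration of why the round-trip count dominates the crypto cost (the numbers below are hypothetical, not figures from the presentation):

```python
# Hypothetical illustration: at wide-area latencies, one avoided round trip
# dwarfs the CPU cost of the extra public-key encapsulation operation.
RTT_MS = 80.0     # assumed client<->server round-trip time
CRYPTO_MS = 0.1   # assumed cost of an HPKE-style encapsulation

# One request per connection: connection setup plus the request itself.
one_connection_per_request = 2 * RTT_MS

# Single-shot encapsulated request: one round trip plus the crypto.
oblivious_single_shot = 1 * RTT_MS + CRYPTO_MS

assert oblivious_single_shot < one_connection_per_request
```

Under these assumed numbers the saving is nearly a full round trip, which is why the design accepts the listed trade-offs.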
C
There's some discussion in the draft about traffic analysis and what might be done to deal with that; that's a difficult problem that all parties involved take some responsibility for. And there's also discussion about replay attacks and the implications there.
C
But that's something that we have some experience with in TLS and HTTP already.
C
This uses a new message format. It doesn't necessarily have to, aside from the fact that message/http is not very good, but that's something that we should probably talk about in chartering. This is a discussion that we didn't really get into at the SecDispatch meeting, but it certainly has a bearing on where this work ends up.
C
One of the questions that has been raised on the list is with respect to those authorized TLS interception scenarios, where clients in particular are configured with trust anchors, and those enable the use of interception in enterprise networks, for instance. There was a concern raised that this was deliberately trying to circumvent those sorts of interception policies that might be applied, and that's quite deliberately not the case.
C
I can say that we definitely have no plans to do that. We don't even enable DoH when interception is enabled, and we have no plans to enable this when devices are configured to enable interception at all; that is independent of whether interception is actually in use. If we think that there's a chance that something's going to be intercepted, we don't generally try to interfere with that. And so there's some discussion that I've added; there's a pull request.
C
The slides have a link to that, which you can click on, and there's a response to that there. I'll also note that the draft ensures that requests are identifiable, using the media type and a combination of other things, which should make them identifiable in case this occurs accidentally; but that's not expected to be the primary mechanism in most cases where interception is enabled.
C
The funny thing is, part of the motivation for this was to ensure that if we use this for DoH, we don't provide DNS resolvers with a large concentration of information about user requests. One of the ways in which information is exploited is by taking a longitudinal study of an individual's DNS queries, and this breaks the stream of DNS queries up into individual queries.
C
There is obviously some potential to connect them based on the content of those queries, but the protocol would provide nothing that would allow them to be connected at the protocol level, and that is, I think, reducing the concentration of information in these servers. There are other things here, obviously, that are probably more important than that when it comes to questions of consolidation, and I don't know how far we want to go down the rabbit hole of talking about market pressures and all those sorts of other things. But there are some design choices here.
F
Ben Schwartz, Google. If the applicability of this is driven by latency considerations, why is telemetry on your list, or how is telemetry latency-sensitive?
C
It will depend on the application, but the telemetry case is primarily motivated simply by the cost of submission. The cost of submission under a per-connection arrangement increases fairly significantly relative to this design; that's all. There's no latency consideration for telemetry, but there are other things where latency would be interesting. There's a bunch of things that browsers in particular talk to servers for; those were just the two that I put on the slides.
G
C
Right, so the reason I didn't include that originally is that it already uses some form of PIR; but your opinion may differ from mine on how effective that PIR actually is, so this may be applicable there, yes.
A
Personally, my opinion is that PIR has enough caveats that it could benefit from a mechanism like this. But the question I'd like to ask is about the caveats that are needed. So, given that this is not suitable for generic HTTP, which has a whole bunch of state-tracking mechanisms attached to it, what I was wondering is whether it's your assessment that the set of things you need to turn off about general HTTP is consolidated enough that we can describe the applicability of this thing clearly enough to give implementers a clear guide as to what sort of box they need to put around this thing, to make sure that it's not leaking unduly.
C
Yeah, that's a good question. I think there are a number of cases where the use of HTTP is fairly constrained; we're not talking about the web browsing case, generally.
C
We are talking about things where HTTP is used for APIs and that sort of thing, and in those contexts I think you have a much better ability to understand what sort of information is transferring between requests, or transferring from the context into the request. So I think we have some guidance there and, of course, we can probably improve on the guidance in the draft.
I
So I think this may be more a clarifying comment than a clarifying question, but just reading the chat, I think it's really important to get this point: this cannot be used for generic web browsing, not because it doesn't carry cookies, but because we would need to modify every server in the world for it to be supported. We already have a thing which is a proxy system for generic web browsers: it's called MASQUE, and HTTP CONNECT. So what this is for is something entirely different.
I
This is for APIs that are pre-arranged between the client and the server, where the client is somehow getting the server's key material. So, for people who are concerned this will break general web-browsing applications: it's just not suitable for that, and so things like content negotiation or localization are just not relevant in this case.
I
So, you know, I'm not saying there are no cases where you might want to know where the thing came from or something like that. But those are cases where the proxy has got to tell you, or the client has to tell you; there's just no way to turn this on for a client that hasn't been set up to do it.
J
Can you hear me? Yes? Okay, I don't know why it keeps popping up the permissions dialogue; that's really weird. But if I say "Meetecho" three times... never mind. I think I said this in the mailing list discussion, but I would very much like us to carefully consider what we call this thing. For the record, I'm supportive of this.
J
I think these are interesting use cases, and I think that leveraging HTTP for this use case is generally a good idea, because you're getting so much reuse out of the bits that matter. But it's causing confusion in this room, and I'm pretty sure it's going to cause confusion for developers as well if you call it something-HTTP. So I'm not saying we shouldn't, but I think we need to very carefully consider what we call this protocol, which I know is something that, historically, the IETF is not very good at doing.
A
I guess one more thing, since some of the comments on the mailing list and some discussion around this have been about the impact on network operations. I wonder if you could just comment a little bit on how this, if it were used for a given API, would change the traffic profile of that API. It seems like it would mainly, on the client side, just change things so that the requests would go to the proxy instead of directly to the server.
C
No, it is pretty straightforward in that regard. Clients have an API endpoint that they are configured with, or that they might be (I hesitate to say this word) discovering somehow; they would switch the server destination for the proxy URI, and then the other modification is that they encapsulate before they send, which adds a handful of bytes to every request.
L
I was thinking about the use cases for this, and perhaps you can help me by articulating some of those, going back to something that Ekr said, which is that this is not for general web browsing. No, those are not your words but his, but it seems to resonate with me.
L
I'm trying to understand what the actual use cases are. I mean, you're characterizing the use cases as ones where you don't want to have state shared between requests; basically, you don't want to have a connection because you don't want to have state across requests. What are those use cases, for one, and how does one...
C
So we've talked about DNS, telemetry submission, safe browsing.
C
The things that are most interesting to me are things that have latency requirements on the order of one round trip, and that have some amount of private information associated with making the request or with the content of the request, but have no structural need for anything to be linked between multiple requests. DNS is perfect for this, but there are other examples that are obviously popping up.
M
Just to follow up, John: if you had something for TLS or QUIC that was like a pre-shared public key that you could use for 0-RTT, that would be kind of similar to this particular design, wherein the client has a public key that's shared by potentially many other clients.
L
Can I jump in the queue here, Chris? I don't know, I don't want to jump in. So, that's an interesting point you make: is that a plausible solution to the problem, actually having a shared public key for a particular domain that is made publicly available to the clients?
M
It might be okay. Yeah, it very well might be, but this is, I think, arguably simpler in terms of the actual mechanics that need to happen.

M
It does raise the bar, yes, okay; but whatever this hypothetical extension to TLS or QUIC would be, that would also be additional code that someone would need to write. So it is a difference; I'm not sure it's a substantial distinction, though.
N
That will really help give better clarity to the proposal, or else bring into sharper focus whether there is a genuine use case here; one or the other, either of which would be positive. And also, I'm sure we'll come back to this point later, but since I think consolidation was mentioned on the slide, I just wanted to make the point now, to revisit later, that without discovery I think there really is a risk of that.
N
If we let clients decide which proxies and servers to use, without any input from users or any choice, then there's an absolute risk of centralization of traffic, and that seems to me like a really bad approach that should be avoided at all costs. Thank you.
A
I'm relaying a question from Stephen Farrell here; sorry, I'm just scrolling to find it. I believe the question was: this is premised on the client having an HPKE public key for the server, which it can use to encapsulate requests; what's the current theory for how the client gets configured with that?
C
You have to decide who to trust with providing that information, and how that information is provided, and you have to deal with some of the key-consistency issues that I think Chris and I have a separate draft talking about. One of the initial deployments that we're talking about here simply relies on configuration for this sort of thing.
C
So, to give a concrete example, we would ship in Firefox a public key for the server that we were expecting to talk to, and the URI for the proxy that queries would be sent to. That would be fixed, and we'd have reasonable certainty that individual people weren't being targeted using the key, and we would have some trust relationship with the proxy. That's a fairly straightforward deployment mechanism, and I expect that there would be a number of systems that would benefit from being able to do that.
G
So, right now, for our Private Relay service, which has essentially the same property as this: I think it is very important to note that, when you're using the oblivious messages, the proxy can reuse kind of one big pipe of HTTP/2 or HTTP/3 to the target, and forward many, many different messages over that, as opposed to having to bring up a bunch of new sockets (essentially TCP or UDP) between the proxy and the target.
G
So it's a lot lighter weight for the proxy as far as reuse, and it also potentially allows it to make the messages look a lot more consistent as far as timing.
A
Martin, I think you need to relinquish the presentation slot so that I can take it back over.
A
Okay, so we're going to go into the proposed charter review. We've cut the charter that was on the list into three slides; I'll page through it briefly so that people can kind of swap it in, and then we can have some pretty open discussion about what folks think should be in the charter. So, it's cut into three chunks.
A
The first chunk is the problem statement here, which I think captures broadly what Martin described in terms of trying to protect some aspects of client identity, such as source IP addresses, from servers that offer HTTP APIs.
A
The second slice here describes what the main work product of the group would be: in particular, the Oblivious HTTP protocol, which, as we were discussing a moment ago, might deserve a better, nicer name, as Mark suggests. But it gets at the principle here: the idea is to encapsulate HTTP requests so that they can be...
A
Okay, the third part of the charter here is the usual litany of caveats: the working group will focus on the core elements in advance of any other work, and will liaise with other relevant groups, so HTTPBIS for HTTP aspects, CFRG for any cryptographic aspects, and so on.
A
There are no milestones that have been specified on the charter yet, but I think Martin said on the mailing list that, obviously, the major milestone would be the core protocol, and Martin suggested about a year and a half out from the formation of the working group.
A
So that is the charter as it has been discussed on the list so far. With that, I think we have quite a bit of time set aside for discussion of whether this charter makes sense, whether it has the right scope, and what should be in or out. I think Alex is going to manage the queue, but I would very much appreciate folks keeping their comments focused on what should go in this charter.
J
It wants permissions again. Can you hear me? Yep. I'll keep this very focused: like I said before, I support the formation here. I think that, as I understand the set of use cases, this is interesting. As you can probably tell, what I'm most concerned about is any potential impact on the rest of the HTTP ecosystem, and so I think that the work would need to very carefully consider how it is related to that, in that, as I understand it, Martin, you're defining a new HTTP application, effectively.
J
You know, there's a URI that you send these messages to, and I look at that in terms of BCP 56bis and how that does things. And the really interesting part of it is that it also encapsulates HTTP messages into this new protocol-ish thing. So I don't have any answers for that; I think you could maybe recast it as an extension.
J
Maybe it's fine like it is. It's something that should be in scope for the working group: to figure out the best way to make sure that this slots into the HTTP ecosystem, so that it clearly says what it is, because you're reusing the registries, because you're reusing a lot of the concepts. I just want to be careful about that, and I think you're flexible on that, from the discussions I've had with you, Martin; I just want to make sure, at least from my perspective.
J
After my initial comments, actually, I'm kind of coming to a place where, if we can define use cases that this is good for, I'm actually okay if it isn't that generic. You know, if it's a specific tool for a specific kind of task, and that's well defined, that's great; maybe it shouldn't be more generic. But let's have that discussion in the working group, I reckon.
R
Hi everyone, thanks, Martin, for the presentation. My comment is more about... I'm a little more concerned about the use cases, one in particular, than the mechanism. I think most of the mechanism we can hash out in a working group; that's, to me, not an issue, including the issue that I raised about proxies.
R
I don't see that as a blocking issue, at least in my mind. The concern I have, and I'd like Martin, if you could, to respond to this just a little bit, is in the use case for DNS. I'm somewhat nervous about... here's the question, really: if you have bots that start using the Oblivious HTTP servers for C&C, what is the operational method to determine who those bots are and how to shut them down? Currently, you can go back to the source.
C
So that's going to depend very much on your deployment model. How are you deploying your C&C detection at the moment? In a lot of cases, I understand that this happens at resolvers: they look at the query stream that's coming in, they look for abnormalities in that query stream, and then they investigate those.
C
Those are cases where an entirely new name suddenly pops up and has a large amount of traffic, or prefixes and whatnot; that sort of thing is done at the resolver, and could still be done at the resolver under this sort of regime. Equally, the only thing that this would prevent is those cases where you're using inferences that rely on the connection between queries by the same client, and I'm not sure that that's something that necessarily fits in with that sort of defense mechanism at all.
R
Okay, if I may just follow up for just 30 seconds. I think the issue here is: does the resolver in particular (I'm thinking about the DoH resolver) have sufficient information to identify the source, in order to say, "hey, you know, this guy's a bot, let me inform someone about it", right? The bots are pretty clear to identify, you know, within these results; it's not that hard, and there's a lot of code that does it. Maybe they'll get sneakier over time, but they're not that sneaky right now.
C
Well, when you take steps to deliberately ensure that someone can't identify your source IP address, then they don't get the source IP address. That's kind of a design feature, right, or a bug, depending.
S
So what I'm missing a bit from the charter, or from the work, it seems, is the clarity that you actually maybe have to write something about what is in the requests to be encapsulated, and what you need to do to form those so that they are not leaking information themselves, and how that interacts with the applications and use cases that are intended to be supported here.
C
So I've heard this comment from a few people. I'd just like to say: if people want to make this comment, this is great input to a working group. I don't think this is the sort of thing that charters need to address.
F
Ben Schwartz, Google. I wanted to ask about the question of whether this problem is clearly defined. As I think about these use cases, I think that the really important question here is: who trusts whom, about what? And there are a lot of parties here, and there are a lot of trust-relationship types.
F
I'd like to see that spelled out a little more. I find it very challenging to explain when this is valuable, especially if we put discovery out of scope and say that we're just talking about essentially hard-coded configurations. In that case, there's a single party who has chosen both the proxy and the far destination, which is a little bizarre, because that party could have just chosen one server that they actually trust not to track the user.
F
Instead of trying to pick two. And it's even trickier, because those two servers can't really be strangers to each other: at a minimum, the destination needs to specifically approve each proxy that comes to it, because those proxies are going to deliver enormous volumes of traffic if they're legitimate, and so an illegitimate proxy could deliver an enormous volume of abuse instead.
F
So we end up with this situation where we need these two parties not to collaborate, but we do sort of need them to coordinate, and it just seems vastly more complicated to me than a solution where you instead pick one server that you actually trust and talk to it.
F
I think the deliverable would be a very precise description of the trust relationships and assumptions: who are the parties, and what are all of the assumptions that they have to make about each other's relationships? Because, you know, the client here needs to make an assumption about the nature of the relationship between two other parties, and it actually can't ever prove that; it just kind of has to hope that it's true.
F
So I'd like to see that spelled out, ideally in the charter, if we already know what it is and it's short enough; then we can judge whether it's realistic enough to be worth chartering, or we can just move ahead and make sure that we write it down for the implementers at some point.
F
I'm talking about the case where the client has somehow already picked the proxy and the target server it wants to use, and it just needs to be able to get the key from the target server, whether via HPKE and the currently defined mechanisms or something else defined by the working group. But I think it's important to be able to do this.
F
To have a mechanism to do this communication, because otherwise we're really going to paint all the clients and servers into a corner where they're essentially just coming up with an IETF-style protocol and half-agreeing with each other to use it together. And that's really the case: we need this to be useful for a lot of the potential use cases, and, like I said, it's not clear to me that this is currently in the charter.
F
It sounds like maybe it's not part of the core protocol, and I agree it doesn't need to be part of the core protocol, but it certainly needs to be in scope for the working group to work on; maybe a separate RFC from the main protocol RFC. But I think, from the beginning, it needs to be in scope for the working group.
G
Tommy. Yeah, so to start off: I definitely support this charter; it is the right starting place for this work, and I think we should do it. I wanted to make two points here, one on what Eliot was saying earlier about bots and fraud detection.
G
I think, when you look at a model like this, detecting bots kind of expands in responsibility from just the DNS server to also include the proxy. So, you know, in the way that we are currently using Oblivious DoH, a lot of the limitations on who can access this, and the bot detection and fraud detection, are happening on the proxy itself, not within the target alone. And so I think, if you're distributing the work in order to improve privacy, you also need to distribute the responsibility to validate traffic and improve trust.
G
So I think thinking along those lines would help for this; that's not to say that the things behind proxies aren't still responsible for everything they were before. These are boxes that are coordinating, not things that are just randomly put together. Then, to some of the broader concerns: I think it sounds like a lot of these concerns are pointing to questions of deployment model, specifically the deployment model of proxies with the oblivious targets.
G
I agree that this is a larger question about how you set up trust relationships and how you do discovery, and these are important things. I think that is work that should be done in the IETF overall, but I think it's bigger than this one topic; it's not just here. We see some of the same interested parties showing up in multiple groups and multiple topics here, so I feel like it'd be a bit incorrect to try to shove in all of the work about...
G
How do we, you know, set up these trust relationships about who can proxy whom? How do we do discovery? Building a mechanism just for this particular protocol, Oblivious HTTP, I think, would be the wrong result. It almost sounds like we're going to need some type of ops working group for this in general.
G
That would encompass, long-term, how we deploy Oblivious HTTP, how we deploy DoH, maybe how we deploy MASQUE going forward. These are things that are going to be a broader question, and we don't need to solve it all here, even if we have a dependency on some of those solutions long-term.
I
No slides, so my face is really big on the screen. So I'll start off by saying plus one: I agree with everything Tommy just said. But one thing I wanted to focus on is the use cases here. I'm hearing some folks talk about discovery and centralization, and that doesn't match the use cases I have in mind for this, completely, like, at all. So, in my understanding, and the proponents can tell me if I'm wrong, this is absolutely not for general web browsing.
I
If you want to proxy your web browsing, we have another working group that can do that for you. This is for very specialized cases where, as a client, you already know that you're talking to a given server, and you want to give less information to that given server. So, for example, on Chrome, something that's really important is Safe Browsing. What's Safe Browsing? It's not the act of browsing; it is a service where you say, hey, is this origin safe, or is this one known to host malware?
I
Another example is TLS certificate revocation: you can ask a centralized authority, hey, is this certificate revoked, if that authority is something you trust. So, as Chrome, these are going to be Google servers no matter what happens in this working group; we're talking to the same thing.
I
What this gets us is that it allows Chrome to provide the Google servers with less information, because now they see, okay, someone asked whether this TLS cert was revoked, but they can't build a profile of "David asked about these four sites in a row", and that's where this has a lot of value. But also, there is really no point, for these use cases, in having discovery, because at the end of the day, if Chrome wants to do this to be able to talk to a Google server, it'll probably have a contractual agreement with a third party to operate this HTTP proxy so that we can't collude; that contract is going to involve,
I
you know, dollar signs. And so we're not going to discover and say, oh, someone offered to do this, we'll send it to you; no, we're going to use the one that we actually decided to use. So that's why I see this technology as very useful and would like to see it move forward, but I don't think we need to bog ourselves down in things like discovery, because it's absolutely not needed, and it is a really hard problem that I think we could completely face-plant on if we tried to solve it. Thanks.
T
Hi, I hope you can hear me. So I looked into the current charter text, and I think the applicability statement that I saw addresses a number of the comments that I had on the previous version of this charter. Having said that, I think I'm still a bit unsure about the use cases we're talking about, because it seems like people have different things in mind.
T
I have seen that some people are confused, so maybe we can do a better job of explaining, or give some examples of, the use cases that we're talking about here. Also, the charter text currently says, in this context and also later on, that this working group may work on different use cases. So these things need to be clarified a bit for me, and I would like to echo what Mark Nottingham said.
T
Yeah, I think it would be good to give an example; a lot of charters have examples. I think we have been discussing a number of use cases here that we can give as examples, but if this charter also says, yeah, we're going to work on use cases, that is good enough.
U
Hi, so I'm going to relay something from Stephen, and then I'm also going to add my own comment, if that's okay. So Stephen wrote: we have Tor, but we don't have a way to standardize Tor in practice.
U
Okay, and then my second one, my own comment, would be: anonymity and privacy are different things. Tor is providing anonymity, and Oblivious HTTP is providing privacy, and it specifically does not give any protection against a global passive adversary.
M
I think Martin's mic is having issues, so I'll reply to the first one, which is: why not Tor, or something general? As EKR pointed out earlier in one of his comments, we have that. That is what MASQUE is; that is what generic HTTP CONNECT is.
M
The threat model here is a bit more constrained than that of a global passive observer or eavesdropper who can see everything, and it's not clear to me that we need to raise the bar to that of a global passive adversary. So that's my two cents; I don't know if Martin wants to chime in by text or not.
D
Yes, thank you. I wanted to make three points, or maybe, before that: the first point being that I like this work, and it should go forward. So whatever comments I may have, they're more about the charter than about whether it should be done. It should definitely be done. And so the first point is about the use case.
D
We already discussed that a little bit. I think more clarification would be useful, potentially even in the charter, though I'm not sure that that's actually required, but somewhere more information is needed. And if we think about the DNS use case, for instance, I think there are some question marks, at least in my mind, about exactly what applies, because you can do some of these things that we do for DNS with Oblivious DNS and other approaches. This is not the only possible solution for reducing information.
D
Confidence
computing
has
also
been
explored
by
by
my
team,
for
instance,
so
I
think
there's
some
clarity.
There
would
be
useful,
so
please
work
on
that
a
little
bit
more
and
the
context
is
not
not
not
there.
Yet.
D
The other comment relates to what Magnus was saying earlier about the requests. I think it is a lot of fun for us to work on the plumbing part of things, but the payload is also quite important if you look at the general intent. I realize this is not for the general internet use case of general browsing, but the problem is usually not in the framing of things but in what you actually send inside, and you can send lots of stuff.
D
You can do fingerprinting, and I think it would serve a lot of purposes if we actually provided some advice on that as well; I think that's a thing we should say in the charter. And a third point is that I think we do need to look at discovery, and, yeah, of course it needs to be in the context of whatever the application is and whatever the use cases are.
D
Of course, you can't push a proxy function to some random device on the internet either, but I think something along the lines of: here's a trusted party that should approve the kinds of proxies that we are willing to use, and then the actual set of proxies could be dynamically discovered. Something like that might actually work without causing too much headache.
D
Yeah, that's it. Thank you.
O
I don't really see the privacy advantage of requests going to a contracted party rather than just going directly, since there's still an element of trust, but this is an analysis that we can document. On the other hand, if the use cases include DoH, for example, then I have no doubt that this should support discovery, given the same problems that have led to establishing ADD. Maybe it's the same discovery mechanism that will be defined by ADD.
O
Maybe it's a different one, but we actually have the same issues. So I think that the use cases will be identified as we proceed and as people come up with new concerns, which is fine, since we're basically still at a brainstorming stage. But I think that this really needs to settle down before we can proceed and finally charter and establish a working group. So we should spell out those use cases, and also the use cases it will rule out, since the problem that many have is what happens if it comes to be used for general browsing, and when we have
O
a discussion of whether we need to have discovery, and whether it should be discussed in this group or somewhere else. I mean, I'm fine if we recognize a broader need for discovery in general, or for trust relationships and key management in general, and we want to address it somewhere else, but we still need to make sure that it is addressed somewhere if it needs to be addressed. And so, in any case, my concern is:
O
we can state the expected use cases, but people will just go beyond that if it is technically possible. So either we mandate the group to make that impossible, because the protocol should have technical features that make it impossible to use this for general browsing or for use cases that are not the intended ones, or, on the other hand, we have to explore the consequences and the possible impact on everyone, on all the network communities and all the stakeholders, and so at least include a discussion of what happens
O
if the protocol is used beyond the expected use cases. And then whether we want to go beyond that, and to what extent, and to what extent it's within the mandate of the IETF, is also a discussion we can have. But at least we have to start the analysis and come up with the possible scenarios for what could go wrong.
N
All right, yes, this will follow quite nicely on from what Jari and Victoria just said, as it happens.
N
So, looking at the charter, I think it's missing three key milestones and, obviously, therefore, the work to underpin those. There's an absolute requirement, early on if this charter proceeds, for a milestone to produce a use cases and use case limitations document, to be published by the working group. I think, even from this discussion, it's really clear that that's a very obvious need to be added to the charter.
N
Secondly, again picking up on some of the points made in the jabber chat and on the mic, I think there really needs to be a document produced that looks at some of the operational security concerns, involving a broad range of stakeholders. Pulling out a couple of things from the jabber chat: this is really reliant on non-collusion between the server and the proxy for there to be any form of added privacy.
N
How is that going to be verified? It certainly needs to consider the potential negative impact on centralization, and also things like the potential security exposure if it's exploited for command and control by malware, even if that's not an intended use. So yes, it's nice to have these use cases, but we've got to be realistic that permissionless innovation will apply here, whether we like it or not.
N
So that needs to be thought about as well. And then, finally, alongside the protocol, the working group absolutely must produce something documenting a fair discovery mechanism, so that there's an open method to discover proxies and servers that will be supporting this protocol. If it's possible to repurpose something being done elsewhere, such as ADD, fantastic; but I don't really see that that's true, because ADD isn't doing a general-purpose discovery method. So, absent a general-purpose discovery mechanism elsewhere,
N
I'm
sorry
that
one
has
to
be
included
in
the
charter
here.
Otherwise
this
is
not.
This
working
group
won't
actually
deliver
the
whole
job
and
it's
simply
leaving
the
work
for
somebody
else
to
do
later,
and
I
think
that's
that's
lazy
and
inappropriate.
Thank.
A
You
andrew,
could
you
just
if
I
could
just
ask
a
brief
clarification.
You
seem
to
really
believe
that
discover
a
protocol
is
necessary.
Could
you
comment
on
why
you
think
that
there's
such
a
critical
need
for
a
discovery
protocol.
N
Yeah,
absolutely,
as
I
see
it
frankly,
this
goes
back
to
trust.
If
I'm
a
user-
and
you
know
rfc
8890,
I
think,
is
pretty
clear
here.
Why
should
I
just
trust
the
that
the
client
software
is
making
the
right
valid
choices?
For
me,
why
shouldn't?
N
I have the ability to choose different proxies if I wish, which avoids centralization, which I think is a creeping concern? Otherwise we might just as well give up and say the internet becomes a network of CDNs, and that seems like a really bad outcome for users. So I'd almost turn it on its head and say: why wouldn't you want discovery? You know, justify that.
I
Well, I've been in the line a long time, so I have some notes. I mean, the first thing is just to echo what Tommy said: I strongly support this work. This is useful work, and something for which we already have a number of people with a real need.
For my second point: I'm finding this whole discussion of, like, discovery and general-purpose web browsing, and how we would go about that, really quite surprising. We already have a working group which is specified to do non-discoverable, general-purpose, encrypted web browsing; it's called MASQUE. And so, for all the people who think that it is bad to do
I
OHTTP: I don't understand, where were you when we started MASQUE? And if we don't have OHTTP, what people will do is, you know, use MASQUE, or use HTTP CONNECT, which will have isomorphic properties to OHTTP but will simply have inferior round-trip and cryptographic properties.
I
So, to the extent that people think this general concept, of proxying the data between the client and the server and having the proxy obscure the IP of the client and be part of that trust boundary, is a bad idea: that ship sailed back when we shipped HTTP CONNECT years and years and years ago. So I can understand people being concerned about those things, but that ship has sailed.
I
So, yes, Ben, thank you; SOCKS, and IPv6 as well, but HTTP CONNECT predates those, I think. So, to the specific question of discovery: I guess my question for the people who think discovery would be good is, under what circumstances is that actually going to work?
I
As David pointed out, the major use cases for this are ones in which the client is selecting both the proxy and the target, and in those cases the person who wrote the client knows the people they trust, both the potential proxies and the potential targets. And for them to allow a random entity in the network as their proxy completely defeats the privacy purpose of the protocol. So, you know, I think it's certainly possible that a
I
client would allow someone to configure their own proxy; that doesn't seem problematic. But that's not discovery, that's configuration, something quite different. And so, I guess, for the people who want discovery,
I
I think it's really incumbent on them to describe an actual application where discovery of this type would matter: where someone might actually want to use this, is actually considering it, and would find discovery useful. Because I just don't think it is a reasonable thing to insist on having as part of this protocol. You know, if you want the working group charter to have it in a non-blocking fashion,
I
what the charter currently has is totally reasonable. To have it be a condition of deployment, when the people who want to deploy don't care about it, just doesn't make any sense. Finally, I wanted to address this point that I think a number of people made about the trust scenario.
I
Certainly it's true that to build a system like this, what you're doing is saying: I need to trust the proxy and the endpoint, right? And, as Ben pointed out, if there were some way to say, I absolutely trust the endpoint completely,
I
then this protocol would have no value. But in our experience, what happens is that people are a little uncomfortable with the idea that there's a purely policy-based control guaranteeing that one endpoint is trusted. And so the purpose of a system like this, as in many multi-party computation systems, such as, for instance, Prio, or Heavy Hitters, or many of the two-party
I
PIR systems, or Tor for that matter, is to create a situation in which you have to distrust more than one person, and they have to collude in order to attack you. Now, is that a perfect guarantee? It is not. But is it a better guarantee than saying that you want to trust one person?
I think that was the last thing, but I actually have one more, which is: Jari, you mentioned confidential computing. I don't know exactly what specifically you're talking about, but I'm actually fairly familiar with the techniques that people are thinking about using for, for example, replacing DNS or replacing Safe Browsing. There's a Henry Corrigan-Gibbs paper on something called Checklist, and the performance properties are actually, you know, not as good as what you get with something like this. It's very, very impressive work and, more importantly, these are not generic techniques, they're specific techniques. So the question is not whether, for any specific application, we can maybe have a PIR technique.
H
Hi, Ted Hardie here, coming to you from UTC+1, so sorry if I'm a little bit less than perfectly coherent. I wanted to make a couple of points, first around the charter. I think, during the chat exchanges, we've actually seen a framing that would be very useful to bring into the charter. I think there's some of it in the draft, but it's quite clear that the framing in the chat has been:
H
this is about building HTTP applications in which a server wishes to provide a mechanism to allow blinded queries to reach it through cooperating proxies. And the use cases that people are talking about are actually the individual HTTP applications which use this set of procedures, some of which is protocol mechanism, some of which are individual key management issues, and some of which are the agreements between the service provider and the blinding proxies. And I think having that in the charter
H
as the context setting is really going to be helpful when we bring the charter out to the wider audience for final approval, because the current charter has those breadcrumbs if you've been following the conversation very tightly, but maybe doesn't have them in ways that are clear enough for those who are coming to just read the charter, and not the framing discussions that have happened.
H
I also think that, as a result of that, you probably want to lose the name completely, and we've had a lovely bikeshed-painting contest in the chat that you can mine for other names, some of which are simply fun and some of which might actually be useful.
H
But I think that recasting it that way would be useful, and one of the pieces that shows you is that most of this discussion about discovery, as EKR says, is just missing the mark of what this is meant to do in the common case, right?
H
If you look at the discussion in the chat around Google's Safe Browsing service, or a map tile delivery service, or the Firefox telemetry service: in each of those cases there's a single service provider who's going to make the agreement with the blinding proxy and who's going to distribute the information about it. And the consolidation and discovery issues that people keep getting wrapped around
H
don't really change with the fact that you have taken what was a single service and split it into two parts, which allows you to have the final delivery service be blind to exactly which of the clients made the query, which is the aim here. So I really think that by maybe going up an extra couple of ten-thousand feet in writing the start of the charter, we could possibly make it a little bit more obvious
H
why those things shouldn't be the starting focus, even if there do happen to be a few services, and DoH is the obvious one here, where, because it is a global service, there might actually be a question of how do I find one of the many proxy services that have been set up. So I think those aren't the obvious places to start.
H
You start with the ones where there are single service providers, and you show how it works in two or three of those, and then you tackle the one where there's a global scope and see if the same things work there. So, once again, I think this is a useful and interesting piece of work; I think the generalization of it has maybe taken a turn
H
that's been misunderstood, because I think it really is about how each one of these services builds HTTP applications which accomplish these goals, rather than creating an overall generic browsing mechanism or an overall generic proxy mechanism. Thanks.
B
X
Hi, yeah, so I'm also coming to you from 1am, so apologies if this is a bit incoherent. But following on somewhat from what Ted said about how this isn't a generic mechanism, I just wanted to back up some of the comments that were made, I think mostly in the chat, about how clarifying what the use cases are is so important here, both so you know what you're actually getting, but also from the security point of view, as has been mentioned so far.
X
I know the draft mentions replay attacks at the moment, so I think documenting incredibly clearly exactly when this can be used, and what the downsides are if it is used in other cases, is just going to be vital to stop people interpreting this as a more general protocol. And, sort of relatedly, there's been a lot of discussion about consolidation and discovery, and I'm not going to weigh into that discussion.
X
But
I
think,
given
that
there's
been
such
a
lively
discussion
in
this
group
among
people
who
are
sort
of
invested
in
this,
I
think
really
documenting
exactly
why
those
concerns
are
not
valid
for
the
use
cases
that
are
defined
for
this
protocol
is
going
to
be
really
valuable,
because
if
people
in
this
group
aren't
entirely
sure
why
or
why
not.
These
are
not
all
our
concerns,
then
I
don't
think
anyone
deploying
this
protocol
is
going
to
have
a
clue.
So
I
think
personally,
I
think,
separate
sort
of
milestones.
X
for the group to produce these documents would be helpful, like a separate applicability statement, rather than just a document mentioned in the charter, actually published alongside the eventual protocol document. I think both of those would be really helpful in actually helping people deploy this securely.
Z
Yeah, so just to echo some of what the previous person said: I think it is important that we outline what the security properties are that we're going for, since they're a little bit different than maybe some of the other security mechanisms that we've been looking at, and, you know, defining the threat model and things like that will be somewhat important here. So either having that described in the charter, or as an explicit milestone, would, I think, be a good idea.
Z
Oops, I need to actually allow permissions to use my microphone. Plus one to what a bunch of people said. I like the way Ted Hardie framed this earlier, and I think a bunch of the confusion here perhaps points to areas where it could really help to clarify the charter text and put some use cases in it.
Z
On the discovery side, I think another argument against discovery is that it seems likely, for some of the use cases that we're most talking about, that in order to prevent some of the abuse that may happen, the proxy and the service are going to need to employ some type of authentication.
Z
So maybe, in the long term, as we start looking at other use cases, we could look at discovery, but I think in the beginning that's almost a counter use case, or a counter goal. On the charter text, one thing I think might be helpful to call out explicitly is that the property where the bulk of TLS key signings is handled by the proxy, and where servers have a smaller pool of bundled connections, is by itself interesting enough.
Z
That may be worth calling out in the charter, as there are a number of use cases that might get unlocked just by that once they get up to a large enough scale. And I'll also give a big plus one to Mark Nottingham's suggestion earlier that we really need a better name here; I think we'll want to think carefully about the naming, both from a public policy perspective and from a marketing perspective.
B
L
Thank you. I think I'm audible, okay, good. I got in the mic line yesterday, I think. I tried to keep notes, but I've been scratching things out and rewriting, and so I have no idea, but I'm going to say the core points I wanted to make. This discussion has been very useful, so thank you for it, and Martin, thank you for the draft.
L
It was a good read, but having the draft out doesn't necessarily mean that everybody has it in their cache, so the discussion is definitely incredibly helpful here. I'll start off by saying I think I'm in support of doing this; this is a good piece of work and I think it's useful. I don't think it captures all the use cases, but that's repeating what everybody already knows, so I'm not going to talk about that. In terms of use cases, I think that has been articulated.
L
I think what would be helpful, and this is not for the charter, this is for later, is for implementers to understand exactly how to think about when to use this and when not to. Finally, on discovery.
L
Oh, actually, before getting to discovery, I will say that I thought OHTTP is actually a good name for this, because I think it captures technically exactly what it does, and captures it well. However, I'll take Mark's and Eric's points that there might be other, non-technical considerations for why OHTTP may not be helpful in getting this deployed. So, to that extent, I think it might be interesting to reconsider the name, but otherwise I think the name is actually pretty good;
L
it captures what it does quite well. On the discovery point, going back to this: everybody's been talking about that.
L
Many, many things are configured, pre-configured, and that is a real way of deploying things; not being able to have the technology to deploy at all makes it fundamentally impossible to do even with configuration. So I'll take Tommy's point and repeat it, because I think it's actually a good point, and it certainly made me reconsider my thinking on discovery here. It was basically that we have several pieces of technology that we are building, not just in this particular meeting but at the IETF (MASQUE is one example of that), which right now have no discovery mechanisms; we don't have any discovery mechanisms for them, and each of them is going to have different hurdles. But it's quite fine to build the core technology that we want to build here without actually having a discovery mechanism for it.
L
I certainly don't see it as something that blocks us from deploying something like OHTTP, or MASQUE for that matter. So if you want to have a more general conversation around discovery, let's do that, but let's not make it a necessity for this working group; the charter is clearly not talking about that, the charter is not asking for that, and I think it's perfectly reasonable without it.
L
So I'll say that the onus is on those who want to insert discovery into the charter to demonstrate that it is necessary. I don't think that anybody needs to demonstrate that discovery is not required; we deploy things right now without discovery, willingly, and that's fine.
J
Permissions... there we go, okay, I'm here. For you, Jari and Lars: I agree with Ted's comments that a little bit of reframing could help this quite a bit. Regarding the discovery issue, I agree with John's statement: the onus is on the people who are asking for it. I will point out, yet again, that HTTP does not have a discovery mechanism. Although this protocol is different from HTTP in some respects, it is also very similar, and indeed the informal discovery mechanisms
J
that are defined for HTTP are quite problematic from a number of different aspects. So discovery is not a magical solve for consolidation issues. Consolidation on the web is caused by factors that are way, way, way up the stack from "oh, I need to find a local proxy" or "I need to find a server-side proxy". And, in fact, you know, done badly,
J
I think discovery is going to add a lot of bad aspects to the protocol, in terms of allowing the network, or other folks on the path, to insert themselves upon the user in a fairly forceful fashion. So, clearly, discovery as a requirement is a no-go. I'm happy to talk about it with folks more, but I'm extremely skeptical.
Y
Yeah, I also think there was really a lot of useful information in the comments being made, you know, especially from Eric, although the speed, at 300 words per minute, was overwhelming me. So I'd really love to see, more than what I usually get from these working groups, an explanation document. I'm not sure if it's use cases, or a system reference model, or, you know, scenarios, but all those things that were discussed here, nicely and comprehensively written up in a document.
Y
Not just, you know, way below the line, the way it would read in the current charter, right: "may work on", yeah. "We just want to do the protocol specification": nothing against that, but it would really be good to have documentation coming out of the working group that would allow as many people as possible to understand the systemic implications and the comparison with the other solutions that were mentioned here.
AA
Just agreeing with Jari and Ted to some extent here: I do think one of the things that is wrong in the charter is that there really wasn't a good description of what the use cases are, and I think the converse is true as well: there wasn't a really crisp description of what the non-use cases were, and I think that's important in the charter.
AA
I
know
that
there
is
an
attempt
in
in
3.1
of
of
the
draft
to
actually
talk
about
applicability,
but
I
think
we've
seen
today
and
we've
seen
in
this
hour
and
a
half
so
far
that
the
confusion
over
what
the
use
cases
are
and
what
some
of
the
non-use
cases
are
is
something
that
really
needs
to
be
clarified,
and
so
I
think,
from
a
practical
point
of
view,
the
fifth
paragraph
of
the
charter
is
just
too
weak,
it
just
doesn't
say
enough,
and
so
that
needs
to
get
fixed
before
the
charter
can
go
forward.
AA
I agree with the previous speaker: I think a crisp problem statement as a separate milestone might be a useful thing here, so that we're not overloading the protocol document, the actual core protocol, with a whole bunch of text about what the use cases are and aren't, and warnings to implementers, and so on and so forth. Maybe separating that out into a document that would reach implementers would be a positive thing. My last comment is also on paragraph five of the charter.
AB
Apologies for stepping in a bit late, so forgive me if I repeat something already said. I wanted this version of the protocol in 2016, or 2015; I didn't realize back then that this was the protocol I needed.
AB
There have always been applications where I've wanted some way for a client to send something to a server with the server unable to link it to other requests the client has made. That's an extremely important thing for a lot of privacy-enhancing technology, and we really need it in order to increase the degree of privacy people have on the web. These are very limited-seeming use cases, but for things like telemetry, et cetera, it's very important to have that additional privacy.
AB
I really want to push back on saying, okay, we need discovery, we need to get stakeholder buy-in. The whole point of internet innovation is that you don't, right? We have the protocol, it solves real problems; that should be enough to deploy it. And for ISPs especially: you're moving packets, you're not supposed to care about
AB
what's in them; that's sort of the way this whole thing is architected. And so when it comes to operational considerations on networks, the argument is "well, I should be able to get insight into what's flowing over the wire", and no, you're not supposed to get that insight. To the extent you are able to get that insight, we've identified that as a problem, because there is no difference between a consumer ISP and some entity on the network that you have no idea exists, like somebody's compromised wi-fi endpoint.
G
Tommy here, so again, yeah, I've been in the queue for a while. Going back to Ted's point earlier: plus 100 to what he was saying. I think setting the context, that this is about building HTTP applications with cooperating servers and proxies, really, really helps here. And again, to reiterate on the discovery point, I want to encourage people who are talking about that to think of this more like how users interact with a VPN on a system: there isn't an open discovery protocol for any random IPsec VPN endpoint.
G
That
is
very
useful
to
users
for
different
applications
and
absolutely
when
this
is
deployed,
we
should
let
people
configure
which
proxies,
which
service
targets,
what
they
want
to
get
and
already
for
the
use
cases
that
we
have
described
here,
safe,
browsing
or
dns.
These
are
things
that
have
their
own
models
for
how
users
configure
them
turn
them
on
and
off
whether
or
not
you're
we're
adding
these
privacy.
Protections
probably
will
become
part
of
that.
G
It's just like how people can install VPN apps or choose to configure the system VPN. But again, we don't randomly send people through things that are discovered; that would not help privacy at all, and that would not help user choice at all. Let's talk about configuration, not discovery, here. And I think it's also critical to understand that the deployment model and the configuration model depend on the specific application.
G
So trying to say that there needs to be a generic configuration or discovery mechanism is incorrect here; it needs to be specific to the applications. And yes, we can say we need one for DNS, but that's different from saying we need it for anything that is using this particular cryptographic mechanism for protection.
B
Sure. Tommy, you get the honor of the final close.
E
Sorry, sorry, I think I jumped in there and didn't turn my microphone on; sorry about that. I don't think the sky is falling here. I think if we put in some introductory text, as Ted suggested, to say what's in scope and out of scope, it's good. I think what we're falling into here is the trap where we try to put everything in the charter. I do not think we need to do that. We were talking about use cases; again, we can put in some examples.
E
The applicability statement that's listed in paragraph five covers most of the rest of what people wanted to talk about, you know, what are the usage constraints; that's already there. I mean, again, we need to talk about operational considerations, but specifying in the charter how we're going to address that deep within the protocol, I think, is just totally not useful. I think we just have to leave it to the authors and trust that the working group will end up delivering a product that will actually meet its needs. Thanks.
AC
Tommy Jensen, Microsoft. A lot of comparisons have been brought up with the ADD working group and, having spent a lot of time there, I'd like to draw a comparison as to why I do not think the discovery argument applies here; in fact, it's quite the opposite. So in ADD, the problem we're trying to solve is a network recommending a server insecurely; we want to make sure that we end up bootstrapped into a server securely.
AC
There, collusion was actually the goal: we want to end up with an encrypted communication with an endpoint that is colluding with the prior endpoint, so that we don't send our traffic elsewhere. This scenario is exactly the opposite. I want, as the client developer, to have my software choose a proxy that is not colluding with the target destination.
AC
I don't think that the charter needs much change. As people have said, I don't think listing specific use cases is great for a charter.
AC
I do think it might help if we want to highlight very specifically that the scenarios being targeted are those where, ahead of runtime, we know the destination, meaning that I intend to use this protocol with these five destinations, or whatever, for very specific purposes. Outlining that then highlights why we don't need discovery: I know where the target is, and since I know the target, I should know who owns it.
A
All right, thanks for that, Tommy, and thanks to everyone for your contributions on this; I think this has been a useful discussion for many folks. I think folks are coming away from this with a clearer, I hope, idea of what's being proposed here, and I think we've gotten some useful suggestions for how to refine things. So Alexey and I have been back-channeling a bit in the background, and we just wanted to present our summary of where we see things having ended up.
A
So I've got a few high-order points here, a few things on which the charter could be clearer. It seems like folks would find it useful to have a few clear use cases mentioned in it; again, not stuffing everything into the charter, but having a few clear, precise use cases to really capture the intent of this protocol.
A
The third thing is to be clear on the trust model: calling out what relationships are expected among the entities, this kind of coordinate-but-not-collude model. Calling that out a bit more clearly, in a bit more detail, in the charter would, I think, be helpful, and would address the points that have been cleared up in this discussion.
A
Finally, on this discovery question: clearly, we've had opinions on both sides of this question here. The assessment of the chairs is that there's not really consensus for any sort of requirement here. There are some folks who are quite keen on it, but there are also some compelling arguments that discovery is not necessary. So, net-net of all that, it doesn't sound like there's very broad agreement that discovery needs to get covered in this charter.
A
So I think that's kind of the summary that we had from the chair side. I think Alexey was going to run a quick straw poll to capture something a bit more concrete out of here. Alexey, did you want to run that?
B
Right. Richard and I would like to ask a quite generic question: is there a problem to be solved here that the IETF should work on? So I will start the poll, and hopefully I can figure out how to use it. Just give me a second.
P
Oh, I actually think I may have cancelled it, so I will not click anything on this.
B
I don't think we can ask any other, more specific questions at this point, so we'll ask our sponsoring AD to have the final say.
L
Yeah, sorry, Alexey. There are a lot of people who did not raise their hand, and it's not insignificant. I'd be curious to know what happened, why people think that there is no problem here, because that's not the sense I got in the room at all, and I am certainly curious.
A
Yeah, I think, since we're a bit ahead on time, we can maybe allow a couple of minutes for that. If there's anyone who was in the do-not-raise-hand camp who wanted to come to the mic and say why they think there's not something to be worked on here, that sounds good.
A
No volunteers, actually. While we're soliciting volunteers.
R
Thanks, Richard. Wow.
R
Coming into the meeting, I was sort of middle-of-the-road on this, and the reason is mostly that it's not clear to me that the general mechanism is needed if there are really going to be a small number of actors using it. And it wasn't clear to me that this should be started as a standard protocol, rather than something a little bit more, I don't want to use the word experimental, but something where we get a little more operational experience. It's not like I strongly object.
N
Thanks, Elliot. Andrew: in respect of the comments in the chat, is it worth asking the opposing question, just to try and see?
A
My co-chair may contradict me here, but I think the binary was pretty clear on the first one, so I'm going to let that stand. While we're soliciting folks, I would like to ask Zahed, or Rob Wilton, or Eric, the folks who are holding blocks on this, to see if they have any last questions here, any remaining concerns that may not have been addressed yet in this discussion.
AD
Yeah, Rob, please go ahead. Yes, so I thought this has been a very useful and very interesting discussion. I don't really have much to add or say that hasn't been covered by other people here. I think, for me, describing the use cases is particularly helpful. And, I can't remember who mentioned it, but the idea of having a separate problem-description document sounded like it would potentially be useful when people come to understand and deploy this; having that separate from the protocol spec sounded useful to me.
AD
I also agreed with, and liked, Ted's comments about trying to set out the big picture; that would be particularly useful. And the other thing I picked out, in terms of what was said here, was that describing the trust relationships between the clients, the server, and the proxies is useful, to explain and understand what those relationships are.
AD
I have to say, I didn't find some of the use cases that compelling in terms of how they'll be deployed, so again, I think better descriptions there would be helpful. On the question of discovery, again, just listening to what was said, it felt to me that discovery probably isn't required and could potentially even be harmful, so my preference would be not to include it initially and maybe investigate later to see if it's helpful; but that was just what I heard in the conversation. Thank you.
T
Yes, I think Rob just said what I wanted to say. My comments on the use cases still hold a bit: some clarification, or some way of describing them, would be helpful.
T
I also think there could be one more clarification, of the applicability statement: what is included, and what's the point of it. Something along those lines would be helpful. And with that, I think I'm happy with what happened here today, and I think it's fine; I'll clear my block, I think, after the discussion we went through here.
V
I think Eric wrote in the chat a little bit ago that he can't speak, but he needs to see an improved charter: the domain-limit text on servers needs to be modified, and Ted Hardie's context and clarification about the trust among the parties should go into the charter, if possible. And Echo replied to many of these points.
V
Yes. So, first of all, thank you all for the really good discussion; it was really useful, in my opinion. And thanks to you, Richard, and to Alexey, for accepting to chair this BOF; it was a very well-run meeting. Yeah, it's been useful to identify the issues, which you have summarized. And I also wanted to ask: if any of you have proposed text, practical text, to modify the charter, I think it would be great to have that suggestion on the mailing list. And yeah, that's it.
A
All right, with that, I think we are adjourned. I think we've got some good action items going forward to revise this charter and get it back up to the IESG for consideration. Thanks to everyone for your help on this, and we'll see you on the mailing list. Thank you.