From YouTube: IETF113-OHAI-20220323-1330
Description
OHAI meeting session at IETF113
2022/03/23 1330
https://datatracker.ietf.org/meeting/113/proceedings/
B
I think we can get started. Sound good, Francesca? Okay, cool. Well, hi everyone, welcome to the OHAI session at IETF 113. I'm Shivan, and I'm joined by Richard Barnes, and Francesca is our eyes and ears on the ground. Thank you for that. And we are still looking for a note taker, so if anyone would be willing to volunteer, that would be fantastic.
B
Fantastic, thank you. So we have a note taker. Yeah, and if you have anything that you would like to say at the mic, just prepend it with "Mic:" and you can get that sorted out.
B
IETF 113 is hybrid. So if you're in person, please make sure to sign into the session using the Meetecho online tool, and please use it when you are going to join the queue. It is important for queueing, and all queueing is done through Meetecho.
B
All right, I think we can get started. We do have quite a bit of time, so I'm hoping we can get some good discussion going. First up we'll have Martin, and then there will be some discussion about issues and updates since the last meeting, and then we have two brand new drafts being presented by Tommy and Tiru.
D
I'm going to do the hall of mirrors thing for a moment, because I'm going to be jumping in to... yes, I want to share my screen. That's the crazy, crazy thing. Yeah, okay, hall of mirrors, and now slides.
D
So this is just a brief update on what we're doing, and I'm going to go through some of the issues that we've got since the last meeting. There have been a few issues raised in the last hour or so; obviously I won't go into those ones, but I will hopefully get through all of the important things that we identified.
D
On to the changes in version 01. We've moved the padding to the binary messages draft. That is something that I think simplifies the overall design, and it makes the binary messages stuff more useful generally. We've also made changes to the labels that we're using for generating and extending the exports. Those are now based on the content type of the enclosed content, and there's some text about repurposing the design that we have here for other purposes. Not that this is a primary goal for us, but it's something that makes it a little bit easier for us to manage.
D
So the first one of the issues that we're talking about is what information the proxy is able to provide to a server when it relays messages. Obviously, we want the proxy to hide a lot of information about what the client is, who they are, and where their requests are coming from. But there are certain use cases where it might be useful, perhaps, to have a proxy provide just a tiny amount of information at the moment.
D
There's a pull request where we're discussing that. I don't know if we want to talk about that one just now.
D
It's kind of in flight right now; there's a number of comments, so I will probably just stick to this one here. Are there any comments about this one? I'm going to bring this off full screen.
E
It's just waiting to be called on, yeah. So I guess my take on this is that actually this entire restriction is kind of misguided, and there's a number of things that you might want the proxy to say. One of them is this bit, as you indicate. One of them I've heard floated is a token of some kind indicating, you know, that there's been some sort of denial-of-service reputation kind of stuff on it, you know.
E
One
thing
I've
heard
of
is
geo
is
rough
geo.
I
certainly
heard
suggestions
of
you
know
of
short
term
short
term
linkages
for
for
ip
address.
You
know
this
is
the
same
person
over
or
cover
a
couple
of
a
couple,
less
messages,
yeah,
I'm
not
like
a
huge
fan
of
these,
but
I
guess
my
my
thermal
thesis
is
these
are
all
out
of
scope
for
this
protocol
and
that
we
should
simply
not
say
anything
about
what
you
should
say
and
and
and
leave
that
to
applications.
E
Fundamentally, I don't know exactly what messages, what content is conveyed in that metadata; that's really for the client and the proxy to work out for themselves. And I understand, certainly, that that allows you to bypass any possible protections being offered here, but, you know, this is just like... this is like a 6919.
E
"Don't do it." So I just think, rather than trying to put that in normative language, we should talk about... well, Nick asks how we know it's being honest about them. Yeah, the point is that there would be an understood policy of what the proxy was doing and not doing, and the user would be aware of that at the point they chose it, and entered into it aware of any issues.
E
Anything here, right? I mean, like, you know, the proxy could just be side-channeling all the information to the server for all we know, right? And so the only question here is, like... so I think, you know, it's your fundamental trust in the proxy.
D
Yeah, so that speaks to potentially saying: as long as the potential range of things that a proxy could communicate to a server is known to the client, then the client can use that as part of its decision to use a proxy.
C
Yeah, yeah, I think that's what the text in 96 tries to do right now. In particular, it tries to say that any of this public metadata that the client and proxy agree upon as being revealed during the course of using this proxy is known to the client, whereas, in contrast, what I call private metadata, like this shadow-banning bit, is where the client does not necessarily agree to the presence of that metadata.
C
That must be strictly bounded. The proxy just shouldn't add things to the, you know, client's forwarded requests without the client being aware of it.
C
So the sort of intent here, in my mental model, has been that any sort of public metadata is agreed upon between client and proxy and factored into the decision to use a particular proxy or not, and private metadata must be strictly bounded. It is sort of like, you know, a "MUST, but we know you won't", in a way. But at the end of the day, you kind of have to trust the proxy to not do bad things, like forward your IP address or whatever. So I'm not...
D
So, Chris, I want to pull on something there a little bit. There's two things that we're sort of conflating, perhaps. One is the spectrum of possible things that the proxy might say, so the type of information that it might pass to a server, and the other is the specific values that those types take. Now, are you suggesting that most things fit in the, you know... is it the precise value that's being communicated, or is it just the type of the thing that's being communicated about you?
D
So, understandably, in the shadow-banning case, the client might know that the proxy might pass a bit along, but it won't know the value of that bit. Yeah. And in the geo case...
C
Yeah, thank you. My thing is that the client does not necessarily know the precise value; it will just know that the value is being communicated faithfully by the proxy. Like, it would be the right geo mapping or whatever, it would compute the right token, it would send the right bit, but the client is explicitly made aware of it.
C
I think there's a lot of sharp edges with any more metadata you're adding. That's why, with the 96 PR in particular, I tried to punt anything more than a single bit to a separate, you know, discussion down the road, because, to your point, any more information you're adding... it's a slippery slope, so to speak.
F
All right, hello. So I agree with ekr that it's difficult or impossible to try to make this normative.
F
Maybe one thing we could do is have a normative statement about future standardization of protocols between a proxy and a target, saying we really shouldn't approve things that give away, you know, lots of information. But otherwise we can't really enforce this. And to echo what people have been saying, I think this really comes down to how much the client trusts the proxy, and the implicit thing whenever we're using an oblivious proxy is that trust relationship. And so anytime...
F
...we try to talk about untrusted proxies, or proxies doing things that the client doesn't want, we're kind of missing the entire deployment model. And I think, even for the shadow-banning case: as a client, once I learned that a proxy does shadow-banning bits, I could choose not to use it and say "I don't trust you anymore, because I don't want that in my contract." So I don't know if the shadow banning is really unique, or if it's just one more thing the client could decide not to trust you about.
B
Yeah, I think I kind of mentioned something similar on the pull request, but yeah, I kind of agree. I think that we should just keep it to a SHOULD, and you can describe the many ways in which this can go really badly, and why a proxy shouldn't just add, you know, everything under the sun when it communicates with the server. But I think applications will have things that they want to do, so...
E
I'm suggesting that at this point, perhaps it's time to go back to the separate work on the pull request, and maybe we can try to go back and, rather than trying to... I think that's obviously what I'm generally hearing.
D
Yeah, this is one of the more difficult ones. We will take this to the pull request.
D
I have probably one final point here, and that is that some of the things that people are requesting here are probably generic HTTP features that don't exist currently, and they might be something that would benefit from standardization. And so, if we took those things to, say, the HTTP working group, we could define new header fields for, say, the shadow-ban bit. Or... no, I'm not going to say that we're going to do the geolocation thing, but other things along those lines.
D
This one's a little more technically complicated. So, I think a number of people have pointed out that this isn't particularly good at carrying all of the different HTTP semantics.
D
At the moment, we sort of have an atomic request and an atomic response, and if we want to do things like have large request bodies or large response bodies that stream and are processed in a streaming fashion, it's not very well set up for doing that. In fact, it would be a very bad idea to use the current thing for doing very large requests that are processed incrementally, and it also means that we can't do things like encapsulate 1xx responses.
D
Which is a little bit awkward. So I have a slide. Ben, if you have a question, I might answer it as we're coming up, so jump in.
D
So this is just a sketch of a design. I'm part of the way through writing this up. It turns out that writing this up is awkward and time consuming, so the couple of hours I've had to spend on this has not been enough to do it cleanly. But the basic idea is that, rather than having a single AEAD invocation, either via HPKE or just directly itself, we invoke it multiple times and send the message in chunks.
D
This is easy for HPKE, because it maintains its internal counter for the AEAD. Otherwise, we need to maintain a counter and a nonce, and we can follow the pattern that we have in TLS for that sort of thing. For each chunk, we can prefix each of them with a length which, if we use, say, the QUIC varint encoding, might cost one extra byte if you send a single chunk. And then the only other consideration here is that we need something to seal off the end of the message.
D
So when you have a series of chunks, you need to know that the chunk that you're processing is the very last one; otherwise, you're exposed to potential truncation attacks. So you use something to distinguish the last chunk, and there's any number of things that you can use. The simplest thing, in many respects, is just to use a zero length for the last one, because then you have a distinguishing thing, and you ensure that the...
D
You ensure that you rely on the outer encapsulation to manage the length of the remaining things. Richard's going to complain about that, probably. And that's all I have, so: comments, folks?
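The chunk framing just described could be sketched roughly as follows, assuming QUIC-style varints (RFC 9000) for the length prefixes and a zero-length chunk as the end-of-message marker; the function names are illustrative, not from the draft, and the sealed chunks themselves would come from repeated HPKE/AEAD invocations.

```python
def encode_varint(n: int) -> bytes:
    """Minimal QUIC-style variable-length integer encoding (RFC 9000)."""
    if n < 2**6:
        return n.to_bytes(1, "big")
    if n < 2**14:
        return (n | 0x4000).to_bytes(2, "big")
    if n < 2**30:
        return (n | 0x80000000).to_bytes(4, "big")
    return (n | 0xC000000000000000).to_bytes(8, "big")

def frame_chunks(sealed_chunks: list) -> bytes:
    """Length-prefix each sealed chunk; a zero-length chunk marks the end,
    which is what distinguishes the last chunk and defeats truncation."""
    out = bytearray()
    for chunk in sealed_chunks:
        out += encode_varint(len(chunk)) + chunk
    out += encode_varint(0)  # terminator chunk
    return bytes(out)
```

As noted above, a single-chunk message pays only one extra byte for its length prefix (plus the terminator), since a varint under 64 fits in one byte.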
H
Yeah, so that doesn't seem as important to me. I understand the interest here, but I would ideally prefer for this to go in a separate document, possibly as an extension to HPKE or, you know, essentially an addendum to HPKE, because this very general-purpose thing that you're describing here has really very little to do with HTTP.
I
Now, it's up to the application of HPKE to, like, provide sequence numbers to make sure that you process them in the right order, etc. But that's there; we could use that. It'd be pretty straightforward.
I
What I was wondering is whether we need to invent something new to signal end of stream here, or whether we could lean on the HTTP chunked transfer coding or something like that which, if I recall correctly, has an end-of-stream signal. And so you would just basically have the encrypted thing be, you know, "here's some chunks", and you can encrypt them, and then the interior thing would signal when you got the last chunk.
I
Yeah, so what I was going to say is: if you're going to do the HPKE multiple-AEAD thing, you're going to need a little bit of framing anyway, so you might as well just put a bit there.
F
So, one question, because I was waiting to see the proposal, and I understand that that's not ready yet: does supporting this streaming change the wire format for the encapsulated request, for requests that don't want chunking or streaming?
D
Yeah, the way that I've got it at the moment, there's a single zero byte at the start of the message, but otherwise it's exactly the same. So, exactly the same code that you have today: you put a zero byte in, and if you're unwilling to support chunking on receipt, then you just simply read a zero byte off and then run the exact same code that you have today.
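The backward-compatibility scheme described here might look something like this minimal sketch, under the assumption that a leading zero byte marks a non-chunked message (names are illustrative, not from the draft):

```python
def wrap_non_chunked(encapsulated: bytes) -> bytes:
    """Sender: prefix the existing encapsulated message with one zero byte."""
    return b"\x00" + encapsulated

def unwrap_non_chunked(payload: bytes) -> bytes:
    """Receiver that doesn't support chunking: read the zero byte off, then
    hand the rest to exactly the same code that exists today."""
    if not payload or payload[0] != 0:
        raise ValueError("chunked or unrecognized framing; not supported here")
    return payload[1:]
```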
F
Love it. So I guess the other question is, you know: what are the use cases for Oblivious HTTP that currently would use this? Like, who is going to benefit from this byte and the potential complexity of implementing it? Is anyone going to actually use this and test it out and exercise this as we're developing the protocol? Because I have a couple of use cases that I'm currently using this for, and they're not needing this.
D
Right, so there's a lot of use cases that we have that are very small requests, very small responses, and this would be kind of a waste on those things. Now, the question is whether there's anything that you might use this for that has much larger requests or responses.
D
There are some of the things that I've been looking at for the MPC-related submission of, you know, advertising-related things, for instance, that might use larger payloads. It's uncertain.
G
I mean, I know I'm the math guy, but, you know, at some point you can use MASQUE. Like, if you really want to stream, we have another protocol for that that might be a better fit. So at first I was like, "oh yeah, this makes sense", but now that I'm seeing crypto footguns, I'm kind of concerned that solving those might delay the rest of this effort, and unless we have a real use case that's not solved, I'd like to understand more why that one can't be solved by something like MASQUE.
D
Right. So I suspect that the right thing to do here would be to put out a pull request and see what people think about the concrete implementation of it. I suspect that the feedback that I've gotten here will remain unchanged, but since I'm 90% of the way through the work, I'll probably go ahead and ship a PR, circulate it on the list, and see what comes out of that.
H
Ben Schwartz. Just one quick note: you could consider separating this by MIME type, effectively.
D
There are a number of ways to cut this one. Okay, so this is one of the more interesting technical problems that exists here.
D
I think it might have been David Benjamin who sort of reminded us that anti-replay is kind of a thing when you're doing this. When you seal something up in HPKE, there's nothing in it that ensures that the connection is live, and so a malicious proxy that gets its hands on a request can replay that request any number of times, and a server that has no defenses against those sorts of things might just process the request multiple times.
D
So what we have at the moment, and this is going to require a little bit of coordination, perhaps, to address, is we've got some text in the draft that looks at the general problem and surveys the space, and there's a callout to a new draft that I've proposed to the HTTPAPI working group that talks about how you put timestamps in requests and how you deal with the problems in there.
D
This is often the case for things like DNS: even though perhaps maybe they should care in certain circumstances, a lot of DNS servers won't care about replays. And the other option is: you take all the requests you receive and you remember them, and if you see a request that's played out twice, you reject the second one and all subsequent ones. For something like OHTTP, you just look at a slice of the request; the encapsulated key is a good thing to use.
D
In that case, you don't need to remember the entire request in order to recognize that something's coming again; you just need enough of that enc value to ensure that it's unique. Probably 16 bytes would do. So, the problem with this is that, even with as little as 16 bytes per request that you've processed, that's an infinite amount of state if you're a long-running server.
D
So a practice that we've used in a number of other settings is to include some sort of timestamp in the request, and the HTTP Date header field is not a bad choice for this one, even though Date is not used particularly often in requests. And then you only need to remember requests over a short span of time. Anything outside of that span is rejected because it's too old (it's probably a replay) or too far into the future (someone's got a wacky clock), and the requests that you remember can be forgotten as they get too old.
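The windowed anti-replay scheme just outlined could be sketched like this: remember a 16-byte slice of the HPKE enc value for each request seen inside the time window, reject anything outside the window or already seen, and forget entries as they age out. The class name, window size, and method signature are illustrative assumptions, not from the draft.

```python
import time

WINDOW = 60.0  # seconds of skew tolerated in either direction (illustrative)

class ReplayFilter:
    """Windowed anti-replay cache keyed on a slice of the HPKE enc value."""

    def __init__(self):
        self.seen = {}  # truncated enc -> request timestamp

    def check(self, enc, request_time, now=None):
        """Return True if the request is fresh and unseen, else False."""
        now = time.time() if now is None else now
        if abs(now - request_time) > WINDOW:
            # Too old (probably a replay) or too far in the future (wacky clock).
            return False
        key = bytes(enc[:16])  # enough of the enc value to be unique in practice
        if key in self.seen:
            return False  # seen within the window: reject the replay
        self.seen[key] = request_time
        # Forget remembered requests as they age out of the window.
        self.seen = {k: t for k, t in self.seen.items() if now - t <= WINDOW}
        return True
```

This keeps state bounded, as described: a long-running server only ever holds the 16-byte keys for requests inside the window.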
D
That leads to the next problem, which is that client clocks are bad and, in some cases, really, really bad. There's a paper that we cite that talks about just how bad it is. It's not unusual to find clocks that are years off, some of them like centuries into the future, that sort of thing; completely weird stuff. So the solution there, I think, is to just let the client retry.
D
So you have the server send a signal saying that your clock is wacky, and the server response will include a Date header field, conveniently, and the client can just retry using the date that the server has provided. You just turn the value around, in most cases. There's a number of ways in which this can go really, really badly wrong.
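The retry flow just described might look roughly like this on the client side: if the server rejects the request as stale, retry once using the Date the server's response conveniently carries. The function names and the 400-with-Date rejection signal are illustrative stand-ins, not the draft's actual error signal.

```python
from email.utils import parsedate_to_datetime

def send_with_clock_retry(send, request):
    """Retry once using the server-supplied Date if the first attempt is
    rejected for having a bad timestamp. send() is an assumed callable
    returning a response object with .status and .headers."""
    resp = send(request, date=None)  # first attempt uses the local clock
    if resp.status == 400 and "date" in resp.headers:
        # Turn the server's Date around and retry once with it.
        server_date = parsedate_to_datetime(resp.headers["date"])
        resp = send(request, date=server_date)
    return resp
```

Note this is exactly where the linkability concern raised below comes in: the retry is observably tied to the original attempt.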
D
But in general, this works reasonably well. It provides linkability between the original attempt and the retry, but if the retry is essentially the same request tried again, then it's not particularly bad.
D
And
here
there's
not
there's
acne
answers
as
well
in
the
chat.
I
don't
think
that
allows
you
to
do
the
time.
Windowing
thing,
although
it
may
be
possible
in
that
case,
because
the
server
is
the
one
that's
generating
the
noises.
D
Yep, so it's essentially a token. So this is more or less for your information. I think this is probably a reasonable thing here. I think the challenge here is that we're citing a draft that's an individual draft in another working group, so I just wanted to make people aware of that possibility. We could probably break that dependency by folding some of the content in, but we haven't yet had the conversation in HTTPAPI about what to do with that draft.
D
So there's a little uncertainty there. Thankfully, we have some other problems to solve, so this doesn't become urgent.
E
So, first let me just state what I think is the case, which is that this is about replay by the proxy, correct?
E
Right, good, just making sure I'm on the same page as you. I think, I mean, I...
E
I guess I wonder... I mean, I know, as you say, clocks are terrible, but I wonder if the answer is just "don't do that". You know, we're talking about a new protocol definition, and I wonder if maybe the answer is just to say, well, have an accurate clock or it's not going to work. And I guess... I know we talked about, I mean, you alluded to some of these things, but it's just that...
E
...just naively accepting whatever date the server gives you has all kinds of other problems. And, you know, I think, well, I guess there are two flavors of this, right?
E
One is where you don't know... or, you know, you accept it, but only for the purpose of this message, at which point it's basically a nonce. Or the other is where you persist it, at which point now you have a tracking problem, and I'm not sure how to square that. And so it seems like maybe the answer is just, like, fix your clocks. So I don't know, but I think, I guess...
E
I think my bottom line is: either we should not try to correct the clock at all, and just do what you have in this thing, or send a correct clock, or you're going to have to absorb a round trip for nonces. That's probably my take on this. I don't think trying to correct the clock is going to... I don't think this is a great idea.
E
The version of the next slide, effectively, is "try again with a correct clock". But I'm just saying that, if you're not going to store that clock information for very long, then you might as well just try the nonces, and you can skip the clock part. And then the way you think about this is: the clock is a shortcut around the nonces, right? That's the way to think about this.
D
Yeah, I mean, I went through it in the other draft. I think it's probably tractable, but yeah, you essentially... if you have a bad clock, you end up with an extra round trip for every request that you send, which is unpleasant, then.
H
So I looked just briefly at the, I guess, two drafts involved here. Neither of them seems to mention the problem of distinctive-skew linkability. So if you're the only client who's 11 seconds behind, then all of your requests become linkable; you lose the oblivious property. So including a Date header in the inner request is a little scary.
D
Yeah, I did talk about fuzzing, but I should probably have brought that out a little bit more. Maybe ekr's idea is much, much better. Tell me.
D
Yeah, that's... this is really only HTTP at this point, so I don't think we need to make it particularly generic, but it does... I think we do need text on the generic protections that are necessary.
H
Yeah, similar to Tommy's comment a moment ago, I think this is enough of a case that a lot of the uses of Oblivious won't care about, to the point where I don't know if this really belongs in the main Oblivious draft at all. I think this should either be something generalized, or maybe just a quick informational draft of...
H
..."if you care about replay attacks in your Oblivious usage, here's what you do about it", and then the main draft can just have nothing other than a citation of "hey, replay is a concern for it; here's the draft to look at if you do care". But overall, since I think the majority of usage won't care, I don't think this should be polluting the main draft.
D
And the queue is drained. Great, so we'll take that to the list. There's some ideas floating around here that we can probably take up. I'm not convinced that we can ship something like this without some replay protection, but we'll see. So, final issues: a couple of small things. Issues number 58 and 19 are things that I think we can dispense with relatively quickly here. I just wanted to test the sense of the room on two of these things.
D
And there are others, so it doesn't seem to make much sense to make it a grand experiment. But I'd like to hear reasons why.
D
Wouldn't it be great if Meetecho would scroll the chat. All right, so issue 19 is talking about discovery, and there's a number of things that we could talk about in discovery. I think I'd just like to reaffirm that we're not going to do it in this draft.
D
Because I don't know how to do discovery in this document. Because, yeah, I can imagine the SVCB config stuff is going to be enough of a challenge for the... what is it, DDR? So.
B
Hello, hello. Tiru, did you wanna...
B
Do you want to share a screen? Or I think it might be easier to do sharing of pre-loaded slides.
J
Hey, good afternoon, everyone. This is Tiru, and I will be presenting the oblivious proxy feedback draft. We published this draft a few weeks back, and we got some feedback on it, so we worked on that. So I'll share the feedback as well as what this draft is trying to propose.
J
The problem is quite straightforward: because there's an oblivious proxy, which is basically masking all the clients behind it and sending the traffic, there's a good chance that the traffic that's coming from the proxy could exceed the capabilities of the target server, and that would cause it to start rate limiting the traffic from the proxy.
J
This is something that we have seen in various deployments today, with regard to various types of rate limiting that get applied. But this is quite different: because there's a proxy involved, the rate limiting, which is typically applied to clients, would in this case get applied to the proxy. And if the rate limiting gets applied to the proxy, then it would start harming all the clients that are behind the proxy. That's the problem that we are trying to solve with this draft.
J
The proposal is to signal the overload from the server to the oblivious proxy, and the proxy uses this feedback to rate limit any HTTP requests from, like, offending clients or misbehaving clients or botnets which are trying to overwhelm the server. One piece of feedback that we got was that there's already a draft in the HTTPAPI working group with regard to the use of RateLimit headers, which basically is publishing the quotas or service limits to clients.
J
So what we did was introduce a new proxy header. Earlier, in the -00 version, we had just a "Feedback" header, and the feedback was not to pick a very generic header name, so we changed it to an OHAI proxy feedback header, which provides feedback from either the request resource or the target resource to the proxy. And the whole idea is that the proxy would remove the header before sending the HTTP response to the client.
J
A simple workflow would be: the client sends in a crafted packet to target the server, and the target resource identifies that it's a malicious request and sends a 400 response, along with the OHAI proxy feedback, back to the request resource, which in turn forwards it to the proxy. The request resource basically takes the feedback out of the 400 response and puts it in the 200 response, only for the consumption of the proxy, and sends along the encapsulated 400 response.
J
So what we did was: all these parameters that we have would be part of this proxy feedback header. We have picked the various fields that are there in the RateLimit headers to be part of this header, which would be sent back to the proxy for enforcing these rate limits, either for the proxy itself to enforce rate limits or, if it's an offending client which is trying to attack the server...
J
Then
the
target
server
can
ask
to
the
proxy
to
rate
limit
this
offending
client
and
and
the
server.
Obviously,
in
this
case
would
not
have
any
information
about
the
client.
J
So, in this example, we have just targeted the case of the target resource getting an offending packet, for example, assuming it's getting an abnormal header. The attack could be even on the request resource: that it's getting, let's say, a garbled encapsulated request, and it tries to decrypt it and it just fails. So the attack could happen at either. The draft currently talks about attacks with encapsulated malformed requests, or a higher rate of requests coming and hitting the request resource, or hitting the target resource with crafted packets.
E
Hi, Eric Rescorla. I guess I'm a little unclear on how this is supposed to work. So, you know, you give the example of an undecryptable message, but it's perfectly easy to make a decryptable, perfectly correct message, right? So I guess what I'm trying to understand... I mean, there's one very specific set of edge cases where somehow there's a particular kind of message...
E
I
send
that,
like
has
abuse
like,
has
abusively
high
single
message,
consumption
of
resources
on
the
server
like
I
do
something
that
requires
you
like
making
a
normal
secret,
powerful,
big,
computation
right
and
then
no
legitimate
client
would
ask
for,
but
the
way
that
dots
and
act
like
frequently
work
is,
I
send
legitimate
appearing
requests
and
and
they're
indistinguishable
from
illegitimate
requests
and
their
problems,
the
volume
of
those
requests
and
so
given.
So
I
don't
understand
like
which
give
it
if
you're
getting.
J
Yeah,
there
are
two
types
of
attacks
when
I
was
referring
to
either
it
could
be
a
simple
http
flood
or
it
could
be
a
flash
cloud
scenario,
and
it's
quite
undiscontinual,
in
which
case
the
target
resource
is
getting
over
and
branded
want
to
handle
those.
The
other
one
is
basically
an
attack
where
it
could
be
a
slow
loris
or
it
could
be
a
malformed
request
or
a
garbled
request,
kind
of
an
attack.
J
The malicious request case is not for the legitimate traffic. But if the target resource is getting really overwhelmed with, let's say, millions of requests coming from the proxy, and it wants to rate limit, then it can basically signal that. The rate limit is not just specific to a client; it could be for the proxy as well.
E
I guess I'm still not... I hear what you're saying, but I don't see how this works, because now you're just rate limiting everybody. I mean, for this to work properly, you have to discriminate between the valid traffic and the invalid traffic, and, you know, if you just don't like how much bandwidth you're getting, slow down your TCP stack or something. I mean, it just seems like, you know... I guess it seems...
E
I don't understand how that works. What I'm saying is that the nature of the proxy is to transparently pass the data coming in to the data coming out, and so, in order to... it will naturally attempt to share between those clients, right? And so, if you want it to do less, you've got to somehow tell it "give these clients less and these other clients more". I don't understand how you're saying that.
L
We have a queue here. Tommy? Hey.
F
All right, thank you, Tiru. So, actually, to this point that you have on the slide right here about aligning with the rate limiting: it doesn't really seem to be quite aligned. I see that you're using some of the same terms as, like, the values of your HTTP fields, but in the RateLimit document those are the actual field names, the header names. So I would really suggest: just use the RateLimit headers. I don't see really what we're getting from these additional ones. And overall...
J
The reason why we didn't pick the fields as-is, as what is there in the RateLimit header, was that we didn't want this signal to go back to the client. If it goes back to the client, then probably the client would know that, hey, its attack strategy is being identified and mitigated, so we wanted the signal only to reach the proxy.
F
I guess I would say, overall for OHTTP, if it's not already said: the proxy should not just be relaying arbitrary headers between the target and the client, or the client and the target. If that's not said, it should be said. I wouldn't expect that the proxy would just say, oh yeah, the target can just include whatever random data it wants, and then we'll send it, kind of as cleartext, not inside the actual encrypted payload.
F
Yeah. And then, overall, it seems like what we should have here is just normal rate limiting between the target and the proxy, saying, "hey, proxy, you're sending me too much," and then the proxy should just figure out on its own what it needs to do about that — and it can identify its own clients that are overwhelming it, right? But that's it. We should just leave it very, very simple: this is just two layers of normal RateLimit headers, and we don't need another header field here, I think. Okay, so — sure.
I
Quick point of order: did you have more presentation content you wanted to get through? Because we have a queue here — we can put that off. So... you're done?
J
Sure, just one more slide. So, basically, I wanted to have a discussion on whether this header needs to be conveyed only to trusted proxies, and whether an attacking proxy, which is colluding with attacking clients, would leak this feedback to change the attack strategy. So is this something that needs to be conveyed only to proxies which authenticate back to the target and resource servers, or could this be used with any proxy? That was one of the questions I wanted to discuss with the working group.
M
From Piconets — just wanted to say, you know, I have only partially reviewed the draft, but at least based on Tiru's description — if we can go back a couple of slides — I am still unconvinced whether this rate-limit-based approach is the right approach to take to try and mitigate these challenges.
M
So I see clearly that there are two types of challenges here, like Eric mentioned. One type is where there is obviously an individual request that is likely to be malicious in nature, either because of the scope of that individual request or how it is structured — it may contain junk, or it may contain the kind of content which is designed to overload the server. Okay, so there the challenge is the fact that it's a malicious or, you know, overloading-style request, which you want to restrict.
M
The second condition that we are looking at is, like Eric mentioned, a large amount of legitimate-looking requests, but just at a scale designed to disrupt the target server. Okay. Now, in scenario one, where it's an individual request that is malicious by the nature of its content — will rate limiting help?

M
So, in the first case, rate limiting is obviously the wrong approach to take to try and, you know, mitigate or block that particular traffic.
J
Hey Rajiv, I'll answer the first question. I think, if you look at the RateLimit headers draft, it allows a rate limit option where you set it to zero. Basically, that means to say that the server is not willing to accept any further requests from the client. So the rate—
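A rate limit of zero as a refusal signal can be sketched concretely. This is only an illustration of the idea being discussed — the header names below follow the general shape of the RateLimit headers draft, but the exact fields and semantics are assumptions, not what either draft specifies:

```python
# Minimal sketch (assumed header names, not normative): a target that wants a
# sender to stop sending sets its advertised limit to zero.
def build_rate_limit_headers(limit, reset_seconds):
    """Build RateLimit-style response headers; limit == 0 means 'send nothing more'."""
    return {
        "RateLimit-Limit": str(limit),
        "RateLimit-Reset": str(reset_seconds),
    }

def sender_should_pause(headers):
    # A zero limit tells the sender to stop until the reset window elapses.
    return int(headers.get("RateLimit-Limit", "1")) == 0

hdrs = build_rate_limit_headers(0, 60)
print(sender_should_pause(hdrs))  # True: the server refuses further requests
```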
J
No,
so
if
you
see
the
headers
that
we
have
defined,
we
have
defined
multiple
headers.
One
is
rate
limit,
p
limit,
which
is
for
the
proxy,
and
one
is
rate
limit
rate
limit,
hyphen
limit,
which.
M
Is
for
the
client
basis,
which
then
brings
into
the
picture
the
fact
that
now
the
now
the
dependence
is
on
the
target
server?
Somehow,
knowing
that
this
is
coming
through
a
proxy
infrastructure,
and
I
need
to
have
separate
headers
for
that
or
you
are
talking
of
putting
intelligence
at
the
request
resource
level
to
identify
that
when
I
get
a
rate
limit
zero.
Okay,
I
am
now
supposed
to
be
setting
the
rate
limit,
p
limit
and
sending
that
onwards
to
the
proxy,
but
in
in
most
cases
the
expectation
from
the
target
resource
will
be
hey.
M
I
am
talking
to
the
requester
resource.
That's
that's
all
I
know.
Okay,
I
am
the
whole
point
of
being
oblivious
is
that
the
target
resource
should
not
have
to
know
that
it's
coming
through
a
proxy
infrastructure.
Anonymization
intermediation
is
all
a
black
box
to
it.
All
it
knows,
is
I've
received
a
malicious
request
from
request
resource,
so
I'm
sending
a
rate
limit
zero
to
that
request,
resource,
which
means
anyone
any
other
traffic
coming
through.
That
request.
Resource
has
effectively
now
been
dosed.
J
To have that context, right: if you see what we have proposed, there has to be some communication happening between the request resource and the target to identify whether this request actually originated via oblivious or via direct communication, so that the target can actually set the right kind of headers there.
M
Yeah, so this kind of goes against the text, at least in the first section of the draft, where it clearly says that the whole purpose of oblivious — one of the use cases that should be supported — is that the fact that traffic is coming through an oblivious proxy channel is masked and not visible to the target resource. The target resource should not know, nor care, that this is traffic coming in from an oblivious relay. Yeah.
M
Okay, so, on the one hand, then, you're saying that, yes, you know, you will have some sort of — probably header-level — communication between the requester and the target, saying, "hey, oh, by the way, this isn't a request directly from me; it's for a downstream client of mine, but it's anonymized."
M
So
therefore,
I'm
not
sharing
that
with
you
so
and
and
then
you're
talking
of
the
target
server,
then
having
intelligence
in
it
to
understand
this
communication
from
the
requester
and
know
that
if
I
see
a
malicious
request
of
type
one,
I
need
to
send
back
headers,
specifically
allowing
the
requester
resource
to
continue
to
come
to
me,
but
also
to
communicate
downstream
to
its
clients.
Saying
that
you,
you
specific
client
are
no
longer
allowed
to
come
right.
M
Okay, can I very quickly address my second point, which is the actual volumetric attack that rate limiting was designed to counter? I'll try to keep it down to 30 seconds. So, again, in the case of a rate limit, the same limitation applies: the target resource has to be aware that this is volumetric traffic coming through a proxy infrastructure of some kind, so that you do not land up in a scenario where the rate limit basically DoSes your proxy infrastructure itself. Correct? And—
J
We have been implementing rate limits for DDoS attacks all the while, and we could even program millions of such IP addresses to be blocked or rate limited. So I can share more details on how it could be done.
H
Yeah — sort of a quick mental exercise, to decide if this is a worthy thing to pursue or not. If we ignore your specific proposal and just focus on what's the simplest thing to solve the issue: I limit the issue down to saying that this specific request had something bad in it, and the proxy needs to know that, to either ban or limit or act on that specific client if they have more requests of that sort. I think the simplest thing we can do is—
H
I mean, the server sends a bit to the proxy saying something was bad about this request — that's one bit going from the server to the proxy — and this immediately suggests something: hey, this sounds very, very similar to that whole shadow-banning conversation we had 45 minutes or so ago. So, from that perspective, maybe we should essentially shelve this until the debate is settled on the proxy-banning thing, and if it's decided that, yes, sending one bit or similar information from the proxy to the server makes sense — okay, then we should come back.
H
Hi — Ben Schwartz. So, I think this issue is real; I think it's worth solving. I understand the concern that you raised about using the unmodified RateLimit headers directly, and I think the solution is to use the unmodified RateLimit headers, and work with that draft and the HTTP draft to make sure that they actually work in the way that you want them to work here.
J
Yeah, the whole challenge was: how do you make sure that the proxy strips it? Also, if there is a way the target resource can put these headers in the HTTP response — how does it get percolated back in a way that it's not being sent to the client? We can discuss that.
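One way to picture the stripping question: the proxy consumes any proxy-directed rate-limit fields itself and forwards only the rest to the client. This is a sketch under the assumption that such signals travel as ordinary response headers outside the encrypted payload; the field names are illustrative, not from either draft:

```python
# Sketch: a relay that consumes proxy-directed rate-limit signals and never
# forwards them to the client. Field names here are illustrative assumptions.
PROXY_ONLY_FIELDS = {"ratelimit-p-limit", "ratelimit-limit", "ratelimit-reset"}

def split_response_headers(headers):
    """Return (signals the proxy acts on, headers safe to forward to the client)."""
    consumed = {}
    forwarded = {}
    for name, value in headers.items():
        if name.lower() in PROXY_ONLY_FIELDS:
            consumed[name] = value
        else:
            forwarded[name] = value
    return consumed, forwarded

consumed, forwarded = split_response_headers(
    {"Content-Type": "message/ohttp-res", "RateLimit-P-Limit": "0"})
print(sorted(forwarded))  # ['Content-Type']: the signal never reaches the client
```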
D
So, Eric had a fairly good idea that I think probably works in this case. It requires communication between the request resource and the proxy, not the target resource — and I'll get to that. But, just like in the issue 66 discussion, we were talking about sending a single bit that says "this client is suspicious" in the direction of the proxy, from the request resource — not the target resource — so that the proxy can act accordingly.
D
It assumes that the proxy resource understands this message — that the proxy resource is not generic and doesn't blindly pass header fields along — but I think both of those things are quite reasonable in this setting. For the target resource communicating this state: there are a lot of cases where you'll have the request resource and the target resource collocated, in which case none of this coordination problem really needs to occur over the wire.
D
In the other case, I think that the rate limit stuff doesn't work, because of the way it depends on understanding who the client is — but there may be some ways in which we can signal those sorts of things. I think there are two different solutions there, though, and the more important one is the signal that is private between the request resource and the proxy resource.
J
So, Martin, you're suggesting that for malicious requests we just say that it's a malicious one, rather than providing any rate limit parameters — but for the proxy we continue to have the rate limit parameters being pushed onto the proxy?
D
Yeah. I think in a lot of cases, though, as ekr pointed out, the individual request won't be problematic. You will need some other — perhaps a separate — type of signal that says: hey, by the way, the number of requests, or the rate of requests, that I'm receiving is too high.
J
Okay — why do you think that the inline one is bad, and we need an out-of-band, new communication channel?
D
Sorry, I'm going off mute when you ask me a question. For exactly the reason ekr stated, which is that individual requests in a denial-of-service attack will generally be genuine, or appear to be genuine. So, from the perspective of the server that's operating here, it will be unable to distinguish between the requests that are coming from bad actors and good actors, and all it needs is to try to push the responsibility for filtering those down further out, toward the edge.
M
One additional point that just came to me here was the fact that, in the case of most target resource servers, while they may implement things like sending out RateLimit headers and stuff like that, many times when there is detection of any sort of malicious traffic there may also be other security measures that kick in — like an application-level firewall that starts blacklisting, stuff like that.
M
So, having some text in the draft specifically saying that, hey, you know, any traffic that's going towards a target resource from a request resource must contain some header or some indication that it's coming from a proxy — which then allows the target resource to say: hey, yes, I'm seeing a malicious request that would, under normal conditions, lead to an immediate ban on that IP, but because it's coming from a proxy infrastructure of some kind—
M
I
don't
do
that
ban
right
now,
because
I
may
end
up
impacting
legitimate
users,
which
again
kind
of
to
me
it's
a
bit
of
a
disconnect
between
the
text
of
what
the
draft
is
intending
saying
that
you
don't
want
a
mechanism
where
target
resources
have
to
be
necessarily
aware
of
the
oblivious
framework
in
order
to
be
able
to
support
the
traffic.
So
I
just
thought
this
was
also
something
since
we're
discussing
security
implications.
M
That
should
be
part
of
the
draft,
and
you
know
it's
a
problem
that
needs
to
be
addressed,
even
if
we
don't
have
a
solution
for
it
at
least
say
that
this
is
a
potential
problem
that
we
need
to
look
at.
B
Okay,
great,
I
think
we've
drained
the
queue
yeah,
so
it
seems
like
my
sense-
is
that
the
is
that
there
is
a
need
for
something
like
this,
but
maybe
that
should
go
in
the
http
api
range.
Limiting
draft
in
a
section
over
there
or
sounds
like
like
more
fundamental
work
is
needed
in
the
contract.
J
B
Great
tommy
you
up
next.
F
And I'm going to forgo sharing video for the moment — apologies. All right. So, we recently posted a draft about how to do discovery of oblivious target configurations via DNS, using service binding (SVCB) records, and this is what Martin was referring to earlier as one of the directions for discovery.
F
This allows us to integrate with work that's going on in the ADD working group to discover encrypted resolvers — either via a DNS query to a well-known resolver.arpa name, or as something that can come in DHCP or RA messages — and this also covers the similar use for the previous version, ODoH, which is the experimental version that was the precursor to a lot of the oblivious HTTP work.
F
The proxy discovery case could be added, but I believe it's something that's quite different. As we discussed earlier, the relationship between a client and a proxy — and that trust about what it's doing, what it's potentially adding or not adding — really matters, and I don't see as much of a use case for that being arbitrarily discoverable in a mechanism like DNS. Maybe there could be a registry of known and trusted proxies, but what we're talking about here is discovering targets.
F
So
as
an
example,
the
we
have
two
cases
here.
We
can
have
oblivious
dns
being
advertised
for
essentially
the
discovery
of
encrypted
resolvers
case
ddr.
So
we
would
have
a
record
for
dns.resolver.arpa.
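Roughly, the shape of such an answer can be sketched as data. This is only an illustration: the SvcParam names below ("dohpath", "odohconfig") are assumptions for the sake of the example — the draft under discussion is precisely about getting the real service binding parameters allocated:

```python
# Illustrative sketch only: what a DDR-style SVCB answer advertising an
# oblivious DNS target might carry. SvcParam names here are assumed, not final.
svcb_answer = {
    "name": "_dns.resolver.arpa",
    "priority": 1,
    "target": "doh.example.net",          # hypothetical resolver host
    "params": {
        "alpn": ["h2"],
        "dohpath": "/dns-query{?dns}",    # RFC 6570 URI template for DoH
        "odohconfig": "<base64 key config>",  # hypothetical oblivious key config
    },
}

# A client would use "target" + "dohpath" to form the resolver URI, and the key
# config to encapsulate its queries.
print(svcb_answer["target"])  # doh.example.net
```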
F
I have a client that has an oblivious proxy that it goes through — that it trusts, that it has a strong relationship with — and it currently would be, by default, talking to one or more oblivious DNS targets that it knows about. This is how iCloud Private Relay works today. And, in this scenario, I can imagine that I join a network and I connect to the ISP DNS resolver, just for my normal maintenance traffic.
F
If
that
target
is
known
and
trusted
by
the
proxy.
So
there
is
a
transitive
trust
relationship
here
in
which
the
client
has
to
trust
the
proxy
in
the
proxy
or
some
other
mechanism
that
the
client
can
check
against
validates
that
this
is
not
a
unique
per-client
target,
but
is
something
that
is
more
widely
known
and
registered.
F
The client needs a way to know that this target is supported by a proxy, and what path to use, because oblivious HTTP does require that there's a mapping between the request to a proxy and what the target is. I'll point out — and there's actually an issue on the main OHTTP draft that ekr raised about the flexibility of these mappings — that, while OHTTP does require that you have a one-to-one mapping, that one-to-one mapping could be something that is also expressed as a URI template.
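The one-to-one mapping expressed as a URI template might look like the following sketch. The template string and host names are made up for illustration, and full RFC 6570 expansion is simplified to a plain substitution:

```python
# Sketch: a client-side mapping from a desired target to the proxy request URI,
# using a URI-template-shaped pattern. Names are illustrative assumptions.
PROXY_TEMPLATE = "https://proxy.example.net/ohttp/{targethost}"

def proxy_request_uri(target_host):
    """Expand the template for one target; each target still gets its own
    distinct proxy URI, so the proxy->target mapping stays one-to-one."""
    return PROXY_TEMPLATE.replace("{targethost}", target_host)

print(proxy_request_uri("doh.example.com"))
# https://proxy.example.net/ohttp/doh.example.com
```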
F
So the question is, you know: do we want to adopt something in this direction? Certainly, there could be more refinement and changes, but do we want to be able to communicate these types of configs in service binding records?
F
If
so,
it
seems
like
it
should
be
in
scope
for
the
charter
as
one
of
these
secondary
items
and
as
part
of
that
we'd
like
to
get
these
service
binding
parameters
allocated,
because
we
are
planning
on
using
them
for
our
olivia's
dns
deployments
and
that's
all
I
have.
E
Hi, Tommy — thanks for raising this. I'm a little... I understand what you're saying, and I think this is, like, a reasonable thing to think about. I'm trying to think through some of the privacy implications, and it's not, like, entirely clear to me that the privacy implications are as straightforward as one might imagine. So let me give you a concrete example.
E
Suppose I have two endpoints — I have two, you know, DNS endpoints, A and B, right? And they're otherwise identical, and I'm the ISP, right, and I've got these two endpoints.
E
A
and
b
are
identical
and
so
to
everybody,
but
you
I
give
a
and
to
you
I
give
b
and
yes
and
that
they're
specified
by
by
path
or
whatever
right,
and
so
yes
now
and
so
now
as
another
proxy
like
faithfully
does
this
it
hides
everything,
but
it
doesn't
matter
because
I
okay,
because
I've
I've
now
I've
not
linked
up
the
dns
publication
in
that,
and
so
I'm
like
I'm
like
I'm
just
really
worried
that
like
actually
this
will
be
quite
easy
to
attack
and
I
can
imagine
like
I
can
imagine
a
bunch
of
other
like
variants
of
this.
E
F
And I think that's — so, the approach, you know, we are thinking of here is, again, relying on this relationship between the client and the proxy, and that there is trust there. The proxy is in a place where it can ensure that the volume of traffic — of, essentially, different clients to a given target path, as well as, like, the key IDs of the configurations — is sufficiently large. The proxy can definitely recognize, or clients could all report to some other entity for transparency—
F
You
know
what
are
the
paths
and
stuff
they're
seeing
so
you
could
detect.
Oh,
this
is
a
unique
client
and
they're
the
only
ones
who
get
this
and
that
that's
part
of
why
I
think
there
is
a
there
is,
I
think,
a
case
where
this
discovery
makes
sense
and
is
useful,
but
it's
it
has
a
lot
of
constraints
on
it
and
it
doesn't
make
sense.
F
Oh
sorry,
I
I
don't
know
what
to
do
about
that:
you're,
better
now,
okay,
sorry,
anyway,
yeah!
I
I
think
it
depends
a
lot
on
how
the
client
can
trust
the
proxy
or
work
with
other
clients
to
detect
unique
configurations,
and
I
think
we
have
a
call
out
to
some
of
the
work
that
chris
and
mt
we're
doing
for
key
consistency.
E
Right,
well,
I
I
I
think
I
guess
so
I
I
think.
Well,
I
guess
I
perhaps
feel
differently
about
this
in
the
sense
that,
like
I
feel
like,
we
kind
of
we
kind
of
like
hand
weight
this
away
when
we,
when
we
started
privacy
pass
and
now
we're
like
having
to
fight
it,
because
we
haven't
waved
it
away
and
we
didn't
have
a
solution.
So
I
guess,
like
I'm,
a
little
reluctant
to
take
that
bet
again
without
having
like
some
a
more
clearly
worked.
E
Example
of
how
to
make
this
one
how
to
actually
make
this
work.
But
but
I
agree
it's
a
problem
worth
trying
to
solve.
H
Hey
ben
schwartz,
so
maybe
some
people
didn't
see
the
the
comments
on
the
the
ohio
mailing
list
about
this,
but
so
I
read
through
this-
and
I
conclude
this
is
this
arrangement
is
first
of
all,
not
secure
like,
above
all,
this
arrangement
essentially
hands
over
root
ca
powers
to
the
proxy
server,
who
can
now
arbitrarily
invisibly
impersonate
any
any
website
on
the
internet.
H
It
doesn't
even
have
to
be
a
website
that
actually
supports
oblivious
http,
so
I
am
seriously
concerned.
I
do
not
support
the
draft
as
as
written.
I
do
think
that
there's
I
I
do.
F
Disagree
about
that
model.
I
think
it'd
be
good
to
explore
that
a
bit
more
right
sure.
So
there
is
a
trust
relationship
between
the
client
and
the
proxy
already,
and
I
don't
think
that
it's
going
to
be
able
to
just
grab
tell
the
client
whatever
keys
it
wants.
That's
not
where
the
keys
are
coming
from.
H
Right,
the
keys
are
coming
for
over
an
insecure
channel
they're
coming
through
the
dns,
which
is
assumed
insecure
unless
stated
otherwise.
So
in
our
in
all
our
threat
modeling
for
the
web,
we
always
assume
that
a
dns
attacker
could
be
swapping
in
arbitrary
things.
We
don't
trust,
you
know
we
don't
rely
on
the
ip
addresses
coming
back
from
dns
or
anything
else
to
be
secure
in
order
to
in
order
to
authenticate
the
domains
that
we're
visiting
this.
H
This
would
change
that.
This
would
say
that
if
the
proxy
can
poison
a
dns
cache,
if
the,
if
the
entity
that
operates
the
proxy
can
also
execute
a
dns
cache,
poisoning,
attack
or
just
operates
the
dns
resolver,
which
seems
entirely
reasonable
or
happens
to
be
near
the
path
between
the
user
and
an
insecure
dns
server,
which
also
seems
highly
plausible
for
a
lot
of
these
proxy
deployments,
then
the
proxy
can
swap
out
the
or
or
even
inject
a
synthetic
ohttp
config
for
whatever
domain
the
the
user
is
attempting
to
resolve.
H
—and then act as the origin. So, to me, that makes this a really serious problem. I also have some other concerns with the design. I think the use of a path here is basically wrong. The dohpath parameter in the DoH SVCB mapping is not a good thing — it's there as a compatibility hack, to be able to work with pre-existing DNS-over-HTTPS servers, but the DNS SVCB mapping has multiple long paragraphs of caveats trying to explain the dangers that this creates, because that path, again, is attacker-controlled.
H
So
so
you
can't
assume
anything
about
it.
There's
some
problem,
like
pat
multiple
paths
on
the
same
server,
can
interfere.
So
I
think
that
we
shouldn't
have
a
path
here
right.
There
should
only
be
that
should
somehow
be
a
fixed
value
effectively.
You
know
through
the
dot
well
known
mechanism,
I
mean.
H
I
understand,
but
we
don't
need
it
to
be
flexible,
given
that
there
is
no
install
base
that
we're
trying
to
maintain
compatibility
with.
We
can
set
that
we
can
say
in
the
case
where
you
are
trying
to
upgrade
a
a
standard,
http
connection
in
this,
in
the
case
where
you're
trying
to
bootstrap
ohttp
off
of
a
dns
name,
where
all
you
have
is
the
dns
name,
then
we
can
say:
there's
a
fixed
default
path
for
that
or
there's
a
fixed
inbound
mechanism
for
learning
that
path
by
again
querying
to
something
in
dot.
Well
known.
H
I
do
think
that
there's
a
version
of
this
that
can
be
made
secure
and
and
workable,
and
it's
actually
much
simpler,
and
that
is
to
just
set
a
flag
here.
That
says
I
do
http
or
I
require
ohtttp
and
then
let
the
client
actually
contact
that
origin
directly
and
you
know,
through
an
authenticated
channel,
learn
the
ohtttp
config,
learn
the
preferred
path
and
then
use
them.
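That suggested flow can be sketched briefly. Everything here is an assumption for illustration — in particular the ".well-known/ohttp-configs" path is a made-up placeholder, not a registered well-known URI; the point is only that DNS carries a boolean hint while the config itself comes from the origin over authenticated HTTPS:

```python
# Sketch of the suggested flow (all names hypothetical): DNS only carries an
# "ohttp" flag; the client then fetches the key config from the origin itself
# over an authenticated (TLS) channel, e.g. at a fixed well-known path.
WELL_KNOWN = "/.well-known/ohttp-configs"  # illustrative, not a registered path

def config_url(origin, svcb_has_ohttp_flag):
    """Where the client would fetch the OHTTP key config, or None if the
    origin did not advertise OHTTP support in its SVCB record."""
    if not svcb_has_ohttp_flag:
        return None
    return f"https://{origin}{WELL_KNOWN}"

print(config_url("doh.example.net", True))
# https://doh.example.net/.well-known/ohttp-configs
```

Because the config is fetched from the origin over TLS, a DNS attacker can at most deny service (by clearing the flag); it cannot substitute its own key config.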
N
Ted Hardie speaking. Can you go back to slides seven and eight, please? Start on seven.
N
One is: if you say there's an allow list, you either can pay a penalty by checking each one of the proxies that you have access to, to find out if any of them are on the allow list — and that's perhaps not a very serious latency penalty, but it definitely would occur if the only way to discover whether it's on the allow list is by experimentation — or you can publish the allow list in some way. And I think, if you publish the allow list, you actually have changed the scope of what you were talking about, from—
N
How
do
I
find
a
a
target
to
also
how
do
I
find
a
proxy,
because
what
you're,
in
effect
doing
is
using
the
the
information
from
the
the
information
carrying
that
the
target
list
to
also
carry
the
the
permitted
proxy
list.
So
I
think,
if,
if
you're
gonna
go
down
that
path
and
use
that
allow
list
as
something
that's
published,
you're
you're
in
effect,
whether
that's
generic
or
not,
you
are
providing
discovery
for
for
proxies.
N
I
don't
necessarily
think
that's
a
bad
thing,
as
was
pointed
out
on
the
list.
This
is
this
is
sort
of
a
twinned
problem.
Here,
a
proxy
is
useless
without
targets
and
the
target
is
useless
without
proxies
they
don't
without
knowing
both
of
them.
You,
you
actually
can't
get
this
oblivious
service,
but
I
think
if
you
are
going
to
do
that,
you
have
to
kind
of
rethink
architecturally
that
that's
the
scope
now
and
is
this
the
best
way
to
do
it,
and
maybe
maybe
that
does
need
some
more
thought.
F
Right — thank you. And just to respond quickly to that: so, yes, you certainly could have an approach where, along with the target information and target configuration, you get a list of associated or trusted proxies. The other way that this can work is, again: the client has to know, with its proxy, what the mapping of paths is. So, if the client does have multiple proxies, it essentially needs to be able to ask its proxy.
N
So I agree with that, thank you. And there are other ways you could do it as well, right? The DNS resolver target could tell you what its reachability limits are — basically give you a reachability limit that's only within the ISP, or only within some region, etc. And so you could do it in a bunch of different ways, but I think it may be important that you work out, in a little bit more detail, which one you expect to be the standard.
G
David Schinazi. Hey, Tommy. So, overall, I think there's definitely value in having a way for a target to be able to say: hey, if you want to talk to me, here's a way of doing it that improves your privacy. So I like that — I think there's a problem worth solving here. However — and I think, you know, some of the deployment concerns are somewhat orthogonal, some of them, not all of them — the point that Ben made, where, because of the way we have ODoH — sorry, oblivious HTTP—
G
—today, you don't have a step where you verify ownership of the private key for the TLS certificate, and that breaks kind of the security model of a lot of things. So I would say: totally agree with the problem to be solved here; I think we should refine the solution to make sure that that gap is fixed, because otherwise this sounds like a giant footgun. Thanks.
D
So, I think the large open problem that you have here is that there's a target, and you've described a way of discovering the target — which I think generally works — but the discovery of the proxy, and finding the intersection of the set of proxies that the client will trust and the set of proxies that the target will trust, is challenging, because there's no discovery mechanism for the proxy pieces.
D
The information the client has is insufficient, based on your current design. I think we need a little bit more about that, to Ted's point earlier — so I think we probably need something more. I found the draft to be quite confusing on that front, because, thinking as a client, I saw a URL, but that's not the URL that I can use.
D
I
need
two
of
them
in
order
to
make
a
request
with
the
oblivious
http,
and
you
only
gave
me
one
of
them,
so
I
think
there's
probably
around
before
we
would
go
there.
F
So, a lot of this is — and I'm happy to clarify it — assuming that you still have that proprietary mechanism to communicate what the config of the proxy is to the client, and that, essentially, the client and proxy can negotiate what they support and what the paths are, etc., and learn whether or not this target is supported. And a—
F
—generic mechanism to have that relationship between a client and a proxy, I think, is a larger scope, and is less clear to me.
D
Yeah,
so
probably
the
other
idea
that
I
had
here
was,
if
you
get
a
url
for
the
target
resource,
there's
a
bunch
of
things
you
can
do
by
talking
to
that
target
resource
in
terms
of
learning,
what
proxies
it
trusts
and
learning
what
it's
its
config
is.
That,
of
course,
creates
some
interesting
issues
when
it
comes
to
consistency,
but
those
are
the
issues
that
I
think
we
need
to
be.
We
need
to
be
thinking
through
here.
O
So — yeah, I take the point of others about, yeah, we need to explore all the implications of it, but I think it's a good idea, and I'd like to see it adopted.
H
Yeah — very similar to the last comment, actually. I think this is a very good problem; I like the overall solution; I like the overall area to solve. I think this is something we should be adopting. I think the only question is: do we solve the very big security issues — and I agree with almost everything said in those areas already — before we adopt it, or do we adopt it and write the solutions for them? And I don't think they're too hard of security issues to solve.
H
We've had a lot of talk in the chat about, hey, signing all of this; the draft discussed a little bit linking the consistency draft, or stuff about the other one that ekr was discussing earlier. So I think, if we beef up the handling for the security implications, this is a very good draft to have — so, I mean, I'd say: let's adopt it and do that beefing up.
F
Great. And what I would say is: besides stuff on the list, there is a GitHub for this — so, Eric and Ben and ekr, if people have ideas, let's work on them and try to revise this. We can turn out another version of the draft quickly, and if that's something where people like the solution more, then that would be a good adoption target.
H
If there's time, I'll say: I do think that, to step back one more step, we need a little bit more of an integrated architecture approach here. Figuring out what makes sense here really does depend on how we decide to handle key consistency.
H
For example, it depends a little bit, as some other folks are saying, on how users are learning the initial URL that got them to this query. So I think I would like to see us step back a little bit, and see if we can put together, you know, a set of puzzle pieces that really fit together into a clean overall solution.
I
A jigsaw puzzle, one might say. Yeah — so, I've been chatting with my co-chair in the background here. It seems like where we're ending up is that there's, you know, some energy behind this being a problem that is useful to solve, but it sounds like the draft is not yet in a state where it's kind of adoptable, nor is there enough energy around the draft in particular.
I
So
I
I
think
the
investors
there
is
to
kind
of
keep
refining
the
problem
statement
and
especially
around
the
security
requirements,
and
I
think
we
can
look
at
you
know
whether
a
future
draft
would
align
with
that
better
all
right.
B
Well, the shadow-ban case, I think, was the specific one you were thinking about, but maybe for some of the other ones as well. So, if folks would be interested in that, I mean — please let us know. Yeah, go ahead, Martin.
D
I would say weeks. The anti-replay stuff is maybe less settled, but I suspect we can make some good progress on that on the list. The discovery issue does seem to be something that is worth spending a bit of in-person, high-bandwidth time on.
I
We maybe don't need to schedule a virtual interim right now, but if we get to a point where we're ready to have some deeper dives on those latter two topics, we chairs would be happy to do it.

B
Yep, that sounds right. Francesca — anything you want to say?