From YouTube: IETF110-DPRIVE-20210309-1200
Description: DPRIVE meeting session at IETF 110, 2021/03/09 12:00
https://datatracker.ietf.org/meeting/110/proceedings/
A
Your glorious host for the day is myself, Brian, with my sidekick Tim. The first thing we want to do is a quick check on all the logistics. Eric is our continuing Area Director, so he gets to keep an eye on us for another couple of years, probably much to his chagrin. The Jabber room, if you're not actually following along in Meetecho, is dprive@jabber.ietf.org.
A
The only thing I would ask for in the way of a Jabber scribe today: if there are people who are following along just in Jabber, without audio, and somebody is willing to channel any mic comments they want to make, that would be appreciated. Tim will try to keep an eye on the Jabber room just to double-check, but if there are people willing to do that relay, we would appreciate it. The minutes are going to be kept on the CodiMD site; the link is there.
A
Tim is going to keep track of the major action items. People are welcome to follow along in the minutes and correct whatever typing errors he makes, because I'm not sure he has gotten his requisite level of caffeine for the day yet.
A
Before we get too far along, the inglorious Note Well: just remember that these are the rules you agreed to when you signed up for the meeting and are participating in the IETF. I will state that some of the BCPs at the bottom there are probably the best material you could read if you are having problems with insomnia. On the agenda, we'll do a quick check to see if there are any updates we want to make beforehand.
A
Blue sheets will be collected automatically via Meetecho. We'll start with some updates of older current work, then move into some of the current working group business and new working group business. You'll notice that we did not keep them completely separate, because we wanted to intermix a couple of presentations to keep them adjacent to one another for coherency's sake.
A
Information on the materials that we're working on is in the Datatracker. If you have the materials downloaded, you can simply click on the link and it'll take you to the magical location for DPRIVE. There is one thing that I'll bring up as part of keeping track of what the working group is doing.
A
We did get a revision of 7626-bis tightened down. It was not able to get submitted for official publication, but we do have a link sent to Alissa, who is the sole DISCUSS holder on that, so we're hoping she'll be able to review it and agree that the changes we made address her concerns.
A
So, within the next couple of hours, we're going to talk about DNS over QUIC, and we'll talk about Oblivious DNS over HTTPS, as the first block of time for the meeting. The second hour of the meeting will be geared more towards the more general DNS authoritative encryption discussion.
A
This will include the document that we just recently adopted, which Paul and Peter have been authoring, and then we also have a new document that Ekr and a few other people have put together, which they wanted to compare against Paul and Peter's draft.
A
So those are our four discussion topics for the day. Does anybody have any updates, changes, or modifications they'd like to make? And I think this would be the point where I'd like to poke Scott Hollenbeck, if he's online, to raise the question that he had for us.
C
Okay, Brian, this is Scott Hollenbeck. Sorry, I tried to enter the queue in there, but didn't see anyone get recognized. Anyway, just a quick question: if we're going to be talking about the solutions, which is what I'll call the two that are up here on the slide, could we also have a quick update on where things are with the requirements?
A
Sure. One of the things that we noted during IETF 109 was that there seemed to be some discrepancies in some of the statements in the requirements document. For the most part, the authors were trying to incorporate things that they thought were important, and they weren't getting a whole lot of feedback from the working group.
A
Those requirements are going to be needed in order for us to do anything useful with these two documents that are sitting on the agenda right now. I believe Benno, as one of the co-authors of the requirements document, did chime in that he was going to have time to work on this.
A
I
hope
that
there
are
discussions
on
the
mailing
list
that
actually
help
iron
out
that
document,
because
I,
like
I
agree
with
you,
scott,
I
don't.
I
don't
think
that
there's
are
ways
that
we
can
make
a
definitive
statement
as
to
which
one
of
these,
if
either
of
them,
are
going
to
solve
the
problem
that
we're
trying
to
address
here.
D
Yeah, okay, thank you! Well, yes and no; I do want to chime in. I'm more than happy to work with the working group on this requirements document and incorporate the recent discussion, both at IETF 109 and on the mailing list. But also, to take one step back: I would like to hear from the working group how useful this document is.
D
Indeed, I'm happy to work and spend time on this, and I know Scott and some others really value this document as it gives direction. But there have also been remarks along the lines of: well, if this document is not being updated while other drafts are being developed, what's the use of this requirements draft? If the working group at large agrees that it is very useful, also in making progress and giving direction to the other work items, I'm really more than happy to work on this, but...
D
I would like to take one step back and ask the working group: is this a useful document? Do we want to have this document? I know that the chairs like this document, and I know that some individuals, including myself, think it is a useful document. But what is the working group at large thinking about this document?
A
What I'll do is ask Tim to put an action item on our list for the chairs to solicit the mailing list to raise any points or comments they have on the current instantiation of the requirements document, so that we can come up with a set of concrete...
A
Cool. Thanks, Benno, and thanks, Scott, for bringing this up, because that is a document that we need to keep moving in order for us to understand what it is we're trying to do. Yeah, thank you.
B
Sure, just two things. One is that I wanted to emphasize something Brian said: we need the requirements document to know which problem we're working on. It's really clear that there are at least two problems that we're working on, and one of the issues with the current version of the requirements document is that it's trying to shoehorn things into one. I think it's very clear from the mailing list that some people care about opportunistic and some people care about fully authenticated. So, as long as the requirements document can cover more than one...
F
Hi, this is Peter. As I mentioned to Benno, maybe privately, a long time ago: to me, a requirements document is not necessarily a full list of requirements that all have to be met, but it definitely is a list of things to judge a solution by. This fits in with what Paul said: there's a lot in the solution space, and we probably have two distinct but somewhat overlapping problems, with solutions going with those. So maybe "requirements" is too strong a word, but regardless of what you call it, I do like having such documents. Thanks.
A
Thank you, Peter. Yeah, I agree: "requirements document" is the common terminology for things like that, but there are other ways of thinking about what we list in those documents. So I think this is an important discussion point for the mailing list, and we need to make sure that we're all in agreement on what's going in there.
G
Thank you. This is an update on the status of the DNS over QUIC specification. Next slide, please. A little bit of potted history: I was quite surprised, when I looked at the Datatracker, that it's now four years since we first proposed DNS over QUIC. At that time, the draft actually went into the QUIC working group.
G
More recently, there was a presentation at the last IETF on the status of the draft, and coming out of that, I think it's fair to say there was still some uncertainty about the exact direction the working group wanted to take. This draft, in its original version, was specified only for stub to recursive.
G
We were asked to think about what it would take to extend it to cover recursive to authoritative, and in particular XFR, and the latest version of the draft begins to talk about that. A month after that presentation, AdGuard actually launched the first DNS over QUIC resolver service, which has been running since then, and they've given some presentations on that service. There were two related developments last month. One is that the core documents describing the QUIC protocol are now with the RFC Editor.
G
So we did an update to the draft last month, and there were four main changes to the draft content. The first was that we added an implementation section; I've got a slide on that shortly. The second was that we added an appendix trying to address the XFR question; again, I've got a slide on that one.
G
The other thing that happened, after some discussions with Paul Hoffman, was on ports: the very early versions of the draft had suggested using port 784 for experimental implementations, but we now think it makes more sense to go ahead with a port request for port 8853, which is available. So it would be good to get some feedback on what the group thinks about that.
G
I think that, given some of the discussions, it might possibly be a little bit early to move ahead with that, but it's a proposal for the future. And there are some minor updates related to transport parameters, based on some interop work.
G
The first four of these came from the AdGuard folks, who have wonderfully open-sourced all the implementations they're using for their stub-to-recursive service. Some are in Go, and they've got some C++ libraries; there are a couple of basic implementations, and there's also an experimental implementation in Flamethrower. So we do have a few code bases that we can play with. Next slide, please.
G
What we don't have yet, to my knowledge, is any rigorous performance measurement of QUIC compared to the other encrypted protocols.
G
One thing I will say is that in the presentations the AdGuard folks have given, they've said they're finding it performing just as well as the other encrypted protocols, which they support across the full range. In fact, they argue that they think it's coping better in mobile environments, where they have a large user base, and that's for all the reasons you would expect with QUIC, because of the characteristics of the protocol.
G
We think that, in terms of discovery, DoQ and DoT are analogous, in the sense that both can have dedicated ports and both use the same authentication model, so there's not a lot of difference there. Next slide, please.
G
The picture gets more interesting when you look at what the current mapping is and then think about trying to support XFR. The current mapping that's proposed is incredibly simple; it's very clean. All it does is map a single DNS query/response pair to a single QUIC stream, so each end immediately closes the stream after sending a single message, and that means the content of the stream is UDP-like. A minor difference is that in QUIC...
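The one-query-per-stream mapping she describes can be sketched abstractly. This is an illustrative model only; the `Stream` class and function names are invented for the sketch and are not taken from the draft or any implementation:

```python
class Stream:
    """Toy stand-in for a QUIC bidirectional stream."""
    def __init__(self):
        self.data = b""   # bytes written so far
        self.fin = False  # FIN flag: the sender has finished

    def write(self, payload, fin=False):
        assert not self.fin, "stream already closed by sender"
        self.data += payload
        self.fin = self.fin or fin

def doq_exchange(open_stream, query_msg, serve):
    """One DoQ exchange: a fresh stream carries exactly one query and
    one response, and each side sets FIN after its single message."""
    stream = open_stream()             # a NEW stream for every query
    stream.write(query_msg, fin=True)  # client sends its query and closes
    return serve(stream.data)          # server answers the one message

# Every query gets its own stream, so responses cannot be misattributed
# and there is no head-of-line blocking between independent queries.
reply = doq_exchange(Stream, b"\x00\x01query-bytes", lambda q: b"resp:" + q)
```

Because each direction of the stream carries exactly one message and is then closed, the stream content needs no framing at all, which is the UDP-like property mentioned above.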
G
So the -02 version of the draft has an appendix with an initial proposal for one way we could update the mapping to support it, and there are two changes proposed in this solution. One is to prepend all the DNS messages with a two-byte length field, and the second is to relax the mapping so that we allow multiple responses on a single stream, so that XFR can work. In practice, this means making the stream content in DoQ more like a TCP connection. Now, the pros...
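The two-byte length prefix proposed here is the same framing DNS already uses over TCP, and it is what imposes the 65,535-byte message ceiling that comes up later in the discussion. A minimal sketch of the framing (the function names are my own):

```python
import struct

MAX_DNS_MSG = 0xFFFF  # a 2-byte length field caps a message at 65,535 bytes

def frame(msg: bytes) -> bytes:
    """Prefix one DNS message with its two-byte big-endian length."""
    if len(msg) > MAX_DNS_MSG:
        raise ValueError("message exceeds the 2-byte length field")
    return struct.pack("!H", len(msg)) + msg

def deframe(stream_content: bytes):
    """Split stream content back into individual DNS messages, e.g. the
    multiple response messages of an XFR carried on a single stream."""
    msgs, i = [], 0
    while i < len(stream_content):
        (n,) = struct.unpack_from("!H", stream_content, i)
        msgs.append(stream_content[i + 2 : i + 2 + n])
        i += 2 + n
    return msgs

# Several XFR responses concatenated on one stream, then recovered:
stream = frame(b"soa...") + frame(b"rrset1") + frame(b"soa-again")
assert deframe(stream) == [b"soa...", b"rrset1", b"soa-again"]
```

This is the sense in which the relaxed mapping makes a DoQ stream look more like a TCP connection.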
G
The cons are that it complicates the mapping, compared to that very simple, clean one; there's a small overhead introduced; and we have a few new error conditions that we need to handle, particularly at the client end. But otherwise, it's a relatively straightforward revision if we go down this road. Next slide, please. Now, of course, this isn't the only way to do it.
G
There
are
some
other
possibilities
ranging
in
complexity,
and
if
the
working
group
decides
it
does
want
xr
support,
obviously
we
can
enumerate
them
and
evaluate
each
and
decide
where
we
want
to
go.
So
that's.
This
is
something
that
we're
going
to
need
input
in
from
input
on
on
the
work
from
the
working
group.
G
So that's where we are at the moment. I would appeal for people to have a look at the draft and review it, and hopefully I can get some input from the group today as well on current thinking. That's everything; if we could open up to questions, that'd be great.
I
I read the document; it looks really nice, and I would like to comment on the XFRs in particular. I think it'd be really nice if this draft covered everything that we need to encode in the protocol. Now, I don't know if we should cover especially the use cases between different types of servers, like stub to recursive or recursive to authoritative, but I think it would be nice if the protocol itself could encode anything.
I
You mentioned that the problem is with the multiple messages that might be returned for transfers. I'm actually wondering if this is a problem, or whether, if the server used DNS over QUIC for transfer, it actually needs to send multiple answers, because the limitation comes from TCP: we said that on TCP we need some framing, so we added this two-byte prefix, so we are limited to a message of 65k. But for QUIC we have different signaling.
I
We can use the stream FIN, so we can essentially send a much larger response for XFR. So maybe we could make it work with the simple approach, where you have one stream that is used for a single query and a single response.
I
On the other hand, I've been thinking about use cases like proxies, because if we change this, then the proxies wouldn't work; we would have to... yeah. So I'm just asking whether you have considered this alternative, because I don't like the 65k limit, and the question is whether we should keep it here in QUIC as well.
G
Yeah, it's a really good point. The very early versions of the draft didn't include that limit, but it was pointed out to us that the same thing had happened with DoH: in principle it could have supported much larger packet sizes, but it was again limited to the 65k limit, for exactly the reason you mentioned, which is proxying and compatibility with other transports. So we updated; I think it was the -01 version that included a limit on packet size.
G
I mean, of course we can revisit this, but in some senses I think we'd be rehashing the argument that happened with DoH, and I expect we would come back to the same conclusion: that keeping that limit solves a lot of problems.
I
Okay, thank you. In that case, I just suggest using this proposed two-byte prefix, and I suggest doing it now, so that we don't have to handle incompatibilities with old clients. Thank you; good work.
A
Thanks, John. Ben Schwartz?
J
Hi, Ben Schwartz. I wanted to put in a vote for port 853 UDP. The DNS over DTLS draft is experimental.
J
It is minimally deployed, and QUIC and DTLS are, as I understand it, fully demuxable; QUIC was designed to be able to share a port with DTLS. So I think we should aim to have the final port number when we have a standardized protocol: 853.
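The demuxability Ben mentions comes from the wire formats: every QUIC v1 packet has the 0x40 "fixed bit" set in its first byte, while DTLS record content types fall in the range 20 to 63, where that bit is clear. A hedged sketch of first-byte classification on a shared UDP port (illustrative only; a real implementation would follow the full demultiplexing rules rather than this two-case check):

```python
def classify_datagram(payload: bytes) -> str:
    """Guess the protocol of a UDP datagram on a port shared by
    DNS over QUIC and DNS over DTLS, from its first byte."""
    if not payload:
        return "empty"
    first = payload[0]
    if first & 0x40:       # QUIC v1 always sets this "fixed bit"
        return "quic"
    if 20 <= first <= 63:  # DTLS record content types live here
        return "dtls"
    return "unknown"

# A QUIC long-header packet's first byte falls in 0xC0..0xFF;
# a DTLS handshake record starts with content type 22 (0x16).
assert classify_datagram(b"\xc3...") == "quic"
assert classify_datagram(b"\x16\xfe\xfd...") == "dtls"
```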
G
So that was a conversation that was had off-list, and others can probably speak to the details of the port allocation process better than I can. But part of the concern was that it would make for a longer process if we had to go through that set of arguments, as opposed to just requesting a dedicated port for QUIC; and that, ultimately, it didn't make a huge amount of deployment difference, in that if port 853 were blocked, port 8853 might well be blocked too. But it is something we've considered, so, yep.
G
Okay, so thanks, Jim. It does mean that if you do have proxies translating from other protocols, you then have to have a special case for XFR, which, if we are talking about applying this to recursive to authoritative, feels like a bit of a wart. I appreciate the idea that simpler is better, but then, if we had to modify the protocol later to be able to handle something different, we'd probably have to define it as a different protocol with a different ALPN.
G
So that was why we wanted to fully assess whether we could support it within the current one. And in fact, the sense of it is that the two-byte length is just framing the answers within the stream content, which is anyway a byte stream. So it's really just providing additional compatibility with the existing protocol. But yeah, these are the discussions that we need to have about which direction to take. So thank you for that. Okay.
L
Some lad from Cloudflare here. My one question: I read the draft, and I couldn't really figure out why this is preferable, or what its advantages and disadvantages are, compared to doing DNS over HTTP/3. It might be useful to have a short section in the draft explaining that.
G
Right, yes. I think there was a little bit of discussion about this in the presentation at the last IETF, and one of the arguments is that, particularly for recursive to authoritative, you're then packaging the whole HTTP layer around your DNS queries.

G
I think what we heard at that meeting is that most recursive and authoritative implementers see that as an unnecessary overhead on that path in particular, and would much rather have the cleaner, lighter, pure QUIC mapping that's proposed in this draft. But again, if other people have other recollections of that, please speak up.
M
Hi, good morning. I'm happy to take this to the list, but I wanted to make a very short point about the port number. Sorry; I used to be the tech lead for QUIC in Chrome, and we did some experiments there a while back.
M
Not all UDP ports are created equal in getting across the internet, so I might suggest actually just going and using 443. QUIC demultiplexes by ALPN, so even if you wanted to share both services on the same port, it would just work. I totally get your point about fighting an uphill battle with purists, but this might be the best deployment strategy.
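The ALPN point works because the TLS handshake inside QUIC negotiates an application-protocol token before any application data flows, so a server on one UDP port can dispatch on that token. The tokens and handler names below are assumptions for illustration ("doq" was not yet a registered ALPN at the time of this meeting):

```python
def dispatch_by_alpn(negotiated_alpn, handlers, default=None):
    """Pick the application handler for a QUIC connection after its
    TLS handshake completes, based on the negotiated ALPN token."""
    return handlers.get(negotiated_alpn, default)

def handle_doq(conn):
    return "serving DNS over QUIC"

def handle_h3(conn):
    return "serving HTTP/3 (e.g. DoH)"

# One UDP port (say 443) hosting both services side by side:
handlers = {"doq": handle_doq, "h3": handle_h3}
assert dispatch_by_alpn("doq", handlers)(None) == "serving DNS over QUIC"
assert dispatch_by_alpn("h3", handlers)(None) == "serving HTTP/3 (e.g. DoH)"
```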
A
All right, any other questions for Sara on DNS over QUIC?
N
Awesome, thank you. All right, good morning, everyone; I'm here to talk about Oblivious DNS over HTTPS. This is something we talked about in this group, I think back in Singapore, and there's been a lot of work both on the specification side and on the implementation and deployment side, so we wanted to bring an update to the group, remind folks what this is, and talk about next steps. Next slide, please.
N
It's a protocol to proxy DoH queries between a client and a resolver, such that the resolver in this particular case only sees the queries, and the proxy only sees the client and the target, and is not able to link the two together. Next slide, please.
N
The sort of minimal privacy assumption that we're making here is that the proxies and targets do not collude, noting that right now there's no technical mechanism for enforcing that other than checking certs and seeing that they're not the same entity or organization; but that's just par for the course here. As I was saying earlier, the privacy goal is quite simply to ensure that only one party in this whole entire system, namely the client stub resolver, has access to both the contents of a query and response (the plaintext DNS messages) and the client IP address, and that no other party, be it the target or the proxy, is able to link these two together. Next slide, please.
N
The protocol works roughly as follows. There are three or more entities in the system: there is a stub resolver, there is a proxy, there is a target, and then upstream there's the rest of the DNS, which we're just referring to as the resolver. Most often, at least based on performance considerations and performance results...
N
...from experiments that we've done, it's best if the target and resolver are co-located, right next to each other, or the target is itself a recursive resolver; but we've split them out here just to illustrate how ODoH is built on top of DoH. Next slide, please.
N
The very first step is the client stub somehow discovering the target's public key. There is a mechanism in the draft to put the key in an HTTPS service record right now, but this is not mandatory, and I'll touch on this a little bit later; there are other discovery mechanisms possible. In fact, we've implemented another one in production as well.
N
Next slide, please. Then, through that persistent HTTPS connection, the stub sends a public-key-encrypted query to the target, such that the target can decrypt it using its private key, recover the query, and then send it upstream to its resolver to be answered. Next slide, please. And then the response flows back naturally from the resolver to the target, where it's encapsulated again with another layer of application encryption and sent over these HTTPS connections, through the proxy, back to the stub. Next slide, please.
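The round trip just described can be sketched structurally. A loud assumption up front: the sketch uses a single shared symmetric key and an XOR keystream purely as a stand-in for the HPKE public-key encryption in the actual draft, and it is not secure. The point is only to show who sees what: the proxy relays opaque blobs plus the client address, while the target sees the plaintext DNS messages but never the client address.

```python
import hashlib
import os

def _keystream(key, nonce, n):
    # Toy keystream standing in for real AEAD/HPKE. NOT cryptographically safe.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key, msg):
    nonce = os.urandom(12)
    return nonce + bytes(a ^ b for a, b in zip(msg, _keystream(key, nonce, len(msg))))

def unseal(key, blob):
    nonce, ct = blob[:12], blob[12:]
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

TARGET_KEY = os.urandom(32)  # stands in for the target's key pair

def target(encrypted_query):
    # Sees the plaintext query, but never learns the client address.
    query = unseal(TARGET_KEY, encrypted_query)
    answer = b"A 192.0.2.1" if query == b"example.com? A" else b"NXDOMAIN"
    return seal(TARGET_KEY, answer)

def proxy(encrypted_query, client_addr, target_fn):
    # Sees client_addr and ciphertext only; the address is NOT forwarded.
    return target_fn(encrypted_query)

# Client stub: encrypt the query, relay it via the proxy, decrypt the answer.
blob = seal(TARGET_KEY, b"example.com? A")
resp = unseal(TARGET_KEY, proxy(blob, "198.51.100.7", target))
assert resp == b"A 192.0.2.1"
```

In the real protocol the client encrypts to the target's published public key and the response key is derived per query; the symmetric shortcut here only preserves the message flow and the visibility split between proxy and target.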
N
There are obviously a number of different ways we could go about building a system that achieves this unlinkability goal, the first of which is simply using connection-oriented proxies like CONNECT or SOCKS.
N
If you were to use a connection-oriented proxy and do end-to-end HTTPS between the stub and the target, you would either have to pay that connection setup time for each query, if you wanted unlinkability, or you'd have to permit some sort of bounded linkability, which is the approach that DNS over Tor uses, in that circuits are established for a short amount of time.
N
You can send queries over them for a short amount of time, then the circuit is torn down and a new one is spun up, and that gets you the same properties; but again, you're trading off linkability for performance reasons. And Tor would, of course, work here as well; it's kind of in the same bucket as CONNECT and SOCKS proxies.
N
In fact, there was a presentation at NDSS this year, at the DNS privacy workshop, describing someone's experience running DNS over Tor for a long time, over the course of several months I believe, and basically the conclusion was that it's not that bad. However, Tor is a very heavyweight solution for this particular problem.
N
For the problem we have here, Tor in particular requires multiple hops, and it does a bunch of things to mitigate traffic analysis on top of just regular proxying and encrypting; for that particular reason, we're not sure that it's the best fit here. Also, Tor is not widely deployed in most user agents and most servers. There's also Oblivious HTTP; in fact, Martin Thomson will be speaking about this at DISPATCH later in the week.
N
So I'd encourage you to check that out, and it should not be surprising that there is a strong overlap between Oblivious HTTP and ODoH; in fact, that's very intentional.
N
Oblivious HTTP is sort of the successor to ODoH, the v2 to the v1, but ODoH in its current state is something that would be considered fairly well refined, analyzed, and shipping, while certain aspects of a generalized OHTTP protocol are still being sorted out, in particular concerns around a proxy that supports OHTTP becoming sort of an open proxy.
N
We think ODoH is a perfectly good candidate for a v1 that we can experiment with, use to build the system, and assess for its capabilities and performance; then, when ready, we can shift over to something like OHTTP once that protocol is more stable. Next slide, please.
N
There are obviously a number of deployment questions that will likely come to mind if you've read this draft or if you've been thinking about this for a while, the first of which is undoubtedly key discovery, as I was alluding to earlier, and I think this will most likely vary by deployment scenario.
N
The draft, as I said, has a current mechanism for packaging up all the configuration and public key information that a stub resolver might use, and it specifies a way to shove that in the DNS, because that's the answer to most things these days. But at our production endpoint we also package up and advertise this information at a well-known HTTP endpoint, and given that that's likely the direction OHTTP will go as well, that very well might be...
N
...what becomes the de facto mechanism here. All that is to say that key discovery in particular is definitely out of scope, similar to how discovery of DoH resolvers is out of scope for DoH. What we're describing in the draft is just the packaging of this configuration material.
N
The second important question that might be on your mind is who will actually proxy. In our deployment, we partnered with Equinix to offer up proxy services specifically for ODoH, for user agents that are interested, and we expect this to be the deployment pattern for protocols of this type: good samaritans, or just entities acting on behalf of clients, proxying for well-known targets that support the protocol, or proxying to targets that they can verify support the protocol. And the last question is this:
N
In fact, that's very much the point of ODoH: to put technical barriers in place such that people can enforce whatever policy they want in terms of which proxy they're going to talk to and which target they're going to speak to through that proxy, and to make it difficult for entities to do the linking that I was alluding to earlier.
N
But, of course, this non-collusion is necessary and required for the privacy guarantees that we're talking about here. Next.
N
So, just to give you an update on the status: this is used in production. We have target support at Cloudflare's edge right now. There are a number of different client libraries and proxy implementations available. People are dogfooding this right now; the proxy is running, and we're actively using it.
N
We're standing up this infrastructure first and foremost to do these performance tests, to see what the actual user experience impact is of this sort of proxying protocol. We have some preliminary data, published at NDSS this year, or discussed at NDSS this year, that shows, without getting into the details, that page load time and response latency are not "significantly" impacted; I should have quotes around "significantly."
N
So there's a lot of room for improvement on the results that are there, and we're committed to continuing our experiments with stub resolvers, to get more data and do more performance analysis, to make sure that this is something that's livable.
N
I'd also like to state that, thanks to Jonathan Hoyland, we have formal analysis in Tamarin too, which gives us confidence that this is something safe to do right now, as it's currently specced. Next slide, please.
N
So, the question for the group. I'm obviously a little bit biased here, but there appears to be interest in this particular protocol; people are investing in it, and it's shipping. The question for the group is: is there interest in documenting this as an experimental v1, as something that we can deploy, live on, and get confidence in before we move to some future v2, be it based on OHTTP or whatever? And with that, I will pause for questions.
B
So, thank you, Chris; actually, especially for that last comment, I was going to ask on the chat: is anyone else hearing that? So congratulations on your dog; it's four in the morning. I think this is a reasonable experiment, but I don't think it's worth having the working group work on it, because, as you said, you've already got it deployed, and we know that there's a v2 that might be more generalized and such. I think this would be perfectly reasonable to take straight to the Independent Submissions Editor.
B
As in: we thought about it, there was some discussion, it's clearly not crazy, and we've actually got some proof that it's a reasonable security concept. The other reason why I would be hesitant to take it to this working group is (and Chris, you've heard me ask this many times)...
B
...I think that will get answered better by the general HTTP working group work. And so I would say: don't bring it up in the working group, but get it published so that people can take a look at it and bang on it. Thanks.
N
Yeah, thanks, Eric.
P
Yeah, just a comment on that last point about explaining to users. I don't expect this to be something that we can easily explain to users either, but we can get good benefit out of this if the browser implements it; we can do things without the users having to understand anything at all, similar to how we do DoH auto-upgrade today.
P
So I like this proposal. My one concern, similar to a lot of people in the chat, is all the upcoming alternatives and such. If this is just a temporary thing before new stuff like Oblivious HTTPS is very likely to take over, then this doesn't seem like something that should be a full-fledged, adopted standards-track RFC. Yes, definitely not standard; yeah, let's do this right now, but it's the alternatives that are getting me scared.
N
Yeah, sorry, I didn't mean to cut you off; there's a lag. Yes, I totally agree. We are not interested in making this a Proposed Standard, given that there's something slated to be better, OHTTP, in the future; but there is a use case for this right now. We are actively shipping this right now, and we want to actively experiment with it. So Experimental is great from my perspective.
R
Yes, we can. All right, sounds good, yeah. So, just to echo what's been said: I think Experimental is the completely appropriate thing for this. If the working group doesn't want it, then I think it would make most sense for it to just go directly to independent submission, or maybe AD-sponsored, to be Experimental like that.
R
It could be, you know, a quick processing thing for this working group, just to have the opportunity for the working group to do the reviews and give input on some of the text, particularly around any language that we can improve for DNS considerations in the architecture. Since we view this as a step towards something that's going to be generalized with Oblivious HTTP and used for DNS in the future, having the thought and consideration of the group on how best to think about this area would, I think, improve the document.
R
So if people would like to do it, that'd be great. I'll also speak as an implementer: we are planning on having support for this at some point from the Apple side, and that would be able to reach a lot of users. And it can be done in ways that don't require them to have a deep understanding of the trade-offs here but still provide a benefit to their privacy overall.
A
Yeah, so just a quick response to that, Tommy. We can either do a poll now, or we can just repeat the poll on the mailing list so that everyone can participate. So my suggestion, unless people really want to do a poll right now, is that we'll make an action item to poll the working group on adoption, and we'll elaborate on the three primary options.
A
Assuming that Experimental is what everybody agrees to, then it's either the working group adopts it, the independent stream, or AD-sponsored; and that third one will require us to have a chat with our area director to see if that's something that he's interested in or willing to do.
N
On the point in the chat, which was that discovery is going to take a long time to sort out: I definitely agree, which is why it is distinctly out of scope for this particular document. We are not trying to solve the discovery problem here. That is something that specific deployments of this protocol can solve, that future specifications can solve, but it's definitely not for this one.
E
Yep, thanks. And I've always had the opinion that we could adopt something and then decide never to do anything with it; we can noodle on it for a bit and realize it's not, you know... But some people feel that's burning a lot of working group time. I think the opposite, right: sometimes you get folks really thinking about stuff like this, and you come up with a bunch of different possibilities.
A
While we're rolling the poll...
L
Ekr's comment: "I am stuck in another RFC session for now, but I'll just say what I think about ODoH versus OHTTP, which is: I like what's on the slide, but hopefully I'd like to see them merge. I guess we'd have ODoH as Experimental in case there is no merge, as we expect ODoH to eventually be replaced." There's a bunch of plus-ones in the chat, too, for Experimental.
A
Thanks, Watson. Ben? Thanks.
J
Hi, Ben Schwartz here. I said earlier that I supported adoption, but I just heard a sort of change in the context of the discussion with the authors, noting that there's substantial deployment already. I think that actually suggests that maybe this isn't really a great fit for adoption, because I think adoption by the working group really means change control passes to the working group, and we would need to think... you know, I think it would be...
J
What is... how does this fit into the broader DNS architecture? So I think I would be most interested in the working group taking this on, you know, as a sort of... to design a version two, and let this version go out basically as-is from the authors.
N
So I definitely agree those are things we need to sort through, and that's specifically why I think adoption as Experimental in this working group is perhaps the right move here, so that we can work through those issues. I don't think it means that we need to be focusing on v2; the point of this exercise, as I see it, is to make sure that we sort of work out all the kinks for this initial v1, for this experiment, such that when v2 is ready we will have a much easier time getting that out the door. But that's...
J
And that suggests to me that this should probably just go get an RFC number, and they should just move ahead, and we shouldn't try to mess with it here, basically.
N
So, to be clear, we are totally open to handing over change control for this particular document, should that be what's in the best interest of the document. We're not just seeking an RFC number here; we're trying to figure out what is the right way to shape this protocol such that it is deployable and people can live on it. So I'm not sure I see an issue in that particular respect. But there are more people in the queue, maybe, and maybe Tommy or David want to speak specifically to that point.
N
We expect the document to change, and that's the whole purpose of this. We are by no means rigid with respect to what's currently in the doc, be it the protocol details, crypto details, what have you. So yeah, I hope that is clear. Okay.
O
Thanks. The point I got in the queue to make a while back was actually on the caching semantics and what the experiment is going to tell you. It sounds like there's a great deal of interest from the proponents of the document in using this experiment to work out what the relationship of this work is to the rest of the way the DNS works, and I think that's very valuable; I supported Experimental on that basis.
O
I want to be a little bit cautious, though, in drawing any conclusions about what that experiment means for a broader use of Oblivious HTTP in particular, because the way caches work is sometimes subtly, and sometimes not so subtly, different between the way DNS caches work and the way HTTP caches work. One of the concerns I would have is us presuming that success in a protocol design for the more limited set of MIME types and the DNS caching mechanism was necessarily a harbinger of a good result on the HTTP side.
O
So be cautious in thinking about what the results of the experiment would mean, because I think there is a strong chance that you will end up with an oblivious DoH that is divergent from Oblivious HTTP because of the different caching semantics and, in particular, some of the risks involving the relationships among different caches in the two different arenas.
N
Thanks, that's a great point. There are subtle differences between this and OHTTP, such that we couldn't just lift the results that we get from ODoH over to OHTTP and call it a win. We certainly have to be careful and mindful in terms of how we do the migration, but our hope is that we can use ODoH and track it alongside OHTTP as closely as we can, such that the divergence and delta between these two protocols is very minimal.
R
All right, yeah. So first, thanks, Ted, for that point; I think that's very true, and I think that difference of scope is precisely why it's interesting to pursue the ODoH experiment on its own now, because that scope is, I think, more tractable in the short term. To the overall point of change control that was being brought up earlier by Ben and Paul...
R
It's also one that's versioned, and we would try to keep it up to date and in line with whatever the working group wants, if the working group was working on this. So I don't think that should be anything that blocks us; and yeah, the authors would be very happy to have the beneficial input of the group on improving the document.
M
David, thanks. Sorry, I was waiting for the microphone to kick in. So yeah, David Schinazi. In this case my relationship to this is that my job for the last two years has been making Google QUIC be IETF QUIC, and if we had said when we started that effort, "oh no, no, we can't bring this to the IETF because it's already deployed," we wouldn't have QUIC. And I think QUIC is a great thing for the internet, and the IETF has brought great things to QUIC.
P
We could easily find the proxy ecosystem developing for DNS but not necessarily for the general solution. It seems to me that DNS is a specific case: the web is used to having good-samaritan servers show up and be there without necessarily any direct income potential, and we could easily find an ecosystem where everyone who wants to run a public ODoH resolver also runs a proxy pointed at the other public ODoH resolvers, versus...
A
All right, thanks, Chris. So just as a note, the poll showed 25 people interested in adoption and 13 not interested in adoption. We'll ask the same question on the mailing list so that we can get more participation from other people who may not be able to be in today's session.
A
So we'll make that an action item for the chairs, to solicit input from the working group. All right, moving on: we have Peter talking about the opportunistic ADoT work.
F
All right. So my name is Peter van Dijk. I have been working with Paul Hoffman on a draft to basically solve the working group charter's problems, or this part of them. Next slide, please.
F
Next slide... yeah, thank you. So the basic principle of this draft, which has been unchanged over various revisions, is the unauthenticated case. If we assume a resolver with an empty cache, it gets a query for, say, www.example.com.
F
It goes to the root, it goes to .com, and eventually it will learn the name servers for example.com. This resolver, having no knowledge of the availability of DoT or any other encrypted transport for these authoritatives, will ask this question over plain classic DNS on port 53. The resolver gets the answer from the name servers, it sends the answer to the client, and the client is now no longer waiting. So any magic happens outside of the brief instant where the client is waiting for an answer.
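The opportunistic flow just described (answer the client over classic Do53 first, and only afterwards do any discovery of encrypted transports) can be sketched roughly as below. The function and queue names are illustrative stand-ins, not anything from the draft:

```python
def resolve_opportunistic(qname, do53_lookup, probe_queue):
    """Answer the client immediately over classic Do53, then record the
    name for out-of-band discovery probing. `do53_lookup` and
    `probe_queue` are stand-ins for the resolver's real machinery."""
    answer = do53_lookup(qname)   # plain DNS over port 53, client is waiting
    probe_queue.append(qname)     # discovery happens after the client is answered
    return answer
```

The point the speaker makes is visible in the ordering: the client never waits on the probe.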
F
That is also where the discovery happens: after a client was handled. The discovery mechanism in that version of the draft was probing the DoT port, 853, or perhaps another port, if we decide that recursive-to-authoritative DoT will use a different port than stub-to-recursive. We then found out that the working group also had interest in a fully authenticated protocol for encrypted transport from recursive to authoritatives.
F
Then, shortly after we published this, the draft from Ekr and other authors appeared, which uses SVCB as a discovery mechanism. That means these two drafts now overlap in some places and conflict in some places, and I suspect the next version of our draft will be shorter than the current one, but we will get to that. Next slide. Just to put everybody on the same page, I'll be going through some comparisons between the two drafts.
F
The first two lines here are about my and Paul's draft, and ADoX is the draft that Ekr will be presenting after this. So we use TLSA on the NS name for discovery; ADoX uses SVCB on the NS name, or perhaps in the additional section from the parent, if it turns out parents can and will cooperate in that.
F
TLSA is very suitable for DoT and DoQ (I think Sara mentioned something similar in her presentation earlier as well), but TLSA is not suitable for DoH. The ADoX draft that uses SVCB would make all three transports supported, and that is another reason it might make sense for us to also change to SVCB. Next slide.
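As a concrete illustration of the TLSA-based discovery being compared here, the owner name a resolver would look up to find a DoT TLSA record for a name server follows the usual `_port._proto` convention from RFC 6698. This is a sketch of that convention; the draft's exact naming choices may differ:

```python
def tlsa_qname(ns_name: str, port: int = 853, proto: str = "tcp") -> str:
    """Build the TLSA owner name for probing DoT support on a name
    server, e.g. ns1.example.com -> _853._tcp.ns1.example.com."""
    return f"_{port}._{proto}.{ns_name.rstrip('.')}."
```

Because the owner name embeds the port, a different recursive-to-authoritative port (as discussed earlier) would simply change the first label.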
F
As for authentication, we define two modes in the current draft. For the unauthenticated mode, the TLSA or SVCB is just a signal; there is no authentication. For the authenticated skeleton protocol we defined, the TLSA from service discovery would currently cover that. And for the ADoX draft, if we read it correctly (and I'm sure Ekr will tell a better story on that after this than I can right now), the authentication might be Web PKI, TLSA from a second request, TLSA chained in the TLS handshake, etc.
F
This title is written very specifically: the resolution if no SVCB is found in the cache. So, as I mentioned earlier when describing the protocol from an empty cache, the unauthenticated case would just use classic DNS over port 53 and then later go figure out whether DoT or DoQ or whatever is available at all. For the authenticated case, of course, you need to do that lookup first, while the client is waiting; and the ADoX draft, which is, I think, authenticated, obviously also needs the lookup first. Next slide.
F
Now, assuming that things are in cache, either from a previous attempt or because the lookup has been done before the client can even get results: in the unauthenticated case we try every encrypted transport to every server, and if all of that fails, again we go back to Do53 in the unauthenticated case.
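The fallback behaviour described for the two modes can be sketched as follows. The transport names, preference order, and return values are illustrative assumptions, not logic prescribed by the draft:

```python
def pick_transport(available, authenticated):
    """Try encrypted transports in a preference order. The opportunistic
    (unauthenticated) profile falls back to plain Do53 when nothing
    encrypted works; the authenticated profile fails instead."""
    for transport in ("doq", "dot", "doh"):  # illustrative ordering
        if transport in available:
            return transport
    return None if authenticated else "do53"
```

The single boolean captures the split the speaker describes next: the authenticated case fails the connection rather than downgrading.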
F
The authenticated case, in the ADoX draft, of course fails the connection then. And here it is clear that the first two items being in our draft and the third item being in the ADoX draft is a weird split, so maybe our authenticated case should go away, or move into the ADoX draft. Next slide, please.
F
So, to recap: our document started as opportunistic. Then we added at least some room for combining with, or being synchronized with, an authenticated case, when there was no other document doing that. Now that the ADoX draft is under active consideration, we think it would make sense to remove all authentication from this document while keeping the discovery mechanism in.
S
There you go, perfect, so I'll just yell "next slide" as needed. Thanks for that overview, Peter. So, as you said, this came in a little late; hilariously, the draft is actually called draft-rescorla-dprive-adox-latest, because when the secretariat hoisted it out of the repository, they didn't substitute "latest" with "00". But there we are, so some future version of this will not have "latest" on the end.
S
But thank you to the secretariat for doing that, and thanks to Eric for approving it. So there's always been a bunch of discussion about how you would do this, and so we tried to take a crack at it using SVCB. Next slide. Okay, so I actually want to take a step back from the question of mechanism and talk about the threat model first, because I think it's helpful to think about.
S
A lot of discussion has gone back and forth, but I think it has left us in kind of a confusing situation about what can and cannot be done. So I want to talk first about what we're trying to accomplish, and then about what the constraints are that I see; those apply, I think, regardless of exactly how you spell the indications. So there are three threat models we talked about.
S
I think one is the classic RFC 3552 threat model, where the attacker is on path: an active attacker on path between the recursive and all the authoritatives. This is, you know, often the critical case.
S
Then we have a passive attacker on path between the recursive and all authoritatives, and then we have an edge case where the passive attacker is on path between the recursive and some authoritatives. That's probably worth considering, but I'm not considering it here. So I'm assuming that the attacker at minimum can see all the traffic coming out of the recursive, and maybe can tamper with it; that becomes relevant shortly, when we think about what works or what doesn't work.
S
Also, before I move to the next slide, I want to point out that we're in a bit of a funny state here, and we'll see this as we get to DNSSEC: although DNSSEC is the ultimate backstop for integrity, integrity and data-origin authentication are intertwined with confidentiality in a sort of annoying way that will make our lives harder. Next slide.
S
So I think the first question we have to ask is what needs to be encrypted, and I think this is obvious, but I want to just hammer it home: generally you need to encrypt the traffic to both the parent and the child. The reason for that is that the common case is effectively that you're resolving something like example.com, right? So if I go to .com and I ask for example.com, well, that query reveals I'm asking for example.com; and then when I go to ns.server.example, even if it's not run by example.com but by somebody else, that query also reveals example.com. So both of these need to be encrypted or you don't get any real value. And there are edge cases where...
S
...that's not true: say we're asking for x.example.com, where x is part of some large anonymity set, not something like www. But basically, when example.com only has two children under it, mail and www, or worse yet it's just example.com, you're not getting very much value out of only encrypting the second query, the one to the real authoritative...
S
...if you don't also encrypt the query to the parent. So I think that's a really important insight to have, because it means that some of the things you might naturally think you want to try will not work properly, or will not get you the value you're trying to get. I did see that we're asked to hold questions to the end, but if anybody has strong disagreement with something I'm saying, if you think it's actively wrong...
S
...as opposed to just dumb, please do pipe up, because it's important to get these things clear, and everything I'm going to say proceeds from this basic analysis. Next slide.
S
So, okay. The basic idea here, as Peter suggested, is to use SVCB, which HTTPS is also using and which we've been working on generally. So the key idea is that the authoritative has an SVCB record, and the SVCB record indicates two things: one, that it supports encryption, and two, what protocols it would like you to use. It actually may serve three things: it may also say which forms of authentication you should expect to authenticate with, but we can get to that in a minute.
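As a sketch of what "the SVCB record indicates what protocols" could mean in practice: the record's `alpn` parameter carries ALPN identifiers, which map onto encrypted DNS transports. The `dot` and `doq` ALPN IDs are registered; treating `h2`/`h3` as implying DoH is this sketch's own assumption:

```python
# ALPN identifier -> encrypted DNS transport (illustrative mapping).
ALPN_TO_TRANSPORT = {"dot": "DoT", "doq": "DoQ", "h2": "DoH", "h3": "DoH"}

def advertised_transports(alpn_values):
    """Given the alpn list from a hypothetical authoritative SVCB
    record, return the encrypted transports it advertises."""
    return sorted({ALPN_TO_TRANSPORT[a] for a in alpn_values if a in ALPN_TO_TRANSPORT})
```

Unknown ALPN tokens are simply ignored, which matches the general SVCB rule of skipping parameters you don't understand.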
S
I probably should highlight this point: for operational reasons, I think, generally you're going to want it served by the parent in additional data, but we can get to exactly when this is needed and when it's not in a minute. That's definitely part of the concept here.
S
Once you've gone through all the different options (all the different name servers, all the different protocols you support), then you hard-fail when you can't negotiate; if none of the others work, hard-fail. There is, I think, one special case, which I'll get to in a minute, where you might fall back; roughly, that case is that you support none of the authentication mechanisms that the server claims to support. Next slide.
S
So here's the same diagram, except with me not bothering to color things in to show that they're TLS, and indicating this glue record being sent. This glue is going to be a real point of contention, so I'm going to get to that in a second, but I think there are important reasons why you probably need it. Next...
S
Okay, so, right: so what if you can't...? Some people have really suggested that you can't use the glue in the additional data.
S
So basically, the fallback in case you don't have glue is that the recursive can connect to the nominal authoritative: namely, it would receive an NS record and ask for the SVCB. Given that, in the naive case, you have no indication that the nominal authoritative supports TLS, you're going to do it over Do53; at best you're going to do it opportunistically, because again we have this...
S
...we have this chicken-and-egg problem, where I only have the NS record: I don't know that the server I'm going to attempt a TLS connection with actually supports TLS. So this only really works properly in two cases, or in one, actually: I guess in one case that has two requirements, one of which I didn't lay out.
S
It also only works really well if it's a zone whose name server isn't within the zone itself, because otherwise... if example.com is served by ns.example.com, then when you connect to that to ask for example.com, you've already asked for it and you didn't use TLS, so it's kind of game over. So again, I think...
J
Ben here, hi. So I think there's even another reason that makes this insecure, which is that the NS, the name that you are connecting to, even if it is DNSSEC-signed and has an SVCB record, could be the attacker, because you didn't get it over an authoritative channel, even if the parent is signed, because the parent can't sign the glue records, right?
S
Okay. So, Paul, I'll shut up and let you talk. Okay, next slide, please!
S
Oh, sorry, sorry, can you go back for a second? Back one slide. I will note there's one special case, which Paul, I think, observed in email: that people aren't going to want to serve SVCB in glue. Currently the root zone actually isn't set up for serving SVCB at all, but as a temporary fix you could basically pre-configure the recursives with the TLS status for all the TLD authoritatives, which aren't an enormous number.
S
It's not amazing, but it's possible. You certainly wouldn't want it for the next-level ones. Next slide. So I want to hit on this; I think Ben was just saying this.
S
Aren't you trusting the parent? And the answer is: absolutely, you're trusting the parent. And this is the part where TLS is actually providing integrity and data-origin authentication, and not just confidentiality. The reason is, as Ben just said, you need to get the NS record for the child; otherwise you have to worry about the attacker lying to you. So I'll finish...
S
...I'll finish talking on this slide, Paul, and then get out of your way. So DNSSEC does help here, in the sense that even though the NS records from the parent are unsigned, by the time you check with the child you can tell that you were just lied to. But at that point you've already connected to the server and found out that the NS records are bad, and you've already sent stuff over; and of course you've also already revealed what you're trying to do to the parent. So if the parent is lying, you're kind of in real trouble.
T
Okay, so... yes, yeah, okay, great, sorry. So it's not about people not wanting to serve SVCB records at the parent, right? It's just that the infrastructure we currently have has absolutely no support for sending those via EPP and other mechanisms from the child to the parent, and it's really hard to get new records into this system. You know, we're still trying, with things like CDS and CDNSKEY, to work around that system to get records there. So there's absolutely no way...
T
...SVCB will ever have a realistic chance of being there at the parent. It's not that some people don't like it; it's just the reality of the operation of the DNS and the registry system. So that's that's one thing. Then: I totally agree you can do SVCB at the child, and for those name servers that are in bailiwick...
T
...you just have no chance of keeping your privacy. A lot of the privacy is based on the fact that the name server name you're resolving initially doesn't leak enough data, and then the subsequent domains, the thousands or millions of domains hosted on that name server, you can prevent from leaking, if you can trust the connection you have to the parent.
S
Thank you. So I have no particular brief for SVCB versus anything else. I think my fundamental thesis, the point of this line of argument, is that something has to go in the parent, or this is not going to work very well. And so, if there's some...
T
...server name, like the DNSCurve style from the past; and then you're going to have the question of whether you're going to put a pin in or not, or are you going to put the public key in or not. And we can discuss that further on the list, I think.
U
Yeah, can y'all hear me? So I think I've seen this in the chat too, but I also don't like this idea that we can never do anything new in the parent, because that's going to strongly limit us and result in hacks where we're abusing record types for things they were never intended for. If it really is the case that the registry ecosystem today is so frozen that we can't ever do things like this...
U
...then let's say that: let's go into DNSOP or something and say, okay, we're never adding anything new to the parent; everything new has to work with the child. Because I just don't see a path forward where we all agree the parent is the right place for it, and then we say, but we can never change the parent.
U
So we've got to find a way to get past that, and if that means that there are some number of interim methods until we get what we want in the parent, that's great, that's fine. But to me, this idea that we can never change the parent is just a non-starter for any of this.
S
Okay, I'm going to move on. I think someone needs to mute.
S
Okay, so next slide, please. Okay, so the second contentious point was how you authenticate the resolver. As you see, I'm somewhat dodging this by saying "the usual way," but my basic thesis is we shouldn't take a position on this. You have the NS record in your hand, and hence you have the name, or maybe an IP address; and so we have conventional choices for how you authenticate people once you have their name and use TLS.
S
We have the Web PKI and we have DANE, and obviously for DANE, to avoid the security concerns, you're going to want the TLS extension; but we have a TLS extension, I think, the DNSSEC chain extension. I know there's contention about exactly how that should be done, but it exists. So I think I'd be kind of sad if we... I think it's a mistake to try to nail this down too early.
S
I think there's an argument to be made... I know I've seen the arguments for and against both Web PKI and DANE, and I don't want to recapitulate them here, but I think let's let the market sort that out. One important point here is: what happens if I try to connect to you, and I'm the recursive and I only support Web PKI, and you're the authoritative and you only support DANE? That looks like a misconfiguration.
S
There's no way to authenticate that, so... I don't want a hard fail there, but I don't want to just fall back either, because otherwise we have an attack problem. So I think we need some way for the authoritative to indicate what credentials it has, and the idea would be that the authoritative would say, look, I support Web PKI; and if the...
S
...if the recursive only supports the other mechanism, then you don't treat that as a hard fail. I think Robert Evans suggested that what you do under those circumstances, and those circumstances only, is fall back to opportunistic, because you might as well do TLS; you just don't validate the cert. But the point is having a secure indicator that you can trust which says "you were never going to validate this cert," as opposed to just discovering accidentally that you couldn't validate the cert. Next slide.
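The rule being described (hard-fail on a failed validation, but drop to opportunistic encryption when the authoritative securely signals that it only supports authentication mechanisms the recursive lacks) might look like this. The mechanism names and return values are illustrative, not from any draft:

```python
def connection_policy(server_mechs, client_mechs):
    """If the two sides share an authentication mechanism (e.g. Web PKI
    or DANE), require authenticated TLS and hard-fail when validation
    fails; if the server's securely advertised mechanisms don't overlap
    with ours, fall back to opportunistic (unvalidated) TLS instead of
    refusing to connect."""
    if set(server_mechs) & set(client_mechs):
        return "authenticated-tls"   # validate the certificate, hard-fail on error
    return "opportunistic-tls"       # still encrypt, skip validation
```

The key property is that the downgrade is driven by an explicit, trusted advertisement, not by a validation failure, which is what keeps it from being a downgrade-attack vector.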
S
This is also potentially useful for retrieving the SVCB and NS records from the child. Next.
S
Right. So the bottom-line way to reason about this is that it's TLS all the way down, right? This works if the certificate is good and you have some reasonably strong way to know what the NS records look like, either because you got them from a TLS connection initially or because they were somehow DNSSEC-signed; and again, do you have some reason why you trust the SVCB records?
S
And so, as long as you basically start with something secure, then you can continue walking down the chain with TLS, whether DNSSEC is in that chain or not. So the question is how you start with things that are secure, and there really are three main options. One is something that has DNSSEC. One is something you retrieved over TLS, so you're trusting TLS whether or not there's DNSSEC. And the third is something manually configured. I imagine we're going to need some combination of each of these three, but the point is, once you get into some secure loop, then you're good to go. And obviously, at the very top, things are getting manually configured, whether it's DNSSEC or TLS.
S
Right, so I guess next steps. So, Peter, keep this up. We think this is an interesting way to proceed; we'd be interested in seeing the working group take it up in some fashion, with the document mechanics to be determined, I think, by the chairs.
S
It's been floated that there are some interesting ideas from the unauth draft we might try to pull in. As I said, my co-authors may want to speak for themselves, but if you had to ask what I think the important points here are: first of all, understanding where the indicators have to be, understanding how to reason about security, and understanding how to think about...
S
...what you're trying to say in these records about what I want the recursive to do and what kind of authentication I support. The exact mechanics of how it's spelled I don't actually care very much about; I can't speak for my collaborators.
S
I know they had a lot to do with SVCB, so I imagine they think SVCB is the right answer; but if it turns out not to be the right answer, I don't think it's going to make us cry. I think the key point is that the data-flow architecture is important. So I'm done now.
S
Oh, I'm sorry, I'm not done; let me just finish what I was going to say. So I think there are three contentious issues here: there's DANE versus Web PKI, there's this DoT-versus-DoH issue, and our position on both is: why take a position? We have plenty of ways to indicate this; let the market sort it out.
S
It would probably be better if we knew the answer for one of them; it would definitely be better if we chose. But there's no particular reason to, given that we have ways to indicate both and we can negotiate them. Now I'm done.
A
Thanks, Eric. All right, so I think what we want to do now is... we've got roughly 20, 25 minutes, and what Tim and I would like to see is discussion over both of these drafts, how they relate to what the working group wants to do, and how we want to go forward. So with that, I'm going to open the queue up. First up is Kirsty Paine.
V
Hi, thank you, yeah. Just some clarifying questions, actually, on the presentation I just saw. Could you just go back a couple of slides?
V
Thank you, yeah. So I just want to clarify here: you've got a couple of things where the connection is secure if something is okay, like sent over TLS or signed by DNSSEC, and I just want to make clear that those provide different security properties, obviously; so I wouldn't want to bundle them in together. And then, going right back to, I think, maybe one of the first slides: you talked about the threat model, and about how you would include certain groups of attackers but not others.
V
I
think
you
said
considering
a
passive
attacker
between
recursive
and
some
authoritatives
is
worth
considering,
but
not
here.
So
I
guess
just
a
question
like
how
do
you
make
that
value
judgment?
What
is
the
threat
model
that
you're
working
to
and
how
or
like?
Where?
Would
you
consider
that,
if
not
here
and
what
security
do
you
lose
by
not
considering
that
that
attacker?
So
just
a
couple
of
questions
really
about
the
security
and
the
threat
model,
thanks.
S
Sure. Taking the second one first: that attacker is a strictly weaker attacker than the attacker we're considering. So I think what you're losing is that there might be some things which work against that attacker but not others.
S
So, as a concrete example: I argued that you needed to have security between both the recursive and the parent and the recursive and the child. But if, for some reason, the attacker could not interfere with your connection between the recursive and the parent, and was not on path for that, but was on path between the recursive and the child,
S
Then
you
like
wouldn't
need
to
lost
connection
with
your
cursive
from
the
parent,
because
they'd
be
off
path
so,
but
I
think
like
trying
to
figure
out
who
has
who's.
That
situation
is
like
super
hard
to
reason
about.
So
I
just
don't
think
it's
worth
it.
The
you're,
absolutely
right
that
that
dnsk
and
tls
are
providing
different,
different
values.
Here
you
know
this
is
what
I
was
saying
earlier
about
the
sort
of
confusion
about
like
exactly
we're.
S
trying to achieve. But I'm not primarily interested in this discussion about providing integrity for any records other than the ones required to get TLS going.
J
Okay, a few points. One: I said this in chat, but I think this is really about the first draft of the two.
J
The current draft, as I understand it, makes the choice between authenticated and unauthenticated a choice of the resolver, which means that there's no safe way for an authoritative to deploy a new protocol in public, get some traffic on it, and get it tested before they give it their full SLA.
J
Okay,
point:
two
requirements
drafting
is
really
hard.
I
think
this.
This
discussion
really
points
to
the
difficulty
of
formulating
a
requirements
draft
that
that
gets
to
what
we
need,
for
example,
we
need
to
do.
We
need
to
figure
out.
Is
it
a
requirement
not
to
require
changes
to
the
epp,
specifications
and
icann
registry
agreements
and
all
of
that
stuff?
J
I
think
that's
a
real
puzzle,
and
the
third
point
is,
I
think
there
is
a
path
forward
here-
that
that
sort
of
blends
a
lot
of
the
ideas
that
have
been
coming
in
roughly,
I
think
it
looks
like
we
use
some
kind
of
ds
hack
to
encode,
just
a
flag
that
says
this
child
is,
is
playing
this
new
game.
J
You know, go ask the child for an SVCB record to find out what you're supposed to do. That's opt-in; it costs latency, but only for the zones that opt in to play the game. And then eventually we get out of that extra latency by putting SVCB records in the parent and having the parents also do TLS, or, in the case of the root zone, a zone digest (ZONEMD). That also only works for out-of-bailiwick name servers, but I think that sort of lays out a roadmap where people can start getting protected now or soon, and in the very long term we can remove the performance cost to make that more attractive.
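The opt-in flow Ben sketches could look roughly like the following. Everything here is an illustrative assumption rather than anything specified in a draft: in particular, the made-up DS digest type standing in for the "this child plays the new game" flag, and the dict shapes for records.

```python
# Hypothetical sketch of the roadmap: a flag smuggled into the parent's
# DS RRset marks opted-in children; the resolver then pays one extra
# query to the child for its SVCB record to learn the TLS parameters.
# Zones that never opt in take no extra latency at all.

HYPOTHETICAL_FLAG_DIGEST_TYPE = 255  # made-up digest type used as the opt-in flag

def child_opts_in(ds_rrset):
    """True if the parent-side DS RRset carries the opt-in flag."""
    return any(ds["digest_type"] == HYPOTHETICAL_FLAG_DIGEST_TYPE for ds in ds_rrset)

def plan_transport(ds_rrset, fetch_svcb):
    """Decide how to reach the child's name servers.

    fetch_svcb() stands in for the extra (latency-costing) query to the
    child for its SVCB record; it returns SVCB parameters or None.
    """
    if not child_opts_in(ds_rrset):
        return {"transport": "do53"}      # no signal: classic unencrypted DNS
    svcb = fetch_svcb()                   # only opted-in zones pay this round trip
    if svcb and "alpn" in svcb:
        return {"transport": "tls", "alpn": svcb["alpn"]}
    # Whether to hard-fail here instead of falling back is exactly the
    # contested deployment question discussed below.
    return {"transport": "do53"}

# Example: a zone that opted in and publishes a DoT-capable SVCB record.
ds = [{"digest_type": 2}, {"digest_type": HYPOTHETICAL_FLAG_DIGEST_TYPE}]
plan = plan_transport(ds, lambda: {"alpn": ["dot"]})
```

The long-term optimization Ben mentions, SVCB records served by the parent, would simply let `fetch_svcb` be answered from the referral, removing the extra round trip.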
A
All right, thanks, Ben. I believe Peter's in the queue to respond to your first point.
B
Thank you. Yes, on your first point: we totally missed that, and of course it should be added; the ability to do this incrementally is hugely important. And again, we're at a very early stage. Please don't think that if we didn't include something, it's because it's not important; it certainly is. On the general point of adding records to the additional section in the parent: I think that's going to be a real issue, and we on the opportunistic side don't really care how it comes out.
B
We're
happy
to
follow
whatever
the
working
group
does
on
that,
whether
and
it
may
be
as
ben
just
said-
we
don't
do
it
at
all.
We
do
it
as
a
ds
or
whatever,
but
to
be
clear
doing
anything
adding
anything
to
the
you
know.
The
additional
section
takes
not
only
unfortunately,
contracts
and
stuff
like
that.
It
also
takes
updated
software.
A
lot.
A
lot
of
authoritative
software
isn't
prepared
to
add
willy-nilly
to
the
additional
section.
But
having
said
that,
I
said
this
in
the
chat.
B
I believe that if that's the way we want to go, we should go that way. We shouldn't go, "oh, it's impossible, this will never happen." I think getting this kind of encryption is important,
B
Of
if
it
takes
a
while
and
giving
up
now
is
silly
and
on
that
last
point
is
just
to
be
clear
what
peter
said
in
the
second
slide
is
since
we
do
have
one
working
group
document
now
where
we
we
slammed
one
idea
of
fully
authenticated
in
and
now
ecker
has
a
much.
You
know
much
more
full
explanation,
including
a
use
case.
B
We're
totally
happy
to
split
it
out
again,
and
I
I
would
you
know
it's
up
to
the
chairs,
but
my
recommendation
to
the
working
group
is
that
we
actually
work
on
these
two
in
parallel
and
not
jam
them
into
one
document,
even
though
it
would
be
easier
for
implementers.
If
we
only
had
one
document,
I
it
looks
to
me
like
the
two
documents
are
going
to
move
at
different
speeds.
W
I wanted to follow up on what Ben and Paul were talking about, about how we get to the deployment phase. If the signaling is a hard fail, that's a problem. This requires more thinking about what happens if signaling is not a boolean,
W
so people can learn when there have been failures and figure out what the failures are. We see this in MTA-STS and a bunch of other situations. We've kind of glossed over that in these two pieces. I actually think it's a significant amount of engineering work, and I want to make sure that we keep it on the table, so that we offer an intermediate option between no signaling and signaling meaning hard fail.
A
On
all
right,
thanks
next
up
is
kunit.
X
Hi, can you hear me? (Yes, we can.) Hi, this is Puneet here, from Google, speaking more from the Google Public DNS position. So I agree with dkg; I was going to make the point that we do need the transitional support for resolvers to be able to move from doing unauthenticated DoT to doing partial or authenticated DoT, opportunistic or however we define it. And one of the things, as I listened to the presentations: I don't see much of a conflict between the first document and the one that ekr presented.
X
I
see
the
one
which
ekr
presented
as
more
defining
the
signaling
protocol
and
how
you
get
there,
like
all
the
information
needed
to
do
secure
transport,
while
the
first
one
is
more
focused
on
the
recursive.
So
I
think
if
we
look
at
it
that
way,
I
don't
see
overlaps
there.
The
paul
and
peter
van
dyke's
document
should
really
lay
out
the
the
user
profiles
or
the
authentication
profiles.
One
a
recursive
resolver
could
use
to
connect
and
yeah.
That's
that's
all
right
now.
A
All right, thank you. Robert Evans.
P
Hi, this is Robert Evans. I'm from Google and I'm the tech lead of Google Cloud DNS, so I'm here on behalf of the authoritative server interest. The ADoT proposal that ekr is working on is very interesting to me. I have been focusing primarily on the authenticated case, and I wanted to point out that the idea is for there to be a number of paths to security.
P
The
incremental
adoption
of
svcb
based
tls
negotiation
provides
a
road
security
and,
as
as,
if,
if
the
elements,
if
we
go
back
to
the
slide
that
had
the
the
security
there's
three
points
there
that
are
important,
which
is
that
if
the,
if
the
tls
certificate
checks
out
and
the
ns
record
set
is
validatable-
and
you
have
a
secure
signal
of
sdcb,
then
that
makes
a
secure
connection.
How
we
get
to
achieving
all
three
of
those
elements
could
vary.
P
With all the appropriate proposals adopted by the right folks, the security increases. So the last point I want to make is: we can have SVCB-negotiated TLS connections today that are fully secure from an integrity point of view, without parent involvement, as long as there are DNSSEC signatures for the NS RRset and the SVCB RRset in the name server zone.
K
We seem to be looking more at solutions, and I think that might be a little bit premature, because I'm not sure we've had a proper discussion of the risk and threat analysis. Have we got a clear understanding of the threat models and what needs to be done about them? Before we start talking about solutions, I think we might be jumping ahead of ourselves here a little bit.
A
Okay, Paul.
B
Hi. Since a couple of people in the last couple of comments have said we might need some way of reporting, especially if an authoritative server somehow has a signal saying "I do this" and then they actually don't: I just pasted a link in the chat to a draft that will be presented at DNSOP, I guess two days from now. It's not yet adopted by DNSOP, but I think it will be of interest.
B
This
specifically
is
for
general
reporting
from
recursive
to
authoritative,
and
even
though
the
use
case
in
that
document
is
mostly
about
dns
sec
failures,
it
certainly
could
be
used
here
as
well.
So
I'm
not
as
concerned
about
a
reporting
mechanism
and
I'm
not
I'm
certainly
not
concerned
that
we
would
need
our
own
reporting
mechanism.
I
think
a
generalized
reporting
net
mechanism
should
be
assumed
once
we
work
on
it
a
bit
thanks.
N
Yeah, thanks. I just wanted to follow up on Jim's comment and say: yes, I totally agree that it's important to nail down the threat model and make sure that we're solving the right problems. As an authoritative server operator, we're interested in figuring out what the threat model is, in particular, and also in figuring out what viable solutions exist within that threat model.
N
Do
not
care
whether
or
not
it's
svcb
or
whatever.
As
eric
was
saying
during
the
presentation,
we
just
want
to
see
this
problem
solved.
S
I certainly agree that it's important to understand the threat model and the requirements, which is why I started my presentation with that. We tried to lay that out in our document at the beginning, which is not to say we did so successfully. I guess I'd be interested in seeing, in some form, which is again up to the chairs,
S
You
know
some
way
to
workshop
that
to
get
the
point,
we
have
agreement
that
we
have
the
right
threat
model
and
if
our
our
test
could
be
helpful
there,
that
would
be,
of
course
great.
So
I
guess
I
guess
I
guess
my
question
is,
I
don't
know
I
personally
am
not
quite
sure
what
else
needs
to
be
done
on
threat
model.
So
someone.
J
Ben Schwartz. I agree with the threat model on ekr's first slide. I do think that we should be clear that our aspiration is to solve the whole thing, and that we are going to produce some solutions that solve only part of it, essentially matching the weaker threat models that ekr also mentioned.
E
Yeah, sorry. Thanks, Brian. The three action items that I had were: recommendations for the authors on the requirements document, via Scott; more interest in the oblivious HTTP work, with the working group deciding on adopting it or not, and I think we're going to take some of that to the mailing list; and working with Peter and Paul on facilitating new edits for their opportunistic draft.
B
I prefer not to yell; there's somebody sleeping in the other room, and it's still before six o'clock here. I think a question for a next step is: where do the requirements get specified? So, Tim, you had that as, you know, "we need to work on the requirements draft," but it could either be that the unauthenticated draft talks about its requirements and the fully authenticated draft talks about its own, or they both go into the requirements document.
B
So the chairs should figure that one out, as to where it goes. I'll speak for Peter here: we're happy to throw our requirements over to the requirements document, as long as that document is going to get worked on, which it hasn't been lately. On the other hand, I think it would be better to have them in that document.
E
Yes,
I
think
it's
good,
to
put
it
all
in
one
place,
we
brian
and
I
weren't
sure,
if
like
even
if
we
did
a
lot
of
work
in
the
requirements
document,
it
would
ever
get
published,
rather
than
just
be
a
working
group
document
right.
So
that
was
still
something
we
were
sort
of
chewing
on,
but
I
agree
with
you
on
your
points
there.
E
So
that's
something
we
will
have
to
take
back,
but
mostly
what
we're
trying
to
do
is
get
more
feedback
from
the
working
group
per
scott's
note
on
helping
them
try
to
figure
out
where
to
go
with
that.
A
The
queue
vanished
any
other
comments
on
on
tim's
list
of
action
items.
I
think
the
one
thing
that
I
was
curious
about,
maybe
eric's
going
to
talk
in
is
adopting
the
document
that
he
talked
about
today.
Is
that
what
you're
asking
eric.
S
I mean, I'd be fine with that. I'm also fine with nothing right away. Though I guess, more importantly than exactly what documents get adopted, let's understand how we're going to move forward. In particular, I don't want to spend the next nine years workshopping requirements.
S
I
want
to
like
get
solutions,
and
so
I'd
like
to
understand
what
we
need
to
do
in
order
to
get
the
solutions
and
then
and
and
then
what
documents
we're
going
to
use
to
start
with
those.
S
And
so
if
people
think
that,
like
I
guess,
what
I
would
say
is
if
people
think
the
requirements
like
laid
out
in
our
document
are
like
roughly
suitable
and
that
the
solution
is
like
in
vaguely
the
right
shape
then
like
we
should
probably
adopt
that
and
go
forward
and
if
they,
if
they
think
that
those
are
wrong,
they're
likely
to
understand
what
what
would
be
right.
A
Okay, so what Tim and I will do: we'll go back, we'll have some discussions on how to drive those requirements discussions within the working group, and we'll get the other action items kicked off.