From YouTube: IETF114 DNSOP 20220728 1730
B
It's nice to see everybody here in the room, in real life. Welcome to Philadelphia, and for those remote, we're happy to see you too, wherever you may be. Your chairs are Benno, Tim and Suzanne.
B
The same old same old: the familiar Note Well. Please read it, understand it, think about it carefully, because we are held to enforcing that. Next.
B
And some IETF meeting tips, particularly for hybrid: some reminders for the in-person participants. Everybody should be signed in to Meetecho; that's our blue sheets, it's the only one we get now. Meetecho also manages the mic queue and allows us to manage the mic queue. Keep audio and video off if not using the on-site version, just to keep things simpler. And please, we've been asked to remind everybody: wear a mask unless you're actively speaking at the microphone, remote folks.
B
So we've been through the introductory commentary. We've got a message from our sponsor, our AD.
A
I turned it on. Wow, I fixed it. So actually, two things. First off, I've been doing this AD thing for many years now and I am still enjoying it, but my term is up in March. I probably will run again, but I would like there to be other volunteers, other people willing to serve in the role.
A
So if anybody is interested, or considers being interested, or even just wants to know more about it, please come along and talk to me, and I'm happy to explain what the role is actually like: what the time investment is, you know, the good bits, the bad bits, etc. So please hunt me down. I'm generally around, and I've got a big hat, so you can find me. So, the actual topic: the DNS directorate.
A
This is something which myself and Éric Vyncke are organizing. So apparently this DNS thing that we've built is really popular, and now everybody wants to do it. DNS-related topics show up in a bunch of working groups: 25 of them are actual working group documents, 13 in DNSOP, 3 in DNSSD, 3 in ADD. You all can read that. The thing is, there are currently 59 existing drafts that mention DNS somewhere in the name or abstract.
A
So obviously there is a lot of DNS work happening, and there are also a lot of places where people are doing DNS work in other drafts: they kind of mention DNS and then try and build on it, and that doesn't always end well. Next slide. So here's a whole list of non-working-group documents.
A
So recently, on the telechat, there was a document which tried to make extensive use of the DNS to do mapping from some type of physical things to who owned the physical thing, and it's not clear that the DNS is the best way to do that. But there hadn't been any review until then by DNS people, and some of that is that there's no easy way for people to get DNS review from DNS people.
A
So what we're planning on doing is creating a DNS directorate. There has been one before; there was a DNS directorate many years ago, so this is sort of DNS directorate v2. It's going to be like most other directorates in the IETF, like OPSDIR and SECDIR and similar: it's going to be a way where, next slide, documents which are being progressed through the process can have a set of DNS people look at them.
A
So this is, you know, not actually what the actual purpose is, but this is the OPSDIR disclaimer with a little bit of text changed. Basically: drafts that are progressing can be reviewed by somebody; they get reviewed by a member of the DNS directorate. The comments are written for the ADs; document editors and chairs are supposed to treat them like any other comment. But it's a way to get some focused review.
A
We will be asking for volunteers who are willing to serve. It shouldn't be a particularly onerous job, there aren't that many documents, and hopefully it will be fun. So, as they say, we are going to be looking for at least two secretaries and a bunch of volunteers. Please send myself or Éric Vyncke email; you know, find us, contact us, etc.
B
And we've got some updates, and Tim is the guy that wrangles the list of what's where, so, thanks.
E
Looking for some "whoa, that's a crazy document" sort of thing that we'd better think about. But we're kind of good on this idea, hopefully; keeping tabs on stuff. So let's do some document updates.
E
SVCB: it's been in the editor queue, mostly because of the ECH document, but there have been some changes. Martin Hoffmann suggested some ABNF changes recently that are in the repo, and I think we should probably bring that to the list, just to make sure nobody has any issues with it. That thing has been sitting for a while, and people are still commenting on it in the repo, just trying to understand the basic things, which is good, and I figured let's do that now before it hits the editors and then it all goes sideways.
E
So thanks, authors, on SVCB. And as you've noticed, there are lots of SVCB documents in the IETF now, so it's not just becoming an archetype: it's become an archetype that everybody wants to use to solve everybody else's problems, for good, for better or for worse. Next slide.
E
So I threw this in this morning: the 5933-bis. Dmitry submitted a new version this morning and I think we need to review it. There was a comment that came up about switching it to Informational, which actually they've done. So I think Mr. Hoffman, who had made that comment, I think that has been addressed very well, and I believe they've addressed, I'm going to go through it, all of Mr. StJohns' comments as well.
E
So we need to review that, but it came in right off the bat this morning. In working group last call we have avoid-fragmentation. We put a three-week call on that because of the meeting and August and stuff, and so we'd like people to review it and give us some feedback. We think it's in good shape and we hope everybody else does. So, next slide: you're going to be busy for a while. Some upcoming working group last calls, and you'll see this from Paul.
E
The chairs, we always kind of feel August is, you know, where the civilized people take holidays, which is usually Europe, and things get a little slower, and we respect that. And our question is, you know, do we fire everything up in September, or do we let this run through August?
E
You know, through now as well. So if anybody's got opinions on that, please speak up; we'd like to hear it. So, next slide. But wait, there's more: catalog zones. Catalog zones is done, but the authors are all actually on vacation right now, so we're gonna start that working group last call early September, so get ready to be busy. And also validator requirements: Daniel's gonna speak, but we kind of feel it's been through the process enough; it's ready for that too.
E
That's another one that will be in September, so you guys are gonna be busy reviewing documents and we're gonna be chasing people down. Next slide. What do we have here? Oh, so, NSEC3 validation: we've been being patient with the authors on this because they've been busy, and there are a few outstanding items, and Shumon's assured us he's getting some assistance to finish these items, so we can move on with it.
E
So that's a great thing and we're hopeful on that. And then 8499-bis, which is another one that's kind of been stuck a little bit. What we want to do is have an interim to finalize that bailiwick definition, and the interim is the right way to do that.
E
It really can focus the conversation. And so we have two choices: one's before the ICANN meeting, the other one's after, and we'll probably send out a Doodle poll to figure that out. Maybe after is better; we'll let people decide on that. Next slide. What do we have? Oh, error reporting: the authors are on holiday as well right now; they got back to us, but there is some stuff happening and we're waiting.
E
They're gonna get back to us next week, when they're back from vacation, to give us more feedback on that. We don't know about zone version other than the latest updates, so we need to talk to the authors about that. They've created an EDNS registry, which I want to read more about and understand what they're doing there.
E
So, the DNSSEC automation: they need to add another author from the deSEC folks, but Johannes implemented the entire protocol.
E
So that's kind of a bonus here; we always like to see that. I know Peter's submitted a new version of DNSSEC bootstrapping and it's been moving along, so we think that's in good shape, so we're pretty happy. Next slide. What else do we have going on? Oh, yes, we adopted two documents yesterday: the domain verification techniques and the caching resolution failures.
E
They've got enough comments. But the SVCB DANE document, which did very well in that poll that we did, didn't get a lot of comments. So I'm hoping some people can walk up here and take the microphone and speak: you know, "yes, we should adopt this" sort of thing. So hopefully there's somebody in the room, because we feel that there is interest in it, but we're surprised by the lack of email comments about it.
E
So: yes, yes; no, no; okay. Or just send email about it, please, because we're kind of confused by that one: we got a lot of positive feedback, but then we didn't get a lot of email about it in the call for adoption.
E
If someone can help us understand that, that would be great. Next slide. Okay, yep: our stuff's in the datatracker, and we're also in GitHub in the usual places. We keep the datatracker very up to date in terms of everything, including, you know, repos for where documents are and stuff like that. So we try to be pretty much on top of that, which makes our AD very happy, so yay. And I think that's it on the document updates, unless anybody's got anything to say.
F
Hey everyone, I'm Nils from deSEC, and the Hackathon update from our side is: we implemented... oh, I can actually look here. We implemented DNSSEC bootstrapping in two fashions. The first one is a periodic cron job implemented using dnspython (I believe; thank you, John O'Brien), and it scrapes the CDS and CDNSKEY records from a trusted source, generates the signaling zone, and then pushes it out to an appropriate place.
F
So it can then be read from the actual signaling domain. And the other part is work from Peter and Jerry and myself: we implemented the signaling zone in PowerDNS using their LUA records. So in that implementation everything is generated, synthesized on the fly: when you ask the query, it also connects to a trusted source, obtains the records, signs them and delivers them to you. And both implementations are actually deployed, and I think the links are on the slides.
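As a rough sketch of the signaling-zone construction being described, this is my reading of the DNSSEC bootstrapping draft's label scheme; the exact labels and all names here are assumptions for illustration, so check the draft itself:

```python
def signaling_name(child_zone: str, ns_hostname: str) -> str:
    """Owner name under which the bootstrapping signaling records live.

    For a child zone "example.co.uk" served by "ns1.provider.net", the
    copied CDS/CDNSKEY signaling records would be published at
    "_dsboot.example.co.uk._signal.ns1.provider.net" (my understanding
    of draft-ietf-dnsop-dnssec-bootstrapping; hypothetical names).
    """
    child = child_zone.rstrip(".")
    ns = ns_hostname.rstrip(".")
    return f"_dsboot.{child}._signal.{ns}"

assert signaling_name("example.co.uk.", "ns1.provider.net.") == \
    "_dsboot.example.co.uk._signal.ns1.provider.net"
```

The cron-job variant described above would then write the scraped CDS/CDNSKEY data at these names, sign the signaling zone, and push it out; the PowerDNS LUA variant synthesizes the same answers on the fly.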
H
We tried that; the happy path seems to work, testing for that path seems to work, but we still have more work to do, like some newly introduced features in version 0.2. We need to make sure that the resolver doesn't kind of attack the authoritative name server. Yeah, more implementations to come. Any questions?
G
No? Okay, thank you, and thank you, Nils. Yeah, so the DNSOP chairs think it's very important work that the IETF Hackathon results are presented here too, because it helps the drafts make progress and gives implementation feedback. So thanks again to the Hackathon enthusiasts and the software developers.
J
That's fine. I put the -00 up; had some good discussion, had some suggestions on RFCs that I had missed. Oops. Put up the -01: basically no discussion since then, and there are no open issues at this point that we know of. So it's not like we're in a rush to get to working group last call, but if there really are no issues, there's no reason not to move it through. Again, it's not defining anything new; it is simply saying, if you want to refer to DNSSEC...
J
...this will be a single RFC to do it. And by the way, it's a BCP: we actually believe that it is the best current practice to use DNSSEC. There was a little bit of an issue of "oh well, I don't want to use DNSSEC". That's allowed; you can always not do BCPs. In fact, lots of people don't do BCPs. But the DNS community believes that DNSSEC is the best way to secure DNS messages at rest, authenticate them, and suchlike. Next slide.
J
Sorry, I thought I had a longer thing than that, just on the issues; I'm thinking of my other presentation, which will also go fast. So actually we're at the last bullet: is it ready for working group last call? The chairs have just said that they're probably going to do that soon.
J
Okay, so, does anyone have any questions or want to say anything now? Again, it hasn't been in working group last call, so if you have an "I wish this was changed" and such, that's perfectly appropriate to do in working group last call. Does anyone have any issues? In less than 15 minutes, I'm going to use these stairs.
B
Yeah, the DNS BCP seems like a pretty straightforward thing to do. The document is short; if you haven't read it, or you haven't read it lately, it seems like something that we should be able to move forward relatively quickly. So please speak up in the working group last call, particularly if you have issues, but also if you want us to move it ahead, to give us positive support for it. Thanks.
K
So the goal is to describe operational recommendations to implement sufficient trust that makes DNSSEC validation accurate, and also to respond to many of the questions of people that are willing to deploy DNSSEC, which includes ISPs but also other software vendors.
K
So we have three kinds of recommendations: those you need to do before you start the resolver, those you have to do at run time (I mean regularly, in an automated way), and those that are needed on demand.
K
We
spend
a
significant
effort
to
say:
don't
try
to
mess
up
with
the
dns
mechanics,
so
that
was
one
of
the
key
message
and
the
the
topic
we
discussed
are
of
course
regarding
time
how
you
manage
the
trust
anchors,
the
negative
trust
anchors,
key
keys
that
are
not
trust,
anchors.
The
cryptographic
duplications
reporting
of
invalid
validation
next
slide,
so
we
have
received
a
significant
amount
of
reviews.
Most
of
them
were
needs
or
clarification.
K
Up to what I know, we addressed them all. I mean, this is all documented in the GitHub, but we also reviewed and rewrote the document to clarify this as a whole, so we expect the document is pretty well shaped to be sent to the next step. So if you have any comments, it would be really appreciated if you provide those during the next month.
G
No people in the queue? No? Okay, thank you, Daniel. We want to make progress and push the document forward. Thanks. Yeah, we're good, we're doing good.
L
Hi, I'm Shivan, and Shumon, Paul and I have been working on this.
L
It's called, well, not new anymore, called domain verification techniques. It was adopted, I think, yesterday, so we're eager to hear any feedback. Next slide. So, just a quick intro: what is domain verification, what are we talking about? Many providers on the internet need folks to prove that they control a particular domain before granting them some sort of privilege on that domain. So, as an example, Let's Encrypt has a DNS-based challenge for a user to prove that they control a particular domain.
L
Yeah,
so
what
the
draft
looks
like
right
now
is
that
it's
a
survey
of
the
different
techniques
that
different
providers
use
and
then
there's
a
recommendation
section,
but
the
draft
is
informational.
So
it's
purely
like
a
kind
of
like
a
survey
of
existing
techniques.
So
the
two
main
techniques
are
that
we
found
were
text
and
cname
next
slide.
L
So the TXT-based method basically says: please add this DNS TXT record, with a random, unguessable value, at the domain being verified. Because it's an unguessable value, that proves that you own the domain, and the service provider is able to check that. And it's supposed to expire in a few days, or... well, the guidance around this is often pretty vague and not very well documented, and that's an issue: people have had outages because of this.
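As a rough illustration of the TXT-based method just described (the record layout and provider name are invented for the example, not taken from the draft), a provider might generate an unguessable token and later compare it against the TXT strings it finds at the domain:

```python
import hmac
import secrets

def make_verification_token() -> str:
    """Generate an unguessable token for the customer to publish in a TXT record."""
    return secrets.token_urlsafe(32)  # ~256 bits of randomness

def txt_record_value(provider: str, token: str) -> str:
    """Key-value style TXT payload, e.g. 'exampleprovider-verification=<token>'."""
    return f"{provider}-verification={token}"

def verify(txt_strings: list[str], provider: str, token: str) -> bool:
    """Check whether any published TXT string carries the expected token.

    hmac.compare_digest avoids leaking the token through timing differences.
    """
    expected = txt_record_value(provider, token)
    return any(hmac.compare_digest(s, expected) for s in txt_strings)

token = make_verification_token()
published = [txt_record_value("exampleprovider", token), "v=spf1 -all"]
assert verify(published, "exampleprovider", token)
assert not verify(["v=spf1 -all"], "exampleprovider", token)
```

In practice the provider would fetch `txt_strings` with a real DNS lookup (e.g. dnspython's `dns.resolver.resolve(domain, "TXT")`) and, per the discussion later in the session, expire the token after a bounded time.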
L
So there's wide variation, and part of the goal of the draft is to say that this is, you know, a good idea, and this is something that you should be doing. Next slide.
L
So, just a few quick examples in the draft. For a lot of these named examples, the feedback that we got on the list was that it's not a good idea to have them in the main doc, so we're thinking about just moving them to the appendix. For now I just completely removed them, but I think it probably makes sense for them to be in the appendix. So this is an example where, on a particular website like bbc.com, you might have a key-value pair, and the key says...
L
Okay,
what
is
the
company
that
is
trying
to
do
the
domain?
Verification
and
the
value?
Is
the
there's
a
unguessable
value
next
slide,
but
you
can
also
have
something
that
acme
does
and
github
does.
You
have
like
an
underscore
prefix
underscore
acmechallenge.example.com
if
you're
trying
to
verify
example.com
and
then
the
random
value-
and
you
can
imagine
that
like
this
is
this-
is
I
guess
the
spoiler
here
is
that
this
is
a
better
technique
in
our
opinion,
but
next
slide.
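A minimal sketch of the underscore-prefix variant just mentioned; the `_acme-challenge` label is ACME's (RFC 8555), while `_exampleprovider-challenge` is a hypothetical provider-specific label used only for illustration:

```python
def validation_name(domain: str, label: str = "_acme-challenge") -> str:
    """Return the owner name where the verification TXT record lives.

    Scoping the record under an underscore-prefixed child name keeps
    verification data out of the apex TXT RRset (the "bloat" problem
    discussed later in the session).
    """
    if not label.startswith("_"):
        raise ValueError("validation labels are underscore-prefixed by convention")
    return f"{label}.{domain.rstrip('.')}"

assert validation_name("example.com") == "_acme-challenge.example.com"
assert validation_name("example.com.", "_exampleprovider-challenge") == \
    "_exampleprovider-challenge.example.com"
```

The provider would then query only that name for TXT records, rather than wading through everything published at the apex.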
L
There's
also
the
cname
based
option,
which
is
often
touted
as
this
as
a
fallback
option,
so
different
reasoning
for
why
you
might
need
this,
but
you
know
if
the:
if
the,
if
you
already
have
a
cname,
then
you
can't
really
have
another
text
record.
So
so
that's
why
you
might
want
this,
and
this
typically
points
to
a
service
provider
property.
So
then
you
can,
then
the
the
provider
can
verify
that.
Okay,
actually
this
this
exists
next
slide.
L
So
this
is
another
example
where
you
have
a
random
value,
dot
the
domain
you're
trying
to
verify,
and
then
that
is
a
c
name
to
something
dot.
The
person
the
the
company
or
the
provider,
that
is,
will
check
for
the
value
next
slide
yeah.
L
So
I
guess,
as
I
mentioned,
it
seems
like
it's
best
practice
to
target
the
the
these
records
that
you're
adding
to
target
them
to
a
service,
so
that
and
the
second
one
is
that
this
is
also
good
of
their
time
bound
and
the
guidance
around
that
is
pretty
clear.
L
So
this
is
the
problem
is
bloating
and
it's
best
if
you
can
just
query
for
the
the
actual,
the
actual
service
that
you're
trying
to
get
to
next
slide
and
yeah
and
time-bound
checking
like.
I
have
definitely
been
in
a
situation
where
someone
removed
a
very
like
one
of
these
tokens
and
then
the
service
provider
is
like.
Oh,
I
guess,
you'd
no
longer
own
this
and
shut
off
the
access,
and
that's
also
not
great.
L
So
there
should
be
clear
guidance
around
when
can
the
record
be
removed,
and
arguably
maybe
you
don't
even
need
to
for
those
records
to
exist
in
perpetuity
but
yeah.
In
any
case,
there
should
be
some
clear
guidance
around
that
next
slide
yeah
and
we
got
a
bunch
of
feedback
on
the
list.
I
think
I've
addressed
most
of
that,
but
I
removed
mention
of
specific
companies
removed,
use
of
normative
language.
L
It's
a
purely
informational
draft
and
also
put
a
summary
of
the
recommendations
in
the
intro,
and
it
was
adopted
by
the
working
group
next
slide,
but
yeah.
So
there's
one
thing
I
need
to
do
is
and
there's
a
github
issue
for
this.
I
need
to
move
the
examples
to
appendix
and
also,
I
guess,
admin
move
to
the
working
group
github
that
I
think
that's
it
yeah
happy
to
take
any
feedback
and
or
if
folks
think,
I
think
it
should
go
in
a
certain
direction.
O
John O'Brien, University of Pennsylvania. Thanks for doing this work, very, very helpful, and I'm also particularly glad to see commentary about having things that are time-limited. The other experience I've had, which you may wish to add to the document (you can tell me if you'd like me to put this on the mailing list or in a GitHub issue)...
O
Or
in
some
cases
that
it's
at
a
zone
cut
and
that
causes
some
problems,
especially
in
organizations
such
as
mine,.
C
I mean, like, not using it: putting the stuff at the name itself instead of a subdomain, not putting some sort of tag in the TXT strings, so that you're not spoofed out by wildcards. And my guess is that there is not much disagreement about what's good and what's bad, and that seems to me a natural thing for a BCP. You know, I would like to have a BCP.
L
Yeah
yeah,
that's
an
it's
something
we
have
thought
about.
You
can
imagine
that
you
could
be
more
drastic
and
say
that
there
should
be
a
new
r
type
for
something
like
this,
but
we
didn't
want
to
do
that.
But
anyway.
P
Yeah, I've got exactly the same comments as John, really. Basically, I think this is great work. It's good to see a survey being done, but when the survey is done, I really support the next step of moving this into some kind of BCP, to be able to point people at the right way to do it, and in particular to get rid of the bloat and get rid of the records that sit there forever.
G
Thank you. And sorry, Anthony; thanks.
Q
Anthony
liquid
yeah
good
document
not
to
sound
like
a
broken
record,
definitely
bcp
just
a
comment
on
the
tcp
thing.
I
think
definitely
draw
more
attention
to
that
because,
as
you
pointed
out
with
the
whole
retry
there's
some
quite
critical
records
that
sit
in
the
apex,
like
your
your
spf
records
and
causing
issues
with
email,
is
not
something
that
we
want
either.
So
it
would
be
really
good
if
we
just
draw
attention
to
that
as
well.
R
Ben
schwartz,
I
I
like
this
draft.
I
I
think
it
should
probably
have
one
sentence
about
d
name.
G
Yeah, I think that's a good discussion, and I see Tim also nodding. Yeah, yeah.
H
We're getting, in the future, DNS error reporting; that is very nice, so zone operators know what the resulting errors for their clients are. Next slide. And then there was a random lunch discussion: that's all fine, but if I want to adopt DNSSEC, that doesn't help me; the report is way too late. And then we thought of DMARC. What DMARC does is: before you actually push the buttons and apply policy, you can just get reports and see what's happening, and then, when you feel comfortable, you say, okay, let's go for it. Next slide.
H
This
is
fine,
actually
without
fires
and
smoke
next
slide,
so
how
it
works.
You
sign
the
zone
and
you
published
in
dns,
so
everything
is
public,
but
instead
of
an
actual
ds
record,
you
just
put
a
driver
and
ds
record
in
the
parent.
Then
that
gives
a
signal
to
the
resolver.
Let's
say:
do
validation,
but
this
zone
is
dry
unsigned.
So
if
something
fails
generate
a
report
back
to
the
zone
operator
and
fold
back
to
a
known,
dry
run,
ds
record
and
by
the
way
validation
succeeds.
Yeah.
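For reference, a regular DS digest (type 2) is SHA-256 over the owner name in wire format followed by the DNSKEY RDATA, per RFC 4034; as I understand the proposal, a dry-run DS would carry the same data under a different, not-yet-assigned digest-type code point. A minimal sketch of the standard digest computation, with a made-up DNSKEY:

```python
import hashlib

def name_to_wire(name: str) -> bytes:
    """Encode a fully-qualified, lowercased domain name in DNS wire format."""
    out = b""
    for label in name.rstrip(".").lower().split("."):
        raw = label.encode("ascii")
        out += bytes([len(raw)]) + raw
    return out + b"\x00"  # terminating root label

def ds_digest_sha256(owner: str, dnskey_rdata: bytes) -> str:
    """DS digest type 2: SHA-256(owner-name-wire || DNSKEY RDATA), as hex."""
    return hashlib.sha256(name_to_wire(owner) + dnskey_rdata).hexdigest()

# Toy DNSKEY RDATA (flags=257, protocol=3, algorithm=13, fake key bytes);
# a real one would come from the zone's DNSKEY RRset.
rdata = (257).to_bytes(2, "big") + bytes([3, 13]) + b"\x01\x02\x03"
digest = ds_digest_sha256("example.com.", rdata)
assert len(digest) == 64  # SHA-256 -> 32 bytes -> 64 hex characters
```

This is why swapping a dry-run DS for a real DS needs no change to the child zone's data: only the parent-side record's digest-type (or RRtype) field differs, which is the property the presenter highlights next.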
H
Everything is signed and ready there in your zone, and then you just need to replace the dry-run DS record with just a new DS record. No need to make any more changes, so what seemed to work previously with dry-run DNSSEC should also work with the real DS chain. Next slide. You can experiment with DNSSEC: if your zone is already signed, you can do weird stuff with DNSSEC and then provide the dry-run DS record, and the resolvers could tell you what's happening on their end. Next slide. And yeah, you can test key rollovers.
H
You can also break it. You can use an EDNS option so that the clients can signal the resolver and say: you know what, I know that you understand dry-run DNSSEC; if there's an error, you will try to fall back, but, you know, I opted in, so please just give me the error back; I want to be part of the test. So this can enable a couple of scenarios.
H
This is fine. By the way, all these memes were already there for me to find, so people are weird. Next slide, please. So, the details: we got some feedback at the previous IETF, mainly because what we propose as a dry-run DS record is what most people, including us at first, perceived as a DS hack, and there were a couple of suggestions.
H
Instead
of
doing
a
ds
hype,
we
can
use
the
flags
in
the
dns
key,
so
they
are
now
ignored,
so
you
can
use
that
space
there
and,
although
that
seems
nice,
when
you
actually
need
to
go
from
testing
to
actually
signed,
you
would
need
to
change
the
dns
error
set
and
we
believe
that
this
is
no
good.
So
the
if
you
want
to
go
from
testing
to
actual
deployment,
it
should
be
as
simple
as
possible.
You
don't
need
to
change
the
data.
H
Another
feedback
was
there
are
a
lot
of
ds
hugs,
so
maybe
we
can
use
a
ds
hack
to
rule
them
all
and
we
thought
about
it,
but
we
now
see
that
the
dryer
on
the
s
is
more
than
a
ds
hype.
If
it's
going
to
be
adopted,
we
see
it
as
a
integral
part
of
dns,
because
it
will
give
you
the
ability
to
actually
test
dns
more
feedback
about
these
hacks.
H
So
why
don't
we
normalize
all
the
different
ds
hacks
with
a
range
of
arrow
types
on
the
parent
side
that
they
convey
information
about
the
delegation
itself?
Yes,
we
agree
for
that.
But
this
is
another
draft,
but
by
the
way
this
could
work
by
having
the
dry
run
type
as
a
new
arrow
type
on
the
parent
and
then
have
the
same
ds
data
as
the
actual
ds.
But
that
is
for
another
talk
next
slide,
please.
H
So this could work for resolvers that do not understand the dry-run digest type, but it will result in a variable-length digest, and we got some feedback that this may upset some people, especially the registries. So an open question to the room is: how bad is this? Can we live with a variable-length digest? Can we change our tooling, or is this a no-go? Next slide, please. And then there's a multiple-timelines issue, where we introduce a dry-run DS type for each DS, for its real DS. Yeah.
H
So, and this is a feature, not a bug. That means, yeah: if you get the AD bit, that's fine, but if you don't, you can't get any results from that. So, for example, because they will talk about DANE: that won't work. Next slide, please. We started implementing the DNS error reporting, which this draft relies on. We are in the early stages, but when we're done, this could be a next step for us, for Unbound. Next slide.
U
All right, thank you. I really like the idea; I think it's a great idea. You know, it's worked well in mail; your comparison to DMARC was spot on. With one caveat, though, which is that anything you introduce in this had better not disrupt any validation behavior, you know, in regular DNSSEC. So to me that immediately rules out any sort of DS hacks, and immediately rules out...
U
You
know
algorithms
and
things
like
that
as
a
mechanism
for
doing
that,
because
we
already
have
cases
where
ds
algorithms
provide
funky
things
when
one
is
unknown.
G
Thank you. Remote participants: Steve, please go ahead.
V
Thank you. I'm not sure I understood completely, but when I saw "going insecure" I started twitching. I don't like going insecure at all.
V
Yes, as I say, I'm not sure I've completely understood the totality of what you were presenting, but at the point where you said, "if something fails, you go insecure"...
H
But
but
so
for
that
use
case
you're
starting
from
an
insecure
zone
right
and
then
you
try
to
sign
it
and
if
everything
validates
it's
going
to
be
secure
or
it
will
fall
back
to
insecure,
but
dryer
and
dinasek
is
not
meant
to
be
a
secure
final
state.
V
You're saying it only arises when you're starting from an insecure state, from an unsigned state; it doesn't arise when you're in the middle of, say, changing the signature.
H
So, if you want to test a key rollover, that means that your zone is signed, and then you want to test if rolling a key would work. So this would start in the dry-run, let's call it the dry-run DNSSEC, part of the zone, and if anything fails there, the resolver should fall back to the actual DS zone. So, okay, the previous slide, yeah.
V
Okay,
okay,
so
as
long
as
it
doesn't
introduce
a
period
of
going
insecure
from
a
secure
state,
then
I'm
okay.
W
Viktor Dukhovni, Google. My concern is that... okay, the mic is low; sorry about that, Warren's fault.
W
My
concern
is
that
you
seem
to
assume
that
all
the
resolvers
out
there
in
fact
behave
as
specified
and
will
do
the
right
thing
when
presented
with
an
unknown
code
point
just
two
days
ago.
I
tried
that
with
algorithm
zero
and
was
deeply
disappointed
from
at
least
one
major
resolver.
W
I
think
all
such
assertions
unfortunately
need
extensive
field
testing
to
determine
whether
that's
true
or
false,
and
I'm
also
concerned
about
considerable
implementation
complexity
in
an
already
fairly
complex
stack.
J
I guess that was... are you able to... yes. To the one about single track and dual track: I was looking, I wanted...
J
Yep. I think, even though some folks might not like the variable length, because it's hard for them, blah blah, just as Viktor said, things that are not implemented well are going to have a hard time regardless. I think this is absolutely a feature for us: testing new DS types where, in fact, the digest length is going to be surprising, such as for post-quantum crypto.
J
So I think, and this fortunately is way in the future, although "way" can mean a lot of things to different people, if we have this, with a variable-length digest type, defined before we start fuzzing with the post-quantum algorithms and some of the things that go along with that, we will absolutely get much better data about, you know, how things are going to break. So I would strongly recommend going to this with a variable-length digest type.
R
Ben
schwartz,
thinking
about
this
from
so
I'd
like
this
functionality,
I
think
this
would,
if,
if
somehow
we
we
lived
in
in
the
future
that
victor
outlined,
where
all
of
this
had
eventually
rolled
out,
this
would
be
a
very
useful
thing
for
use
cases.
I've
run
into
I
one
thing
that
I
would
like
if
I
were
using
this
is
to
be
able
to
answer
the
question.
What
is
the
error
rate
of
what?
R
What
is
the
error
rate
that
you've
created
with
by
by
adding
this,
and
I
don't
think
this
proposal
actually
gets
me
there
right?
I
get
error
reports,
but
I
don't
have
a
denominator.
I
don't
know,
I
don't
know
how
what
fraction
of
my
recipients
actually
implement
the
specification.
H
So I can comment on that, because this draft relies on the DNS error reporting, and the new version has support for a no-error flag. So as an upstream you can signal to your resolvers to say: okay, you see me, but please send me back a no-error report so that I know you're there. Because if I don't get any error reports, how should I know that everything is okay, right? So you get that thumbs-up, if nothing is wrong, from your resolver, and then maybe you can also use that information for what you described.
R
Okay,
and
would
that
actually
work
for
this,
because,
knowing
that
the
resolver
supports
the
dns
error,
reporting
is
not
sufficient
right,
I
need
to
know
the
denominator
has
to
be
the
the
fraction
of
responses
that
went
to
resolvers
that
implement
that,
and
also
this.
H
Yeah,
what
you
would
see
is
that
okay
x
amount
of
resolvers
contacted
back
with
no
error,
so
I
know
they're
there
and
then
I
start
getting
errors.
Okay,.
Y
Hi, Lars Liman from Netnod.
Y
This
is
a
good
thing.
You
mentioned
that
this
is
is
viewed
as
a
temporary
measure
for
for
a
transition
between
states,
and
I
I
totally
agree
to
that.
Y
In
order
to
avoid
having
lingering
stuff
in
in
the
system
that
may
or
may
not
interfere
further
down
the
line,
would
it
be
an
idea
to
either
put
some
kind
of
timer
into
the
system
that
kicks
out
this
this
functionality
after
a
certain
time
or
to
have
a
recommendation
to
implementers
to
put
timers
in
the
software
that
that
interacts
with
this?
H: You'd just leave the DS records that expire along with the keys — yeah, that's garbage, but that's what you're saying.
U: At the risk of making my good friend Steve angry with me, I actually think this would help if you were going from secure — through a new-algorithm roll — to insecure. I wrote a draft a while ago about that, which got 50% support from people who liked it and 50% who absolutely hated it. This might help in that case: if I could point them to it, saying, if algorithm rolls are tough, try doing this.
Z: Hi, it's Peter Thomassen from deSEC. A few things. Regarding the cleanup of stale dry-run records: I would think the only entity that may feel it's harmful for those to stick around would be the corresponding registry or parent authority, and it's not a problem for the registrant or anybody else. So I would make that local policy for the registry.
Z: I think it is better to sacrifice a bit of the digest type field. I also don't think that, with post-quantum crypto, we need to expect the digest field length to vary, because even when the key length varies, a SHA hash is still just a SHA hash. And the last thing I wanted to say — maybe to point out as an implementation note in the draft:
Z: So far — if my understanding is correct, although I don't know the exact provisions in the DNSSEC specification — when you validate a chain and encounter DS records, you essentially need to find one that matches; then you start processing, and it's not necessary to keep scanning and matching the others. It's up to the resolver which digest types it considers suitable for matching. Now, if a resolver does have support for ignoring validation failures when the dry-run bit is set, and would even send out DNS error reporting in that case, it may not be hitting that code path if it earlier encounters a DS record that does validate and then stops processing.
Z: So I can imagine that a naive implementation of support for this could run into that problem, and perhaps the draft should point out that you need to continue scanning the other DS records to actually run this kind of experiment.
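Peter's implementation note can be illustrated with a sketch (the record layout and the per-record dry-run flag below are hypothetical, for illustration only — the draft's actual wire format isn't quoted here): rather than stopping at the first usable DS record, the resolver partitions the RRset so dry-run records are still exercised and can generate reports.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DS:
    key_tag: int
    algorithm: int
    digest_type: int
    dry_run: bool  # hypothetical flag for illustration

def partition_ds(ds_rrset):
    """Split a DS RRset into (real, dry_run) lists instead of stopping
    at the first record that validates, so dry-run DS records are never
    skipped by an early exit from the scanning loop."""
    real = [ds for ds in ds_rrset if not ds.dry_run]
    dry = [ds for ds in ds_rrset if ds.dry_run]
    return real, dry
```

A validator would authenticate using the `real` list as usual, then additionally attempt the `dry` list, reporting failures instead of failing resolution.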
H: The idea was that when you encounter the DS RRset, you pick one of each — one dry-run and one real one, the ones you would have picked normally — then try the dry-run one, and if that fails, see if you have a non-dry-run one and try again. Okay.
F: Nils Wisiol, deSEC — sorry for being late to the game. I just wanted to float the idea of sending the DS records with an EDNS option from the client to the resolver, instead of propagating all this information through the parent zone. I'm not a resolver implementer, but I believe that could keep down the complexity. This is a reply to Viktor's comment about complexity.
H: Through the parent, with the dry-run DS? No — what you described works perfectly, but only for the case where the clients are opted in, whereas through the parent you can test random people on the network. Right — yeah, okay.
W: A couple of comments. I'm not sure that client-side signaling is viable, because cached data has already been validated — it's rather unclear how this would work with results already in the cache. In terms of where to put the dry-run bit and the variable digest type and so on, I do agree that stealing a bit from the hash algorithm number is saner than moving the digest type into the digest value.
W: Hash algorithms are introduced exceedingly rarely; symmetric hashes are very stable. There's no evidence that SHA-2 is likely to be compromised anytime in the next hundred years — we're doing very well with that one. We may add SHA-3 at some point, though there's little demand. So steal some space from the hash algorithm field: if this is to go forward, that seems to be the way to do it.
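One way to read the "steal a bit from the hash algorithm number" suggestion — purely illustrative, not something the draft specifies — is to reserve the top bit of the 8-bit DS digest type field as a dry-run flag:

```python
DRY_RUN_BIT = 0x80  # hypothetical: top bit of the 8-bit DS digest type

def encode_digest_type(base_type: int, dry_run: bool) -> int:
    """Fold a dry-run flag into the digest type field."""
    if not 0 <= base_type < DRY_RUN_BIT:
        raise ValueError("base digest type must fit in 7 bits")
    return base_type | (DRY_RUN_BIT if dry_run else 0)

def decode_digest_type(wire_value: int) -> tuple[int, bool]:
    """Recover (base digest type, dry-run flag) from the wire value."""
    return wire_value & 0x7F, bool(wire_value & DRY_RUN_BIT)
```

Since only a handful of digest types are registered, halving the space costs little — which is the substance of W's argument.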
H: If I can comment on the caching, because it's a good point: we thought about it, and this will need to keep both DNSSEC states in the cache, because some clients may need to get the error response and some may need to get the non-error response. Yeah.
T: Mark Andrews. That's a variable length. In terms of a replacement for CDNSKEY, I think you should be looking at doing another type — one which signals semantics similar to CDNSKEY, but signals that you want a dry-run DS to be produced. And in terms of experiments, it really doesn't matter: as long as you have a new value in either of those two fields in the DS record, rather than both values already being known, you have a safe experiment. — Okay, that makes sense.
G: Thank you — thank you for all the feedback, and thank you, Yorgos. Next up is Paul Hoffman.
J: There's also still a new deck being shared. So now we're in the space where things have not been adopted by the working group — this is a request for people to consider adopting a draft, even though this is a bis draft of something that we finished not that long ago: RFC 8109 is just a few years old.
J: The way you prime is that you do this, and then this, and then this — 8109 didn't change that. It helped resolver operators better understand some of the — I wouldn't even call them edge cases, but interesting considerations. And yet we still see some resolvers that don't necessarily always do it the way that would be best for their users. Since it is so important to get priming right, getting it wrong affects a lot of people, pretty much silently, which is really bad. As this working group was working on what became 8109, some issues came up, and people said: we just want to get 8109 published.
J: We think that it is worth opening this up again, even if we punt on some of the issues; there are plenty of other things in here that are more or less important to more or fewer people. Some of it is just terminology — the root server operators have a very specific way that they want to be referred to, and I think it would be good for us to do that. This working group loves the TC bit at the same time that it hates the TC bit; we want to deal with that more, and suchlike. We think it is worthwhile, even though it's not the most interesting subject, to actually get this nailed down. This would be a bis — it would actually obsolete 8109. Again, if you find any of these interesting, please look at the current draft and see where it is. Next slide.
J: So this is what's not yet in the draft. Prefetching has become a much more popular topic in the last few years, and certainly for the root zone many people would say prefetching would be a good thing, so we should be talking about it. There's nothing we can say about whether you should or you shouldn't, but we shouldn't pretend it doesn't happen. And then there's also the question of post-priming strategy: after you've primed, after you have this set of NS records —
J: After you have the set of NS records, everyone knows that different resolvers do different things to pick their favorite authoritative server for any zone, not least the root zone. But some resolver software actually treats the root zone as special — we can say "don't do that" as often as we want, but they do — and so there should actually be a discussion of: once you've got the zone sitting there, how do you pick which authoritative server to use? Do you pick the fastest? Do you pick the one underneath the thing? These are all things that we know people have talked about. We don't need to say you should do this or that, but we should certainly admit that there are, in fact, these post-priming considerations, particularly for people who are watching things on the wire. It is completely believable that a resolver comes up, does a priming query, goes to one root server, reboots, does a priming query, and then goes to a different root server instead — within the course of 15 seconds, for whatever reason. Normally we would think, oh, it'll always go to the fastest; we know that's not the case. Two root servers might be approximately as fast, there may have been a route switch, things like that. This should be discussed in 8109, is our belief. So I think that's the last slide.
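The priming query under discussion — QNAME ".", QTYPE NS, RD clear, sent to a root server address from the hints — can be built by hand with nothing but the standard library. A rough sketch, loosely following RFC 8109's description (the 1232-byte EDNS buffer size is an assumed, commonly used value, not from the talk):

```python
import struct

def make_priming_query(qid: int = 0x1234) -> bytes:
    """Wire form of a priming query: ". NS IN" with RD=0, plus an
    EDNS0 OPT record advertising a 1232-byte UDP payload size."""
    header = struct.pack(">HHHHHH", qid, 0x0000, 1, 0, 0, 1)
    question = b"\x00" + struct.pack(">HH", 2, 1)  # root name, NS, IN
    # OPT RR: root name, TYPE=41, CLASS=UDP size, TTL=0, RDLEN=0
    opt = b"\x00" + struct.pack(">HHIH", 41, 1232, 0, 0)
    return header + question + opt
```

Sending these 28 bytes over UDP to any root server address from the hints file and parsing the NS (and glue) answers is the priming step the bis draft revisits.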
AA: So, I'm Dan. This is basically now an extension to EDE, which it was nice to see some progress on earlier. Next slide. The purpose of this is, when there is DNS filtering, to provide more information about what was filtered and by whom, so that that can be tracked down if that filtering was — sorry, I had some papers next to the mic; my apologies. Is that any better? Thanks.
AA: The changes we made between -01, which is the last version I presented, and the current version are further constraining the JSON and when it's displayed to the user. The reason is that there was concern that browsers would be unwilling to display information controlled by the DNS resolver the browser is talking with, and would instead just throw it into a log that a more sophisticated user can look at — not something that's popped up straight in front of the user's face — and also to help constrain those messages.
AA: So, for example, what's defined right now: 1 means it was filtered because of malware, 2 for phishing, 3 for spam, and numbers like that. And to ensure that we don't have a cache poisoning problem, we require the resolver info to signal that there is support for this, and that the information returned is actually from that first-hop resolver, and not being sent down from something else and causing the EDE to get propagated.
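A sketch of the kind of structured EXTRA-TEXT the draft describes, plus the sanity check a client might apply before using it. The field names below are illustrative placeholders, not the draft's actual schema:

```python
import json

# Illustrative payload: a sub-error code (e.g. 2 = phishing, as in the
# enumeration above) plus contact details for the filtering party.
payload = {
    "sub_error": 2,
    "org": "resolver.example",
    "contact": "admin@resolver.example",
}
extra_text = json.dumps(payload, separators=(",", ":")).encode()

def parse_extra_text(data: bytes) -> dict:
    """Parse EDE EXTRA-TEXT as JSON, rejecting anything but an object."""
    obj = json.loads(data.decode("utf-8"))
    if not isinstance(obj, dict):
        raise ValueError("EXTRA-TEXT JSON must be a JSON object")
    return obj
```

Numeric codes rather than freeform strings are what allow a browser to localize the message instead of displaying resolver-controlled text, which is the concern raised later in the discussion.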
AA: The next slide shows an example of the JSON that's being sent — currently in the EDE text field. So, instead of sending human-parsable text, it's got kind of human-parsable JSON. That's my update. Next slide, please. I'm wondering if there's any further interest in this — there seems to be — and I would certainly like any comments from the room or from the 108 other people.
O: I'm John O'Brien, University of Pennsylvania. I'm curious whether you've looked at whether this could interoperate usefully with response policy zones — and if not, that seems like it would be a valuable addition.
X: Excellent — I like the changes, and I hope the browser vendors will like them as well. Do you have any feedback from either Google people or Firefox people on how they think about the latest changes? Because that was the main issue last time, right?
AA: Tiru has received some positive feedback — I forget from whom, I'm sorry — but it seems more amenable to browsers, especially getting rid of the freeform text and going to error code numbers, so that the browser can localize the message, and especially not displaying the freeform text that we had previously. That seemed to resolve most of the issues we'd been getting feedback on. But unfortunately Tiru is asleep right now, so I don't know who those specific folks were.
P: I just want to add my support for adoption of this, and also agree with the previous speaker who mentioned supporting RPZ. This would be really useful operationally for us, for the protective DNS resolver that we operate in the UK.
R: Ben Schwartz. I do think that this revision is an improvement. I still wonder a few things — I wonder if DNSOP is really well placed to deal with this. I think this is really getting at a much deeper and thornier question of how malware-type filtering ought to work in a user-device setting. The major browsers today already have functionality of their own here, in the context of things like Safe Browsing, and those systems aren't simple query-response systems the way the DNS is. Instead, they tend to use complicated quasi-information-retrieval algorithms to avoid revealing private user information. Yes, we can sort of stuff this kind of functionality into the DNS — but should we?
E: Sorry — I made this comment in the chat. The chairs are interested in this, but we'd like to hear from folks who are actually going to be implementing it, because that will help us drive where the working group takes this. So if folks are interested in implementing, please speak up and give us a heads-up; that'll help us a lot. Thanks.
G: Yeah, indeed. Outside the working group we did hear from other people who were interested in the draft, and probably interested in implementing, but we would like to hear that in the working group as well. We will plan a call with Dan and the chairs on how to go forward and get more explicit support from potential implementations.
D: Yeah, I was just going to give another +1. I support adoption of this. I can't personally commit to Akamai implementing it, but I could see it being very feasible for us to implement for the services we offer that provide filtered responses. And Ben's question about whether DNSOP is right is a fair one, but I do feel like every time there's a real-world problem that operators come and say is an issue, that question comes up.
W: Dr. Carney — my comment is mostly actually back to Ben. I don't think this is a new mechanism to implement virus scanning of any sort; this is really to deal with reporting existing RPZ feeds and their classification of a domain. It's already blocked — we're just adding transparency as to why, so the user can be told why the DNS name isn't resolving.
N: Hi, Chris Box, BT. As a customer of PowerDNS, I would like to see this developed, and we would deploy it in the network.
O: It just occurs to me that, since the internet hasn't been completely overtaken by HTTP, this would be useful for applications other than web browsers — applications that can't present a captive portal and might still like to be able to propagate exception information of this kind up to a logger or some sort of user.
AB: Hi — I'll have slides in a second. Right, so I'm here to speak about some research trying to answer a long-standing open question about DNSSEC on the internet. This is joint work with Austin Hounsel and Nick Feamster from Princeton, and Chris Wood from Cloudflare. Next slide, please.
AB: This is not my work — this is work by APNIC, Geoff Huston, measuring the extent of DNSSEC validation on the internet. As you can see, there's quite a bit; this line's a little hard to follow, but something on the order of a 40% total validation rate. So that seems like good news — except, next slide.
AB: In fact, that DNSSEC validation is largely Google Public DNS and Cloudflare, people like that. No major operating system does endpoint DNSSEC validation by default — Windows, Apple — nobody will just roll this out; I checked with Tommy Pauly, and you have to turn it on for yourself. And browsers won't do it either. This is obviously super limiting if you want to roll out any feature that requires DNSSEC — anything that requires not trusting the recursive resolver, like DANE, needs endpoint validation. By the way, this talk is entirely about endpoints — consumer endpoints. This doesn't apply to server-class machines; I know there's quite a bit of DNSSEC validation for things like sendmail, for mail and such. That's not what I'm talking about here. Next slide.
AB: So, as a browser manufacturer, I often get asked: why don't you jerks validate? And so I've produced a set of answers for this. One answer people often give is performance: you have to do more requests, maybe one will fail, maybe you have some trouble doing them in parallel, so it'll be slower. But the primary reason — the one highlighted in bold here — is that we're concerned about breakage.
AB: The scenario is: there's a DS record, and things should have been signed, but — congratulations — they're not, or they can't be validated. What you're supposed to do at this point, according to the RFCs, is hard-fail: refuse to accept the data you've been sent. But this is when you're looking up an A record. If I respond by hard-failing, what my user sees is that they're trying to go to a site and they can't get there. And the general sense among people who actually build this stuff is that any significant rate of non-delivery — in excess of fractions of a percent — will create unacceptable failure rates. What I mean by that is: if we roll out some new piece of functionality in the client and we see an increase in failure rates in excess of a fraction of a percent, it's unacceptable.
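The "fraction of a percent" rule of thumb can be written down as a simple rollout guardrail. The 0.2% threshold below is an assumed placeholder — the talk deliberately doesn't name an exact number:

```python
def rollout_acceptable(baseline_fail: float, new_fail: float,
                       max_increase: float = 0.002) -> bool:
    """True if enabling the feature raised the failure rate by no more
    than max_increase (an assumed 0.2% placeholder threshold)."""
    return (new_fail - baseline_fail) <= max_increase
```

With a ~2% baseline and the roughly one-in-three RRSIG losses reported later in the talk, any such check fails immediately, which is the talk's core argument against enabling endpoint validation over Do53.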
AB: We have to back it out. There's really been very little data about this. There's some work by my team from years ago, covering only a small population, and there was a thing maybe 10 years ago by Adam Langley, where they tried to retrieve random TXT records from the internet and had something like five percent failure rates — and five percent would be way, way excessive. So Adam has this famous post that we often refer to about why people don't do this, and it refers to this experiment — but that's not DNSSEC, it's TXT records, and it's a long time ago. So we decided to try to answer this question. Next slide, please.
AB: This is a very straightforward experimental setup. We set up some domains that we control ourselves; they have valid records one can trust, because we set them up — actually, Cloudflare set them up with their automatic signing. So we have correct DNSSEC records: RRSIG, DS, DNSKEY, whatever. We also have some other, less common records, which I'll get to in a minute — like HTTPS, SVCB, SMIMEA, and some new records we made up — and we use Firefox as a measurement platform.
AB: What we do is randomly select a subsample of Firefox clients, and then each client tries to directly resolve the relevant records. We do use the system resolver — I'll get to that in a second — but not through the OS API for this: we look up the operating system's resolver address and then talk to it directly with UDP or TCP, ask it for records, and see what the answers are. We measure the success rate. Next slide.
AB: So here are the queries we do — some of them in a random order, and some not. We start with the Firefox DNS resolve API, which just talks to the system resolver. This is just to verify that everything's working — if we can't get this, then things aren't going to work — and it also gives us a control baseline for what a failure rate ought to look like.
AB: Then we ask for an A record. Randomly, we ask for A records with all possible values of DO and CD — some people here may remember I sent mail to the list asking what to set these values to, so we set them all, just to be sure. We tried DNSKEY; we asked for SVCB; we asked for SMIMEA; and we asked for four records with all combinations of small and large, with code points in the expert review and private use ranges. All of these, as I say, are correctly populated, so they should all succeed.
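A sketch of what "A records with all possible values of DO and CD" means on the wire, using only the standard library (a minimal illustration, not the actual Firefox extension code): CD lives in the DNS header flags, while DO is the top bit of the flags half of the EDNS0 OPT record's TTL field.

```python
import struct

def encode_qname(name: str) -> bytes:
    out = b"".join(bytes([len(l)]) + l.encode("ascii")
                   for l in name.rstrip(".").split(".") if l)
    return out + b"\x00"

def make_query(name: str, qtype: int, do: bool, cd: bool,
               qid: int = 0) -> bytes:
    """Build a DNS query with the DO and CD bits set as requested."""
    flags = 0x0100 | (0x0010 if cd else 0)      # RD, optionally CD
    header = struct.pack(">HHHHHH", qid, flags, 1, 0, 0, 1)
    question = encode_qname(name) + struct.pack(">HH", qtype, 1)
    edns_flags = 0x8000 if do else 0            # DO is the top flag bit
    opt = b"\x00" + struct.pack(">HHBBHH", 41, 1232, 0, 0, edns_flags, 0)
    return header + question + opt
```

Iterating over the four (DO, CD) combinations and comparing which answers come back with RRSIGs reproduces the shape of the experiment described above.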
AB: So here's the next slide — this is where the actual answer to the question is. Red here is just DNSSEC. The baseline failure rate — sorry, I did lose DNS resolve, that's unfortunate — is about two percent, which is a little surprisingly high, actually; DNS resolve is about one percent, a little less. So there's something not quite right with resolve that's causing a little difference we're not sure about. These failure rates mean: are we getting the correct answers? It's a combination of "we got nothing" and "the thing doesn't match." And so, as you can see, if we just do the straight-up query — are these all off by one?
AB: These may all be off by one, which is really embarrassing — ah, I see what's going on. Okay, sorry. The bottom-line, straight-up answer is that as soon as we try to ask for the RRSIG, we don't get it about a third of the time. And we looked at this eight ways from Sunday — we looked at the responses by hand, and a lot of them are too small, so they couldn't possibly contain an RRSIG.
AB: They don't have truncation set either — it's just, oh, come on. If your CD equals one, of course you don't get it at all, which is expected; but if you set DO, you basically don't get it. DNSKEY you do get, so that's fine, but you don't get the RRSIG, basically. And then these other records kind of work sometimes: if you ask for SMIMEA, you get it about 85 percent of the time; ask for HTTPS, you get it about 93 percent of the time. These other records kind of work sometimes, but not reliably, and the bigger the record, the worse it is. It doesn't seem to matter whether it's in the private use range or not. So this is what we see. I'd be happy to talk to people offline, or online, about the methodology here, but this is basically what we see.
AB: By the way, this is all with UDP — TCP is a disaster; you're going to get a twenty percent failure rate right across the board. To apply a little more color: we did try asking for the RRSIG separately. That also doesn't work very well, and by the way, it's not even clear that it should work. As an omission, we didn't ask for DS — but obviously, if we don't have the RRSIG, this won't work anyway, so it doesn't matter. Next slide.
AB: So what's the impact of this? The bottom line here is that it's not safe to enable endpoint DNSSEC validation over Do53 for generic endpoint clients. It's probably fine for server-class machines, but it's not fine for client-class machines. It might well be safe to enable it over DoH or DoT: we did sort of experimentally measure the public resolvers, and they seem to do just fine — if you ask 1.1.1.1 or 8.8.8.8, they do a fine job of giving these answers. It might be the case that ADD-advertised resolvers do better, but of course we don't really know yet, because that doesn't really exist — though you might imagine that anything that's been updated recently does a better job. The security posture is a little confusing.
AB: It's somewhat practical to deploy other record types. If you have a new record type where working 80-odd percent of the time would be great — like HTTPS — then that really is fine. But if you have something that needs to work all the time: basically, it's not safe to do anything that isn't A, CNAME, and I guess NS. There seems to be some variation — HTTPS looks especially good, but that may just be because it's smaller than SMIMEA; we're not sure. So, anyway.
AB: That's the bottom line for us. We have a paper that we've written that we're not quite ready to publish, but we'll publish pretty soon, with more detailed methodology. I'm happy to answer, of course, any questions about this that people have.
S: Yeah, hi — Ray Bellis, ISC. So, do I understand correctly that these are experiments done within a Firefox browser on end-user sites? [Yes.] Okay. My own research, which is now about 10 years old, showed that home gateway resolvers are particularly bad at this side of things. So I'd very strongly suggest you try to, maybe, enumerate whether the resolver you're talking to is on the same subnet as the client, so you can actually tell the difference in the stats between on-net resolvers and off-net resolvers.
B: Okay, Brian Dickson, go ahead.
AC: Is the failure rate consistent, and are you able to track that across network changes? And can you potentially aggregate those — as in: these clients always failed, and therefore you could check initially and, if you get failures, not do the validation, but do it for any of the ones that do succeed? If the longitudinal collection of data suggests that, if it succeeds, it'll always succeed, that information, I think, would be very useful.
AB: Yeah — in principle, the answer to "can we take that measurement" is yes. In practice, the answer is that the grad student who did this work has taken a job, so probably the answer is no. If I manage to find somebody else who's going to do the heavy lifting, then the answer is probably yes again.
AD: Hi — firstly, thank you for doing this. One question I had: you mentioned, I think on slide eight, possibly, that you had done some testing of DoT and DoH resolvers to see how they behaved. I was wondering whether you had done enough of that to collect any data. You said that contacting them with DO and CD set was more successful — did you get enough data to produce any kind of useful statistics on that? Do you have any charts, graphs, or anything like that?
AB: Yeah — no. What we did was: we only realized that this was an interesting question at the very end, and so we just connected from one of our machines to those resolvers and checked that they did it correctly. Our assumption is that, because DoH protects the data in transit — or DoT does — we don't have to worry about whatever intermediaries there are, so the view from anywhere is as good as the view from anywhere else. We just assume that if Cloudflare or Google does it correctly once, they'll do it correctly for everyone.
AD: Sure — I guess I mostly meant, from the perspective one of the previous questions mentioned, that potentially the issue wasn't necessarily the resolver you're talking to, but the middleboxes in the way — CPE equipment in home environments — whether something was mangling the traffic on the way there and back.
W: Victor, Google. Thanks for doing this — it's good to see these things measured again, and it would be good to see it again in the future after some time; too often we end up quoting things that are five to ten years old. So, excellent, thank you. I was curious whether you've anonymized the sources to a degree that might make it difficult to break this down by GeoIP or AS numbers or whatever. It would be nice to see how the situation varies across the globe.
AB: Yeah, we do have that data — some of it is in the paper. From memory, the US and Europe are substantially similar, and China and India are substantially worse, especially for the non-standard things. But that data is in the paper, and I think we have it by — if I recall — ISP and ASN. So if there are other analyses people find interesting, we could probably run them.
AB: If I recall — I can check this for you — we always set the EDNS max record size for all the queries, and that's one of the reasons we think it may be worse for the baseline query. I can share the code; the code for doing this is in a Firefox extension, so I can share that with you.
AB: Yeah, exactly — and I think our assumption here is that this is largely broken home devices, but of course we have to go to war with the home devices we have, basically.
M: Daniel Kahn Gillmor. Thanks for doing this work, and thanks for publishing it. One thing is just: it would be nice to see the breakdown of failure rates by actual size of the packets, because the size does seem like a likely failure cause. Yeah.
AB: So, I mean, we don't have a linear breakdown, but yeah, we could give you a scatter plot, basically — and I assume you mean the size of the expected packet, as opposed to the size of the received packet. We have that too. Basically, all the analysis is errors, right, but when I first started looking at this I was shocked at how high the rates were, and so I spent some time asking: are we really — are our parsers just screwed up? So I went and binned out how big things were, and in a lot of cases the responses are way too small to contain the record, and that's what persuaded me this data is substantively correct. So yeah, we have all the information; let me see what I can do to add that to the —
M: — paper. Yeah, that would be super useful. And then the second thing is just this observation that, you know, the network is filled with garbage, and you're helping to demonstrate that here. I think we, as a community who's thinking about how we evolve networking, need to think about what we do about that. We can't remove all of the CPE equipment, but if we're writing the software that's running on the client, we can say: if your local network has these kinds of failures, then we will do something else. I think it's worth looking at this and saying: hey, encrypted transports are both more privacy-preserving and more capable, and we should start ignoring the garbage parts of the network to do that. I just think we should send that signal as clearly as possible.
U: RFC 8027, actually, is the DNSSEC roadblock avoidance document that talks about this type of stuff. It would have been neat if you had a chart of how many resolvers fell into the different categories, because one of the things we did there is outline tests for all the things you need to do to know whether you've got a DNSSEC-compliant resolver in front of you — and to know when you don't, so that you can make appropriate decisions, or at least know when you're being attacked in some way.
AB: Oh — because... so, okay, where are my skis? We do have that data. We took it out of the table because the failure rate was extremely high, and then we found an RFC, whose number I cannot remember, which led us to conclude that the RRSIG query was not supposed to work. I can find a link for you.
AB: Yeah — and so we came to the conclusion that the IETF thought that wasn't supposed to work. Sure — and if I'm wrong, we have the data and we'll put it right back in. The one we're actually missing is DS, and we just forgot — that's basically the answer.
AB: Okay, great, thank you. So, people have my email address: if you want to talk more about this and you have questions, please reach out to me. If you think I've screwed something up, and this is just me being wrong, please tell me — I'd rather know now than otherwise. And if there are other things, maybe data I could get out that would help you, please let me know. But thank you for listening; I appreciate it. Yeah.
Z: So, you all, I guess, know RFC 7344, which defines the CDS and CDNSKEY records for announcing your DS parameters in the child zone. The parent can query that stuff and then, for example, use it for rolling over the DS records when you do, say, a key change, or also for onboarding DNSSEC for a zone. While working with this in the context of bootstrapping DNSSEC, I encountered, let's say, an interesting situation that I wanted to point out, and I think it needs correction in the RFC.
Z
So let's say, I don't know, I'm the .de registry, for example, and I'm querying CDS records for peterthomassen.de. Then, what to do if that's different on one name server and on another name server? You could say: whatever, it's endorsed by the DNS operator, so who cares; it'll be right one way or the other, and if it isn't, it's going to be the fault of the DNS operator.
Z
But that is not entirely true in multi-homing setups, where I think this is a specifically severe problem, because it may lead to one provider in a multi-signer setup being able to unilaterally roll the DS record set, with or without intention. For example, a typical problem could be that there is multi-homing, let's say at NS1 and Cloudflare; both of them do signing, and now they announce each other's keys, and there's going to be automation for this, and RFC drafts going on.
Z
And all of that. So that's going to be more common in the future, perhaps. And let's say now one of the providers rolls a key and publishes new CDS records, and at that point forgets to publish the other operator's CDS records. Then the parent comes, queries that, and ends up only retrieving one part of it, essentially rolling the DS record set, and then the zone is broken.
Z
The delegation is broken, because the keys of the other provider are going to be missing. That's a problem if it occurs; but also, conceptually, a single provider shouldn't be in the position to be able to remove another provider's trust anchors. So I think this needs correction. This is the first of two slides, and the second slide, which we're going to now, has the proposed solution.
Z
So I think it would be nice to have a short document that clarifies that if, as a parent, you're querying these records from the child, you need to query them across all of the name servers, and may only act upon them if they are consistent. So here's the proposed wording, which is in the draft I uploaded; it's an individual draft.
Z
Then that situation must be considered inconsistent, and if such inconsistency occurs, then the parent must take no action; specifically, it must not delete or alter the existing DS record set. So I'm happy to discuss what people think about this, and if you think it's a good idea, perhaps we can work towards advancing this in some way. Okay.
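The rule being proposed can be sketched as a small parent-side check; the function below is an illustrative sketch under my own naming and data modeling, not text from the draft:

```python
from typing import Dict, FrozenSet, Optional

def agreed_cds(responses: Dict[str, FrozenSet[str]]) -> Optional[FrozenSet[str]]:
    """responses maps each authoritative name server to the CDS RRset it
    returned, modeled here as a frozenset of rdata strings.

    Returns the common RRset when every server agrees; returns None when
    the responses differ, in which case the parent takes no action (and,
    in particular, does not delete or alter the existing DS record set).
    """
    distinct = set(responses.values())
    if len(distinct) == 1:
        return next(iter(distinct))
    return None  # inconsistent: leave the DS record set untouched
```

The point of returning None rather than picking one answer is exactly the one made on the slide: no single provider's view should be able to roll the DS record set on its own.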
T
Mark Andrews. You've got a potential denial-of-service attack if you require every NS to return the same DS records: all you need is a machine that's down, and you can't recover from that using a rollover. And in reality, DS records are no different to any other RR type. If you're multi-homed and one of the operators does something to the zone, the DS records aren't really any different to any other record type that can potentially be served.
Z
Okay, so my opinion on this is that, of course, it's just records like any other, and if you serve two different A records, that's fine. The proposal is not imposing any restrictions on how to publish CDS records; you may publish inconsistent CDS records if you like. The proposal is an instruction to the parent on whether to act in a situation that is known to be inconsistent. And I think the first time this goes wrong for a large domain...
W
I think... my concern... this is [name unclear], Google. My concern is similar to Mark's. There are probably, I think, not-uncommon hidden masters that are listed in the NS records to facilitate various kinds of internal synchronization, but aren't necessarily exposed to the outside world.
W
Such domains won't necessarily be reachable at every name server. Whether it's a good idea to list the dead name servers or not is a separate question, but I would expect it's not entirely uncommon. And also, of course, if the only name server with the private key, you know, fails, and you need to do an emergency roll and tolerate the outage, you know, whatever; then again, you can be stuck, as Mark indicated. So requiring every single one sounds a little risky. This needs some thought.
Z
Okay, I think that's a fair point. So perhaps the wording could be amended so that it's fine if a name server does not return any CDS or CDNSKEY records, with a proof of non-existence, of course; but those that do, I believe, should be consistent, because otherwise, how do we address the concern that one provider is accidentally screwing it up?
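The amended rule (servers that prove non-existence are skipped; all servers that do answer must agree) could be sketched like this; again, the modeling is an assumption of mine, not draft text:

```python
from typing import Dict, FrozenSet, Optional

def agreed_cds_lenient(
    responses: Dict[str, Optional[FrozenSet[str]]],
) -> Optional[FrozenSet[str]]:
    """responses maps each name server to its CDS RRset (a frozenset of
    rdata strings), or to None for a server that returned a proof of
    non-existence.

    Servers with no CDS records are ignored; the servers that do answer
    must all return the same RRset, otherwise the parent takes no action.
    """
    answered = {r for r in responses.values() if r is not None}
    if len(answered) == 1:
        return next(iter(answered))
    return None  # no answers at all, or inconsistent answers
```

This version tolerates the hidden-master and dead-server cases raised at the mic, while still preventing one provider from unilaterally replacing the other's trust anchors.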
R
A time constant: because normally the TTL, I guess, tells the parent how often it needs to recheck the CDS. But if you checked, and the answer is that the CDS is in an inconsistent state, when do I recheck? How often do I repeat this check?
Z
So in the current specification, the CDS TTL does not impose any schedule on the rechecking; it's completely up to the parent what to do. And I believe there is no registry... I mean, there's a bunch that implement the scanning, I think seven or eight, but I think they all do it daily, regardless of TTL.
R
Okay, that's interesting. But even given that, I don't think we would want the result of this to be "well, we'll try again tomorrow." If you're trying to execute a transition like this, you know, checking once every 24 hours...
R
It could be a very long time before you get into the right state again. So I think you might consider, you know, putting in some time constants, recommending that you check back in five minutes, in case you happen to be right in the middle of a transition.
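The scheduling tweak being suggested: keep the usual daily scan, but re-check on a short interval after observing an inconsistent state, so a rollover in progress converges quickly. The constants below (daily scan, five-minute retry) are just the values mentioned in the discussion, not from any specification:

```python
DAILY_SCAN = 24 * 60 * 60    # typical registry CDS scan interval, seconds
INCONSISTENT_RETRY = 5 * 60  # short re-check suggested at the mic, seconds

def next_check_delay(last_scan_inconsistent: bool) -> int:
    """Seconds until the parent should look at the child's CDS again."""
    return INCONSISTENT_RETRY if last_scan_inconsistent else DAILY_SCAN
```

With this shape, a scanner that happens to catch a multi-signer setup mid-rollover waits minutes rather than a day before re-evaluating.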
G
Okay, thank you. Thank you, Peter, and thanks for the feedback from the audience. Yeah, I think we're done; we're perfectly on time. We completed the agenda, including the time-permitting items.