From YouTube: IETF113-DNSOP-20220322-0900
Description
DNSOP meeting session at IETF113
2022/03/22 0900
https://datatracker.ietf.org/meeting/113/proceedings/
Okay, welcome. This is the DNS working group — the DNSOP working group. Next slide, please. So the chairs are Tim Wicinski, Suzanne, and myself, Benno. Alexander here is the so-called delegate; he's helping us with the practical issues of running the hybrid session. Warren is our AD, sitting in front here. We have a Jabber scribe, which is Suzanne, and we have the note taker — the minute taker — and that is Paul Hoffman. Thank you very much. Next slide.
If you want to go to the mic — maybe you already learned this yesterday, but I want to repeat it — if you go to the mic, you have to raise your hand in Meetecho. And everyone in the room, even if you don't intend to go to the mic: please register with the following QR code, because it also registers you as attending the DNSOP meeting. So it's the blue sheet as well. Thank you.
So, for the document updates: Tim — or the DNSOP chairs — sent an email this morning about the status updates. We want to keep it brief, because we have a full agenda, so we would rather reserve the time for this week's presenters.

So, next slide. We did finish DNSSEC IANA Considerations; it's published as an RFC. And DNS over TCP requirements has also been published as an RFC.

SVCB/HTTPS service binding is still in IESG evaluation; there is some discussion in the IESG. A new I-D was requested by the IESG, and the process is going forward. Yes, next slide.
Yeah, so the interims: we had scheduled two interims between IETF 112 and 113, this one. That seems to work quite well for us, so we plan one or two drafts into a meeting of an hour, so we can discuss them in detail.
I think we're in good shape. So we did a survey about which working group documents — or potential working group documents — are of great interest. From the first survey, four or five months ago, the DNSSEC bootstrapping and the DNSSEC automation drafts were selected as the most relevant — or, they were the drafts that got the most interest from the DNSOP working group. So we will start a DNSOP call for adoption later this week for these two drafts.
Besides that, of course, there's new work presented today. There are discussions on the mailing list — that's good — but we will have another survey for the working group to fill in, to indicate which drafts the audience — the participants — find interesting and which work they find relevant for the working group, so we can make another selection of new work. So we want to prioritize our work, also in coordination with feedback from the DNSOP working group.
Warren Kumari. So Benno and the chairs just went through, like, a long thing on, you know, we should do stuff with the working group and make sure the working group has all the consensus and blah blah blah, and that's a great idea. But what I would really like is if we could also do something in addition: there has been some discussion on a BCP on DNSSEC, and there was some discussion in that on—
Yes — so we — you probably have heard us talk about the multi-signer DNSSEC automation, and one of the problems is that, for multi-signer, both — or several — signers need to support the same algorithm. And, well, if you own a domain name and, let's say, you're in the size class of the Coca-Colas of this world, then you probably can persuade your service providers to support the algorithms you want.
But for the rest of us, that's not the case. And so, if you want to move your domains and your service providers have distinct sets of algorithms, we would need to do something to be able to move those domains. And, yeah — so right now the RFCs require that there be signatures for all the algorithms that are in the DNSKEY set, and that can't be done if the signers don't support all the algorithms, obviously.

And so we would need to adjust the DNSSEC RFCs, but we would have to do some additional work, and right now it is not really clear how to do that, and we need to discuss that. Yeah — we started some discussions on the mailing list, and I would like more people to join in and see if we can solve this problem.
We do have a lot of new work in the working group, because, well, the existing documents are almost finished and we are discussing the details on the mailing list, but none of the presenters or authors asked for a presentation slot. So there's a lot of new business; I won't go over the agenda. Next slide. I think we can get started.
So this is about negative caching of DNS resolution.

Okay — Duane, can you — can you send me your slides of the first presentation, the "glue not optional" one, just by—

Okay, the mail has been sent. All right, so this is a presentation on negative caching of DNS resolution failures. This is a new draft, not adopted by the working group. Next slide.
So the gist of the problem that we want to talk about today is that it appears recursive name servers are really bad at caching resolution failures, such as timeouts, SERVFAIL codes, REFUSED codes, DNSSEC validation failures, and loops.

The graphic on the right is data from Verisign's com/net name servers during the Facebook outage of October last year, and it shows how the rates of queries increased for those three Facebook-related domains from, you know, a reasonable amount to kind of an insane amount over the six or so hours of the outage.
So this list is also in the draft, and these are some of the incidents and evidence of, you know, failure to cache resolution failures over the years. The Facebook outage I just mentioned: for that, we saw a 128x increase in queries to the com/net name servers, in 2021.

Last year, myself and some Verisign colleagues presented some work at the DNS-OARC meeting, where we were doing some experiments with a botnet domain, and we intentionally caused one of the domains that the botnet used to return SERVFAIL errors, and we saw a 1200x increase in responses — in queries — to our sinkhole name servers.
I suspect a lot of us are familiar with the TsuNAME work, also from 2021; in one network they recorded a 500x increase. In 2020 there was the NXNS attack research that was published: a 1620x increase. Going back further in time, there was the KSK rollover: Viktor Dukhovni presented something at DNS-OARC, where he was doing a daily survey of DNSSEC, and at the end of his survey he retried all of the names that had failed — he retried them all again — and that led to a very significant amplification effect.

The Dyn attack reported, or talked about, a 10x increase in retries from resolvers. And if you go back even to 2009, you may remember this paper and blog post called "Roll Over and Die", from when the RIPE NCC did a rollover of some [inaudible]. Next slide.
So, briefly, this slide talks about the existing requirements that, you know, we were able to find in some of the RFCs. So RFC 2308 is all about negative caching, and it says that negative caching should no longer be seen as an optional part of a DNS resolver.

Although RFC 4697 talks about the NS RRset — you know, during the Facebook outage we weren't exactly seeing NS queries, but we were definitely seeing queries to the parent zone for facebook.com.

RFC 4697 also has some other SHOULD-level requirements that we won't go into in this presentation. And then RFC 8767, which is serve-stale: I believe it says that attempts to refresh from non-responsive or otherwise failing authoritative name servers are recommended to be done no more frequently than every 30 seconds.
So this is kind of what we propose in the draft for updated requirements. First, resolvers MUST cache all types of resolution failures for at least five seconds and not longer than five minutes.

Resolvers SHOULD employ an exponential backoff algorithm in the amount of time that they negatively cache — or, actually, back off in the sense that they retry. So, you know, every time they retry, they should wait a longer time between retries.
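The backoff behavior described here could be sketched roughly as follows. This is an illustrative sketch only, assuming per-(name, type, server) failure state and a simple doubling schedule; the exact rules are whatever the draft ends up specifying.

```python
import time

MIN_TTL, MAX_TTL = 5, 300  # seconds: "at least 5 seconds, not longer than 5 minutes"

class FailureCache:
    """Negative cache for resolution failures (timeouts, SERVFAIL, ...)."""

    def __init__(self):
        # (qname, qtype, server) -> (consecutive_failures, retry_at)
        self._entries = {}

    def record_failure(self, key, now=None):
        now = time.time() if now is None else now
        failures = self._entries.get(key, (0, 0.0))[0] + 1
        # Double the hold-down time on each consecutive failure, clamped
        # to the 5s..300s window.
        ttl = min(MIN_TTL * 2 ** (failures - 1), MAX_TTL)
        self._entries[key] = (failures, now + ttl)
        return ttl

    def may_retry(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(key)
        return entry is None or now >= entry[1]

    def record_success(self, key):
        # A successful resolution clears the back-off state.
        self._entries.pop(key, None)
```

A resolver would consult `may_retry` before re-querying a server that recently failed, instead of hammering it thousands of times per second.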
Okay, so that's it. We also presented this material, sort of, at the DNS-OARC meeting a month or so ago, and that has a little bit more information on the background and some data. So if folks would find that interesting, I would encourage you to go look for that.
Hi, Paul here. If you go one slide back — I guess you have the exponential backoff as a SHOULD. Why is that not a MUST? Because that seems like the really important thing to, sort of, stem the flood of messages.
Well, to me, the most important thing is the "at least five seconds", because, you know, what we see in fact is certain resolvers querying like thousands of times per second in failure mode. So if we could go from thousands of times per second to once every five seconds, I think that would be a huge win. But I would also be willing to make exponential backoff a MUST-level requirement if there was support for that.
I pretty much support adopting that, because I think it is — it is important. The question is: will we get the people who are doing this, with the thousand requests per second, to actually read it? But I think we should give it a go.
I will just state my support for this. I think this is very important work. I'm not going to get into the details — that, of course, is going to be hashed out as we work on it — but I strongly support adopting this work into the working group. Thanks.
Hi — so, first thing, I think it sounds like a good idea. One thing I was wondering: you said about five seconds, and about, you know, getting thousands of queries. Part of me does wonder, depending on the proportion of this traffic that's coming from, shall we say, large public resolver operators — I've got to say my name: I'm Hazel Smith, Google — is that going to, like —

Is that timeout supposed to be, like, per backend host? Like, you know, if you've got a load balancer that sprays across — I'll pick a number out of my hat — 10,000 backends, each of which is a recursive name server, each with its own local cache: if each of those waits five seconds, given how many people ask for Facebook, you know, every second of the day —

is that going to be enough? If each one of those waits five seconds, and then all of them at once go "I don't know who facebook is — better ask"?
Yeah, thanks Hazel. So it is our intention that it is per backend server, as you said. So maybe we need to be a little bit more specific in the language about that, but that is the intention. What we see today — again, if there's a large recursive operator that has thousands of backend servers, what we see at Verisign in some of our data is thousands of requests per second from each of those backend servers, right.
Can I speak? Jim Reid. I strongly support this draft; I think it's much needed, so kudos to Duane and his colleagues for actually writing this up. I think the group really should adopt this. I've got one or two questions, though. First of all is the numbers we've got there: resolvers must cache for at least five seconds and not longer than five minutes.

I don't really know, or care, where these particular numbers come from, but I do think, if we're going to develop this further, we should try to have an evidence-based approach to that. Maybe these values should be smaller; maybe they should be larger — I don't know — and we should also consider what the second-order effects would be once those values are chosen and put into place. And the other question I have is more of a meta one, Duane: you saw there was a big spike in queries after the Facebook outage.

Were you able to identify the name server implementations that were responsible for the bulk of those queries? And I would imagine they're going to be coming from a very small number, because the bulk of the resolver traffic these days is coming from, you know, Google's public resolver and the other quad services.
Yeah, thanks Jim. So in the draft — I believe; maybe not, I don't know if it was in the draft — but Verisign published a blog post on the outage as well, and in there we were able to identify some of the sources for this, right. So we were seeing a lot of the traffic coming from large recursive operators.

We didn't really attempt to identify, you know, all the open-source implementations out there; we did the analysis, or identification, based on source address, not on, say, software version, if that makes sense. I can point you to that if you need it. Sorry.
Yeah, so we bumped Joao from the queue, but we do have time for a short question. Okay — okay, sorry; Duane, go ahead.

I was just going to say: if anyone needs the link to that other blog post, I'd be happy to provide it. Okay.
Thank you. So, to your last request, the call for adoption: we will run the survey, and your draft will certainly be in the survey, so the working group can indicate their preference on the work and we can set priorities. But we will follow up on that, and we will report. We'll run the survey for probably one or two weeks, something like that; so early April, mid-April, we'll set out a plan for adoption of work. Yeah.
Yeah, okay, thanks Benno. So this is about the Glue Is Not Optional draft. This was originally written by Mark Andrews, and he kindly allowed myself, Shumon Huque, and Paul Wouters to join him as co-authors.

The title originally, of course, is "Glue Is Not Optional", but as we'll see, you know, throughout this presentation, there may be a need to change that title slightly.

So — next. Last time this work was presented we were at revision two of the draft, so here are the changes summarized since then. I'm going to talk in detail about all of these in the forthcoming slides, so just go ahead to the next slide, please.
So one sort of important change is that we added some clarifying text that this draft only refers to requirements on name server software implementations, and doesn't really say anything at all about data placed in zones or how registries should operate. So a few people told me they were very pleased with that addition.

So, based on some discussions in other working groups, like DPRIVE, it seems like we might be headed in a direction where the concept of glue could be expanding to things — record types — other than A and AAAA records.
So, for example, there's this DS glue draft by Ben, and so the question for us becomes: should this document talk about "referral glue", or just continue to use the phrase "glue" to mean only addresses of NS records below the zone cut? At this point, in the -04 version of the draft, we've gone ahead and changed "glue" to "referral glue" in most places, kind of to test the waters and see how it sounds.

To me, it sounds a little strange, but we went ahead and did that. And then also, I think there's maybe a little bit of an open question about where to define glue — although maybe somebody else has some updates on this. At some point there was talk of putting it in the DNS terminology document, and if it doesn't go there, then it seems like maybe it needs to go in this document, but I would probably not like that. So we'll see. Next.
Another aspect that was controversial with the previous version was the requirements around sibling glue.

So at this point we've made sibling glue optional — and that's why the, you know, sort of strong title may need to change — but what the current version says is that when a name server generates a referral response, it should include all available glue records in the additional section; and if, after adding all the in-domain glue records, not all sibling glue records fit due to message-size constraints, then the name server is not required to set TC=1.
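The response-assembly rule just described could be sketched like this. This is an illustrative sketch only — record sizes and the decision to skip sibling glue once truncation is signalled are simplifying assumptions, not the draft's exact text.

```python
from collections import namedtuple

# A toy resource record: just a name, a type, and a wire size in bytes.
RR = namedtuple("RR", "name type size")

def fill_additional(in_domain_glue, sibling_glue, budget):
    """Fill the additional section of a referral within a size budget.

    In-domain glue is required: if any of it does not fit, set TC=1 so the
    client retries over TCP.  Sibling glue is optional and may be silently
    dropped when the message is full, with TC left clear.
    """
    additional, tc = [], False
    for rr in in_domain_glue:
        if rr.size <= budget:
            additional.append(rr)
            budget -= rr.size
        else:
            tc = True            # required glue omitted -> signal truncation
    if not tc:                   # design choice: optional glue only if not truncated
        for rr in sibling_glue:
            if rr.size <= budget:
                additional.append(rr)
                budget -= rr.size
            # else: drop silently; TC stays 0
    return additional, tc
```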
And then I think the last change that we want to talk about is that the document now talks about transports a little bit more generically.

And next — I think that's it. Okay. So at this point we see sort of the main outstanding issues as, you know, the phrasing of "glue" versus "referral glue", and where glue actually gets defined.
So I also don't see why those things wouldn't be subject to the same rules as everything else, so I guess I don't understand the need for this distinction. In my view, glue is whatever records have been placed in the parent for the purpose of referrals, and the parent — well, it might have some opinions about what it accepts there, but once they're in, you know, I think they should be treated uniformly.
When a name server is, you know, going through and filling in the additional section, it needs to have some sense of priority, right — unless we're just going to say, again, go back to all types of glue.
No — so the distinction that seems relevant there is about the location of it, as opposed to, say, the RR type. I think that if you want to draw a distinction in the draft, the important distinction is the location of the glue record — the owner name of the glue record relative to the other names in play.
Thank you. Paul — Paul Hoffman.

So, because the terminology document is still open, I think it's the best place to put this. Not that I want to do the work to put it there, but I think that, since we haven't closed it out, and because people writing future documents are more likely to look for it there than in some other RFC, just put it in the terminology document. We can keep it open for longer.
I wanted to add my weight to the same thing: I think the definition should be in the terminology document. As an operator who quite often has to hire new staff, bring them on board, and train them, the amount of terminology in DNS makes that quite challenging. Having one place where all of that sits is really, really useful. And actually, I think that terminology document should be in constant edit: we should be publishing it now and again, but as soon as it's published, changes happen — it always requires updates, basically.
Yeah, so on the glue versus referral glue: I don't have an opinion; it's just words. All glue that I've seen is for referrals; I haven't seen anything else. The other thing is the registry statement in there. I mean, registries define the name servers, right, and registries kind of define the A records that go into the zone. So if they don't need to follow this, then I'm not sure if we are winning something here.

If you are registering a domain, the place you put the A record, the AAAA record, and the name server record in is the registry. So if the registry doesn't need to follow this — meaning they don't need to input the stuff — then what are we gaining? So I found that statement, that registries don't have to do something, a bit weird.
So I think the, you know, the impetus for — well, I guess we should take that discussion to the list, Ralf, because I think the original impetus for the work was really, you know, the behavior of name servers, and whether or not they set the TC bit in their responses. And what I've heard from a few folks that work on the registry side is, you know, different registries have different modes of operation with respect to how they treat glue records, or address records, in their registry, and they were concerned that they would have to change their models — their registry models — and they didn't want to do that.

Obviously, some have, you know — I think it gets down to host objects versus, I forget what the other type is called — but there were concerns that they would have to change their registry models for this.
Hazel Smith, Google. So, firstly, I would say that I think it's a good idea to make the definitions of glue clearer. I mean, it's possible I'm just hilariously under-informed here, but I've always struggled to know whether I should call the sort of DS records that are, you know, parent-side of the zone cut glue or not, and I'm sure I did try and look for a straight answer on this and couldn't find one before.

So wherever this ends up, it would be good to have a straight answer somewhere — a community consensus on this point. I also heard what Paul was saying, that perhaps it should be in the terminology document, and to me that sounds like it might be a better place for it; but as long as it goes somewhere — that's my feeling.
Thank you. Alexander, in the room.

Thank you. Alexander Mayrhofer, nic.at. Speaking from the registry side, I think it's very important to differentiate the three steps between: a registry accepting data, which is the first step; the second step, the zone file that the registry generates for a name server; and then a third step, where the name server actually hands out a certain set of records.

Maybe there should eventually be a second document that talks about the second step — about what is going to go onto the zone — and I would leave it up to the registry community, in REGEXT or whatever, to define what actually is acceptable. And we are trying to be very liberal in what we accept, in order not to confuse our dear registrars, because they send all kinds of data that we don't really put into the zone.
So I think, on, you know, where the definition goes — I think we're pretty clear on that. It sounds like we're still a little bit unclear about exactly what glue means, and, you know, whether "referral glue" — what that means. So, you know, I think that goes back to the group — to the list — and we'll go from there.
So this is particularly inspired by some discussions that have been happening in DPRIVE, and also in ADD, around the use of SVCB records to convey information about the support for encrypted transports for a DNS server, both recursive and authoritative. But it actually applies generally to things like using HTTP/3 with DANE.

One of the key purposes of service bindings is to enable upgrading to QUIC and skipping a TCP bootstrap step, and the only other way to get to QUIC is using the HTTP-specific Alt-Svc mechanism. So if you're not doing HTTP, then SVCB is the only way currently defined to get to QUIC. And also, this is just a very small amount of text.
Okay, I want to give some brief background about DANE, since we spent some time reading the DANE RFCs — maybe not everybody has looked at them recently.

So DANE is based on publishing TLSA records, which contain TLS certificates, or fingerprints, or public-key fingerprints, and those queries are made for a name of this form: _port._transport.base-name — or, in the documents, it's sometimes called the base domain. This query follows CNAMEs as usual, and this is all very logical: the TLSA query's base domain is based on the redirected transport endpoint — it's based on the thing you're actually going to establish a socket to, essentially.
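That owner-name construction can be sketched in a couple of lines (illustrative only; the full rules, including the CNAME handling, live in the DANE RFCs):

```python
def tlsa_qname(port, transport, base_domain):
    """Build a TLSA owner name of the form _<port>._<transport>.<base-domain>."""
    # Normalize the base domain to a single trailing dot.
    return f"_{port}._{transport}.{base_domain.rstrip('.')}."
```

For example, a DoT client connecting to port 853 on `dns.example.net` would look up `_853._tcp.dns.example.net.` (the host name here is made up).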
If it can't resolve the TLSA record there, it should rewind to the beginning of the CNAME chain, try a different base domain, and emit another TLSA query. So there are actually two different queries, for different names, potentially required on every attempt to connect using DANE. That's an interesting wrinkle.
There are documents describing how to use DANE with MX and with SRV records, and those have their own wrinkles. So the MX-and-DANE specification tells clients that they actually need to maintain three different reference identifiers. So they should construct the base domain and put that in the SNI — so they're telling the server, "I want your certificate for this name", the base domain — but they should also accept any certificate that comes back and covers these other identifiers, even if it actually doesn't provide them with the name they were asking for in the SNI. So that's pretty weird from a TLS standpoint.

And it gets even weirder, in a way, with SRV, where the SRV spec says that the expectation is that the target server host name should be in the certificate, but clients should not ask for it. They should ask for the service name. In other words, if there's SRV indirection, clients should ask for the thing before the SRV indirection, but expect the thing after the SRV indirection — literally: don't ask for the thing that we actually expect you to get. And, you know, there's logic behind each of these decisions, but these are the considerations that sort of went into our design.
So that is very much like the SRV specification — we tried to stick close to that, because SVCB is very much inspired by SRV, and is essentially an evolution of that idea — but we did add some simplification. Well, there are some differences. One is there's only one reference identity: there's none of this "try to validate against one name, but if it doesn't match, here are some other names that you should also check".

That seems like it will significantly simplify implementation when using common SSL-type libraries. But there is also another difference, which is that SRV records prohibit the use of CNAMEs after the SRV indirection; SVCB does not, and even mentions it in the draft as something that you can do, I think.
That's the original DANE specification — or, well, that's an early DANE specification — with a different definition of what the transport prefix labels mean. And the key here is that the current text just says TCP, UDP, and SCTP, and this raises a question about what happens if you have DTLS and QUIC running on the same endpoint. DTLS and QUIC are demuxable at the transport layer, so there's no absolutely necessary reason why they would have to have the same certificate, or the same key material, serving both of them if they were both operational. But maybe more importantly than that, there have been some proposals to use the presence of a TLSA record to indicate which transport is in use.
And so, if you are trying to tell the client whether to use — or if the client is trying to ask, "does this server support DTLS?" — we'd like the TLSA record to be unambiguous.

Here's just a quick example of the kind of thing that motivates this. So imagine you have a DNS server — again, maybe authoritative, maybe recursive; it doesn't actually matter very much for this — and it's got some set of service bindings, including some advertisements for DNS over TLS and DNS over QUIC, maybe on different ports.
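A toy model of that situation might look like this. The names, ports, and dict encoding are all made up for illustration; real SVCB records carry SvcPriority, TargetName, and SvcParams such as `alpn` and `port` in wire format.

```python
# Hypothetical service bindings for one DNS server offering both
# DNS over QUIC ("doq") and DNS over TLS ("dot") on different ports.
bindings = [
    {"priority": 1, "target": "dns.example.net.", "alpn": ["doq"], "port": 8853},
    {"priority": 2, "target": "dns.example.net.", "alpn": ["dot"], "port": 853},
]

def pick_transport(bindings):
    """Choose a binding the way an SVCB client would: lowest SvcPriority wins.

    Here that lets the operator prefer QUIC while keeping TLS as a fallback.
    """
    best = min(bindings, key=lambda b: b["priority"])
    return best["alpn"][0], best["port"]
```

The ambiguity the talk raises is what TLSA name such a client should then query for each transport, which is why the transport prefix labels matter.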
Oh yes, my video is upside down — that works well, because of the topic. So the reason that DANE chases the target is: who is in control of the certificate, in particular, right? So the one thing that you should definitely consider in your draft is —

The reason for the name chasing is that, but it still allows that, you know, if the CDN actually doesn't have a TLSA record, because they don't support TLSA, then I can publish one anyway, right? I can work around it. So that's —

Well, so that — right, that's always the problem, right: you want to do the least amount of management possible. So following to the target is, you know, quintessentially the purpose. So you definitely want to do something — the point being, regardless of why DANE does it: you need to be able to account for the fact that zone owners do not necessarily want to do a whole lot of tracking of remote certificates. That's a real pain.
In DANE — so, I think that we are doing the thing that pushes furthest in that direction. Yes, I agree with that.
Okay — well, for timekeeping, Ben: some final words, and the call for adoption.

So this is a call for — well, this is a request for adoption. So please consider whether you'd like to see this work in DNSOP.
We've had some discussion on the mailing list, mostly about this last point — about wrinkle one and CNAMEs — with some argument that this maybe should actually be deprecated across all of DANE, and that, as a first step, we should not permit it here. So, essentially, the client would follow CNAME chains to the end and would only do a single TLSA query, for the final alias target, not rolling back to the beginning of the CNAME chain.
That's okay. I just need to — I can't read the — I need to read this text. Okay, so this is also new work, also maybe looking for adoption, or at least to get feedback from you, to see if you like this I-D: dry-run DNSSEC. A dry run, or a practice run, is "a testing process where the effects of a possible failure are intentionally mitigated", according to Wikipedia. So in the picture you see the Italian astronaut Samantha Cristoforetti doing a dry run of a fix for the International Space Station, in a swimming pool.
I suppose — and next slide, please. So what's the reason — why will people deploy DNSSEC? So one of the thoughts has been that DANE would be a driver of DNSSEC.

I'm showing here the picture of a report of the recent failure at Slack: they actually did dry-run their DNSSEC deployment, but still managed to, yeah, be off the grid for the TTL time that the DS had in the parent. Next slide.
Here you see a query for dnssec-failed.org, I believe, and the EDNS extension with the extended error, saying something is not good with your RRset, or something. Next slide. And this is a call-out to Mozilla, because this is Firefox running DoH to Cloudflare, which does provide the EDE code, but Mozilla is not displaying it. Why not? This is important. Next slide. And recently Roy Arends came with a new draft.
In this case it will look: is there a dry-run DS? If so, we are going to try that first. Then: does it validate? Yes — great, you return the answer with the AD bit set. If not, send the report to the reporting agent of the operator, following extended DNS error reporting, and then do the validation again, but pretending that there had not been a dry-run DS record in the parent.
Continue — so, okay, this flowchart is shown because it conveys the idea easiest, but in an actual implementation — so, one of the other authors is Yorgos Thessalonikefs, who is a developer of the Unbound recursive resolver.

What we actually want to do is take a dry-run DS and a non-dry-run DS simultaneously into the validation process, and then, you know, evaluate them simultaneously, and also have two security statuses with the RRset in the cache, because otherwise it would be, yeah, a bit complicated for implementations.
Next slide. So another colleague of mine, Tom Carpay, did some measurements during the hackathon, trying to determine what the effect is in the real world of having a DS record with a strange digest type. Well, there were some issues with RIPE Atlas at the moment with getting the results for running dry-run DNSSEC next to an actual DNSSEC deployment.

I think there's also an opportunity to do something else as well: have clients participate in the dry run, with a dry-run query flag. It's just an EDNS option acting as a flag.
I hear Alexander — good morning. Yeah, so both things cannot be tested with RIPE Atlas, so we have to look into other ways to see how backwards-compatible that would be. Next slide.

So you could sign your zone, not put a DS in your parent, and have your authoritative servers report to a reporting agent, right? So why do we need the extra — the new DS algorithm — to signal dry-run DNSSEC?
W
Well, because the extended DNS error reporting EDNS option is not transitive. A stub resolver may not receive it, and so for stub validation it's better if it can see that dry-run is the intention, by having access to the complete validation chain.
W
Also, without it you would not test the link to the DS record, and you would not have the benefit of those nice other features, like bootstrapping actual DNSSEC from dry-run DNSSEC. And when combining dry-run with actual DNSSEC, it's nice that the dry-run DS points to the keys that are intended to be dry-run.
W
So this is the idea, and I'm curious what you think of it, and whether you believe that we should adopt it, if you think it's a good idea. Yes, we...
T
Hey, this is Gavin Brown from CentralNic. I think this is a useful tool in the toolbox, but I just wanted to flag a couple of issues on the provisioning side. I can't speak for other implementations, but I can say that on our side we validate DS records which we receive over EPP, and one of the things we do is check the hash length. At the moment, every algorithm that's in a DS record has a corresponding fixed hash length, and this would make hash lengths variable, because obviously, if the dry-run record is set in that algorithm field, then the implementation now has to know: I read the first bit off, and then I'll validate the length of the hash. The second thing is the way that we, as registries, receive DS records: there are two interfaces in EPP.
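Gavin's hash-length point can be illustrated with a small sketch. This is hypothetical registry-side validation logic, not CentralNic's code; the dry-run code point (252 here) and the exact wrapping encoding (the real digest type as the first byte of the digest field) are assumptions for illustration.

```python
# Sketch of registry-side DS digest-length validation when a dry-run
# meta type wraps a real digest type. Real digest types have fixed
# lengths; the dry-run case must peel off the inner type first.

FIXED_DIGEST_LEN = {1: 20, 2: 32, 4: 48}  # SHA-1, SHA-256, SHA-384 (bytes)
DRY_RUN_TYPE = 252  # hypothetical dry-run code point

def ds_digest_ok(digest_type: int, digest: bytes) -> bool:
    """Check that a DS digest has the length its (possibly wrapped)
    digest type requires."""
    if digest_type == DRY_RUN_TYPE:
        if not digest:
            return False
        inner_type, inner = digest[0], digest[1:]
        # Hash length is now variable at the outer level: validate the
        # remainder against the inner type's fixed length.
        return FIXED_DIGEST_LEN.get(inner_type) == len(inner)
    return FIXED_DIGEST_LEN.get(digest_type) == len(digest)
```

This shows why existing fixed-length checks would need updating: the outer record's digest length depends on a byte inside the digest itself.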
T
With the key data interface, the registry generates the DS record, and there's no way at this point, if a registry is using the key data interface, to know whether to set the dry-run flag or not. So we'd need to extend that RFC, the EPP extension, to add a dry-run flag that could be provided along with the DNSKEY data when adding a DS record to a domain in an EPP server.
W
F
Shane Kerr. So I was skeptical when I first read this, but actually, when you explained it, it makes a lot of sense to me. Have you thought about whether it would be applicable for helping measure the potential impact of, like, a root rollover?
P
So validation is already one of the most complex pieces of code in a resolver. We are now adding complexity to that for kind of no benefit, because you're giving out sort of insecure answers if you are... and I think that we should be pushing EDE, getting clients to implement EDE, and have that as a goal, because that gives the end user a clear indication, and if you implement DNSSEC you'll find these things.
G
Hi. I think this is a recurring pattern now, of people trying to stuff things into fake digest types in the DS record, seeing as I wrote a draft to do it, and you wrote a draft to do it, and we're not the only two. Rather than keep doing this for every idea that people have for stuffing things into the DS record, maybe we should have a general-purpose meta digest type, where we can then avoid polluting the digest type space with lots of different weird things that aren't digests. As to this proposal...
G
Specifically, it's not my area really, but I do wonder, you know: would we be better off just providing some best-practice guidance about how to set up a duplicate of your zone, in order to enable DNSSEC there, see that it works, and then do the same thing on the real one? Excuse me.
N
A
Okay, thank you. Thank you, Willem. We'll indeed take it to continued discussion on the mailing list, and we'll go forward from that. Thank you. The next presentation is remote, by...
A
D
This is about a draft that Andrew Fregly from Verisign and I co-authored, on the use of stateful hash-based signature schemes for DNSSEC. Next slide, please.
Maybe a little bit of context for that. I'm sure you've all heard the discussion about the advent of quantum computers, and maybe Paul Hoffman has talked your ears off about this, and that there are varying opinions about when practical quantum computers might come to be around.
D
But what there is consensus on is that these would break current public-key algorithms, and that this causes problems for internet cryptography. We've heard estimations vary anywhere between 15 and 50 years where, obviously, if it's indeed 50 years we shouldn't really be worried right now, whereas if it's 15 years we might need to start scratching our heads. As a consequence, post-quantum crypto (PQC) algorithms are seeing a lot of development.
D
Maybe you've heard of the NIST standardization effort, which is currently in the final phase, and there is also momentum to start deployment of PQC algorithms. There have already been a lot of experiments in, for example, the TLS space, but because of this standardization effort you can expect requirements for PQC support to appear in government tenders in the near future, because as soon as this becomes an official standard, governments are going to require manufacturers to support these algorithms. Next slide, please.
D
But then there is the argument that DNS signatures have an effective zero-year shelf life, right? They don't last much beyond their expiration time. So why should we already care now about PQC for DNSSEC?
D
Well, the answer that we have for that is that standardization, implementation and transition of algorithms take a long time. An illustrative example of that is the ten-odd years that it took, for example, to standardize elliptic curve algorithms: from when they first appeared for use in something like DNSSEC, to them actually being standardized in an RFC and then implemented... (there's my dog, I'm just going to close my door) ...and implemented in software.
D
There are worries about their long-term security, which is precisely why they're running the standardization effort: to figure out which of the proposed algorithms are actually secure for use. But they also have rather unfavorable parameters for use in DNS. So that is a concern for us, because of the challenges with new PQC algorithms that are still around, but also because of the long time it takes to standardize algorithms for use in DNSSEC.
D
It is our belief, so Andrew and I state, that we need a safe fallback to be standardized, just in case; keep that in mind with this draft. Next slide, please. What we're proposing is to standardize stateful hash-based signature schemes for DNSSEC. A very, very quick primer on stateful HBSS:
D
They were first proposed by Ralph Merkle and are constructed using Merkle trees. I'm not going to bore you with all of the details. The takeaway is that they are considered to have very strong security: if you use a secure cryptographic hash function, they basically inherit the security properties of the hash function, and there are actually some good security proofs for this.
X
Hi Roland, Stephen Farrell. So I think there's a problem with stateful hash-based signatures, which is that you have a finite number of signing operations. ("I'll get to that, is that okay?") Well, the reason I say that is because I think it means "safe" is not the right way of describing the whole situation.
D
Okay, we'll take that on board. So maybe I should clarify: when we say safe or secure, we mean provided that you respect the limitations of these algorithms, and indeed the fact that you can only make a finite number of signatures is a limitation I was about to get to. So maybe next slide.
D
This is a very, very simple picture of what such a scheme looks like. What you see on the slide is the Merkle tree that constructs the public key of the signature scheme; underneath, every leaf is associated with a one-time-use public key. As Stephen pointed out, you can only create a finite number of signatures. That's because the private key is actually constructed as a set of one-time-use private keys, each of which you can only use to create one signature and should never reuse.
D
So basically, a signature is composed of the one-time signature, made with the one-time private key over the data that you're signing, and then there is an authentication path in the Merkle tree, which is the list of intermediate nodes in the tree that you need to reconstruct the root hash in order to validate the signature. In this case those are the red nodes in the Merkle tree. Next slide, please.
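The verification step just described (hash the leaf up the tree along the authentication path and compare against the root) can be sketched in a few lines. This is a toy illustration of the Merkle-tree part only, with SHA-256 standing in for the scheme's hash and the one-time-signature layer omitted; it is not any standardized scheme such as LMS or XMSS.

```python
import hashlib

def _h(left: bytes, right: bytes) -> bytes:
    # Internal node = hash of concatenated children.
    return hashlib.sha256(left + right).digest()

def merkle_root(leaves):
    """Build the tree bottom-up (leaf count must be a power of two).
    Returns (root, per-leaf authentication paths)."""
    level, paths = list(leaves), [[] for _ in leaves]
    pos = list(range(len(leaves)))  # each leaf's position at this level
    while len(level) > 1:
        for j, p in enumerate(pos):
            paths[j].append(level[p ^ 1])  # sibling joins the auth path
        level = [_h(level[i], level[i + 1]) for i in range(0, len(level), 2)]
        pos = [p // 2 for p in pos]
    return level[0], paths

def verify_path(leaf, leaf_index, path, root):
    """Reconstruct the root from a leaf and its auth path (the 'red
    nodes') and compare against the public key (the root)."""
    node = leaf
    for sibling in path:
        node = _h(node, sibling) if leaf_index % 2 == 0 else _h(sibling, node)
        leaf_index //= 2
    return node == root
```

Note how the path length equals the tree height, which is why signature size grows only linearly with height, a point the speaker returns to later.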
D
But there are limitations to these algorithms. You can only create a finite number of signatures with the signing key, as the private key consists of a collection of OTS keys.
D
It is essential that you keep state for your signer, because if you reuse one of the one-time signing keys, this breaks the security of the scheme, and this is potentially a challenge for online signers and distributed setups. I'll say something about that on the next slide.
D
Another limitation is that signatures are very large, typically larger than 2.5 kilobytes per signature, but the upside is that the public keys are very small, in the order of 70 bytes. For DNS, this essentially requires EDNS0 to transport signatures as part of a message, and arguably you need TCP transport or a different transport.
D
So we are therefore not claiming that this should be a preferred option for DNSSEC, but it could be a safe fallback, of course, if implemented securely. And given the time it takes to standardize these new algorithms, we argue that we should start standardizing this now, so that we have at least this safe fallback in place in case we need it. Next slide, please. Maybe a sidestep: there are online or multi-signer setups, where you have multiple signers that need to create signatures independently.
D
We consider this draft sort of roughly complete, in the sense that everything we think needs to be in there is in there, but of course it needs review, and we are interested to hear whether there would be an interest in adopting this and moving this draft forward. I've already heard multiple times from Benno that he, or the chairs, are going to run a survey, so we'll wait for that.
D
I would say that the way we set up this draft is very comparable to how, for example, the EdDSA RFC is structured, so it really only contains the essentials of how you use this algorithm in DNSSEC. NLnet Labs have done, or will do (I kind of lost track of that), a proof-of-concept implementation of this in Unbound. I see Benno nodding, so I guess that's been done. Next slide, please.
D
We're also considering some follow-up work. We're considering a draft on implementation considerations, which would look at interoperability, the trade-offs that you have if you need to use these multi-tree setups, parameter choices, transport considerations, etc. Although we do realize that transport considerations might open up a can of worms, because that applies to much more than just the use of these signature schemes. I think that's the last slide. Next slide, please.
D
Yes, now is the time for your questions, and there are some thanks on the slide to the people who reviewed this draft.
T
D
Yes, I can. So the state that you need to keep is which one-time signature private keys you have used; you need to track that, and this could just be a sequence number, where you store the sequence number of the keys. For how much time? That depends. When you choose the parameters for the key, you basically determine how many signatures you can create with that particular key, and you would need to keep this state for the lifetime of the key.
D
Yes, you're correct, and that's obviously a challenge, because there's a trade-off here. Basically, one of the parameters is the depth of the Merkle tree that you use, so obviously, as soon as you add another level to the tree, the number of signatures that you can create with that key doubles. However, your signature length will also grow, but in a linear fashion: every level of the tree adds one step to the authentication path of the signature. So that's linear, while the number of signatures that you can create doubles. You can create keys that can generate quite a lot of signatures, but obviously this is not suitable for every type of zone. If you have a very dynamic zone, you're going to run out of signatures at some point, yeah.
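The trade-off just described is easy to make concrete: capacity is exponential in tree height, signature growth is linear. The byte sizes below are assumptions for illustration (a 32-byte hash and an arbitrary one-time-signature size), not parameters from the draft.

```python
# Back-of-the-envelope illustration of the height trade-off: one extra
# tree level doubles the number of one-time keys (signatures) but adds
# only one hash to the authentication path.

OTS_SIG_BYTES = 2144  # assumed one-time-signature size, for illustration
HASH_BYTES = 32       # assumed hash output size (e.g. SHA-256)

def capacity_and_size(tree_height: int):
    """Return (max signatures, approximate signature size in bytes)
    for a single Merkle tree of the given height."""
    max_signatures = 2 ** tree_height
    sig_size = OTS_SIG_BYTES + tree_height * HASH_BYTES
    return max_signatures, sig_size
```

For example, going from height 10 to height 11 doubles the signature budget while adding only one hash's worth of bytes to each signature.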
T
Okay, and one final point: you describe this as a sort of fallback, in case, you know, quantum computers come along and break everyone's algorithms. So the proposal in this draft is that we should implement this, but not use it?
D
If I think of the... and I don't know the number off the top of my head, it's 86-something, the RFC that sort of specifies which algorithms are recommended and which ones you probably should not use: this would not be an algorithm that you must implement, but it's something that you probably should try to implement, and you must be able to validate it. That's how I look at it, but yeah, that's a fair point.
A
U
Sure, so this is actually a comment, Roland. You said that you're also considering doing an implementation considerations document. I absolutely would not support adopting this spec in the working group until we have that implementation considerations document in parallel, given that you just listed a whole bunch of minefields and such. In fact, I think the implementation considerations should be part of this document.
X
All right, Stephen. So in general, I think doing this would be a bad idea; doing it now would be a really bad idea. The problem with stateful signatures is that they create a whole bunch of weird corner cases, and the NIST competition is likely to produce non-stateful post-quantum signature schemes in the next year or two. It would be a much better option to wait for those and never think about stateful signatures for DNSSEC, in my opinion.
Y
Hi, Peter Thomassen from deSEC. You said that these signatures should be a safe fallback option, but you're not intending for them to be the preferred, or a preferred, DNSSEC mechanism. So I wonder how you can have both at the same time. Let's say, for example, you need the fallback in place when, 20 years from now, something happens with the post-quantum algorithms; then you already need the root zone to be signed with these things, and if you already need to have that in place, then it would have to be supported on all resolver infrastructure already, I guess within a few years from now. So how does that go together with it not being preferred, in the sense that you can avoid deployment?
D
Yeah, I think that might be conflating two things. When we say that this should not be the preferred option, what we mean is that operationally it should not be the preferred option, right? So unless there is a very good reason to deploy this operationally, you shouldn't do that. Implementing support for it is a different matter, and because signatures effectively have a zero-year shelf life, you can deploy such a fallback option only when you really need it. So you should be ready to deploy it, but that doesn't mean you need to use it in operation.
N
Okay, Martin, I don't know how you did it, but you actually somehow skipped into the closed queue. I'm sorry, we are out of time, so please send your comment to the mailing list. Thank you.
A
Okay, thank you, Roland. Please, we'll continue the discussion on the mailing list, and your draft will also be part of the survey. Thank you. Next up is Donald Eastlake.
Z
I'll see if I can be quick here, so that the last presenter has a chance. This is about expressing communication service requirements in DNS queries. This is a -00 draft.
Z
So it doesn't claim to be fully polished at this point. The goal is that you can get answers back that depend on what you want: for example, you'd like a minimum-latency connection, or maximum bandwidth. And we'd like it to work through recursive servers, so things like meta RRs are not really appropriate, with no changes, of course, to the DNS on-the-wire protocol or messages.
Z
The only things that survive through a recursive server are the QNAME, QTYPE and QCLASS, and really QNAME is the only practical place to encode additional information you might want to encode. There are already, as people are well aware (it's a very knowledgeable audience I'm presenting to, I must admit), ways to encode things about the service or the communications protocol you're going to use, and if you can get different responses back depending on whether you're going to be talking TCP or UDP, it seems reasonable you might want different answers.
Z
Currently, the only prefix defined is the one for internationalized labels, which can be interpreted as having a strict set of Unicode in them, and none of these R-LDH labels, I think, should affect anything to do with protocol on the wire, or very special processing, or security.
Z
So what sort of things might you want to say? Well, a sort of coarse QoS might be useful, where you just ask for, like, minimum jitter or minimum packet loss, or things like that, with reasonable values for other things. Or you might actually want to specify a more precise metric, where you say the minimum acceptable bandwidth is some number of megabits or megabytes or whatever.
Z
So the -00 draft has a specific proposed format for labels. They might start with "qs--", following the R-LDH format; "qs" happens to not be a country code, and it seems like it sort of stands for QoS to some extent. And then just have a hexadecimal encoding of TLVs, where you might have just one TLV or possibly several. Hexadecimal is not the densest encoding you can imagine, but it avoids case-insensitivity problems with DNS and should be easily readable for debugging and other purposes like that.
Z
So what would an example actually look like? This is one which, according to the information currently in the draft, would be a request for minimum latency for example.com. So, after the "qs--" prefix: type 1 is coarse QoS, that's one byte of value, and the value 08 hex is currently the value suggested in the draft for minimum latency, so qs--0108.example.com.
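The label construction just walked through can be sketched as a small encoder. This is a hypothetical illustration of the spoken example, not the draft's normative encoding: the code points (type 1 = coarse QoS, value 0x08 = minimum latency) are the ones the speaker quotes, and the one-byte value with no explicit length byte is an assumption made so the output matches the quoted label.

```python
# Hypothetical encoder for the "qs--" prefix label: "qs--" followed by
# a hexadecimal encoding of type/value pairs.

COARSE_QOS = 0x01   # type 1: coarse QoS (per the talk)
MIN_LATENCY = 0x08  # value 08 hex: minimum latency (per the talk)

def qs_label(tlvs) -> str:
    """tlvs: iterable of (type, value_bytes); returns the prefix label."""
    out = bytearray()
    for t, value in tlvs:
        out.append(t)       # one type byte
        out.extend(value)   # value bytes (a single byte for coarse QoS)
    return "qs--" + out.hex()

def qs_qname(tlvs, name: str) -> str:
    """Prepend the qs-- prefix label to an existing domain name."""
    return qs_label(tlvs) + "." + name
```

For example, `qs_qname([(COARSE_QOS, bytes([MIN_LATENCY]))], "example.com")` reproduces the spoken QNAME `qs--0108.example.com`.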
Z
It means: I want an answer back that is minimum latency. So what kind of data are you getting back? Well, there are lots of different things, obviously, but one possibility, just one, would be a semantic address. That's an address that has not just the identification of an interface identifier, but also encodes some additional information about how to connect to it, related to routing, or how to set the header fields, or who knows what.
Z
So you might just be doing a AAAA retrieval with a quality of service specified, and get back an IP address which has some additional information in the lower-order bits. One way you could use that is: you could have a thing where your application does the DNS query, takes the address it gets back, and just makes a network connection.
Z
The application could be pretty ignorant of all this stuff and doesn't have to understand these things, and maybe the first-hop router or something like that notices the special IPv6 address and does the right thing to get you the network connection with the quality of service that you are seeking.
Z
So, how much do you have to tweak? There's really no necessary change at the resolver end. What do you have to do at the server end? Well, obviously, if you just want to test out your resolver or application or whatever creates these things, you can always just ignore this: if this is a prefix label in front of your domain name, ignore it by using a wildcard.
Z
If you have a small number of values, like these coarse QoS values or a few specific metrics, you could just have those as entries in your zone. But indeed, if you want to support more general QoS metrics, where the server somehow dynamically computes something about how to handle your connection based on more precise or complicated QoS things, then some sort of thing would have to get done at that authoritative server, perhaps calling some back end or other system to get some information. This draft also creates a registry for these R-LDH labels, which I think is a good thing, with expert review and some suggestions to the expert, and it creates a registry for these service request types. I'd like to ask people to take a look at the draft; I certainly welcome any kind of comments or suggestions on it. Again, here's the draft name, and I'd like to ask if there are any questions or comments at this point.
G
Hi, Ben Schwartz. I'm going to say my standard thing, which is: this sounds like a job for service bindings. I don't find the idea of encoding the quality of service in the label very appealing, although I think I understand where it's coming from. If I were presented with this problem, what I would do is to say that each of these different quality-of-service labels is something that can be encoded into an SVCB record.
Z
That's certainly something I'll look at a bit, but that requires, of course, that the application know about this stuff, whereas what's been expressed here, especially if you can use a semantic address, means you could have an application that is just given a domain name, and the application knows nothing about any of this, but it magically gets its network connection with the desired quality of service, because when it gets the AAAA back, it has something in the bottom bits, a semantic address, that causes the quality of service it wants.
G
Yeah, it seems like a little bit too much of a retrofit to me. You know, if you think that that degree of retrofit is required, I think it would be valuable to understand a little bit more about what the motivating use case is. Okay.
L
This is [name unclear] from the Swedish registry. In my head, quality of service is a property of the network path, and I don't know how the resolver or resolving paths might intersect with the network path, but there's absolutely no reason to assume that they do, and so I really don't see how this solves the QoS problem.
Z
The idea is that the information you get back in response to the query would... and it might be able to do that even if it's just a query for an address, by using semantic addresses. It could be querying for other things as well, of course, but the idea is that the information returned would be such as to make it either more likely, or certain, or whatever, that you would get the quality of service you wanted.
A
Okay, thank you. Thank you all, and please continue the discussion on the mailing list. Any other questions you have for the working group, Donald?
A
Okay, thank you. Our final draft presentation will be by Thiru. So sorry, Donald, can I stop sharing your slides? "You can." Oh, okay. I'm convinced I can do this. Perfect, thank you.
AA
Hey, good morning, everyone. We presented this draft in the last two IETF sessions. Next slide, please.
AA
Yeah, this was originally an unstructured DNS error page draft that was presented at several IETF sessions, and based on the feedback we got from the working group, this structured error page draft was created. The idea was basically to get a parsable JSON for the user and to aid troubleshooting, especially when there is DNS filtering, with the client displaying or logging the JSON on its own. That's the background, or recap, of why this draft was created. Next slide, please.
AA
One of the major comments that we got at the last IETF was: don't introduce a new EDNS option; rather, reuse the EXTRA-TEXT field in the EDE option itself. That's the change we made in this version: reusing the EXTRA-TEXT field in the EDE option to carry structured JSON fields. Next slide, please.
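The "structured JSON in EXTRA-TEXT" idea can be sketched as follows. This is an illustrative shape only: the single-letter field names used here ("o" for organization, "j" for justification, "c" for contact/error-page URI) echo the fields mentioned in the discussion, but the exact names and which fields are mandatory are the draft's business, not settled here.

```python
import json

def build_extra_text(org: str, justification: str, contact_uri=None) -> str:
    """Serialize a structured error payload for the EDE EXTRA-TEXT field.
    Field names are assumptions for illustration."""
    payload = {"o": org, "j": justification}
    if contact_uri is not None:
        payload["c"] = contact_uri  # optional error-page / contact URI
    return json.dumps(payload)

def parse_extra_text(text: str) -> dict:
    """Client side: parse the JSON and keep only known fields. A client
    can render the plain-text justification without ever visiting a
    resolver-chosen web page."""
    data = json.loads(text)
    return {k: data[k] for k in ("o", "j", "c") if k in data}
```

This also illustrates the point raised later in the discussion: because the justification is machine-parsable plain text, the user agent decides how to present it, rather than opening a resolver-selected page.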
AA
We also updated the security considerations, based on the feedback from the working group, so that the error page would be displayed only if the encrypted resolver has a sufficient reputation. In the previous version of the draft we had mandated an encrypted resolver as a prerequisite for receiving and processing the EXTRA-TEXT field, and we had also added text to make sure that this error page is going to be displayed in an isolated environment.
AA
Just like captive portals, so as not to get confused with the actual content provider giving the content back to the user. And in case the encrypted resolver is not trusted by the client, then only the hostname of the encrypted resolver could be provided; that way the client does not have to visit the error page pointed to by the "c" and "r" fields. So those are the major changes that we made to this draft. Next slide, please. Yeah.
P
Not a question really, more a comment. This shows a good usage of the extended DNS errors and what you can do with them, and I just want to say that we have an experimental implementation of that, and if somebody wants to work with us, I'm happy to work with them.
A
AB
All right, hello, Tommy Pauly from Apple. I just wanted to speak up to mention that I'm definitely supportive of doing some work in this area and solving it on the client. We are seeing a lot of cases where there is filtering or blocking going on and the signals are not explicit, and they often interfere with actually displaying a page, if they're redirecting you to something but your original connection is using TLS. So this would be a very useful mechanism.
A
Tommy... Ben?
G
Hey, my biggest concern with this draft seems like it's still there, fundamentally, which is that this draft is at base meant to create a situation where the client ultimately opens a web page that is selected by the resolver, and that's just a really big change from the security posture that we currently have between clients and resolvers, even resolvers that are in some sense trusted. Even a trusted recursive resolver program like Mozilla's does not allow the resolver to in any way direct the user to open particular pages or websites. That is the real point of concern for me. If we could find a way to keep this truly machine-readable, so that the information is presented by the resolver to the user agent, and then the user agent can figure out how to present that information safely to the user, that would be much more appealing to me.
AA
Thanks, Ben. There are several fields in the JSON structure, like the justification field, which is just plain text and not an error page in itself. The error page is optional text, and the human-friendly organization name and the justification text fields are purely machine-parsable, so the user agent can display whatever appropriate error page it wants to display. The other fields are pretty much optional. So I think, as a working group, we can decide which ones are mandatory and which are optional, and then make progress on this.
G
Sure, so yeah, if the draft could be clear about what's optional here...
A
Thank you, thanks Ben. No other questions or comments? So, Thiru, yeah, we were testing your patience. Sorry.
A
Yeah, we explained the procedure now for adopting new working group documents, so we will run the survey, and your draft will surely be part of that survey.
AA
A
Yeah. Okay, thank you all. We come to the conclusion of the working group. I would like to thank all the presenters of drafts, thank you very much. I would also like to thank all the remote and in-person attendees here in the DNSOP working group; it's good to see you back here in the room. And I would like to thank Alexander for helping us out as a delegate. That closes the working group. Thank you.