From YouTube: DNSOP WG Interim Meeting, 2020-04-23
A
This draft has been presented earlier. We have a second presentation now; it was already scheduled at the Singapore working group meeting but was bumped because we needed more time for discussion. There is also a chair action with the question whether to adopt or not, which we will ask the working group today, but also on the mailing list, of course. And there are two new drafts, not presented earlier, by Anshuman and Willem, later this afternoon.
B
So this is a recap of what this draft is about. We basically took two IANA registries that are important for DNS, namely DNS classes and resource record types, and this draft attempts to translate these registries into YANG data types; that means into a form suitable to be used with the YANG data modeling language.
B
This document only provides an initial revision of the YANG module. The idea is that, after this document is published, IANA will publish the YANG module on their site, and then keep maintaining the module independently of this RFC. So this RFC is really only for the initial revision of the YANG module. Next slide, please.
B
The
most
important
one
is
that
now,
instead
of
having
really
the
initial
revision
of
the
yang
module
itself,
the
draft
now
contains
an
accessory
style
sheet
that
can
be
used
by
Anna
for
producing
for
generating
the
initial
revision.
I
am
going
to
explain
it
in
a
moment
and
then
the
other
change
was
that
the
two
statuses
used
by
Ayane,
Langley,
obsolete
and
deprecated
are
now
met
to
single
yang
state
status.
Mainly
obsolete
I
will
also
talk
about
that
in
in
a
minute.
Next
slide.
Please.
B
This is just the command line that can be used for generating the complete text of the initial revision of the YANG module. This change should address the concerns that were expressed previously about the fact that the future RFC would stay unchanged even though the module itself would develop and be maintained and updated by IANA. The concern was that somebody might come along later and use the module from this RFC, including some deprecated, possibly dangerous items from the IANA registries, which is of course not welcome and not intended.
B
So this is remedied, because the stylesheet will generate the module from the status of the registries that is current at that particular moment, and in fact, if anybody happens to use this stylesheet again later, he or she can also generate the then-current status of the registries. The only difference from the official YANG module will be that the history of revisions won't be included. That's because the revision history is not kept in the IANA registries themselves, so it's simply impossible for the stylesheet to include it.
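The kind of transformation the stylesheet performs can be illustrated with a small sketch. The real draft uses an XSLT stylesheet run over IANA's XML registry files; this toy Python version (the registry rows and helper names are invented for illustration, not the draft's actual code) shows the idea: turn registry rows into YANG enum statements, mapping both the "obsolete" and "deprecated" registry statuses onto the YANG status obsolete.

```python
# Toy sketch of the registry-to-YANG translation; a hand-picked subset
# of RR type rows stands in for the full IANA registry data.
RR_TYPES = [
    {"name": "A", "value": 1, "status": "current"},
    {"name": "MD", "value": 3, "status": "obsolete"},       # obsolete per IANA
    {"name": "MAILA", "value": 254, "status": "deprecated"},  # deprecated per IANA
]

def to_yang_enum(row):
    """Render one registry row as a YANG enum statement."""
    lines = [f'enum "{row["name"]}" {{', f'  value {row["value"]};']
    # IANA's "obsolete" and "deprecated" both become YANG "obsolete",
    # since YANG's own "deprecated" would still permit new implementations.
    if row["status"] in ("obsolete", "deprecated"):
        lines.append("  status obsolete;")
    lines.append("}")
    return "\n".join(lines)

def to_yang_typedef(rows):
    """Wrap all enums into a typedef, like the generated module would."""
    body = "\n".join(to_yang_enum(r) for r in rows)
    return "typedef rr-type {\n  type enumeration {\n" + body + "\n  }\n}"

if __name__ == "__main__":
    print(to_yang_typedef(RR_TYPES))
```

Running the stylesheet again later would simply pick up the registry contents current at that moment, which is why only the revision history cannot be reproduced.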
B
This is about the semantics of obsolete and deprecated. Unfortunately, there is a discrepancy in the meaning as it is used by IANA and in YANG. For IANA, obsolete is just a statement of the fact that something is no longer in use, whereas deprecated means that some item is not recommended for use.
B
It
may
be
a
weak
encryption
algorithm
or
something
like
this,
whereas
in
the
end
obsolete
really
means
some
strong
low
or
relatively
strong.
No,
where,
as
deprecated
is
quite
liberal,
as
you
can
see,
it's
an
obsolete
official
definition,
but
it
purpose
new
implementations
in
order
to
foster
interoperability.
So,
of
course,
this
is
not
not
something
that
we
would
like
to
see
and
for
the
time
being,
we
decided
to
map
both
I
returns
to
the
obsolete
term,
which
is
like
a
strong
no
in
in
any
I'll.
B
This discrepancy could be fixed, but it can be done only in the next revision of the YANG language. This meaning of obsolete and deprecated comes, as I said, from the days of SNMP and SMI, but I have already raised this issue in the NETMOD working group. Hopefully it can be fixed in the next version of the YANG language, but that's not going to happen anytime soon, so I think this fix is quite satisfactory for the time being, and hopefully we can use it immediately.
B
Next slide, please. These are the proposed next steps. We received a few recommendations, a few suggestions for improving the text, so we will implement these in a -01 revision of this draft that should appear soon after this meeting, and the authors now believe that the document is ready for a working group last call, which in this case should also include a review by one of the YANG doctors other than myself.
B
Hopefully somebody else will look at this from that point of view and find problems, if there are any, but other than that I believe there is not much left to do on this document, and we can really move it forward in the DNSOP working group. So that's all I have in my presentation. Thank you. If we have any questions, I am ready to answer.
C
That seems like an important part of defining these modules in this document, because I think a lot of people have had concerns that the document would otherwise be out of date as soon as, for example, somebody else specifies a new RR type. So am I correct that this is kind of hinging on the responsiveness of IANA and their ability to keep the module up to date with the registry?
B
Just let me explain: IANA already has a few modules like this that are based on other IANA registries. For example, the interface types registry was translated in this form. But so far IANA was used to receiving the initial revision in the form of an RFC and then maintaining the module using their procedures.
B
So the only thing that's not clear is whether this new method of producing the initial revision is acceptable to IANA. I hope that it should be, because it's no rocket science to run those two easy commands. They replied to my question and said they would look into it and try to use my instructions to see if it's doable for them, so I hope it will work for them. Other than that, they are quite positive, I would say.
E
This is the kind of attack in which attackers spoof the victim's IP address, possibly from a botnet, to flood a server with small requests that result in large answers, which are sent to the victim's IP address and, as a result, clog the victim's connection. DNS cookies are a DNS-native mitigation against these kinds of attacks; protocols on top of TCP do not have this issue, thanks to the TCP handshake. Next slide. And this is precisely how DNS cookies achieve their purpose: they introduce a handshake. So how does that work?
E
A client creates a client cookie, which is basically a nonce, and sends it alongside the DNS request in an EDNS0 option. The server creates a server cookie based on the client cookie, the client IP address and a secret, and then returns that with the DNS response packet. When the client then needs to query that server again, it sends the server cookie it learned for that server along with the query, so the server can recognize its own cookie and apply policies accordingly, depending on whether the client presented a valid server cookie.
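That exchange can be sketched roughly as follows. RFC 7873 left the exact server-cookie function up to implementations (the interoperability draft discussed next settles on a specific recipe based on SipHash-2-4); this illustration substitutes HMAC-SHA-256 truncated to 8 bytes purely because it is available in the Python standard library, so the function here is a stand-in, not the draft's algorithm.

```python
import hashlib
import hmac
import os

def make_client_cookie() -> bytes:
    """Client cookie: an 8-byte nonce carried in the EDNS0 COOKIE option."""
    return os.urandom(8)

def make_server_cookie(secret: bytes, client_cookie: bytes, client_ip: str) -> bytes:
    """Server cookie derived from the client cookie, the client IP address
    and a server secret. NOTE: stand-in PRF for illustration; the adopted
    draft specifies SipHash-2-4, not truncated HMAC-SHA-256."""
    msg = client_cookie + client_ip.encode()
    return hmac.new(secret, msg, hashlib.sha256).digest()[:8]

def verify_server_cookie(secret: bytes, client_cookie: bytes,
                         client_ip: str, cookie: bytes) -> bool:
    """Any server sharing the secret (e.g. an anycast sibling) can
    recognize a cookie it, or a sibling, handed out earlier."""
    expected = make_server_cookie(secret, client_cookie, client_ip)
    return hmac.compare_digest(expected, cookie)

# First query: the client sends only its client cookie; the server answers
# with both the client cookie and its server cookie.
secret = b"shared-anycast-secret"
cc = make_client_cookie()
sc = make_server_cookie(secret, cc, "192.0.2.1")
# Follow-up query: the client echoes the learned server cookie.
assert verify_server_cookie(secret, cc, "192.0.2.1", sc)
# A request arriving with a spoofed source address produces a mismatch.
assert not verify_server_cookie(secret, cc, "198.51.100.7", sc)
```

The point of the shared-secret construction is exactly what the anycast discussion below turns on: every server behind one service address must be able to validate cookies minted by any of its siblings.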
E
Now, this was all good, but it turned out to be problematic in anycast setups. RFC 7873 did not give a precise recipe for creating server cookies, with the consequence that a cookie created by the software of one vendor could not be recognized by another vendor's software serving the same name in an anycast setup. Our draft addresses precisely this and gives a precise recipe for creating the server cookie, catering for anycast setups that contain different vendor implementations. Next slide.
E
The first proof-of-concept implementations were done one year ago, at the hackathon of IETF 104 in Prague, and it was a huge success: implementers of the different open-source DNS software vendors managed to create interoperable server cookies. So, next slide: mission accomplished. Well, almost. The draft was not a working group document yet, and also the authors of RFC 7873 had worked on an update addressing things we did not address in our draft yet, such as directions for safely updating and rolling over the server secret. Next slide.
E
So at IETF 105 in Montreal, the same people that had worked on it at the previous hackathon continued working on implementing the new server cookies, taking along the updates, and also took notes on how to create a client cookie, for privacy reasons. The client cookie was basically a hash based on a secret and the server IP address, for minimal authentication of the server and also for those privacy reasons.
E
Instead, we changed the "constructing a client cookie" text to state that we recommend disabling DNS cookies when privacy is required. Also, the spec got adopted by the working group at the IETF. So, although it might not have been our ideal spec, we thought it was good enough for the purpose of addressing the multi-vendor anycast issues with DNS cookies. So: a bit of mission accomplished. Next slide, please.
E
Then there was the point that was brought up on the DNSOP list by Philip Homburg: besides privacy, having a client cookie associated with the client IP address has a role in server cookie creation too. When a client changes its IP address regularly, such as, for example, in situations with multiple gateways, servers will not recognize their own cookie anymore, because it is based on both the client IP address and the client cookie, which in such cases both change frequently. Next slide, please.
E
So at the hackathon of IETF 106 in Singapore, we wrote this all down and created version 2 of the draft, with a rewritten "constructing a client cookie" section and a new security and privacy considerations section. So we thought: next step, implementation experiments, probably at the next hackathon. Next slide, please.
E
We did an implementation of the privacy-friendly client cookies in getdns, which is a stub resolver library, and it works as designed, as described in the draft. It will be included in the 1.6.0 release, which will be done shortly, in only two weeks or so. Also, I've had reassurance from Witold from ISC that client IP addresses are tracked well enough for an adequate implementation in BIND. Next slide.
E
So where are we now? The primary purpose of this draft was to provide a precise recipe for server cookies for the different implementations, and this has successfully been done. The recipe in the draft is already in use in Knot DNS since version 2.9 and also in BIND since version 9.16, and I believe it has also been backported to BIND 9.11, and getdns and Unbound have proof-of-concept server-side implementations.
A
Okay, so I will invite all the room and all the people on the mailing list to read the document, and we will start a working group last call. Tim will schedule the working group last calls, possibly this week, next week, or the week after; we try to do a call for adoption or working group last call every week or so. It will be part of the actions we will publish after this meeting.
G
We were trying to get some implementation experience, and Mark Andrews has done some of that on an older version; I'm not sure if it's been updated for this latest version. Next slide, please. I'll go over the basics of it quickly; it's a fairly short draft. It's intended for authoritative servers, and the idea is that when you install a resource record, there could be a lifetime associated with that record, where it makes sense to have that record published only for that lifetime.
G
It could be managed by the primary server, where, when the records are added, the primary server can do the timeouts of the records that they cover, but there will be a transition period where they might have to be managed externally through DNS updates. You can liken that to either the primary server using reference counting to know when records exist and should be removed, versus garbage collection, where an external manager would be periodically scanning to see if a record should be removed.
G
The information that you have about how long the DNS records should be in the database could be an absolute time or it could be a relative time. There are IoT devices out there that would make use of this and that don't have real-time clock hardware, so they tend to use a relative offset, and there's already an EDNS0 option, called the update lease option, that would give you a relative time, so that the primary server could then create a timeout record.
G
There are restrictions on that, and externally, I think, you're going to have to maybe do some testing to see what a primary server would accept and what it wouldn't accept, and what conditions are placed on it; I think you'll have to have a little bit of discovery there. Next slide.
G
With a count of zero, all the records that match will expire at that time, and so it's kind of a shortcut where you don't really need to do any hash calculations. It's only when you have multiple matching records with different expiry times that you need to specify the hash for the ones that match that expiry time.
G
They'll have a certain lease lifetime, in this case LN, and the time at which the update was sent is TN, so in the future the records will expire at the absolute time TN plus LN. In this case, both the A and the quad-A records are going to expire at the same time, so you don't have to specify a hash, so the count is 0.
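The arithmetic in that example can be sketched directly (names here are illustrative, not the draft's wire format): a record refreshed at time TN with an update lease of LN seconds gets an absolute expiry of TN + LN, and records sharing one expiry need no per-record hash.

```python
def absolute_expiry(t_n: int, l_n: int) -> int:
    """Absolute expiry for a record refreshed at t_n (epoch seconds) with a
    relative lease of l_n seconds, as carried in the update lease option."""
    return t_n + l_n

# A host updates its A and quad-A records in one DNS update, with the same
# lease, so both share one expiry and a count of zero covers them both.
t_n, l_n = 1_587_600_000, 3600  # example: a one-hour lease
a_expiry = absolute_expiry(t_n, l_n)
aaaa_expiry = absolute_expiry(t_n, l_n)
assert a_expiry == aaaa_expiry == 1_587_603_600
```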
G
There are a few cases where there will be different expiry times with the same owner name, record type and class. A common example is in the service discovery realm, where you're sending unicast updates, for, say, wide-area Bonjour or service discovery: you will have PTR records that point to the different instances of the service. So the RDATA is different, but the owner name, class and type are the same, and they'll be coming from different hosts, which means they'll be coming at different times.
G
So the expiry times will be different. In this case you have PTR records that, coming from the different hosts, look identical, and so in the primary server's database you're going to have to create hash entries to have the different timeouts per host: printer A versus printer B. At the bottom you'll see the list of timeout records that you would have; most of them still have a count of 0 and no hashes, but the PTR records have hashes, because of the collision there on name, type and class.
G
Had
gone
back
and
forth
a
lot
with
on
the
mailing
list
and
made
a
lot
of
changes
over
time
to
answer
all
of
the
questions
that
we
got
and
made
some
changes
to
make
it
work
better
with
existing
implementations
like
including
the
type
the
record
type.
So
we're
now
feel
like
we're
ready
for
to
adopt
this
as
a
working
group
document,
and
so
we
would
like
to
get
people
to
review
it
again
and
like
to
move
it
forward.
C
I had two things to say. One of them was: I think another reason for using an RR type for this, and not something more ephemeral like an EDNS0 option or something, is that it gives people, clients on the outside who are not involved in publishing the zone, the ability to see what's going on; you can troubleshoot.
C
So if you want to diagnose a problem where a record has disappeared from one place but not another, or something, you might look for a timeout record, and that might give you some clues. If this information were hidden in, say, the update protocol or in an EDNS0 option, it disappears and it's not published, and you don't have that diagnostic information. So I actually spent a lot of time thinking that this proposal looks really quite ugly, but I've changed my mind; I think it's better. So that seems good.
H
The only cases where they need the hash don't really make much difference internally in terms of what the server needs to do to support it. It's just the regeneration of timeout records as you remove a hash, and that's a relatively straightforward thing to do; it's just: remove the record.
G
Yeah, so we don't want to have a lot of different implementations trying to figure out which hashes other implementations support, so the preference is to define a single hash that everyone uses. If the condition occurs in the future where that hash is found to have vulnerabilities, we will define a new one; there's a registry established in the draft to do this, and then that one will be the one that everyone should switch to. There will still be some older implementations, maybe, with the other one.
H
At some point we might need to define a new error code to say that this is no longer supported, or something like that, against the update, but I don't think it's going to be a real big issue. I can't remember enough to know which property has got to fail for the timeout records not to work, but I don't think it's going to end up being a real problem anyway, because you've got the data that is being acted on; it's only matching.
A
Thank you. Thank you, Mark. Thank you, Tom. There was good interaction with the software developers previously, I understand. Thank you for the input from the working group. We will issue a call for adoption in one or two weeks; that's up to the planning from Tim. I want to wrap up this presentation and go to the next presentation.
J
So this is a draft that was put out recently by Paul Vixie, Ralph Dolmans and myself, and it has had a bit of discussion on the DNSOP list already, so I'll go through these slides quickly, but Paul and Ralph, I think, are both here, so I trust that they'll interject comments as needed. So, next slide; I'll start with the motivation. Basically, as I think most people know, there is a range of behavior in DNS resolvers today in how they process delegations: some prefer
the parent NS set and some the child, and for many others the behavior varies depending on the dynamic state and content of queries and responses that they process. So some of our goals in this draft are to try to get more commonality and predictability in the behavior, to do so in a way that is in accordance with the DNS protocol, and also to solve a set of operational problems that frequently come up.
J
Alright. It is clearly stated in the DNS protocol specifications that the child NS RRset is the authoritative one and that the corresponding parent NS set is essentially non-authoritative glue. The data ranking rules in RFC 2181 further clarify how resolvers should categorize and treat data of various types, and authoritative data should clearly be preferred.
J
I've just excerpted in this slide some of the relevant rules, and I'm not going to read them all now, but additionally, the child NS set can be signed with DNSSEC and the parent one cannot; DNSSEC, as you may all know, only signs authoritative data. And the data ranking rules also, as you might suspect, state that authenticated data is to be preferred, in case anyone thinks these are general rules and maybe the NS set is special and could be excluded from them.
J
With servers configured with minimal responses, the resolver wouldn't see the child NS set at all, unless some downstream client of the resolver issues an NS query, and this is not something that normal end-user applications do, as far as I know, so we can't count on that to occur with any regularity. So we think we need a systematic way to observe the child NS set, and the draft recommends that, when following referral responses, resolvers should issue an explicit parallel query for the NS record at the child.
J
If the child zone is authoritative, they can dictate the TTL that resolvers should actually honor, and this allows them to more rapidly make changes to their name server configurations when needed, by temporarily deploying a short TTL, so that they can not only make those changes visible more quickly, but also back those changes out more quickly if things go wrong. Next slide, please.
J
Ralph, with his implementer's hat on, wanted to make sure I mentioned this note: the NS query need not bottleneck the fast path. It can be sent in parallel and be processed opportunistically, and it does not need to delay resolution of the query that actually triggered the referral processing. I'm just going to quote Ralph verbatim here; he says: not much risk, little complexity, and no speed difference. The opportunistic nature of this query also allows resolvers to deal with the small subset of broken authority servers that don't respond to explicit NS queries, without incurring any performance penalties.
J
All right, so now moving on to the second part of the draft, which is delegation revalidation. We state that resolvers need to recheck the parent delegation at the expiration of the TTL of the parent NS set, at the very latest. This prevents an important security issue from arising, namely a child zone living beyond its authorized lifetime. If, for example, the parent has removed the delegation or re-delegated the zone to another party, without revalidation this situation could arise.
J
It could happen, for example, if the child zone operator maliciously set a very long TTL in an attempt to artificially prolong the zone's life in resolver caches; it could be done by accident, I suppose, too; or it could happen because resolvers that do prefetching of DNS records, which is becoming more common, continue to inadvertently prolong the life of the child NS set. This, by the way, is not a theoretical problem, but one that has been observed in the field; there are a number of research papers going back a few years that talk about this topic.
J
So how should we implement revalidation? The simple and probably most obvious scheme is to just cap the NS TTL in your cache to the lower of the parent and the child NS TTL, but the draft also presents a more detailed algorithm that deals more effectively with some more involved corner-case configurations.
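The simple capping scheme can be sketched in a few lines (a minimal illustration of the idea, not the draft's full algorithm):

```python
def effective_ns_ttl(parent_ns_ttl: int, child_ns_ttl: int) -> int:
    """Cap the cached NS TTL at the lower of the parent and child NS TTLs,
    so a child zone cannot outlive its delegation simply by advertising
    a very long TTL on its own NS set."""
    return min(parent_ns_ttl, child_ns_ttl)

# Parent delegation NS TTL of two days; the child tries a week-long TTL,
# but revalidation still happens no later than the parent TTL.
assert effective_ns_ttl(172_800, 604_800) == 172_800
# A child publishing a short TTL (e.g. during a migration) is honored.
assert effective_ns_ttl(172_800, 300) == 300
```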
And the last bullet item on this page is not in the draft yet, but came up in mailing list discussion. Brian Dixon, if I recall, suggested it, and it was backed by others: if all child servers are assessed to be lame or unusable, that should automatically trigger a revalidation action at the parent zone. We agree, but it would have to be done in conjunction with a hold-down timer of some sort, to avoid inflicting an unintentional DoS on the parents.
J
So I just want to mention that these ideas are not new, of course, by any stretch. Paul and others wrote them up in the resimprove draft of 2010, which I think many people are aware of; Wouter Wijngaards also proposed something like this in his resolver mitigations draft from 2009; and the Unbound resolver from NLnet Labs roughly implements this today with a configuration knob called harden-referral-path. Next slide.
And this is the last slide. Discussion around this draft has predictably caused a related discussion about whether we need to totally overhaul and redesign the DNS delegation mechanism. We are not attempting to do so here; this draft is proposing a minimal set of changes.
J
That said, there are undoubtedly deficiencies in the zone delegation mechanism that could be addressed with a redesign. I'm personally interested in that subject, but that would be the subject of a much more ambitious effort, and it isn't clear whether it could be successful, given how entrenched the current DNS is. I'm just going to stop there, but before I turn it over for discussion, I just wanted to quickly check with Paul and Ralph to see if there was anything I missed or misstated, I think.
J
So I think, if this draft actually moves forward, Warren, we will have covered everything that was in the original resimprove draft. The main additional item in resimprove was what's called clarifying what NXDOMAIN means, and that has already been republished as RFC 8020, if you recall.
N
Regarding the issue of explicit NS queries, I just want to remind you that we are also discussing RFC 7816bis, QNAME minimization. One of the big changes in that draft with regard to the old RFC concerns explicit NS queries: in RFC 7816, QNAME minimization would use NS queries, and now there is a change to prefer A queries, one of the reasons being that some authoritative servers don't respond, or time out, on explicit NS queries. So both drafts are related in that way.
K
Yeah, so I'm involved in both drafts, actually. The difference here is that this one is opportunistic: if the NS query fails here, it is all fine. And also, if we would do QNAME minimization with the NS query type, there are still cases where you won't send the NS qtype to the apex, because if you are going to resolve a name where the original qname is the same as the name of the apex, there is no delegation; you already sent whatever the incoming qtype was.
O
So yeah, I like this kind of work; it's something we touched on a little bit in our recommendations for DNS validators, so I support this kind of work. The question I would have is: why did you consider capping the NS TTL based on the NS of the parent, and maybe not on the DS record?
J
We can't really rely on it to cover the revalidation case; that needs to happen across the board. In theory, at least, the DS record, if it is present, is supposed, by the DNS protocol specification, to have the same TTL as the delegating NS set, but as a practical matter many zones are not signed, and we do have to deal with them, so we can't rely on DS. Okay.
L
I'd like to follow up. There's no reason in principle why the TTL of the DS record cannot be part of the equation of choosing the revalidation interval, because if it is expiring differently, and sooner than the parent NS, then there's no problem with revalidating it when it expires. However, that is what a DNSSEC validator will do anyway, so we did not feel a need to mention it.
A
Thank you for the questions in the room; I think it's positive, and I'd like to see a little bit more discussion on the mailing list. To be fair, I haven't read the last emails on the DNSOP and related mailing lists, but we will issue a call for adoption later. I don't know when to schedule that, but it will be before the next virtual IETF. I'd like to see a little bit more discussion on the mailing list as well; I think the draft is quite new.
J
We've seen a fair number of comments. I guess you'll have to judge how much discussion you want to see happen before calling for any specific action, but there's been discussion not only on the list, but also on the OARC dns-operations list, and I think it's continuing to happen. So if you feel like we should continue that until we get additional comments from a larger set of folks, I think that's fine with us.
R
Russian GOST algorithms were introduced into DNSSEC in 2010 in RFC 5933. This RFC specified the use of the algorithms GOST R 34.10-2001 for digital signatures and GOST R 34.11-94 for the message digest for DS records, but unfortunately both of these algorithms have been deprecated in Russia since 2019. Next slide, please.
R
Raphael,
who
said
you
suggest
profile
the
new
profile
prescribes
using
new
ghost
digital
signature
of
the
reasons
described
in
RFC
709,
one
always
digital
signature
parameters
introduced
in
RFC,
seven,
eight
36
and
the
message
digest
described
in
RFC
69
86
next
step
is
so.
This
document
was
desired
to
update
FC
5
9
3.
R
But,
as
it's
promised
additions
to
I
am
a
registries.
There
is
no
problems
with
Dennis
security
of
charisma
numbers
registry,
its
police
say,
implies
FC
required,
but,
as
I
think,
new
DS
type
digest
algorithm
requires
some
detection.
It
means
that
the
document
that
doesn't
fit
independence
state
requirements.
So
in
this
circumstances,
I.
S
I also support this work, because the previous GOST algorithms had some weaknesses and they definitely need to be changed, and there is no way to do it other than a standards process, so either this draft should be adopted or the authors should seek AD sponsorship. But I think that, since the CURDLE working group has concluded, probably this working group is the best home for this job.
I
Just a very quick question. I support this; I think it needs to be done, but let's also make sure that references to the earlier, now-deprecated algorithms, that is, the GOST-in-DNSSEC RFC 5933 ones, get marked as deprecated. I'm having a quick look and actually checking that information in the meantime. Just, you know, yeah.
M
Warren Kumari, as AD. So yeah, it was originally asked to find AD sponsorship, and we felt that it was much better if it went through the working group instead; we have a group which it just fits well in, and so I think it should be discussed here. But a reminder that our main thing to do is to figure out if this works correctly with DNSSEC, not to have discussions on the algorithm, the GOST algorithm itself. So this can go through here, or, if needed, I can take it with me.
A
Thank you. Given the first comments in the room, and Suzanne, please correct me if I'm wrong, I think we will schedule this draft for a call for adoption somewhere later, after the meeting, and in the meantime ongoing discussions on the mailing list are more than welcome: feedback, comments, questions and remarks on this draft.
A
No
I
we're
running
a
little
bit
out,
so
I
already
bought
a
body
who
run
into
you
sixteen
or
four
o'clock
in
15
minutes
UTC.
It's
now
four
o'clock
twenty
minutes
and
before
I
want
to
close
this
session.
I
want
to
give
the
opportunity
to
people
to
ask
a
more
generic
general
question,
not
related
to
the
drafts.
Any
comments,
questions
points
for
improvement,
I
shouldn't
ask
that.
A
The draft on delegation revalidation will also, after some discussion, go for a call for adoption, and the very last presentation, on GOST signature algorithms, will also be scheduled for a call for adoption later this month. If I missed some action points, you will probably see them in an email sent out by one of the chairs. With that, I want to wrap up the session, and I want to ask everyone who hasn't done so yet to sign the blue sheets on the Etherpad.