From YouTube: IETF108-DNSOP-20200729-1300
Description
DNSOP meeting session at IETF108
2020/07/29 1300
https://datatracker.ietf.org/meeting/108/proceedings/
A: Checking my setup again; I haven't changed anything, and still I had to change a lot before I could get your slides up again. Let's see, if I have this in full screen, that goes well.
A: Okay. Okay, sorry then, yeah, welcome to the second DNSOP session. This afternoon we have three presentations. The first one is by Paul Hoffman. Again, this session is being recorded. Is the speaker here? Paul Hoffman. So be kind to him for this session, for this presentation.
A: All right, okay, sorry, sorry, Paul. This session is being recorded. If you have questions, please raise your hand by the microphone icon; you will be queued and we can allow you to speak on the microphone or on the video. If you want to use the video, that's fine. And use the, sorry, the Jabber room: if you want to ask a question in the Jabber room, type "mic:" and then ask your question, and we will monitor the Jabber room. Good, we go ahead. Paul, please take the floor.
B: Okay, thank you. In case anyone noticed that the lighting is slightly different on my face, that's because we're starting to have sunrise here; welcome to six o'clock in California. So this draft, a very, very short draft, I put in a few weeks ago. This is up for working group discussion, as Tim said in the last session, although he seems to not like it. Hopefully he can say more. Next slide.
B: So the motivation for us looking again at the IANA registry, for what is required for DNSSEC algorithms, you know, in order to get a registry entry, is the fact that GOST-bis, RFC 5933-bis, actually needs to be on standards track, because algorithms that describe DS records are currently "standards required" per the IANA considerations.
B: Now, just to note that other IETF working groups that do crypto, like TLS and S/MIME and all the other ones, have all of their registries be either Expert Review or RFC Required. They do not require standards track. So I thought, you know what, yes, we're doing GOST now; we could just put it on standards track even.
B: Basically, I believe approximately zero people in this working group who are reviewing the draft will actually understand the crypto well enough to want to put it on standards track. Why don't we deal with this now? Because there will be more crypto like this, which is what is generally called national crypto: crypto that is important to an individual country, or possibly a region, that people want to document and want to get code points for, so that, in this case, DNSSEC can use them.
B: So DNSEXT were the folks who put together RFC 6014, or actually it was passed in the working group. It made all of the new DNSSEC registries, which was, you know, at the time in 2010, RFC Required. However, we forgot: the DS records were not new, and therefore they were not covered by it. When I say "we", it was sort of interesting; I had to figure out, you know, when I was looking at this draft...
B: I said, oh god, who made the requirements and missed DS? And I looked at RFC 6014, and guess what, I'm the author. So, whoops, I'll say "we", but in fact we could also say "I" forgot the DS records here, although no one seemed to notice it. Also, even after that, when DNSEXT defined NSEC3, they also went with standards required instead of, you know, what everyone else is doing. So, like I say here, TLS, IPsec, S/MIME, they all go with either Expert Review or Specification Required, in my mind.
B: The general way that the IETF does it, which is either standards required (I'm sorry, RFC Required) or in fact just Expert Review, should be fine. One of the things that, for example, Tim said earlier: oh, we don't necessarily want just anyone getting an algorithm.
B: So this can be done quickly if the working group adopts it, and if we do that, then we can obviously continue to work on RFC 5933-bis, but then it doesn't have to be standards track. It could be Informational, like most of the other national crypto RFCs that are out there.
B: Basically, we're the only working group, or DNS is the only place, where national crypto has to be, you know, has to be a standards-track RFC. And this is nothing new, by the way. I mean, when I was chairing the S/MIME working group 20 years ago, it was decided that the national crypto ones should all just be Informational. That didn't have any negative effects on anyone. And again, to be clear...
B: Let's not think of this just as GOST and the Russians; there are half a dozen other countries that have national crypto, some of whom have already gotten RFCs in other protocols, especially S/MIME and TLS. Why not? Why not have DNSSEC be pretty much similar to the others? So that's it, short presentation, happy to take questions.
F: Paul, you're citing S/MIME and TLS, and I would say that those are actually not applicable precedents in this case, and it has to do with the nature of the parties involved. I think S/MIME and TLS are far more end-to-end systems, where only the parties at the ends care; with DNS, a lot more parties in the middle care about validating the results, and I think it is then in the interest of the internet to have a more limited and more standardized set of algorithms.
A: Sorry, yeah, I was muting myself again. Please, Viktor, please.
H: I agree that DNS is a bit different, in the fact that there are many intermediate parties that look at the details of the message between producer and consumer, and that need to understand, and potentially accurately forward, the additional signatures and so on, and that there is more of a need in DNS to have the algorithms be understood universally.
B: I think all of them should be RFC Required. Yes, well, RFC Required is different than standards required; RFC Required would be fine here. What we're up against is standards required.
I: Hi, hi, okay. So, I don't have a strong feeling about this, but I wanted to highlight two differences that stuck out to me. One is that TLS cipher suites, and I suspect a lot of these other spaces, are larger. They are.
I: A lot of them are 16-bit; this is an 8-bit space. But actually, I do think there are, I was just sort of poking around IANA right now, I was trying to think of other registries that are similar. One of them is the client certificate, or the certificate type: there's a TLS client certificate type registry that's a single octet, and it has a sub-range that is Standards Action, but most of it is Specification Required.
I: Overall, one thing that I noticed while looking at this is that the DNS Security Algorithm Numbers registry shows the references and also indicates their status.
I: It seems like it ought to be possible to provide an indication of at least the status of the draft, so that if, for example, we were Specification Required, then hopefully it would be clear from the registry which of these ciphers correspond to IETF documents and which of them are from other parties.
I: My biggest concern here would probably just be code point exhaustion, especially if, as you say, you think that these... so, we're at...
C: Thanks, Benno. A few points to make here. I think we should adopt this document and progress it; it's a no-brainer.
C: I think we need to do something with the issue of code points for these crypto algorithms that's broadly in line with what's done for crypto code points in other working groups, for things like TLS and S/MIME and all these things that Paul talked about before. I think the current requirement we've got for this DNS crypto stuff is too heavyweight.
C: The point that Ben was making about code point exhaustion is, of course, a valid one, but, as Paul pointed out, we're probably going to worry about that in a hundred years' time, and I don't think that's going to be my problem or anybody else's problem. It's down to the working group to figure out the solution for that; we've got plenty of room at the moment.
C: I think we can worry about this later, although it's still a valid point to make, and I think some of the things that we've said before about the concerns about end-to-end interoperability and so on are valid. But really, I see this as just being not that much different in principle from how we deal with DNS resource record types: some implementations can and can't deal with them. And so maybe we need a separate document which talks about mandatory-to-implement crypto or optional-to-implement crypto for DNSSEC.
C: I think we had something like that before, back in the days when we had DSA kicking around. Maybe it might be an idea to think about that as a potential way of dealing with this problem about the end-to-end aspects that Sam and Ben mentioned before. But I think that's a separate issue from the simple, pragmatic, almost bookkeeping exercise of issuing a code point when a new useful crypto algorithm emerges for DNSSEC. I think we need to keep those two things separate.
L: So, I'm becoming more and more confused about what the group wants, because we were speaking about not growing the table, and requiring support from the implementations before introducing a new RFC, and now we are going the other direction, like, whatever, just relax it so everybody can get a code point. So this is, this is very...
L: Especially for a registry that's only eight bits long. So this is one thing. The other thing is that I kind of agree with what Sam said at the beginning, that interoperability in DNSSEC is more important than in other protocols, because there's a huge inertia in the DNS world. It doesn't happen in TLS: there, the servers get upgraded more quickly, the clients get upgraded more quickly. This doesn't happen in DNS, and we must reflect this real-world situation in our standards, because just think about how long it took IANA to accept the ECDSA records in the root: it's two years.
L: So it's nice to have a flexible registry, but it doesn't match what happens in reality. So I'm definitely sure that it needs more than Expert Review. I could live with aligning on RFC Required like other algorithms, but I think we should be very strict when accepting new algorithms.
A: No, I will keep your mic open, but I will go to Warren and, in the meantime...
M: I'm sorry. So, I think I largely agree with what the proposal is, and I think, as Jim said, standards track feels very heavy for this. And, you know, many people have relatively strong views on national crypto stuff. If we require things to be standards track, no matter what it says, it still feels as though we are going to be sort of endorsing national crypto stuff. If we simply have it as RFC Required, like pretty much all of the other registries do, it's still a really high bar, right?
M: You still need to get an RFC. It still needs to go through process. It still needs to be reviewed. So I don't think that really lowers the bar very much; it just sort of changes the tone of things. Many of the TLS registries actually have disclaimers in them, right, notes which say things like "this is not an endorsement", or not.
M: We already have the sort of check valve, so that when we get close to running out of code points, we will have to reevaluate this. And again, you know, we can get 256 thingies in this code point space; we've hardly used any at all.
M: I agree with people that interoperability is important and the DNS is different to things like TLS, but I think that that's actually sort of an argument more for why we want to make it so that people can actually get things put in the registry, if we don't make it so that it's possible for people to add things to the registry.
M: You know, if we have it RFC Required, people can ask for something in the registry, and we can expand this to have a note section about recommended or not recommended, like very many other registries have. So I think this is a no-brainer; we should just do it.
N: Okay, sorry, Paul. This matches up mostly with something like broadcast, cable, or television, where that's encrypted. So it's a point-to-multipoint system, and having one or a few algorithms makes a lot more sense.
N: One of the things that we should be thinking about is how do we get interoperability, or how do we maintain interoperability, and the national cryptos should be, in some ways, a second signature with a different algorithm, alongside a standard algorithm. You should have both the national crypto and something that everybody implements on the data that you're signing.
G: Okay, (name unclear). I'm in favor of adopting this draft because it makes the situation more consistent.
G: The situation with the GOST draft arises because exactly one IANA registry, the one for DS, had a stricter policy than the other registries needed for registering GOST, the new signature algorithm; so make all these registries consistent.
G: IANA can issue the code point without too much difficulty, rather than proceeding with standards-track RFCs. So I'm in favor of adoption, and it's a very useful technique.
O: Hi there, okay. I'll tell you, I'm just observing what I've noted in the chat, which is that I think using PGP and TLS as examples for this is a bad idea. PGP and TLS are both protocols that have explicit negotiation mechanisms, and they tend to be pairwise.
O: That is, the sender knows who the recipients are in PGP; TLS is between the client and the server, and they can sort of figure out what they both need in order to communicate. And DNS is not; DNS is unilateral, and I think it would be a mistake to look at what PGP and TLS are doing as a model for the DNS. I think it's much more important that we have a minimal number of cipher suites available in DNS.
A: Okay, thank you, dkg. Well, we had a good discussion here; there was also a good discussion on the mailing list. We will definitely continue this discussion; there are clear positions, pros and cons, on this draft. It's good we have the discussion, and on how to proceed we will go on on the mailing list. Paul, please.
B: One quick request is that before people discuss this on the mailing list, please read the draft, because a couple of the comments from here were clearly from people who hadn't seen what the draft says, so that would be great. And if people don't want it, that's fine. You know, the draft is changing the standards requirement; so if they want to keep the standards requirement, that's fine, we just drop the draft, yeah.
A: Please go ahead.

P: Hello, everyone! I am Tiru. I'll be presenting our draft on the DNS access denied error page. Next slide, please. I'll quickly go over the problem statement, the solution that we're proposing, and the security considerations.
P: DNS filtering today is performed by several networks for various reasons. Some of them are security reasons, for example, blocking malware and phishing sites. Parental control is one other reason that is getting very popular nowadays. Several organizations have internal policies for blocking certain domains and domain categories, and filtering is also required by law enforcement agencies.
P: For instance, if you look at enterprise DNS firewalls, they typically block access to malware domains and phishing sites. Home network security is also primarily done using DNS filtering. MUD is an interesting one, which talks about Manufacturer Usage Description, which defines ACL rules for IoT devices. Because IoT devices have a specific intended use, an allow list of ACL rules can be created.
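The MUD-style allow-list just described can be sketched as a simple policy lookup. This is a minimal illustration, not real MUD processing, and the domain names are made up for the example:

```python
# Minimal sketch of MUD-style DNS allow-listing for an IoT device:
# only domains on the device's allow list resolve; everything else is blocked.
ALLOWED = {"firmware.example-bulb.net", "time.example-bulb.net"}

def dns_policy(qname: str) -> str:
    """Return 'allow' or 'block' for a queried domain name."""
    name = qname.rstrip(".").lower()  # normalize trailing root dot and case
    return "allow" if name in ALLOWED else "block"

print(dns_policy("firmware.example-bulb.net."))  # allow
print(dns_policy("evil-c2.example.org"))         # block
```

A real filtering resolver would apply such a check per client device before answering the query.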
P: For example, a light bulb can only talk to specific domains, and only those domains are permitted; the rest of the domains are blocked. That would ensure that the IoT device does not communicate with any malware or command-and-control servers. And nowadays several ISPs are also offering a malware filtering service, and they do enforce DNS filtering because there's a court order or a request from law enforcement agencies. Next slide, please.
P: Now, the typical solution that the DNS firewalls and HTTP servers deployed for serving the error page first used was to fake, to spoof, the IP address, so that the client sends an HTTP request to the HTTP server and the user would see an error page. That pretty much worked for HTTP. But nowadays you don't see many HTTP sites, and that's for good: the industry has moved to HTTPS.
P: But the problem is, with HTTPS, the device would not be able to see the exact reason why access to the domain was blocked, because the client would see a certificate error page. And what would happen because of that is either the client would repeatedly attempt to reach the domain, because it does not understand why it's receiving the certificate error message, or the user can go to the advanced settings, click to accept the certificate error, and proceed to see the domain page.
P: Even if there's a clear warning from the browser saying that the user is not supposed to visit the page, because there's a security risk involved. The other risk that is involved is that many devices nowadays have multiple interfaces, so the user may switch to an insecure network and try to reach that domain, because the user does not know specifically why the domain was being blocked.
P: The other solution that's typically used today is that the user is asked to manually install the local root certificate of the enterprise network, and then the enterprise HTTP server would basically spoof the server certificate and send an error page back to the client. But this requires the enterprise servers to start acting as MITM or TLS proxies. Next slide, please.
P: And thanks to the new error codes that have been defined in this working group, especially the Censored, Blocked, and Filtered error codes, it definitely gives a lot more information without just throwing a certificate error page back at the user. But there are some challenges that we've been seeing: the user still does not know exactly the reason why the domain is being blocked. For instance, if you look at any DNS filtering service, it has several categories of domains because of which a domain is blocked.
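For reference, the Blocked, Censored, and Filtered codes mentioned here are Extended DNS Error info-codes 15, 16, and 17 from RFC 8914, carried in an EDNS0 option (option code 15). A minimal sketch of the option's wire encoding:

```python
import struct

EDE_OPTION_CODE = 15                      # EDNS option code for Extended DNS Errors
BLOCKED, CENSORED, FILTERED = 15, 16, 17  # EDE info-codes (RFC 8914)

def encode_ede(info_code: int, extra_text: str = "") -> bytes:
    """Encode one EDE option as it appears inside an OPT record's RDATA:
    2-byte option code, 2-byte option length, 2-byte info-code, UTF-8 extra text."""
    payload = struct.pack("!H", info_code) + extra_text.encode("utf-8")
    return struct.pack("!HH", EDE_OPTION_CODE, len(payload)) + payload

opt = encode_ede(FILTERED, "blocked by policy")
print(struct.unpack("!HHH", opt[:6]))  # (15, 19, 17)
```

The EXTRA-TEXT field is the only free-form slot, which is part of why the draft argues it is too small to carry a full explanation to the user.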
P: The service could give that information to the user. For example, if the DNS filtering service is identifying it as a phishing site, it can go back and tell the user: hey, if you connect to this site, you're probably going to lose your financial transactions' username and password, and it could impact both your security and privacy and impact you adversely through your financial transactions.
P: Unfortunately, that information is not available to the user, and the user does not exactly know which entity is blocking the domain: whether it's my enterprise network blocking it, or whether it's being blocked by some other entity. And typically, most of these DNS filtering services use advanced ML techniques for identifying, let's say, domain generation algorithms or phishing sites, and there could be false positives, and the user needs to know the contact details of the IT InfoSec team to raise a complaint. That's not available, because they no longer see an error page. And if you look at the various categories because of which a domain gets blocked in any enterprise or home network, there are around 80 to 100 categories, and these categories are pretty much vendor-specific and cannot be standardized.
P: The new extended error code that's been defined does not work for HTTPS unless a local cert is installed, and installing a local cert is not a viable option; it may work for IT-managed devices only. Next slide, please.
P: What we are trying to do in this solution is define a mechanism where the error page URL will be returned to the client. We are relying on the HTTPS DNS record: we are introducing a new service parameter, "eot", in the service form, so that it can provide the error page URL.
P: It could be returned with any of the extended error codes, be it Blocked, Filtered, or Censored, and the idea is that the browser, or the HTTP client, definitely knows that it's a forged answer, that it's an error page, and can decide to display the error page back to the user. So in this example, if you can see, that's the URL, and the blocked domain is encoded in base64url encoding. Next slide, please.
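The encoding step on that slide can be sketched like this; the "blocked" query-parameter name and the URL shape are my own illustrative assumptions, not necessarily the draft's exact syntax:

```python
import base64

def error_page_url(base: str, blocked_domain: str) -> str:
    """Carry the blocked domain in the error-page URL as padding-free
    base64url, so the error page can show which name was filtered."""
    token = base64.urlsafe_b64encode(blocked_domain.encode()).rstrip(b"=")
    return f"{base}?blocked={token.decode()}"

url = error_page_url("https://block.example.net/error", "malware.example.com")
print(url)
```

base64url keeps the domain safe to embed in a URL without percent-encoding, and the error-page server can decode it to render the specific blocked name.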
P: Yeah, and this brings various... I mean, if it happens in cleartext DNS, it's pretty much possible that an attacker could modify it. So what we are discussing in this draft currently is that DoH and DoT, especially DoT in strict privacy mode, is mandatory; otherwise the DNS response is susceptible to modification by any attacker. And further, this DoH or DoT server has to be trusted by the endpoint; for example, it could be a pre-configured DoH or DoT server in the OS or browser.
P: For example, several browsers nowadays support public DoH servers which do DNS filtering; for example, Quad9 blocks malware domains and phishing sites. And that way the endpoint is sure that it's actually talking to a trusted DoH or DoT server, and not talking to a server hosted, probably, by an attacker.
P: The other examples would be, just like the captive portal, it could auto-enable private browsing mode for the error page, or load the error page in a container mode, isolated from other web activity, similar to some social networking sites being isolated from other web activities so that they cannot snoop on the other web activity.
P: So since the client is now aware that this is an error page and a spoofed response, it can take various actions to protect itself, even from a misconfiguration, or even from a trusted resolver giving a bad or badly formed error page. Next slide, please.
P: This is the nominal content of the error page. I mean, the error page is supposed to have the exact reason for blocking the accessed domain. For example, in this case, OpenDNS is a free DNS server which does filtering, and it's showing that the domain is blocked because of phishing, and the reason why phishing should be blocked.
P: If it's blocked by a law enforcement agency, then a pointer to the regulatory text would definitely help the user understand why access to the domain is blocked, and the organization which is blocking access to the domain, which in this example would be OpenDNS, and the contact details of the IT InfoSec team. And further, the error page should honor the Accept-Language header, so that the error page would be displayed in a language that the user understands.
P: And next slide, please.
P: The error page should not have ads or any dynamic content as well, because it's just informational, just to state the purpose of blocking access via the page. Any comments and suggestions are welcome.
I: Yeah, there you are. First, as a matter of the mechanism, this is not a good use of the HTTPS record type, so I would strongly urge you not to do this using HTTPS, which is a way to indicate how to reach the page.
I: Also, this formulation is not compatible with DNSSEC. So if you have clients who are actually correctly implementing DNSSEC, they will never see this record. In particular, if, for example, there's a DNSSEC-validating stub resolver between the application and the recursive resolver, which is a configuration that a lot of people here support, these errors will disappear.
P: Forged answers definitely come from a server which does filtering, and today, as you know, there are several servers in several deployments which do filtering. So NXDOMAIN is typically the response code which is returned for these responses, or it could be fake addresses; but that's the typical mode in which DNS is performing filtering today, and this is nothing new, right?
I: What I'm saying is that if the client validates DNSSEC, then this standard will not function, because the application will never see this HTTPS record; it will be dropped by the client due to DNSSEC validation failure. So that's the first thing. The second thing is about the page itself.
A: Thank you. Next, please.
J: Okay, my apologies. I thought you said it would be in the answer section.

P: No, no, yeah, it's in the additional section.
P: Okay, and it will only be returned when the extended error codes are returned, so it has to be returned along with the extended error codes Censored, Filtered, or Blocked. So the client will not process this additional record, or the new parameter that we are introducing, unless it is capable of processing the new error messages. And those error messages will obviously be inserted by a DNS filtering server, and it can also insert this error page there.
K: Hi, I just want to set concerns aside. I really appreciate a draft that focuses on making things easy for end users: giving them information they need, giving operators, you know, customer support folks, a mechanism to further help end users. So I appreciate the focus on that, regardless of the implementation concerns.
E: I'm going to try and limit the comments. I think there are a lot of interesting ideas here. We talked about, you know, better mechanisms for returning errors in EDE, and it was generally decided that, no, it's not a safe thing to do. And this isn't safe either; I mean, just DoH and DoT don't even protect you up-stream of the resolver. If the resolver is being attacked and sent random HTTPS records, you know, it would still redirect the client.
E: So you essentially have an unsecured redirect, you know, without a signed error, and that can happen even if the resolver is trusted. Of course, as people pointed out, it doesn't play well with DNSSEC in the first place. And doesn't it also allow, you know, malicious ISPs to basically insert a man in the middle, showing the page that the user wanted to get to, but going through, you know, a man-in-the-middle sniffer as well? Doesn't seem like a safe idea to me.
P: For instance, if you have a browser or OS which has a pre-configured list of DoH or DoT servers: for example, today Firefox trusts Comcast's resolver, and Chrome uses several DNS filtering servers like Quad9 and OpenDNS, and those are the ones that it trusts. They do the filtering already and block access to various domains for security reasons, and that way those DoH or DoT resolvers can be trusted to provide this error page URL, and not any bad ISP or an attacker who is in the network or somewhere in the path.
I: Yeah, I'll try to keep this brief as requested. So, I want to express some general support for trying to solve this area. Within Chrome, this is an issue we've been discussing lately, trying to find a solution for; we want to get out of the current solution of doing filtering by forging address results.
I: Continuing the discussion of everyone saying that there are these security issues: I'm not sure we would be able to trust it as is. So it's certainly an area where we need to continue the discussion to decide if it's good. And also, expanding further on some of the comments about faking answers in HTTPS records, I think it would be a bad idea to do anything that requires forging results.
I: It should not be creating or modifying HTTPS records; that's just going to lead to problems in the future. The reason the current stuff doesn't work is because all the technology tries to fight against anything forging results. So maybe something in EDNS0, like EDE did, would be a better way to go; I don't know.
A: Thank you, Chris. Please go ahead.
M: Sorry, no, I'll keep this really short. So, along with many other people, I really hate the idea of DNS filtering. However, the internet doesn't really care what we think; people are gonna do DNS filtering, and so I think we need a way to make it as non-harmful as possible.
M: I think that this is one of the ways, provided that there is a good way to signal to the browser, and from the browser to the user, that this is happening. Browsers already do stuff like sort of sandboxed environments for things like captive portal pages. So if you could have a way where, you know, the browser would agree to have a huge red border around it, not allow any other links to be followed from that page...
M: You know, only static-type text which shows what happened, and you cannot use any links within it, you cannot use any JavaScript, etc. I think that might be okay, but I do think that it needs some strong discussion with the browser vendors to agree on, you know, how it can be carefully sandboxed and only used for display, not for anything else.
A: Okay, thanks. ekr, please go ahead.
Q: Yeah, I guess I don't think we'd be super enthusiastic about this, "we" being Firefox. Like, you know, if you look at Safe Browsing, there's a relatively small, really really small, bandwidth channel through which users can be told what the problem with the site is, and we won't go beyond that; we want to control that interface.
Q: We don't want to delegate that interface out to some other party. And, you know, there was a note about using the trusted resolver, but the trustworthiness of the resolver is actually extremely limited: the trusted resolver is about how it handles your data, but we assume that the primary mechanism for preventing hijacking of the connection is TLS. It is not the browser, not the resolver, returning the correct answer. So I'd be pretty wary of accepting arbitrary stuff, even in a sandbox.
A: Okay, okay, thank you. Viktor, I did close the line at ekr, so if you have a very, very brief question, please go ahead, but really brief, because we also want to have Jordi give his one-minute pitch of his proposal, his draft. Please go ahead, Viktor.
H: Two things very quickly. One, on the DNSSEC question: so long as the additional section doesn't actually use the name that the user asked for, it can be in some clearly unsigned portion of the namespace; that might help. The other thing, in terms of trusting this: just because it came over DoT forgets that the DoT resolver in turn often makes unauthenticated connections over the public internet to authoritative servers, which can be easily MITM'd.
P: Yeah, Viktor, that's a good point, and if you see, we are working on the draft to enhance it to say that this mode of DoT would only work when strict privacy is enabled. DoT has two modes, strict and opportunistic; opportunistic is the one that you're referring to, in which the server certificate may not be validated, but strict mode definitely requires that the client validate, and it is very similar to what DoH is doing today. But...
A: I want to go further with Jordi. He will give a one-minute pitch of his work in v6ops, if I have the working group correct; I think so. Please, Jordi, go ahead. It's related! Sorry, it's related work, so it's not DNS work, but it has impact on our working group; that's why we scheduled this presentation.
R: So this is v6ops work related to optimization of 464XLAT, and because we are using the DNS64-synthesized records, we want to make sure that we are not breaking anything else. We don't have time for the full presentation, but I think if you look at the slide set it's very easy to follow. The last version of the draft was published about two days ago.
R: The main difference we have now with the previous version is that we are including the MAC address in the translation table. So we want to make sure that we only optimize if it's an IPv4-only host. And I will ask everyone to look at the slides and send me inputs on v6ops, or in private, as you prefer, and we can keep improving the document. Thank you very much.
A: We also had two very good discussions on two drafts. Please continue the discussion on the mailing list, and we will see each other over there on the mailing list. We will also make a summary; the chairs will make a summary of all the documents presented and the actions for the working group chairs or for the authors. And that's it, I think. Suzanne, do you have some final words?
D: Only thanks to everybody, particularly for your patience with us and some new tools; this all-virtual thing is always challenging. But also one thing: as far as timing, it's always been a challenge, and it seems to be harder with the fully virtual meeting. So we'd appreciate it if you dropped a word in Benno's ear, or mine, or Tim's, in particular if we're going to have trouble juggling timing: would people...
D: ...rather we move things around next time and depart from the published agenda? Because we do have some flexibility, but in the past the working group has been very uncomfortable with us moving presentations around. So if you have new input on that, please do let us know. Other than that, thanks very much for everything, and we will see you out there. Thanks very much, bye for now.