From YouTube: IETF-TOOLS-20230110-1900
Description
TOOLS meeting session at IETF
2023/01/10 1900
https://datatracker.ietf.org/meeting//proceedings/
C: Thank you all for being here. It looks like we've got a pretty good number of people connected.
C: We have agreement from Cloudflare that we can host DNS there under Project Galileo, and there's the ability to specify secondaries, so we can continue to use the volunteer secondaries that we already have. I have an outstanding question in to Cloudflare about the data that would get included in zone transfers to those secondaries by default.
C
The
addresses
for
names
like
datatracker.ietf.org
would
point
to
the
origin
servers
if
queried
from
the
secondaries.
Instead
of
to
the
cloudflare
edge
servers,
there's
apparently
a
way
to
get
them
to
point
to
the
cloud
clearance,
servers
and
I'm
waiting
for
our
our
account
contact
to
come
back
from
vacation
and
tell
me
what's
going
to
be
involved
to
do
that
once
I
know
that
and
understand
the
level
of
effort,
then
I
will
start
working
towards
a
date.
C: I am hoping that we can make this transition this month, although I am planning to be mostly offline from January 30th through February 3rd, so I'm not going to make that cutover right before I'm unavailable. It might be early February before we pull this off.
C: Somebody say something just to make sure I'm getting audio, please.
C: The RPC's primary server is scheduled for this same upgrade next week. I believe this upgrade went fairly smoothly; for the most part it was utterly unremarkable, with one exception: the OpenDKIM package had a change in policy that wasn't well advertised, which caused it to start rejecting messages it was processing because it didn't like the Unix permissions settings on one of its configuration files.
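The class of failure described here, a daemon distrusting a configuration file with loose Unix permissions, can be sketched in a few lines. This is only an illustration of the kind of check involved, not OpenDKIM's actual code:

```python
import os
import stat
import tempfile

def world_or_group_writable(path: str) -> bool:
    """Return True if the file's mode is one a strict daemon would
    refuse to trust (writable by group or others)."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IWGRP | stat.S_IWOTH))

# Demonstrate with a temporary stand-in for a config file.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

os.chmod(path, 0o664)                   # group-writable: rejected
loose = world_or_group_writable(path)

os.chmod(path, 0o644)                   # owner-writable only: accepted
strict = world_or_group_writable(path)

os.unlink(path)
print(loose, strict)  # True False
```

A policy change like the one described simply tightens which modes pass this kind of test, which is why a previously working setup started failing after the upgrade.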
C: It took a few hours to find out where the break was; once the break was identified and corrected, messages to the lists flowed out. There might have been an opportunity for confusion, as during that time messages to the lists were showing up in the archive immediately but were not flowing out to people through their subscriptions.
C: Over the winter break and the first week we were back, we found a show-stopping bug for making our production migration to Postgres.
C: We knew that we had an issue with a difference between the databases where collation was different. In particular, there are many places in MySQL where fields are treated as case-insensitive for comparison.
C
When
you
do
a
select
four
and
look
for
an
email
address,
for
example,
what
it
pulls
out
of
the
database
does
the
case
insensitive,
compare
against
what
those
email
addresses
are
initial
thinking
in
our
initial
testing
indicated
that
we
were
going
to
be
able
to
make
the
migration
in
the
time
frame
that
we
were
looking
at,
and
we
thought
we
had
found
the
places
where
we
would
have
trouble
based
on
these
moving
to
a
case
sensitive
database,
which
postgres
is,
but
we
discovered
that
there
were
some
assumptions
that
we
made
and
some
assumptions
that
the
code
made
that
are
going
to
need
to
be
explicitly
addressed
before
we
move
forward
and
email
is
a
email
addresses
are
a
very
good
example
to
point
to
to
explain
this
with
the
code
that
we
have
in
feet.
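The collation difference is easy to demonstrate. A sketch of the two behaviors in plain Python (the addresses are made up):

```python
def ci_equal(a: str, b: str) -> bool:
    """Roughly what a MySQL case-insensitive (_ci) collation does for
    ASCII email addresses: compare after case folding."""
    return a.casefold() == b.casefold()

def normalize(addr: str) -> str:
    """One remedy: normalize on write so exact comparison stays safe."""
    return addr.strip().lower()

stored, queried = "User@Example.COM", "user@example.com"

assert ci_equal(stored, queried)      # MySQL _ci collation: a match
assert stored != queried              # Postgres default text equality: no match
assert normalize(stored) == queried   # normalized on write: a match again
```

In Django terms (the Datatracker is a Django application), the per-query alternative is an explicit `__iexact` lookup; deciding which queries need that, and which fields should instead be normalized, is the scrub being described.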
C: This isn't going to be an easy one to address. We're basically going to have to do a full scrub of the code, go through all the queries, and look at the ramifications of things becoming case-sensitive. Again, the most dangerous points we can find fairly quickly, because they're going to be at places where references are being made from other tables; we can just look at the primary keys that are on character-type fields and compare those, so we'll get that done fairly quickly.
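That quick triage step, finding primary keys on character-type fields, can be sketched with SQLite's schema introspection (the table and column names here are hypothetical, and the Datatracker itself is not SQLite; the idea carries over):

```python
import sqlite3

def char_primary_keys(conn):
    """Return (table, column) pairs where a primary key column has a
    character type: the places where a collation change is most likely
    to break references made from other tables."""
    hits = []
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for table in tables:
        for cid, name, ctype, notnull, dflt, pk in conn.execute(
                f"PRAGMA table_info({table})"):
            if pk and any(ctype.upper().startswith(t)
                          for t in ("TEXT", "VARCHAR", "CHAR")):
                hits.append((table, name))
    return hits

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE doc_state (slug TEXT PRIMARY KEY, label TEXT)")
conn.execute("CREATE TABLE doc (id INTEGER PRIMARY KEY, state TEXT)")
print(char_primary_keys(conn))  # [('doc_state', 'slug')]
```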
C
But
I
think
that
the
effort
that's
going
to
take
to
look
at
the
all
of
the
codes
that
are
querying
and
make
sure
that
they're
case
sensitive
safe,
is
going
to
take
us
several
weeks,
so
I'm
proposing
that
where
we
reschedule
the
production
migration
to
you
just
after
the
Yokohama
ETF,
not
the
immediate
week
after.
But
the
week
after
that,
I'm
proposing
that
I
send
out
an
announcement.
C: The SSH community noted that they have, in their corpus of web properties, links into an older version of a draft that did not move forward, and it turns out that the backing information that we have about that draft in the Datatracker does not include things like: what was the working group state of the draft at the time? Who was the document shepherd at the time? All of this information is captured in an object called doc history that we have for drafts.
C
C
We
have
many
many
internet
drafts
that
are
like
this.
For
the
most
part,
it
hasn't
irritated
people
enough
to
complain
for
this
particular
organization
and
in
this
particular
Point
into
the
history
of
of
Internet
drafts.
It's
causing
pain,
so
we're
going
to
go
address.
It
we're
having
conversations
inside
the
tools
team
we've
had
a
couple
of
interactions
with
Lars
about
whether
or
not
we
make
up
the
history
that
has
definite
negative
ramifications
or
whether
we
change
the
way.
The
view
behaves
so
that
we
can.
Let
pointers
come
in.
C: For simple types we could have something like that, but for types that are clusters of interrelated data, like the working group state, I think it would be quite a bit of engineering to do that. I can think about it and see if it's a possible thing to do; then we could automatically backfill with unknowns and have the current user interface continue to work, I suspect.
D: I would definitely vote against making things up, but lots of metadata schemes have ways of representing unknown. It might be reasonable to have a set value: this is what we put here when we don't know, and use that as the metadata across every single thing where we don't know. I don't know if that helps you solve the problem. Yeah.
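The suggestion of an explicit "unknown" sentinel, rather than fabricated history, might look something like this sketch (the field names and the sentinel value are assumptions, not the Datatracker's actual schema):

```python
UNKNOWN = "unknown"  # hypothetical sentinel slug for missing metadata

def backfill_history(events, fields=("wg_state", "shepherd")):
    """Fill gaps in history records with an explicit 'unknown' marker
    instead of inventing plausible-looking values."""
    keys = ("time",) + tuple(fields)
    return [{k: event.get(k, UNKNOWN) for k in keys} for event in events]

events = [
    {"time": "2008-03-01", "wg_state": "wg-doc"},
    {"time": "2008-07-15"},  # nothing was recorded for this revision
]
print(backfill_history(events))
```

The appeal is that consumers (including the current UI) always see a complete record, while the sentinel makes it unambiguous that the value was never captured.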
C: The next question that I have up for discussion: we are working on several places where we're providing reports. There's an internal reporting to the IESG that we are replacing with reports at the mail archive and Datatracker, and I'm doing some work to support reporting work that Greg is working on for providing information that goes into the IETF annual reports. We're looking at what we should make available in the public APIs that these reports would use as public information, and one of the ones that we started poking at over the break.
C: We already do this at the Datatracker: all the email addresses that the Datatracker knows for somebody can be extracted out of the Datatracker's API v1 without a great deal of effort. So I wanted to bring this up and have a discussion. Do we need to change the Datatracker to make that a little bit harder to extract? Or do we consider this all public information, because you can go get this information out of the email archives anyhow, or out of the Datatracker directly? And would it be okay to make things easier for us, and be a little bit more transparent, if we just make the full set of posters in a given year something that's easy to extract in an API?
B: Yeah, I think that there is a difference between a situation where a generic crawler is going to find this information (for instance, for all of our documents, the email addresses are ingested by any generic crawler out there) and one where you have to make a specific effort, using specific APIs, to get the addresses.
B: There are still some spammers that are going to do that, because they want industry-specific spam lists, but it certainly is a smaller attack surface, so I would try to avoid crossing that boundary too often. Realistically, though, our email addresses are very easy to crawl, so if somebody wants to hide their email address, the IETF is not the best place to do that.
C: No, backtrack on that: I was about to say it would require a POST. I'm not sure that it would; it might be a GET. But we'll take that into advisement and make sure that it's not just a simple GET that a crawler would expect to find a link to somewhere else in the web space that we currently control. Russ, you're up.
A: So I recently got an email from the RFC Editor when somebody posted an erratum about one of my older RFCs, and it was sent to extremely old email addresses. Anything we can do to allow systems like that to translate old to new would really be helpful. I think I was the only author who received it, and that's only because I still watch the mailing list from a closed working group.
A: I think this would be a really helpful thing if we can come up with a way to do it that is not, you know, susceptible to spiders.
C: Yeah, I'm anticipating that as we look at better integration between the RFC Editor code and the Datatracker, the RFC Editor code can call the Datatracker for a given person and ask: what's their current primary email address? And use that. Rich, I think you're next in queue.
G: Okay, good, you've got me now. All right, too many mute buttons. After every conference I get several mail messages like: how would you like to get a list of all RSA attendees, or all CES attendees? And with bulk retrieval I'd be kind of worried about getting things like: hey, how would you like a mailing list of all TLS developers, or all DNS developers?
G: I think, you know, onesie-twosies, updating things like you were just talking about, is fine. But making it easy for somebody to collect everybody on a DNS mailing list, or multiple DNS mailing lists, makes the list that they're then going to try to sell more useful, so I would kind of discourage that. On the other hand, spam filtering works pretty well. I just don't see a use case for anyone not already a member of the community to get a list of any large subset of the community.
C
So
that
would
argue
for
us
if
we
created
such
an
API,
to
put
it
behind
a
secret
that
only
consumers
of
it
would
need
consumers
that
would
need
it
would
have
the
secret
to
have
access
to
it.
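A minimal sketch of the "behind a secret" idea: a shared API key, issued out of band to known consumers, checked in constant time. The header name is an assumption, and this is an illustration rather than a design:

```python
import hmac
import secrets

API_KEY = secrets.token_urlsafe(32)  # issued out of band to known consumers

def authorized(request_headers: dict) -> bool:
    """Reject bulk-report requests that don't present the shared secret.
    hmac.compare_digest avoids leaking the key through timing differences."""
    supplied = request_headers.get("X-Api-Key", "")
    return hmac.compare_digest(supplied, API_KEY)

print(authorized({"X-Api-Key": API_KEY}))  # True
print(authorized({}))                      # False: e.g. a crawler's bare GET
```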
G: But to me that seems different from what you were talking about, about the RPCs.

D: Those were two different topics.

G: Gotcha, okay. The second one, the RPC thing, makes a great deal of sense. Getting a list of, you know... well, that makes it easier; then you say, oh, here's the active DNS developer list, are you interested? Okay, I'll get out of the queue.
C: All right, no worries. Looks like John entered and left the queue. Jay, you're currently at the head.
F: It does seem as if what we're talking about is the difficulty with which somebody gets a list, not their overall ability to get a list, because we all agreed that somebody can get a list if they're willing to put the effort in. Clearly what we're worried about is making it so easy that any idiot could do it. That, I think, is difficult for us, because we don't really know.
F
What's
the
the
point
at
which
it
becomes
sufficiently
easy
that
anybody
could
do
it
I
if
I
were
to
take
a
guess,
I
would
say
an
API
is
still
beyond
the
point
at
which
it's
trivial
and
is
therefore
sufficient.
F: I'm not 100% confident about that, though. I would say that if you required an API key, and you had to sign up to get the account and get the API key, and the API key were free, so there's no authentication, it was just, you know, a three-step process, then I would be really quite confident that that would be as much of a deterrent as we currently have, I think.
F: That's the problem, really: just understanding that. And I don't know who we're going to ask for advice on that.
E: I think this horse has left the barn. In about two minutes I can write a python script that opens the IMAP archive, scrapes all the addresses out, and then uses some heuristics to put the at signs back in. So I think, as long as we have a unique API, it's basically not going to be worth it for bad guys to come and attack us. I mean, I also get those things too.
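The kind of two-minute heuristic script being described might look like the following sketch. It operates on sample text here rather than a real archive, and the munging patterns it undoes are just illustrative:

```python
import re

def deobfuscate(text: str) -> list:
    """Undo common address munging like 'user at example dot com',
    then extract anything that looks like an email address."""
    text = re.sub(r"\s+at\s+", "@", text)
    text = re.sub(r"\s+dot\s+", ".", text)
    return re.findall(r"[\w.+-]+@[\w-]+(?:\.[\w-]+)+", text)

sample = "Contact jane.doe at example dot org or bob@example.com."
print(deobfuscate(sample))  # ['jane.doe@example.org', 'bob@example.com']
```

Which is the point being made: obfuscation in public archives raises the bar only slightly, so the addresses are effectively already public.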
E
Oh,
would
you
like
this
list
of
attendees,
but
the
size
of
the
list
they're
offering,
are
frequently
an
order
of
magnitude
larger
than
the
number
of
people
who
went
to
the
conference,
so
why
assumption
has
always
been
that
those
lists
are
fake,
so
I
know
I'm,
not
too
worried
about
it.
So
I
think
I'm
much
more
concerned
about
what
what
Russ
said
is
like
actually
being
able
to
being
able
to
contact
people
when
they're
when
their
email
addresses
change.
Because
that's
that's
bad,
that's
a
real
issue.
E
We
get
a
lot
of
for
old
stuff
and
we
get
a
few
questions
about
old
old
drafts.
So
I
mean
I.
Would
foreign
in
this
case
just
for
the
utility
of
the
ITF
I
would
be
more
concerned
about
make
about
how?
How
can
we
make
it
easier
for
people
who
for
super
sensible
people
who
want
to
contact
people
for
real
reasons
to
do
so,
rather
than
trying
to
make
it
harder
to
hide?
C
All
right
anybody
have
anything
else.
I
think
I've
heard
a
sufficient
amount
for
us
to
know
what
we
can.
C
C
C
F
C
C
C: I think that, in the spirit of what we had done for the last couple of meetings, we'll shoot through here and just hit highlights through these, and people can interject when they want to have a more detailed discussion of any particular point.
C
The
track,
Wiki
migration
to
Wiki
JS
is
ongoing.
The
data
tracker
had
a
few
releases
before
we
got
to
the
break.
There's
a
release
in
process
right
now
that
I
expect
we
will
deploy
either
late
today
or
sometime
tomorrow.
C
The
we
already
talked
about
the
big
glitch
in
our
plans
for
the
future
data
tracker
work,
we're
going
to
have
to
focus
more
on
the
postgres
transition
than
we
thought
we
were
going
to
have
to
we're,
giving
Cycles
now
to
the
IAB
T,
I'm,
sorry
to
the
rsab
balloting.
Basically,
the
publication,
the
editorial
stream
document
life
cycle,
work
to
moving
IAB
and
ISD
artifacts
into
the
data
tracker
and
then
the
list
of
things
that
are
after
that
are
on
the
page
for
the
bib
XML
service.
C
We
had
a
couple
of
spots
of
roughness
over
the
break
where
changes
at
ribose
unintentionally
disrupted
the
content
of
what
we
were
serving.
We
also
had
issues
with
a
Ruby
release
where
we
didn't
have
the
appropriate
pinning
in
place
to
keep
us
from
failing
based
on
that
release
caused
us
some
issues.
C
We're
going
to
be
spending
some
time
later
this
week,
organizing
the
outstanding
issues
that
we
have
with
the
bib
XML
service
and
try
to
make
visibility
into
where
that
project
is
a
little
bit
higher
and
make
sure
that
we've
got
priorities
with
ribose
set
correctly
moving
forward.
Because
R
is
there
anything
that
you
want
to
add.
C
Similar
with
authort
tools,
kasara
noted
that
we
had
a
few
issues
over
the
break
spider
related
anything
else.
You
want
to
bring
up.
C
So
work
on
the
wagtail
portion
of
our
website
continues.
It's
moving
further
towards
the
development
tip
of
wagtail
itself.
If
anybody
has
any
questions
or
anything
that
we
want
to
specifically
bring
up.
C
For
xmelt
RSC
casara
has
pointed
out
several
pieces
of
upcoming
work
here.
I
wanted
to
highlight
that
we
are
planning
to
release
8.95,
given
the
interpretation
from
rsab.
C
There
has
been
a
lot
of
discussion
in
that
ticket
about
more
things
that
we
should
do,
and,
yes,
we
should
do
those
more
things,
but
we're
going
to
take
the
incremental
step
of
doing
this
thing
and
work
towards
those
other
things
as
a
separate
as
a
separate
action,
so
we're
expecting
to
see
the
permitting
on
ASCII
within
T
without
the
use
of
you
in
the
next
xmlrc
release.
C
We
are
also
currently
planning
to
have
support
for
the
editorial
stream
name
and
the
emission
of
the
editorial
stream
boilerplate
in
that
next
release.
So,
let's
work
on
that
is
still
early.
So
if
we
encounter
a
blocking
point,
it
might
defer
to
the
release
after,
but
we've
prioritized
getting
that
in
place.
F: Could I add something, Robert?

C: Yes, please.

F: Just for people to know: Greg and I have been doing a brief look at where the Note Well is displayed in interactions that people have with us, to try to avoid a situation where someone claims that they've interacted with us and never seen the Note Well, or never been presented with it.
F: We spoke to lawyers about whether it's needed for IPR declarations, and their view was that it isn't needed, so we won't be doing it there. And Greg has been doing some work on what people get told about the Note Well when they subscribe to mailing lists for the first time, just to tidy all of that up. So that's just a bit of info, filling you all in about work we're doing there.
C: One thing I wanted to have a quick discussion about is whether or not we continue with the practice today of keeping the time for this meeting fixed in U.S. time, so that it shifts around when the U.S. shifts its daylight saving time, or whether we shift to fixing the time to UTC.
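The trade-off can be illustrated with Python's zoneinfo module: a recurring slot pinned to U.S. Eastern time (the 2 pm slot here is hypothetical) lands at different UTC hours in winter and summer:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

US, UTC = ZoneInfo("America/New_York"), ZoneInfo("UTC")

# The same local 2 pm slot, in winter (EST) and summer (EDT):
winter = datetime(2023, 1, 10, 14, tzinfo=US).astimezone(UTC)
summer = datetime(2023, 7, 11, 14, tzinfo=US).astimezone(UTC)

print(winter.hour, summer.hour)  # 19 18: the UTC hour shifts with US DST
```

Pinning to UTC would invert the situation: the UTC hour stays fixed and the local hour shifts instead, for everyone in a DST-observing zone.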
C: If this is a no-op for people, we'll just leave it alone and continue to do things as we've done in the past. But if anyone feels differently, speak up; I do want to bring this up every couple of years and make sure that we're continuing to do the right thing for the people that want to participate in this group.
C
Not
seeing
anyone
jumping
to
the
fore,
then
I
think
I
know
I
saw
Michael
did
respond
to
this
particular
question
by
email.
C
I
think
the
the
overall
feedback
is.
Is
that
there's
not
a
lot
of
care
one
way
or
the
other
and
I
think
we'll
just
continue
to
follow
the
practice
that
we've
been
following
today.