From YouTube: IETF-TOOLS-20221213-1900
Description
TOOLS meeting session at IETF
2022/12/13 1900
https://datatracker.ietf.org/meeting//proceedings/
B: Good reverb — this room is just incredibly live, but.
B: All right, we'll go ahead and jump in. As usual for these calls, remember that the session is being recorded and will be posted to YouTube.
B: All right, so I've organized this as before: topics that I think we'll need discussion on, and then things that are there mostly for reporting purposes. The first topic I have for discussion is that we are planning to shift DNS from being served from AMS's infrastructure onto Cloudflare in mid-to-late January. This will be a fairly disruptive transition in the technology that we're using, and I suspect that it will mean an unpopular set of steps that we go through with DNSSEC in particular: we will probably go for a day with our zone not signed at all. This is the recommendation that we've gotten from several community members, and it lines up with the recommendations from Cloudflare, but I know there are a lot of people that work on DNSSEC inside the IETF who will be very unhappy that we would go through a period where we aren't maintaining a signature chain back to what we've had in the past.
B
So
I'd
like
to
leave
this
right
now
with
a
probably
third
or
fourth
week
in
January
time
frame
with
a
plan
to
send
an
announcement
to
the
community.
If
it
turns
up
that
it
looks
like
we're
going
to
do
anything
terribly,
disruptive
or
maybe
the
going
sometime
without
DNS
SEC
is
something
we
want
to
announce,
or
maybe
we
don't
want
to
announce
it.
This
is
up
for
discussion.
Anybody
have
any
any
thoughts
on
the
matter.
C: I think — sorry for the video once again, Meetecho doesn't want my video — I think it should be announced, right? Because it's quite a big change, even for one day. And a second question, Robert: does it mean that all the DNS servers — I mean the primary and the secondaries — will be Cloudflare properties?
B: We should have the ability to continue to use the people that are serving secondary for us; the documentation indicates that should be supported. We won't know for sure until we get in and attempt the configuration, but it is currently our plan to continue to take advantage of the volunteer secondary server layer that we have at the moment.
B: So we have, as you know, contracted with Sirius Open Source to act as a set of virtual database administrators for us — for the Postgres database in particular, but for all the database engines that we're using. Between the last time we talked and this time, we've asked Sirius to take a more hands-on approach: actually giving them access to the machines directly, asking them to configure the Postgres engine directly and to set up a different and more comprehensive backup and high-availability strategy.
B: This is being prototyped on the sandbox right now. We expect that we'll have it deployed on ietfa before the month is over.
B: Certainly in the first part of January. It's hindered a little bit by the choices that openSUSE has made in packaging for 15.3 and 15.4 (we'll talk about the upgrades to that next): there are a lot of utilities that we need to be able to use, like pgloader, that are not packaged and don't install well from the experimental packages that are available from openSUSE.
B: We're going to work around it for the short term with Docker for pgloader, and we're just falling back to a closer-to-the-base-package set of tools for setting up our incremental backup strategy. When we go through our IT infrastructure revision process and move on to services, we can move to stronger tools like pgBackRest, and Sirius is doing the preparation work to make those moves when we have the resources available to make them. So the big takeaway here is that we've got more people — and more contracted people — working directly with what had, to date, been something that only Glenn would do. The number of people with root on the ietfa server in particular is different than what it was a month ago.
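The Docker workaround for pgloader might look roughly like the following; `dimitri/pgloader` is the tool's upstream image, while the connection URLs and the `--network host` choice are placeholder assumptions, and the sketch only prints the command rather than running it:

```shell
# Short-term workaround sketch: run pgloader from its upstream Docker
# image instead of an openSUSE package. URLs below are placeholders,
# not real credentials or hostnames.
PGLOADER_IMAGE="dimitri/pgloader:latest"
SRC_URL="mysql://dbuser:PASSWORD@127.0.0.1/datatracker"
DST_URL="pgsql://dbuser:PASSWORD@127.0.0.1/datatracker"

# Assemble the invocation; echo it (dry run) instead of executing it.
CMD="docker run --rm --network host $PGLOADER_IMAGE pgloader $SRC_URL $DST_URL"
echo "$CMD"
```

On the day, one would drop the `echo` and run the assembled command directly against the live databases.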
B: All right, the next thing I wanted to walk through was our plan for upgrading the IETF servers' operating-system point version. They're currently running openSUSE and, as we mentioned before, much of what AMS supports is already on 15.4. The non-production servers for the IETF, like the sandbox, are already on 15.4, and our hot standbys have been moved to 15.4.
B: We haven't seen any issues, but we're going to wait until after the break to move the production services to 15.4, so that we have all of the resources available for addressing any issues that might crop up with the applications.
B: So our plan is to upgrade ietfa, where most of the services are running, and ietfx, where the remainder of the services that are visible to most of the community — like Zulip, notes.ietf.org, and the wikis — are running, during the first week of January, and the RPC server during the third week, if I remember correctly. Again, that's so resources at the RPC team will be on hand should applications need touching; we're not expecting they would.
B: These upgrades are done on a live running server; the server doesn't have to be bounced in order to make the point upgrade. Any individual service would only be down long enough to make, basically, the service-restart cutover, which for most of these services is a single-digit number of seconds.
B: My next topic for discussion — and I'm getting less discussion than I expected — is scheduling for taking the Datatracker offline to change its back-end database to Postgres instead of MySQL. Our transition strategy, at least in our development environments, has been stable now for almost a month, and it's been thoroughly tested. We've gained very high confidence that we're going to run on Postgres just fine out of the box.
B
It
will
be
quite
a
bit
more
performant
than
what
we're
getting
on
top
of
my
Sequel
and
then
we're
going
to
get
tuning
by
the
series
open
source
people
that
will
move
our
performance
up,
even
even
more
than
that,
as
I've
mentioned
before.
This
is
going
to
require
a
downtime.
That's
going
to
be
measured
in
tens
of
minutes,
I'm
estimating
somewhere
between
15
and
30.
B: I think 30 is going to be on the long side. What we'll be doing during this time: MySQL and Postgres will both be stopped; we'll be taking a snapshot of the MySQL database, then running the migrations to move the data from MySQL into Postgres, reconditioning it, and then starting the service on top of Postgres.
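The cutover sequence just described can be sketched as a dry-run runbook; every service name, database name, and path below is an illustrative stand-in rather than the production configuration, and the `run` wrapper prints each step instead of executing it:

```shell
# Dry-run sketch of the MySQL-to-Postgres cutover described above.
# Names and paths are illustrative stand-ins, not production values.
run() { printf 'WOULD RUN: %s\n' "$*"; }   # replace the body with "$@" to execute

run systemctl stop datatracker                      # app offline for the window
run systemctl stop mysql postgresql                 # both engines stopped
run cp -a /var/lib/mysql /backup/mysql-snapshot     # snapshot the MySQL data
run systemctl start mysql postgresql                # engines back up for the copy
run pgloader mysql://dt@localhost/ietf pgsql://dt@localhost/ietf  # move the data
run vacuumdb --analyze ietf                         # "recondition" the new database
run systemctl start datatracker                     # service restarts on Postgres
```

The dry-run wrapper makes the runbook rehearsable ahead of the window; the estimated 15–30 minutes would be dominated by the snapshot and pgloader steps.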
B
We
are,
as
I
mentioned
earlier,
still
working
through
getting
some
of
the
utilities
that
we
need
onto
the
production
environment
in
order
to
make
this
transition
PG,
loader
being
the
biggest
hopefully
by
the
end
of
this
week.
We'll
have
a
worked
example
of
making
the
database
transition
on
the
sandbox
using
Docker
a
dockerized
PG
loader
I'm
expecting
that
to
work
assuming
it
does
then
I'm
suggesting
that
we
schedule
this
downtime
for
the
26th
of
January
in
the
afternoon
in
the
U.S,
and
that
is
hey
Thursday.
B
So
this
would
be
in
the
quiet
after
an
iesg,
telechat
I'm,
just
watching
the
data
tracker
logs
for
the
last
several
months.
This
seems
to
be
a
seems
to
have
been
a
good
place
to
deploy
good
time
of
day
to
deploy
new
revisions
of
the
data.
Tracker
and
I.
Think
it's
probably
one
of
the
least
disruptive
weekday
times
to
have
this
kind
of
an
outage.
C
B: Right — it's one of the reasons for the afternoon: afternoon U.S. tends not to collide with meetings, because of the overlap with European time.
B
I
will
oh
yeah
it's
a
good
idea:
oh
I!
There
there's
some
details
around
that
to
make
it
where
it's
very
clear
that
it's
not
that
it
says
something
other
than
just
tools,
but
we'll
we'll
definitely
send
a
note
to
ITF
announce
as
well
once
I'm,
confident
of
a
little
more
confident
of
the
date
than
I
am
right.
Now
we'll
give
we'll
give
people
it.
You
know,
weeks
of
notice
before
we
get
to
this
I'm,
not
hearing,
though
anybody
running
to
the
mic,
saying:
oh
no,
no,
that
date's
terrible!
B
During
ietf
115,
we
had
several
people
suggest
that
the
pop-ups
that
people
had
to
click
through
in
order
to
get
into
the
mediko
session,
to
get
to
the
note
session,
when
using
data
tracker
credentials
to
provide
consent
were
disruptive
and
unnecessary,
and
after
discussion
with
leadership,
we
believe
that's
true
we're
moving
to
a
model
where,
if
the
service
is
at
an
ietf.org
domain,
we
assume
that
you've
provided
consent
to
use
everything
that
the
data
tracker
knows
about
you
when
you're,
using
that
your
tracker
credentials
when
you're
using
those
services.
B: The fun properties that we have in this mix: Meetecho — we already have Meetecho set up to use meetecho.ietf.org. Implementation detail: this is just CNAMEs to the hostnames at meetecho.com that were already being used, so as people go through this they'll ultimately end up on a meetecho.com domain. With that particular implementation detail, at the very end I believe the browser will show meetecho.com after the OIDC dance is over — but we have turned off the OIDC consent click-through for Meetecho already.
B
Yang
catalog
is
not
an
ietf.org
domain,
even
though
it's
an
itf.org
activity
at
the
moment,
I've
left
it
configured
to
require
consent.
Now
the
only
place
that
oidc
has
used
at
Yin
catalog
is
for
people
that
are
logging
into
the
administration
console.
So
it's
a
very
small
number
of
people.
You
know
folks,
like
me
and
Eric,
and
the
contractors
that
are
working
on
the
project
that
would
ever
encounter
this
at
all.
So
it's
not
a
something
that
affects
the
the
community
at
Large.
B
So
very
very
quickly:
I'm
going
to
step
up
the
pace
now
we're
20
minutes
into
the
call
going
through
the
fyis.
B
If
there
are
things
in
the
FYI
is
that
people
have
added
detail
to
that,
they
want
to
call
out
on
a
call,
feel
free
to
jump
in
and
do
so,
but
at
a
high
level
we're
on
track
for
issuing
an
RFI
about
the
infrastructure
strategy.
Early
next
year
we
held
our
tools
Workshop.
There
are
links
to
the
notes
that
are
being
constructed
about
the
tools
workshop
for
publishing
documents
on
the
internet.
B
There's
a
very
likely
conversation
I
recommend
that
if
you
didn't
participate
that
you
either
go
watch
the
Youtube,
recording
or
or
start
reading
through
the
notes
we've
been
working
on
the
infrastructure
of
how
imapd
makes
use
of
the
data
tracker's
credentials.
We
made
a
big
change
around
what
version
of
python.
It
was
based
on
we're
going
to
be
making
another
big
change
scene
on
shifting
it
to
use
an
API
instead
of
trying
to
reuse
the
data
tracker
code
base
and
underlying
database
directly.
B: We began redirecting idnits and rfcdiff into Author Tools last week, entirely removing the www.ietf.org-hosted instances of rfcdiff and idnits. We got a little bit of pushback on rfcdiff, because iddiff has a different diffing strategy in some cases, and some folks working on some large RFCs found that disruptive — so Kesara jumped in and made it so that you can, as an option, get the rfcdiff output until we get iddiff to the point that it can replace rfcdiff completely.
B
Nick
you
Nick,
provided
content.
I.
Think
an
excellent
yes
he's
here
for
the
wiki
GS
deployment
and
our
changes
to
are
CI
CD
tooling,
for
the
data
tracker
and
soon
the
website
itself.
Nick.
Is
there
anything
that
you
wanted
to
add
other
than
what's
typed
in
here?
B
Greg.
Do
you
have
anything
that
you
want
to
add
about
what
we're
doing
for
the
track?
Wiki,
migration
to
Wiki,
Js.
C
B
E
Thank
you,
So,
like
Lars,
did
that
stuff
with
the
new
htmlisation
would
have
been
a
good
example
of
something
that
he
was
running
on
his
laptop,
but
he
very
quickly
shouldn't
have
been.
B
Not
hearing
anything
I'll,
let
you
read
about
what's
happening
with
the
data
tracker,
the
velocity
has
been
incredibly
High
lots
of
big
changes.
Coming
in
frequently
we
successfully
deployed
time
zone
aware
we
are
well
on
the
way
to
successfully
moving
to
postgres
and
I've
got
listed
here,
the
next
big
things
that
we're
expecting
to
see
if
at
any
time
somebody
knows
of
something
that's
coming
along
that
we
should
be
working
on
instead
of
these
things.
Please
let
us
know
so
that
we
get
the
items
slotted
in
in
the
right
priority
order.
B
E
C
E
C
E: I'm just trying to understand what parts — you know, where we can ask for contractor help and where we have to, yeah, write our own PRs.
E: Yeah, I could do that, and it's also tied up with the progress of a draft which is supposed to get out of Rob and Eric's world in the next days or so. But I'm just trying to understand the interaction here, at the technical level and the political level — who can do what — and I'm just hearing that I should go debug the problem myself. That's fine, except —
E
C
E: So it will affect YANG Catalog, because it's supposed to somehow present and display the SID data — and we've discussed that — but I'm just unclear on who's going to be responsible for that. So —
B: Michael could keep it — the model, I think an analogy that might help keep this straight, is to consider it to be in the same place that lxml is.
F
B: You're saying — yeah, it's something that xml2rfc, for example, fundamentally relies on, but we don't maintain it; it's the thing that it uses to manipulate the XML that's coming in.
G: I would like to ask a question about some of the open issues. I see that there are dependencies on either Datatracker or Relaton, and those seem to have been closed. They are closed, but I'm not seeing a whole lot of motion on the original issues that were opened by end users — for instance, issue 280.
G: There were things that needed to be fixed in Datatracker, and they're fixed, but I don't —
G
B
B
D
B
B
B: That's our little throw-in on that: we are, Jay, moving to replace the standalone set of reports that would capture things like the number of subscribers to lists with these APIs — progress is being made on that front.
C
B: Do you have a proposal started yet?
C: Now, there is inflation in Europe and blah blah blah, right — but for the price... the amount of work should be lower, all right? So we should get the ball rolling on this, Robert. Is it you, Jay, or myself, or —
B: I think we can take the rest of what is in here as read. We're a little bit over 30 minutes in, and we can give back half an hour of your day — unless somebody has something else that they'd like to dive into, or wants to return to an earlier topic.
B: [The next call] is the second week of January. I suspect it will be a fairly quick call then as well, because of the end-of-year break in between this call and that one. If the format's still working well for everyone, we'll continue using the format that we used today. Otherwise, for any of you that I don't see before next year: I hope that your end-of-year activities are very pleasurable, and thank you very much for spending your time.

Robert, can — can I —
A: — extend this meeting? So, Michael already started alluding to this, but I think it's an interesting question that I definitely don't want answered today; I just want to bring it up. If there is something like an lxml issue or a pyang issue that is actually slowing us down in some form of tool support, and we don't have a natural candidate among the volunteers who can fix that —
B: The working plan at the moment is that we would work with either somebody that we have on staff or our set of contractors to at least get some guidance to the community that is around that tool — wrestling it to the ground on how to fix it.
B
It
worked
examples
of
things
that
we've
done
in
the
data
tracker
when
infrastructure
components
have
had
issues
that
were
Show
Stoppers
for
us
that
the
project
we
were
relying
on
we're
not
going
to
dress
quickly
or
is
to
develop
a
framework
to
where
we
could
patch
those
tools,
yes
as
they
were
installed
so
at
a
very,
very
high
level.
This
is
the
the
kind
of
approach
that
we
have
taken.
A: Yeah, but somebody has to actually find people who can do this, and make sure that they actually are compensated for doing that, and so on and so on — and I'm not sure right now that we have a way of doing that. I mean, this definitely should not become another way of letting tools fall apart, but if there are circumstances that make it hard for the volunteers to actually do that kind of work, then it would be good to know that we do have a way.
B
D: I'm just going to jump in and say — one of the things about having Nick and Kesara on board is that they're pretty much willing to try anything. So we do have a backup plan there, of people who are willing to try anything, and where we've asked them to do that so far, it's been remarkably successful. So we have that as a possibility.
D
We
also
have
access
to.
There
are
plenty
of
mechanisms
to
have
access
to
specialist
contract.
Programmers
who
do
small
amounts
of
things
for
us,
so
I'm
not
worried
that
we
will
struggle
to
get
the
resources
for
a
an
urgent,
short-term
fix.
I
think
our
issue
about
resources
is
more
about
longer
term,
developmental
things
where
we
want
somebody
that's
around
for
a
long
time
that
learns.
It
is
able
to
build
it.
You
know
design
and
see
the
overall
strategy
about
how
things
fit
together
and
that's
something
that
you
know.
E
What
actually
I
wanted
to
say
is
that
I
think
that
in
this
case,
with
pyang
there's
some
what
I
perceive
as
architectural
issues,
possibly
across
the
whole,
the
whole
piece
of
software
that
are
revealing
themselves
as
we
try
to
do
some
things
at
the
edge
and
so
I'm
reluctant
to
I'm
reluctant
to
do
what
I
would
probably
would
be
a
hot
patch
for
me,
but
then
we'll
be
probably
be
create
a
lot
of
technical
debt
that
I,
don't
I,
probably
won't
even
be
aware
of,
and
that's
why
I
think
I
I
suspect
that
the
problem
I'm
running
into
is
a
larger
architectural
issue
and
that
I
would
prefer
that
the
that
the
changes
wound
up
with
some
the
with
someone
with
a
longer
view
to
what's
going
on,
and
so
that's
why
I'm
I'm
a
bit.