From YouTube: IETF-TOOLS-20220412-1800
Description
TOOLS meeting session at IETF
2022/04/12 1800
https://datatracker.ietf.org/meeting//proceedings/
D
Yep, there, I've got sound now. Okay, good. And just to let you know, I added something to the hot topics. Let me take a minute just about the project manager for the RPC tools, and my ability to find someone to do that.
G
I did update it, but not too long before the meeting, so maybe it didn't refresh soon enough for people. Sorry about that.
G
Yeah, and the Meetecho links are different every time for this, so it changes every time.
A
Maybe we should change the calendar to just say: go look at the notes page.
A
I received email from Alice out of band that there are no blocking issues. She provided input to the CMT to consider at their next meeting, for changes that we might prioritize over others. So I think just capturing "no blocking issues" in the notes for that topic today is sufficient.
A
We'll skip the deployment of the production server, for moving the rest of the services off of tools.ietf.org and redirecting it, until Glenn has a chance to join. We did set up a temporary bap service so that the RPC has something they can use until that functionality can be integrated into author tools.
A
For that, you'll see that there are nearly 50 remaining issues. Almost all of them are about data quality, not about actual application functionality.
A
The web service relies on data in GitHub repositories for what it serves, and there are GitHub actions in those repositories. There's a repository for each of what traditionally had been bibxml, bibxml2, etc., aka bibxml-ids, bibxml-rfcs, and bibxml-ieee.
A
There are actions in those repositories that feed from the canonical sources and make both Relaton-formatted entities and bibxml-formatted entities that just live in those repositories, and the web service feeds from those, caching as it needs to. So when it comes up from scratch, it just goes and gets things from those repositories.
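As an illustration of the serving pattern just described, a minimal cache-on-read fetcher might look like the sketch below. The repository and file names are hypothetical stand-ins, not the actual bibxml-service code.

```python
# Minimal sketch: serve entries by fetching them from the data
# repositories on demand and caching locally, so a cold start just
# refills the cache from GitHub. Repo/path names are hypothetical.
import pathlib
import requests

CACHE = pathlib.Path("/var/cache/bibxml")
RAW = "https://raw.githubusercontent.com/ietf-tools/{repo}/main/{path}"

def get_entry(repo: str, path: str) -> str:
    """Return one formatted entity, filling the cache on first use."""
    cached = CACHE / repo / path
    if cached.exists():
        return cached.read_text()
    resp = requests.get(RAW.format(repo=repo, path=path), timeout=30)
    resp.raise_for_status()
    cached.parent.mkdir(parents=True, exist_ok=True)
    cached.write_text(resp.text)
    return resp.text

# e.g. get_entry("bibxml-rfcs", "reference.RFC.9000.xml")
```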
A
An important consideration is that our service, at this point, fundamentally relies on GitHub as part of its running. If we ever need to move away from that, it would just be a matter of putting those things in a git repository somewhere and changing what's currently in those GitHub actions to be cron jobs running somewhere. So it's not a difficult thing that we're being locked into, but I wanted everyone to be aware that it existed.
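To make the portability point concrete, here is the same per-repository job written as a cron-runnable script instead of a GitHub action; the feed URL, paths, and the transform step are placeholders, not the real workflow.

```python
# Sketch of one per-repository refresh job, cron-style. Fetch the
# canonical source, regenerate the entities in a git checkout, and
# push whatever changed. All names below are placeholders.
import pathlib
import subprocess
import requests

REPO = pathlib.Path("/srv/bibxml-data/bibxml-ieee")  # placeholder checkout
CANONICAL = "https://example.org/ieee/export.xml"    # placeholder feed

def refresh() -> None:
    resp = requests.get(CANONICAL, timeout=60)
    resp.raise_for_status()
    (REPO / "raw.xml").write_text(resp.text)
    # ... transform into the formatted entities under REPO here ...
    subprocess.run(["git", "-C", str(REPO), "add", "-A"], check=True)
    # commit exits non-zero when there is nothing new; that is fine
    subprocess.run(["git", "-C", str(REPO), "commit", "-m", "refresh"])
    subprocess.run(["git", "-C", str(REPO), "push"], check=True)

if __name__ == "__main__":
    refresh()  # crontab equivalent: 0 * * * * python3 refresh_bibxml.py
```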
A
Again, that's great, which is going to be good for the next topic. Yeah, I don't see Glenn still. There's a question about whether or not we're doing the right thing with our API design at author tools and at the bibxml service. We designed these things from the beginning to require a Datatracker personal API key.
A
The notion behind this was that we would be following a pattern similar to what Cloudflare's API Shield follows, and other entities that are protecting APIs: you have an allow list. Basically, somebody comes to you with a thing that says yes, they get to use this, and the API key was what we were planning to use. But this has tension with the APIs that have existed in the past at places like xml2rfc.tools.ietf.org, which were just free for anybody to use anonymously.
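For illustration, the key-per-user pattern under discussion looks roughly like this from a client's point of view. The endpoint and the apikey field are hypothetical stand-ins, not a documented author tools API.

```python
# Hypothetical client call: the user must first obtain a personal API
# key from the Datatracker and hand it to the tool, which sends it
# with every request. Endpoint and field names are illustrative only.
import requests

def render_draft(xml_source: str, apikey: str) -> bytes:
    resp = requests.post(
        "https://author-tools.ietf.org/api/render",  # illustrative endpoint
        data={"apikey": apikey},                     # per-user key
        files={"file": ("draft.xml", xml_source)},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content
```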
A
To get what we would want out of it, the users of Carsten's tool would have to go get an API key from the Datatracker and tell the tool about it before it could go do this thing, and Carsten has said several times that that introduces too much impedance, too much trouble for the user, and he doesn't want to take it down that path.
A
One of the alternatives that we discussed was creating a key just for the application, but by the nature of the application, that key would be accessible to anybody that bothered to look, at which point we don't have any advantage from having keys at all, because a random robot could go grab that key and start doing things.
A
So the question is: do we stay on the path that we're on right now and try to find some other way for tools like kdrfc to get the kind of access that they need, or do we back off of the position that we want this positive-identification kind of mechanic, just open the APIs for use, and then make changes later if we ever actually see abuse? Jay, go ahead, please.
D
Thanks. Isn't the impedance not in having a key and getting a key, but in the process by which one gets a key? If getting a key is trivial, then there is very limited impedance. But if getting a key is complex and requires multiple email round trips, all that kind of stuff, then that's where the impedance is.
D
Could we simplify that in some way, rather than sort of abandoning this?
C
Yeah, that certainly can be done. Now, before we go down that rat hole, the question really is: do we need this API key right now? The web interface for author tools actually uses a second API that doesn't need an API key, and of course I can simply use that second API for the kdrfc access, and then I don't need an API key. It's apparently not a published API, but it's not hard to reverse engineer, and I did it. So do we really need that?
C
That's easy to do in Unix, but I hesitate until I've made sure that works in Windows as well.
A
Rather than requiring these API keys, is there a fundamental objection to just falling back to where we were and saying, you know, this is an open resource that anybody can use, we're just providing free compute for these translation services in the case of author tools, and then waiting and dealing with it if it turns out that it's something that gets abused?
F
But there is, I mean, I tried to get the secret key as well for doing some API access. Honestly, I spent 10 minutes on it and then I gave up, so there is friction now.
A
Zulip is up and running. I don't plan to bring it up again unless we run into issues, or we get to the point where we are actually looking at the details of integration of Meetecho and Zulip.
D
Thank you. So, just a reminder from the notes here: I need a contractor to work with the RPC for what will probably be about a two-year project, or from now until the end of next year.
D
I imagine their role starts off as a BA role, largely doing requirements analysis for a full replacement of their tool chain, and then working with the RPC to help them decide what nature of replacement they want. Currently they do a lot of things with command line tools; do they want to replace that with a lot of command line tools, or do they want to have some kind of workflow system?
D
An off-the-shelf package, or something like that. Then supporting the production and the running of a tender on that, and for someone to develop that tool chain, and then project managing the client side of the implementation and the delivery of that. And I have spectacularly failed to get anybody to do this, both through the RFP and through shoulder-tapping people and through going to church and through rolling bones and all that kind of stuff.
A
In the shoulder-tapping capacity, there are a few other people I could tap to at least get them to look. Okay, great, fantastic.
A
Working down the list: we have a design team that's going to start in on describing the IT infrastructure services, with an initial call to get some experiences from Wes Hardaker's automation of the B root, as worked examples of mechanisms that we could use, and to start talking about what our infrastructure should look like and how it should behave when we are on a different target that is more automatable.
A
As you've seen in the completed projects, the SVN and Trac migration is complete. We still have some Trac out there for people that have just been using the wikis, which we plan to migrate crowd-sourced into Wiki.js. That's waiting for us to finish the Datatracker Wiki.js integration, which is behind many of the other things we're going to be talking about later in the call. I'm bringing it up here so that we know that it's still on our plan, but we haven't gotten cycles to it yet.
A
So the Datatracker has been running through Cloudflare for several weeks, reasonably successfully, with a few notable exceptions: the model that web services really ought to follow is more strictly enforced by places like Cloudflare.
A
This is going to impact the people that have automated tooling for submission to the draft submission API, Martin Thomson's repositories in particular, and I have yet to reach him. But I need to coordinate with him on the when and the how of the change that we make to that API, so that these submissions can succeed when the processing that is required to complete them is longer than the timeout that Cloudflare enforces on us.
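One common shape for such an API change, sketched here purely as an illustration and not as the plan being described: accept the upload immediately and let the client poll for the result, so that no single request has to outlive the proxy timeout. The URLs and response fields are hypothetical.

```python
# Hypothetical async-submission client: upload, get a status URL back,
# poll until processing finishes. Each request stays short, so the
# proxy timeout never triggers. All names are illustrative.
import time
import requests

def submit_and_wait(path: str) -> dict:
    with open(path, "rb") as fh:
        resp = requests.post("https://example.org/api/submission",
                             files={"file": fh}, timeout=60)
    resp.raise_for_status()
    status_url = resp.json()["status_url"]  # hypothetical response field
    while True:
        status = requests.get(status_url, timeout=30).json()
        if status["state"] in ("accepted", "rejected"):
            return status
        time.sleep(5)
```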
A
I have a workaround in the meantime; we tested this with John yesterday. If you hear of anybody that is having trouble submitting a draft because it's large and this timeout is hitting them, the answer will be to send them to the manual submission process, and I will teach the Secretariat a way to go around the proxies to do these submissions.
A
We still have work to do to make our NomCom eligibility calculations correct. This is becoming pressing, as we're going to want to seat the NomCom before we get to July.
A
I expect we'll be collecting volunteers soon, but the work to get these changes in place is likely to come in over the rest of this month and the early part of May, distracted, of course, by the server migration that we're going to be talking about a little bit later.
A
So if we miss being ready for this next NomCom, it just means that somebody, probably the Secretariat, will have to look a little bit more carefully at whether or not the people that the tool thinks are eligible really are, and that we didn't exclude someone that should not have been excluded.
A
We're working very hard right now to get to the point where we can make a new Datatracker release, and our plan at the moment is for that release to include the bootstrap 5 work.
A
So if we follow through with what I'm hoping we can do later this week or early next, we will actually shift the production Datatracker to the bootstrap 5 styling, and then everybody will be shocked, and there will be rage and violence because of the surprise, even though we've tried to warn people about this for a while. Unless anybody has concerns, we're going to continue down this push. I don't know if we should signal this in any way; at the moment I could send another note to chairs.
I
I think another heads-up wouldn't hurt. And the other question I had was: are you expecting that the colors will match the current Datatracker, or will they be like the sandbox, or...?
A
Sure, I'll coordinate with you later in the day.
E
We did an author tools release this week, and the big change was that I had to temporarily remove goat, because that tool had a change after a year or so, and that change broke lots of things. I don't think the people developing that tool are going to move forward with it; they suggest few folks use it, and some of those folks use that tool as a module rather than as a command.
E
So I have temporarily removed that from author tools until I can find a better way to install it. We still have aasvg in author tools for ascii diagrams, to convert them to SVG.
A
Author tools: you got cabo's note in the chat. Next time, when we're going to pull something, we should give people a heads up.
C
Yeah, just a simple note, because I can of course work around things like that, as aasvg and goat are functionally equivalent, and I probably made the mistake of distinguishing them in the input language.
C
The other thing I would like to point out is that the kramdown-rfc tool changes much more often than every fortnight, and people not being able to move to the new version of the tool becomes an actual problem when there are fixes that have been initiated by users of the tool, and then they have co-authors that can't use the new version until this two-week period has elapsed.
C
So I would hope that we get a more regular update of that specific tool, if necessary, so that people don't have to wait for two weeks for fixes to become available.
E
I think we can eventually look at ways to automate that, because both are on GitHub now. We don't have an automatic way of deploying author tools yet. But when we have that...
E
...under the hood on IETF servers, we can look into automating it, because it's not going to be a big change to update kramdown-rfc on the author tools side. Right now I am doing manual releases, so I will keep an eye on kramdown-rfc for changes and try to match that pace, at least with the version updates.
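That manual watching step could be scripted; a small sketch, assuming kramdown-rfc's public GitHub home and a placeholder pin file:

```python
# Compare the latest upstream release tag against the version we have
# pinned locally, and flag when they differ. The pin file is a
# placeholder for however the deployment records its version.
import pathlib
import requests

def latest_release(repo: str = "cabo/kramdown-rfc") -> str:
    resp = requests.get(
        f"https://api.github.com/repos/{repo}/releases/latest", timeout=30)
    resp.raise_for_status()
    return resp.json()["tag_name"]

pinned = pathlib.Path("kramdown-rfc.version").read_text().strip()
latest = latest_release()
if latest != pinned:
    print(f"kramdown-rfc {latest} available (pinned: {pinned})")
```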
C
And maybe I should also mention that I do have a plan for how to make the tool coverage much wider than we have at the moment. But maybe we should wait for the current wave of activity to be over before we start that activity.
A
All right, I don't see Ryan on the call.
A
Adding Matomo to the Datatracker is a work item that will likely happen after we get through this big set of disruptions that we're working through right now. We have an initial set of recommendations.
A
Zdx Security is standing by, waiting for us to complete the transition to the new server before they start in on testing the remaining set of the web services and re-testing the Datatracker. We're currently on their schedule for mid-May to mid-June, and they're willing to adjust, should our transition to the new server need to take longer than it's currently taking. I'm going to skip forward to the RFC Editor model.
A
On the RFC Editor model transition: based on conversations I've had with various stakeholders, the need to have support for the new model in the Datatracker is expected to come sometime around mid-May. So we are waiting to make those changes until we are post new-server deployment, and on the bootstrap 5 branch as well.
A
That leaves us, setting aside the other things that we haven't talked about besides the server transition, with the server transition itself, and that's going to take all of our available remaining time.
A
Glenn has a replacement for ietfa up and running, and he's got a process that is keeping it in sync.
A
We are iterating with him right now on improving that process, so that we have a correct replication and as little downtime as possible. But the thumbnail estimate at the moment, based on watching what the synchronization is taking, is that we'll end up with a two to four hour downtime, and the downtime will be across many, many services, most notably mail. That means mail won't be delivered for that up-to-four-hour period; it won't be lost.
A
Wes, go ahead. And anybody there: there are so few people here, just turn on your audio.
L
Yeah, I didn't want to get in the way. So the real question was: right now you have only one MX record for ietf.org, and I've always wondered why. But you might consider, just as a fallback, adding an MX for a short period of time through an outsourced agency; there's certainly plenty that could do it. I'm sure there are probably companies within the IETF.
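Wes's observation is easy to check. A sketch using the dnspython package (assuming it is installed): a fallback provider would show up as an additional record with a higher preference value.

```python
# List the MX records for ietf.org, lowest preference (most preferred)
# first. With a single MX, this prints exactly one line.
import dns.resolver

for rr in sorted(dns.resolver.resolve("ietf.org", "MX"),
                 key=lambda r: r.preference):
    print(rr.preference, rr.exchange)
```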
A
So my opening volley for when we do this, assuming that our testing between now and then doesn't show a reason for us not to, is during the work week, on Monday morning, Pacific time, April 25th, so that we can have the right hands available to assist with any bobbles, if there are bobbles along the way.
A
Lars
had
a
last
minute
conflict
with
this
call,
I
confirmed
with
him
that,
as
with
his
chair
hat
on,
he
doesn't
object
to
us
taking
a
chunk
out
of
out
of
a
normal
work
day
like
this
any
thoughts
from
anybody
else
on
the
call.
Are
you
comfortable
with
this?
A
I'm hoping that, as we go through the next steps after this one, this will be our last multi-service outage. We have one more Datatracker outage that is going to be big, but this would be the last time that we have the issue with everything going away because we're messing with the infrastructure under it.
A
So Cloudflare can be configured to serve things that don't change, things that aren't forms, right? So the website itself we're not expecting to actually go down as we're moving, but even if it had to, we could configure Cloudflare to serve the pages that it has.
A
This is not as easy to do with the Datatracker. Most of the Datatracker users are logged in, and we have Cloudflare necessarily configured to not cache most of the Datatracker pages for anybody that is logged in, so it wouldn't have anything to serve to those people. If they logged out, we could probably configure it to have something available for people that are just attempting to browse, but it's a hit-or-miss kind of experience.
A
So if people are on board with this, I will get together with Greg. Oh, there you are; sorry, I lost your image. This is the kind of message that maybe we should send out super visibly, with a "things are going to be disrupted" note, and we should probably send it out today, because that is less than two weeks from now. Today, or tomorrow if today is too crazy.
F
I don't understand your question. So we are doing a migration, right? Typically, when you do a migration, you have two servers, one active, one standby. You migrate the standby, and then you switch the roles, and if you have good synchronization, that's very easy. It should be really easy.
A
Yes, we essentially have that. I just realized that I had intended to have a link available for you to see the plan that Glenn had put together. I will send that on to tools-development shortly, so that you can skim through what we're planning to do. But yeah, we have ietfa and ietfn; they're separate machines, and the basic plan is as you described. There will be a point where some services on ietfa are stopped.
A
There
is
a
final
replication
step
from
itfa
to
iatfn.
Then
certain
services
like
making
the
mysql
database
master
on
ietfn
will
happen,
and
services
will
come
up
on
ietf
in
if
the
services
were
architected
a
little
bit
differently.
We
could
do
this
where
the
cut
over
time
was
instantaneous,
but
some
of
them
are
not,
and
the
dependencies
as
we
understand
them
on
the
male
processing
chain
in
particular,
are
such
that
we
need
to
have
the
male
q
quiescent.
A
...while there are some rather large re-synchronization efforts going by. I'll be talking with Ryan and Glenn between now and then, and if we discover that those are not in fact something that needs to happen before the mail service is resumed, then we can cut that downtime to something much smaller.
A
So right now, that final file system rsync is taking two hours, just for the sheer number of files that it's checking to see if they have changed or not. That's something that I'm working with Glenn on attempting to tune, so that we can shrink that window as well.
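One tuning direction, offered as an assumption rather than as Glenn's actual plan: split the tree by top-level directory and run several rsyncs concurrently, so the per-file freshness scans overlap. Paths and the destination host are placeholders.

```python
# Run one rsync per top-level directory, a few at a time, so the long
# "has this file changed" scans proceed in parallel. Placeholder paths.
import pathlib
import subprocess
from concurrent.futures import ThreadPoolExecutor

SRC = pathlib.Path("/srv/data")   # placeholder source tree
DEST = "ietfn:/srv/data"          # placeholder destination

def sync(subdir: pathlib.Path) -> int:
    cmd = ["rsync", "-a", "--delete", f"{subdir}/", f"{DEST}/{subdir.name}/"]
    return subprocess.run(cmd).returncode

with ThreadPoolExecutor(max_workers=4) as pool:
    codes = list(pool.map(sync, (p for p in SRC.iterdir() if p.is_dir())))
print("all clean" if not any(codes) else "some rsyncs failed")
```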
A
And thank everyone again for their participation in these calls. We will see you online as we go through this transition, and I'm sure we'll have a lot to talk about when we get to the next version of this meeting.