A: This meeting is being recorded. It will be posted to the Project's website afterwards as well. I'm going to be calling two rolls. The first roll call will be the names of committee members and alternates of committee members. After each name is called, the person should unmute themselves and say "here" so that we can make sure their mics are working.
A: I will also note that our wonderful consultant, Kathy Sharkey, is a member of ACUS and on the Committee on Regulation, but that for purposes of this project, AI and Retrospective Review, she will only be participating as the project consultant. So I'm going to go ahead and start with roll call, and we'll start with Eloise, our committee chair.
A: Thank you. Sean Ford, an alternate for Dan Cohen.
A: Thank you. Clayton Cook, not here yet. Erica Hoff, I don't see her yet. Jessica Bieleki, I don't see her either; she is an alternate for Marion Zobar, so I'll double-check when those people come through.
A: Thank you. Are there any other ACUS members, not on the Committee on Regulation, whose name I did not call? All right, seeing none, I'm going to turn it over to our chair, Andrew Fois, for some opening remarks.
G: Thank you, Kaja. Welcome and good afternoon, everyone. I want to thank you for being on this call today, ready to work on this important recommendation on artificial intelligence. Thank you particularly to our chair, Eloise, for coming back for another cycle after she did her first cycle in the fall. It's great to see you, and thank you.
G: Thank you to all of you again, and thanks to our consultant, Kathy Sharkey, and to Kaja and Jeremy for their work so far on this and their ongoing work. I don't have to tell anyone that this is a very hot topic right now. You can't really turn around without running into somebody somewhere writing about, or talking about, artificial intelligence and how it can be put to use. It seems to be something that's really captured the public's imagination as well.
G: So this is an important and timely topic. It's also a topic that allows ACUS to build upon work that it has done in recent years on the subject as well. I went to ChatGPT for the first time in connection with this project, and I asked it: can artificial intelligence be useful in federal agency retrospective review? And in a couple of sentences it gave me an answer. But my observations are, one, it was only one page.
G: It was nowhere near as in-depth and thoughtful as what our human researchers, Kathy particularly, have been able to produce. It listed a couple of models, two of which Kathy worked with and addressed, and two of which are kind of irrelevant. But its bottom line was that artificial intelligence can be useful to federal agencies for retrospective reviews. So if that's where you all come down, we have the blessing of ChatGPT as well. With that, I'll just thank you one more time and turn things over to our chair.
H: Thank you so much for that, Andy, and thanks to everybody for being here. I'm really happy to be here. I had a little bet with myself about how long it would take for us to mention ChatGPT, and there we go: it took about 60 seconds. So I think we're all in sync on the importance and relevance of the topic. My first order of business is to read the following script to you. I will do that, and then I will ask Kathy to introduce the report.
H: You can then unmute yourself, and please re-mute yourself when you are done speaking. For all participants: please use the chat feature only to indicate that you'd like to speak, or, for committee members and their alternates, to vote when asked. Please do not hold any sidebar discussions or put substantive comments in the chat feature. Only members of the Committee on Regulation, including government members and their designated alternates, public members, senior fellows, liaison representatives, and special counsels, have a vote. Please do not vote unless you're a member of the committee.
H: Okay, so with that, I'm going to turn to my next order of business, which is just to call on our wonderful consultant for the project, Kathy Sharkey, to introduce the research and the report to us. Thanks, Kathy.
I: Thanks, Eloise. It's wonderful to have this opportunity. It's challenging: I was told, in five to ten minutes, to summarize the high points. So maybe I'll do the following: cover some things that were deep in the footnotes, which even everyone who read the report carefully might not have seen, a kind of methodology and how we came to the project. So let me just go over that briefly.
I: So I was involved in a project called Government by Algorithm: AI and Federal Administrative Agencies, which produced a report to the chairman of ACUS in February 2020, with some fabulous colleagues: David Engstrom, Daniel Ho, and Tino Cuellar. I commend that to your reading as well. As part of that, we did a pretty comprehensive canvass of uses of artificial intelligence in federal administrative agencies.
I: That was a long time ago; that was probably a year and a half ago, if not longer. So, quickly, the story to get to here is: I pitched this project to ACUS, who approved it, and we began this project with a pretty minimal understanding of what agencies might be doing.
I: I think the internet reads my law review articles more than anyone else, but I wrote a piece called AI for Retrospective Review in the Belmont Law Review, where I really used HHS's case study on Reg Explorer to just start the conversation about this topic. And since then, ACUS sponsored, with a consultative group, a roundtable on AI and federal agencies, of which I was a member, and we actually had a Zoom meeting that allowed me to present this topic of trying to uncover more nascent agency use cases, specifically in retrospective review, as kind of the beginning of searching for these uses in rulemaking.
I: More generally, that was in February of 2022, and from that point on this project involved the following. First, I too should give a shout-out, actually, to three students, very quickly. One is on the call: Cade Mallett is an NYU student who's jumped on board at this juncture to help with the process going forward on my end. But two students, Giancarlo Carroza and Kevin Fotoro, were, along with me, the only ones who did the kind of field work of this, which included 48 Zoom interviews, on balance about an hour each.
I: We canvassed them; the appendix lists all of the agencies whom we talked to. It was really intensive.
I: It was a lot of time, and that understates, I think, the field work component, because we also got drawn into a really interesting pilot that GSA was doing with CMS, and we were allowed, in real time, to watch lots of the presentations, etc., which are cited in footnotes as sources not available to the public, alas. So we put in a lot of time, and I think it was necessary to find out exactly what was happening. Very briefly, then, just to round out the report:
I: You know, the report starts with a very brief background on retrospective review. This is a long-standing federal administrative agency practice. We have numerous prior ACUS reports. We have one of the nation's leading experts, who may have arrived, Jonathan Wiener, who's a member of the committee and did a prior report. But what's novel is trying to think about the ways in which agencies might use algorithms and AI-enabled tools to do this process. So in Part Two, I present four use cases; they're pretty in-depth studies. One is the HHS Regulatory Explorer.
I: That was the only one that I was significantly aware of before starting this project. There's another one from the Department of Transportation and their Reg Data Dashboard; a study of DOD and its use of GameChanger; and then this GSA/CMS pilot that I alluded to before. For that, as I mentioned, we interviewed the agency officials; we also interviewed the industry collaborator. So, as I mentioned, HHS used this tool, Reg Explorer.
I: So what I think is exciting about the work that we did is that we talked to the people within agencies, but we also talked to the developers and partners of those tools. So we sat in on various demonstrations of the tools, got to use them ourselves, got to ask various questions, and got to speculate about how these things might be useful in further uses beyond what was being experimented with.
I: I should add that there were two other industry companies, IBM and Regulatory Group, who we spoke with at some length and got demonstrations of their technologies as well. So we didn't limit this only to the industry partners whose tools were being used in our case studies. Part Three of the report moves to asking agencies, starting with agency officials within those four use case studies, but beyond that we surveyed a sampling of 16 other executive branch and independent agencies, eight of whom agreed to come forward and speak with us, and they're all listed in Part Three.
I: Their responses are then anonymized, because that was the agreement that we made in conducting this research. That was really edifying, in terms of hearing both from the agencies that had started to experiment with some of these tools, how they thought they might be profitably used in retrospective review and rulemaking more generally, but also hearing from a whole range of other agencies. So that's, methodologically, kind of what's embodied in the report. If I could just end by saying, in terms of the recommendations that the report makes, what I would highlight:
I: You know, the first one, which might not seem significant but I think is very significant in this field, is really just highlighting, you know, encouraging, the sharing of information and experiences among the agencies. There is an existing executive order that asked agencies to canvass AI uses and share; there's been a white paper following up; it hasn't happened. This is work that I think ACUS can do very profitably: it can highlight and share and encourage other agencies to take advantage of how they might exploit various pilots.
I: The second is really to insist on open source and interoperability. That was a consistent theme. It was both part of the DOD use case and part of the GSA/CMS pilot, and it was referred to by numerous agency officials and, quote-unquote, stakeholders.
I: I didn't mention that, but it's in our appendix; you'll see that, in addition to talking to agencies and industry, we talked to a cross-sampling of entities that were representing those who were either regulated parties or regulatory beneficiaries.
I: So I think I'll conclude there and cede the floor back to Eloise, and hope that we get into a robust discussion of some of these things. I'm happy to elaborate as people find helpful. Thank you.
H: So the main task that we have ahead of us today is to work through the recommendations, with the goal being that we work through them all, and then we'll sort of see where we are in terms of the preamble. But before we get into the nitty-gritty of the line-by-line recommendations, I wanted to just open up the floor to see if anybody had any overview questions or comments.
H: Okay, there will definitely be time for general comments as they emerge from the discussion of specific matters. So, Kaja, if you want to get the recommendations up on the screen: I do think that we should flag the title, the proposed title change, as one of the overarching kind of questions that we have ahead of us. So I'm going to flag that this question is here and see.
H: If people want to respond to this broad question... it may be that we need to come back to it after we've worked through some of the recommendations; I'm not really sure. But I want to give people the opportunity to look at the proposed change, and see whether maybe Kathy wants to talk about it, or see if anybody has any questions about it.
H: So we do have to talk about this. I'm going to see if people want to talk about it now. It may be that this overarching question will make more sense once we get into the recommendations themselves, but I want to note it before we head into the recommendations, to see if anybody has any initial comments, questions, or observations that they want to offer about the title change in particular.
H: Okay, I'm scrolling through just to see if folks seem like they are looking for a button to raise their hand or to type something. I don't see any right now. We do need to talk about this, so we'll circle back to it, but it does strike me that possibly we'll have a more robust conversation about it after we work through some of the tools. I just want to flag for you that there's this overarching question about how to characterize the subject of the recommendations.
H: Thank you. Okay, so let's start talking through the first recommendation, which appears at line 40. It says that agencies should assess whether they can use algorithmic tools, including those enabled by artificial intelligence (AI), to more efficiently, cost-effectively, and accurately identify rules that are outdated or redundant, contain typographical errors or inaccurate cross-references, or might benefit from elaboration or clarification. Okay, so the floor is open, and I'm looking for hands, or notes in the chat that you'd prefer to be called on that way.
H: Okay, I see none, so I will just note that there are, you know, a bunch of values that are identified at line 41: efficiently, cost-effectively, accurately.
H: There are a bunch of moments when we are suggesting that agencies use retrospective rulemaking: when rules are outdated or redundant, have typos or cross-references that are bad, or might benefit from elaboration or clarification. So I'm just going to call those out specifically, to make sure people are comfortable with the set of values, or have no questions about those, and are comfortable with the circumstances in which we're suggesting that agencies consider this. James.
B: Hi. Just quickly, to let you know my own background: I actually have a degree in data science, which I earned during the pandemic, and this is extremely interesting to me. I would say, and I'm just trying to anticipate an issue here, the one single biggest thing about algorithmic tools, in my own research, is that quite often their recommendations, their results, come without an explanation. And this is everything from, you know, computer vision, right, facial recognition, to many textual applications like ChatGPT.
B: I think that's more useful than simply saying you have a Type 1 or a Type 2 search error, right? Why was it that I was classified as a suspect, when really the only thing was that my skin color wasn't well represented in the training set, right?
B: It might be, ultimately, very, very helpful, and I can only see the first four recommendations here, so you may already have anticipated this, but maybe at least mentioning the idea of explainable AI. And that's going to take a while. Europe has been working on this for quite a few years, and no one's really satisfied with it; the United States has barely begun thinking about it. But it's just kind of an overarching suggestion, and an invitation, by the way.
B: If you want to get in touch with me, and if this is a subject for further ACUS work, I'd love to get involved, because I do think I have a little bit of a background, as someone with a degree in the area and some practice experience. But anyway, I think explainability, if I had to put it all down in one single sentence, would be:
B
Can
we
get
agencies
of
the
United
States
government
to
be
on
the
leading
Vanguard
of
explainable
AI
that
when
people
come
to
them
or
they
make
decisions
based
on
AI,
that
you're
at
least
given
a
reason,
and
that
seems
to
be
the
single
biggest
objection
in
the
broader
Universe
of
legal
applications
of
artificial
intelligence.
All
over
my
hand-
and
me
myself
now.
H: Thanks so much, James, for that. It's really... well, first, congratulations on completing that degree during the pandemic, wow. And it's great to have you here on the committee, voicing those views. I mean... Kathy?
I: So, first: I agree with the thrust, James, of what you're saying. If I could just say, by way of background, to go back to what Chairman Fois said: you know, artificial intelligence is on everyone's mind. I'm invited to numerous panels, including, most recently, one that Cade and I were just at, where there were European members talking about their approach to AI versus the U.S. legislative approach. In my opinion, there's actually remarkably little attention given to governmental uses of AI.
I
There's
a
lot
of
attention
given
to
governmental
uses
outside
of
I
should
say
certain
very
high
profile
ones
like
in
the
in
criminal
justice
systems
in
surveillance,
some
of
the
things
James
that
you
that
you
mentioned
so,
on
the
one
hand,
I
think
there's
a
challenge
in
terms
of
us
kind
of
defining
a
sort
of
a
a
scope
of
a
project.
That's
looking
at
uses
in
retrospective
review,
in
which
and
I'll
talk
to
you
about
some
empirical
things
that
we
uncovered
it
was
a
little.
I: It was very fruitful, I thought, to try to engage agency officials and outside stakeholders on what explainability meant in that specific context, as opposed to, say, explainability where you're denying someone a benefit, or explainability where you're targeting them for criminal law enforcement purposes, because I think it can mean very different things. So the first point is just to say: yes, it is an important concept.
I: The recommendations here, and I'd want to think about it a little bit more, sort of start to get at that with regard to a kind of transparency, and letting the public know about use of the tools.
I: So, in the use case studies that we present, I would say that we would not hold out the regulatory cleanup initiative, HHS's final rule, where they didn't disclose use of the tool, and then, in a later rulemaking where they said it was used, they then said it was proprietary technology, so they weren't going to discuss it.
I: That would sort of be on the spectrum of what we would not be encouraging agencies to do. And so the recommendations really center, based on the DOD experimentation and based on the GSA/CMS pilot, on really emphasizing the open source and the allowing in. You know, DOD has their code up on GitHub; they invite others in to collaborate, etc.
I: Those are only precursors, in my opinion, towards being able to study, and I agree with you, James, we're sort of at the beginning of explainable AI. So while I think it's a really important point, and we should think, as we go through, about how it might shape some of the recommendations, I would be hesitant, on the basis of the research that we did, to come out and say, you know, that the U.S. should be at the forefront of explainable AI.
They then gave the fruits of that to subject matter experts, and there's some commentary in the report from both FDA and CMS (the Centers for Medicare & Medicaid Services) as to how they thought it was a good first step, but there were various issues; they would not have wanted this to have been just fully automated. And there's another way in which, you know, there's a pilot: the GSA/CMS pilot was kind of a canned experiment. It was looking at a subset of regulations that had to do with medical devices and with getting particular oxygen subsidies.
I: Those kinds of things, I think, are very helpful, but it's kind of explainability, James, at, like, a retail level. I guess that's what I would say.
B: If I may, really quickly, kind of make a couple of quick suggestions, and maybe this is the kind of thing that ought to be teed up for a future ACUS project: I agree wholeheartedly. You know, I just read the four recommendations I can see on the screen, and, you know, the direction here is different, and that's fine. And I think, if we wind up setting a path, I would love to contribute to a future project on explainability itself.
So if we can extrapolate from that, generalize to things that agencies do, and just focus on the retail experience, as it were, that could be a first step toward a very, very big subject. So I appreciate that. I'll go ahead and mute and withdraw so other people can contribute to the conversation. Thank you.
H: Great, and that back-and-forth is really great, too. I'm going to note at this point that it's possible we can revisit the question of explainability in five, which Kaja just took us to, and maybe even seven, if people end up thinking that that is a good direction to go in. So maybe, can we just make a note, Kaja, to come back and think about whether five and seven have opportunities for explainability, given the back-and-forth that we have just heard?
H
So
that's
one
thing
that
I
wanted
to
note
and
then
I'm
going
to
let
Kaja
type
for
saying
the
second
thing.
H
H
I just wondered whether you had anything else that you wanted to add about one, about the recommendation in one, because I think we do need to work through these in order. So if you have anything that you want to add about the recommendation in one, then we should do it. And then, hearing nothing else from the floor...
A: I think that's right. Yeah, pointing us down to five and seven, I think, were my thoughts as well. I think the only thing that I would add, and this goes back to the title change: you know, we really highlight algorithmic tools, which might not have the same explainability issues that AI in general has. Algorithmic tools usually have a static input-and-output process, I think. So I just wanted to raise that concern, and that's a reason why we were proposing the title change.
C: Hi, I'm Sean Ford. I'm standing in for Dan Cohen from the Department of Transportation, and we have one of the use case studies here, which gets, I think, to your question, Eloise, about whether we should be expanding this to include algorithms generally, because that is very much an algorithmic tool that is not using AI.
C: I don't think, in any sense, that it is what folks tend to mean by AI. I wonder if we should be defining AI here, but perhaps we don't need to, in part because I think a lot of these recommendations are really more on the algorithmic side of things, which, as was just pointed out, tends to be explainable. We can point to what exactly our algorithm is looking for, how it's looking for it, and why something may have been flagged for potential follow-up and, you know, as a target for retrospective review.
I: Yeah, so there are various existing definitions, so I would just be cautious. You know, there is an ACUS statement on agency use of artificial intelligence that mentions that there's no universally accepted definition and talks about some differing ones, and I kind of took that same approach in the report. So you'll see in the footnotes there's, like, a definition of artificial intelligence from the National AI Initiative Act, etc. In my opinion, it's becoming perplexing, particularly to various groups.
I: Both within government, you know, but also outside of government, those who are very interested in this wonder why governmental bodies are using, you know, very broad but different definitions. So I guess that's just the cautionary note.
I: If I can, I had raised my hand too, though: I'd make one point to follow up on what Sean said with regard to DOT. That's obviously absolutely correct, and we feature it here on purpose, because it's using algorithmic tools, and it's also using a technology that some others have used with AI enablement of the tools. And the report comes back, at page 50, in one of the report's recommendations, to talk about how agencies might consider more structured rules, because with DOT I was very impressed, by speaking with the officials there, as to the idea that they have very structured rules.
I: They know which industries their rules affect, so they don't need to use AI-enabled technologies to be mapping, you know, what's the subject matter, the subtopic, who's affected by it, how are these things overlapping, etc. So I think they present a very nice use case example: we shouldn't be thinking about this, or suggesting to agencies, as if AI is sort of this new magic bullet, this roving thing going around looking for problems to solve. Instead, the idea is to look at retrospective review and some pain points in that process.
H: Thanks for that, Kathy. I am noticing how many times you referred to the report in your response, and I'm wondering whether, since the recommendation is supposed to stand on its own, some of the definitional stuff that you just described may be valuable to have here, nonetheless, notwithstanding even the lack of definition. So, I don't know, given the lack of a sort of unified view on this, I don't know. It would be great to hear from you again, Kathy, about that more limited point, or from Sean or somebody else on this point.
C: I'll just say, Kathy, that makes a lot of sense to me, why you avoided defining AI, and I think that people use the term differently. And perhaps we should too: by making sure our recommendations are really targeted at, you know, the use of sort of algorithmic tools that may include AI, we can avoid the definition, because we're not limiting our recommendations to the use of AI, which honestly sounds quite prudent at this stage, where we are much further along with using algorithmic tools that are not AI.
H: Okay, so I think I hear that we're fine with not including more in recommendation one about capturing the universe that we're talking about. Okay. So if anybody else wants to speak on recommendation one, please put your hand up now, or put a note in the chat if you want to do that instead, and I will gladly call on you. And if not, then I think we'll move to recommendation two. So, seeing no hands...
H: Okay, I will not read this whole three-sentence thing out loud, but I will give a moment for people to take a look at it, and then I invite raised hands, in person or in the chat.
H: Okay, I will just note, and then I'll ask a question, that the recommendation is full of "they should consider" and is not full of "they should do," and I'm just making sure that people note that and feel comfortable with it. Right? Like the last line, for example: if there's no such tool, agencies should consider whether they have sufficient in-house expertise and capacity to develop an adequate tool. Nothing follows that saying, "if they determine they do not have sufficient in-house expertise, you know, they should develop one."
H: Speaking intentionally slowly as I scroll my Zoom room, to look around and encourage the hand-raising that sometimes happens a little belatedly... but again, seeing none, okay, I'm happy to move to recommendation three then. So: agencies should ensure that personnel who use algorithmic tools to support retrospective review have adequate training on the capabilities and risks of those tools, and sufficient technical expertise to make informed decisions based on the output of such tools.
H: Okay, one question I have, that I will throw out there for consideration, is in line 55. So it's "technical expertise to make informed decisions." Is it technical expertise to make informed recommendations for agency decisions? Or, like, who is the entity that has the technical expertise? I'm wondering if the technical expertise is what is making the decisions, or if the technical expertise is the thing that is making recommendations for a further decision. Just putting that out there for further discussion. Sean.
C: Yeah, thanks for raising that. I think that's absolutely right: you know, when we're using an algorithmic tool to support retrospective review, we're not making decisions, we're not committing to final agency actions; we're starting a process where human beings will ultimately make the decision.
C: So I think it should be clear that algorithmic tools supporting retrospective review would be at the beginning of the process. Regulations, of course, are then updated by notice and comment. So we consider what human beings think of our human proposals, and then we make a human final decision.
H: Great. So then, I think that would lead to the following: since line 53 is really "agencies should ensure that personnel have this kind of expertise," maybe line 55 should read that the personnel have the technical expertise to make informed recommendations based on the output of such tools, for further...
C: My brief comment would be that I think "agencies should ensure" is maybe a little bit too strong, because we can certainly work toward that goal without necessarily getting there in all cases. Okay.
H: Okay, let me call on Kathy... are you getting... well, let me call on James.
B: Hi. This is with respect to what just got edited on lines 55 and 56. It may be simpler, and almost too broad, but I hope it's received favorably, to say "should have sufficient technical expertise to interpret the output of such tools," because the idea is that you don't necessarily make any recommendations or decisions; you simply interpret the output of those tools, and that should be enough.
I: Yeah, I'm just wondering here, also, sort of who the personnel are, right? There are going to be some later recommendations that we come to with regard to how this could happen at a more centralized level. So the report, for example: one of the reasons the GSA is interesting is what they have in mind. They did this proof of concept, but since they're the shared IT providers, they may provide certain services to other agencies, and it's at least their view that the agencies wouldn't have to have the technical capacities.
I: You know, so it's a little bit less clear, the "sufficient technical expertise." I mean, and I'll just say, I want this to be helpful, but my own view, both with regard to this report that I wrote and the prior Government by Algorithm, is that the best results are going to come when technologists and subject matter experts are in the room together early on, while these technologies are being developed. So the GSA/CMS pilot was very interesting because it had them in these lengthy calls.
I: So those with technical expertise would be piloting, demonstrating, and obviously, you know, giving a certain level of knowledge to the agency subject matter experts; and likewise the subject matter experts, who know what they need to do, would be able to ask questions of the technology and ask for various tweaks to the technology. So, you know, I think we should be encouraging such a kind of two-way street, and I think that we wouldn't want to write something in the recommendation that suggests sort of otherwise.
H: Kathy, if I'm understanding you correctly, you might want to have "the personnel" refer not only to agency personnel, but also to, or rely on, personnel from elsewhere, like, to sort of capture GSA. That language I just offered was not good, but I'm just wondering if going in that direction is what you're talking about, so that it kind of takes us out of just the one agency.
I: It is, right, but I wouldn't want to lose the spirit of the recommendation, namely that we don't want to have, you know, the technologists elsewhere and then the people within the agency; there should be this two-way street. I'm just not sure agency personnel would need to have sufficient technical expertise, let's say in either data science or artificial intelligence, for this to work. That's... I guess so.
I
You're right, I'm nervous a little bit about "personnel" being, you know, limited, maybe, to the agency, but I'm also a little bit concerned, because that did come up. You know, there is, as will not surprise anyone, heterogeneity. There are some agencies that have sufficient technical expertise and are developing these tools in-house, and there are other agencies that we spoke to who either are very small...
H
Would it capture your concern here, and maybe also speak to Sean's verb question, if we have the weaker verb in line 53, "agencies should encourage," but then we have a second sentence where, if agencies don't have blah blah blah, they should consider...
L
I'm here. Hello, Catherine. Yeah, all right. This is more of a broader question, I guess. I might just be a little confused about the terminology, in terms of the difference between an algorithmic tool and AI; basically, what an algorithmic tool is that doesn't use AI. And I guess I'm just wondering, because I think these recommendations refer to ones that don't, and I just wasn't quite sure what it means when we say "sufficient technical expertise"
L
if we're talking about a non-AI algorithmic tool. And similarly, in the next one, you know, where we talk about source code, would that be applicable to a non-AI algorithmic tool? And I think all this probably just stems from my lack of understanding about what exactly we're referring to there, but it's just kind of a general question.
H
Great, thank you for asking those questions. So, Kathy, I'm going to turn to you to respond, if I may.
I
Yeah, I mean, it's a difficult one. So here's the thing, you know: we would go in, and this happened in Government by Algorithm, we would go in because we read in publicly facing documents that an agency was utilizing AI in doing X or Y, and then we would ask to see the tool, and we'd be shown, you know, a spreadsheet that is less complicated than even doing a logistic regression. And so we would decide,
I
well, that's not really what we're thinking about here. So in some ways, "algorithmic tools," as I understand it, the use of that is because the main focus of what we were going into and looking at was AI-enabled tools. But in the course of that we discovered, and the DOT example is probably the most crisp one on this, that they have something called the RegData dashboard, and they've...
I
They use some fairly sophisticated algorithms to measure various features of their regulations, etc., but it's not using any kind of machine learning or AI. Most of the other tools that are being used in this space are using that to take unstructured text from regulations and map it into some kind of knowledge map, to figure out what that regulation is about, so that you're not just searching for search terms; you know, you have, like, a mapping. And DOT, you know, didn't need that step of it. But I do have my own, you know, I have worries. I guess this gets back, Eloise, to your earlier question. You know, on the one hand, I was saying we don't need to give our own separate definition of AI, calling it just "algorithmic tools" to include, I mean, basically it's including techniques that are in essence doing what the AI-enabled tools are doing, just without the machine learning component. But "algorithmic tools" could be used in a very loose sense, right? My daughter, who's in seventh grade, does algorithms for homework, and we're not talking about that here. So, but.
H
So, Kathy, I think I hear you saying, in direct response to Catherine, that even though we're not talking about only AI, the technical expertise point remains valid even as to the general algorithmic tool. Is that your answer? Because that's how I understood Catherine's question, yeah. And so, Kathy, is your response that we do still need to retain the technical expertise point, even when we're... yeah.
I
So I think DOT is a great example. They've got a lot of internal technical expertise, which is how they developed this algorithmic tool that's, basically, you know, a data-driven approach to analyzing regulation. So I think that would be, you know, these are algorithms, and so, yeah, I was answering it maybe obliquely: these are algorithmic tools of a particular nature that we're interested in in this report, right, data-driven approaches to analyzing regulations to inform retrospective review or policy rulemaking. Okay.
I
Yeah, so, I mean, it might just be helpful to refer in the report to pages 12 to the top of 14. It describes that particular tool. So it's built on something called QuantGov, which is an open-source policy analytics platform that was developed by the Mercatus Center, and others who have used QuantGov have utilized AI-enabled tools. It's just that for RegData, which is basically creating kind of this repository of all of the regulations and then including these kinds of metrics, they were able to do this without incorporating the machine learning algorithms that are part of the RegData dashboard. So they drew from an algorithmically enabled tool and tailored it for their specific uses, where they didn't need that functionality.
H
Thank you for that, Kathy. I'm going to call on our ACUS colleagues now, so first, starting with Jeremy.
M
Great, thanks, Eloise. Maybe I can speak on behalf of the directors here and just sort of explain two definitional things for the committee's consideration, and if we didn't get it quite right, or to the committee's liking, we can go from there. So, in terms of how we used "algorithmic tools" versus "artificial intelligence": we used "algorithmic tools" as a broad umbrella term that would include anything from simple algorithms, just a simple set of instructions.
M
It seemed from the report that a lot of the systems lean more toward the, I don't want to call them simple, algorithmic tools, but are not necessarily machine-learning black-box systems. Hence the use of "algorithmic tools" throughout, with AI established as a subset. As for three, "technical expertise" might not be the right term for what we were trying to get at here. Really, this was making a point about setting up guardrails to protect against automation bias, the idea that people who use automated
M
tools can over-rely on them. It's just, when people are relying on automated decision-making tools, make sure they know that the tool might be flagging things that aren't problems and might be missing things that truly are problems. So it's just providing simple training to guard against those sorts of Type I and Type II errors.
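[Editor's aside: the Type I / Type II framing above can be made concrete with a small sketch. This is illustrative only, not from the meeting or the report; the regulation IDs and sets are invented, and this is just one simple way such error rates might be tallied against a human ground-truth review.]

```python
# Illustrative sketch (assumptions, not an actual agency tool): tallying an
# automated flagging tool's Type I errors (flagged items that are not real
# problems) and Type II errors (real problems the tool missed) against a
# hypothetical human ground-truth review of the same regulations.

def error_rates(flagged, truly_problematic, universe):
    """Return (type_i_rate, type_ii_rate) for a flagging tool."""
    false_pos = flagged - truly_problematic      # Type I: flagged, but fine
    false_neg = truly_problematic - flagged      # Type II: a problem, missed
    negatives = universe - truly_problematic
    type_i = len(false_pos) / len(negatives) if negatives else 0.0
    type_ii = len(false_neg) / len(truly_problematic) if truly_problematic else 0.0
    return type_i, type_ii

# Invented example data: five regulations, the tool flags two,
# careful human review says a different two are actually problematic.
universe = {"reg1", "reg2", "reg3", "reg4", "reg5"}
flagged = {"reg1", "reg2"}
problems = {"reg2", "reg3"}
print(error_rates(flagged, problems, universe))  # (0.333..., 0.5)
```

The "adequate training" point in the draft recommendation amounts to making sure decision makers know both numbers can be nonzero for any such tool.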
H
Jeremy, that leads me to ask you, as a drafter, whether either of these two alternatives might better capture what you're saying. So one is just simply replacing the word "technical" with "analytical" or "analytic." Is that part of what you're going for?
H
But my other question is, and this again captures, I now can't remember whose comment it was from before, but about the "interpreting" versus "making informed decisions": we could also just skip that and say "agencies should," whatever the verb, "encourage that personnel who use algorithmic tools to support retrospective review have," and then skip to line 55.
H
You know, "interpreting," or "continue to interpret," or something like that, if you're not actually requiring them to have any expertise. I couldn't quite tell from what you were saying, Jeremy, whether it was really expertise you were going for, or whether it was just the act of interpreting that you were going for. It's.
H
Oh, so we're taking out "adequate"? I wasn't actually thinking, Kaja, that we were deleting the "adequate training." I more meant, if there's a one-two punch, like: have adequate training, but then also, in addition, have sufficient whatever. And so I was just wondering whether the second thing should be, and retain the, not right, but sort of, like, retain the oversight of, you know, I'm not really sure.
H
Okay, let's note all of these. Kaja, I'm going to call on you; Sean, I'm going to call on you; and then we'll kind of figure out where we are in terms of the words, and then, Jonathan, hello, we'll figure out where we are in terms of the words and the points we want to make. So first, Kaja.
A
Just to double-check and make sure: Catherine's algorithm-versus-AI question. Catherine, to me? Okay, Catherine Allen. No, I just wanted to make sure. I was going to explain an algorithm to be, like, you know, an Excel macro, for example, is an algorithm, but Jeremy basically addressed everything I wanted to. So thank you.
C
I just wanted to say that I think "technical" or "analytical expertise" may not be exactly right. If I'm thinking about the DOT tool and how this would work, with sort of getting us to using that tool as fairly and well as possible, it would be about thinking about the criteria the tool is using, and whether that's the right criteria. These are kind of, like, big questions, perhaps policy questions, that might change depending on leadership.
C
It's not exactly like technical expertise. So if you look at the criteria that the RegData tool uses, in the report, I think it's on page 13.
C
It's looking at a count of words like "shall" and "must," a simple word count, the complexity of the language, and the date of the last update. I think, you know, the first two, oh, well, really all of them except for that last criterion, seem like they may flag more recent regulations, as opposed to old regulations, because we tend to have more.
C
Our regulations grow over time, as we determine appropriate carve-outs and determine, in enforcement cases, like, how much more specific we may have to be, and it's not necessarily clear to me that those are all flags that would lead you to want to review the rule. I think they're flags for potentially burdensome regulations, which maybe do warrant more frequent retrospective review. But they could also just be flagging well-written, loophole-free regulations.
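[Editor's aside: the criteria Sean lists (counts of command words like "shall" and "must," a simple word count, language complexity, and date of last update) can be sketched as plain text metrics. This is an illustrative sketch only, not the actual RegData implementation; the command-word list and the average-sentence-length complexity proxy are assumptions for demonstration.]

```python
import re

# Illustrative RegData-style metrics (assumed, simplified): count command
# words and total words in a regulation's text, plus a crude complexity
# proxy (average sentence length in words).

COMMAND_WORDS = {"shall", "must", "required", "prohibited"}  # assumed list

def simple_metrics(text):
    words = re.findall(r"[a-z']+", text.lower())
    restrictions = sum(words.count(w) for w in COMMAND_WORDS)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    avg_len = len(words) / len(sentences) if sentences else 0
    return {
        "word_count": len(words),
        "restriction_count": restrictions,
        "avg_sentence_length": round(avg_len, 1),
    }

sample = "The permittee shall file annually. Records must be retained."
print(simple_metrics(sample))
# {'word_count': 9, 'restriction_count': 2, 'avg_sentence_length': 4.5}
```

Sean's objection is visible even in a toy like this: every metric except recency scales with how much text a rule contains, so longer, newer, more carefully carved-out rules score "worse" regardless of whether they actually warrant review.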
H
Okay, so from that perspective it was more like the second alternative I was offering, Jeremy, because it's not really that we were talking about expertise. It's that the second point, in line 55, would be "and that personnel," you know, "retain," or, or "that personnel carefully assess the output for further consideration," or something like that. Does something like that better capture what you're talking about?
C
I think so. You know, I think adequate training would be making sure you understand the tool, right, on the technical side, and perhaps this belongs, like, in a different recommendation, but I think you need to have a wisdom check on the tool's output, to make sure it aligns with policy priorities and with the agency's actual expertise in enforcing those regulations. Okay.
H
I appreciate that. I'm going to observe, as a macro point, that I'm doing a lot of work translating your all's excellent comments into potential words, and I also welcome you yourselves, from the get-go, translating your excellent input into potential words here, just in the interest of potential efficiency.
H
Anyway, anything that hasn't been said? Jonathan.
J
Thanks, and thanks to Kathy for this report, and thanks to Eloise and the ACUS staff for hosting this. And also, sorry that I had to join late, because I had a faculty meeting until 1:30, so I might have missed this. But on this recommendation number three, I wanted to make two comments. One is about the word "interpret," which I think is a good word here, but I wanted to just highlight.
J
Maybe you already discussed this, but there is a discussion or debate going on in the academic literature about AI and algorithms, between what's called explainability and what's called interpretability. Maybe you already talked about that. Of course, the word "interpret" is something we use in law all the time, but it's also a word that's now being used for AI and algorithms. For example, my Duke colleagues Cynthia Rudin, who's a computer scientist, and Brandon Garrett, who's...
J
They argue that explainability is a kind of post-hoc conjecture about what a black-box algorithm was really doing, whereas interpretability is a more real-time, or a more accurate, understanding of what the algorithm was doing. So I just offer that, in case the word "interpret" is being used with a more specific meaning here or in other places, but I think it's a good word to use in this recommendation number three. The other comment was about RegData, and, as Sean was just mentioning,
J
you know, there's also a debate in the academic literature about the measures, or metrics, that RegData uses: for example, counting the numbers of mandatory or command words, like "shall" and "must." And Sean mentioned that maybe that overstates more recent rules that have more pages and use those words repetitively, or maybe for other reasons. There's also been criticism that if the word "shall" or "must" is used with respect to what the agency must do, it could be constraining regulation, and yet it's counted as more mandatory regulation. And so there are measurement issues.
J
There's a paper by Joe Aldy at Harvard and some others, so I think we should be careful about that, and, I guess, this recommendation does not explicitly endorse any particular metric. So I just wanted to flag that; there is a debate about that as well. So.
H
Thank you for sharing all that, Jonathan. Are you comfortable, then, with the kind of gestures that we have here? I mean, we haven't actually finalized anything in recommendation three yet, but are these sort of, like, I know you're cautioning us about this stuff. Okay, great? Yes.
I
I'll be very brief. I just wanted to do maybe two things. So first, I entirely agree with what Jonathan just said, and, in a brief way, footnote 73 of the report responds to that. So Cary Coglianese, myself, and others have written about the particular metrics, and it's absolutely right that neither the report, nor any of our recommendations, should be endorsing any particular, you know, methodology, etc.
I
The point is, you know, DOT, to its credit, is extremely transparent about saying what those metrics are, in terms of its experimentation, etc. So the second, just a data point: the report, on page 28, gives a really concrete, I think nice, example. I alluded to this before, but I think it's now what we're talking about, which is that CMS and FDA officials were given the output.
I
So I think that the point that we would want to make here: that seems like a good example of their using their subject matter expertise to do a check on this. And they have to have, maybe, some kind of understanding of the tool, but certainly not, you know... and it would be better in this instance if they had known, you know, what the metrics were that led to the flagging, etc. So I think that that is supported. In other words, I don't know about "personnel"; I mean, I like using "subject matter experts." Agencies themselves talk about SMEs all the time. I don't know how you all feel about that terminology, but okay.
H
So here's my current recommendation for how to handle kind of everything where we are: maybe we keep the "ensure" part, now that we've slightly shifted where we are. So, Kaja, could you do the temporary blackout of "encourage," even though I know I asked you to add the asterisks there a minute ago, and maybe also temporarily X out the stuff after the double bracket? Because maybe I'm going to float whether "assess the output for further consideration" captures everything that we've been talking about, notwithstanding our observation about the word "interpret" that we had a moment ago.
H
So I am going to offer this as the kind of capturing of where we are. Maybe I'm going to do a quick follow-up: Kathy, would "subject matter experts," or however you want to see that phrase, capture both the training point and the assess point? So do we want to just rename "personnel" on line 53, or is it more to the second one, in line 55?
H
What about... I thought the point was that if they are having their own in-house people use the tools, they should ensure that those people have sufficient training, right? I thought your concern from two was that we can't burden small agencies with having sufficient personnel if they don't.
H
Yes, so my question was just: in line 53, we're talking about personnel; we're talking about, if you're using algorithmic tools, you should ensure that they have the adequate training. Whereas I thought you were saying, if we're not making them all have the adequate... I've lost the thread, because I'm
I
thinking, here's a specific example: there were agencies we specifically talked to who were very eager to learn from the DOD's GAMECHANGER, and who said, specifically, wow, if we could take that tool and tailor it to our uses, that would be great, but we're tiny and we're not going to have the people in-house, etc. And then we had agencies, like GSA, saying there's no need for each agency to have that expertise, but.
H
Isn't the point that, if the people who are using them inside the agency, they should have the appropriate training? That's at least the way I'm reading it: they should have the training if you're doing it in-house. You should make sure your people have the appropriate training if you're doing it in-house; if you're not doing it in-house, whatever. So I think "encourage" goes more to what we said in two, whereas once you're doing it in-house, you should "ensure."
A
Okay, sorry, Eloise, I'm just going to jump in real quick and just say that we phrased this so that "personnel" meant, like, the regulatory decision makers. So whoever is making the decision based off of an algorithm has training on the algorithm, so that they don't have, exactly as Jeremy has said, any type of automation bias. I hope that's clarifying.
H
Well, but then it answers my, it answers Kathy's questions. Kathy, can I... okay, one meta comment: can we not use the chat for all of the subject matter stuff that's loading in? I would totally welcome all of this, but I think it's, it's.
H
The chat, as I read in the ACUS thing at the beginning, is supposed to be just for, like, "do you want to raise your hand." So I would be just grateful if, following the ACUS protocol that I was asked to follow, we could save the chat for "hey, Eloise, please call on me." I actually am totally intrigued by everything that everybody is saying in there, so there's no diff on any of the great material people are putting there; it's just sort of an ACUS protocol point.
H
Let's save the chat for "please call on me." I thought that, in response to where we are in recommendation three, Kathy had just suggested the subject matter personnel; she was really concerned about capturing that at line 55, but not at line 53. But, Kaja, now I hear you saying that at line 53 you also meant regulatory sort of decision makers, and not just...
A
Yeah, I don't think this one, this recommendation, we didn't contemplate it as including the technical folks at all. I would imagine that maybe some of those transparency and training issues are probably raised, or could better be raised, in a lower recommendation. Okay.
M
You could say "use information obtained from algorithmic tools"; then it's probably agnostic at that point, right? At a certain point, it's just whether they are themselves using the tool, or they are receiving information from the tool and seeking to interpret it for regulatory purposes. So that might be an easier way to frame it.
H
Are you actually suggesting that we use that phrase here, or are you, like Kaja was, just offering? Kaja, what was the word you used: "regulatory personnel," "regulatory decision making," "regulatory decision makers"? I am agnostic on this point, but we should have something on the screen that captures what it is we're trying to talk about. "Regulatory decision makers" may capture Kathy's point in a more generalized way. So maybe we want to say "regulatory decision makers."
I
Yeah, I don't have a strong view, if there are government folks who want to weigh in. "SMEs" was used consistently by those whom we interviewed when they talked about this interface. But, you know, I think in the report I went through and took out "SME" and made it "subject matter expert," because not every reader knows the abbreviation. So "regulatory decision maker" may capture that. I would defer to our government, you know, members. Okay.
H
So I'm going to now just throw back out there again that we should keep the "ensure": so, "agencies should ensure that regulatory decision makers who use algorithmic tools to support retrospective review have adequate training on the capabilities and risks of those tools," and then delete that "personnel," because we're just talking about them again, and then keep the "and": "and carefully assess the output for further consideration." I think that captures what we've been talking about.
H
I guess, at this point, though, because we've had such a fulsome discussion of the literature and the kind of big conceptual points, at this point I'm really asking for wordsmithing. So if you raise your hand, it would be wonderful if you could contribute whatever your substitute point is through the lens of "in line 53, blah blah blah" input.
H
Okay, I'm scrolling through just to see if, again, it looks like people are trying to unmute or something like that, and I don't see that, and I don't see hands, and I don't see a request to be recognized in the chat. So I'm going to now just say, and then I'm going to wait a minute for somebody to raise a hand, and, desperate, no, no, I'm going to say that I think what we have now at lines 53 through 56, including the red stuff and not including the stuff that's X'ed out in 56 and 57, I think that this is capturing where we are, and so maybe we're good with it. And I knew that if I did that, I would get a hand. So, Sean, I recognize you, in the sincere hope that whatever you say is going to be coupled with a "please use blah-blah word at line whatever."
C
Yeah, no, no, I really like where we're ending up, and I have a really minor point: I'm just kind of not following the structure of the sentence right now. It says "agencies should ensure regulatory decision makers who use these tools have adequate training and carefully assess the output." Is the "and" for the regulatory decision makers or the agencies?
H
It does make sense to me. I meant it for the regulatory decision makers, but I see the confusion; "and that they carefully assess," yeah. I'll tell you that I did not add the "that they," because I was envisioning our friends at the plenary in June, who do all sorts of strikeouts of, you know, clarifying but doubling-up grammatical points.
H
So I was limiting myself based on my experience of being X'ed down at the plenary by tight wordsmiths. But, Sean, I actually hear you, and I almost suggested that to begin with. Does this capture what you were going for? Okay, sorry.
H
Okay, I would have said, and I bet somebody else at the plenary would say, that we need to have an open paren before "A" or "B," but that truly is, like, an ACUS style point, and I will stand down on that. So, like, you do whatever it is that this committee on style is going to do with that. Okay, so, going once... I mean, I'm not going to say it again, sorry.
H
Okay, so, I think, people... or are we good with this for now?
H
Okay. Obviously, this is not the only, only, only time that we will have to do this, but I think it is where we are for now. Okay, so it is 2:26; we've been at this for an hour and a half. Let's take a... but, Kaja and Jeremy, can you just remind me what our default is for the break halfway through? Do we have, like, a five-minute default? Five minutes is fine, yeah? Okay, great?
H
I'd love to get started, but I'm hoping to see some more faces, as there is perhaps little point in getting started if it is only Kaja, Jeremy, and me on the line. Hello, Kathy.
H
Okay, perfect. I think we're in a position to continue. Okay.
H
So, let's start back up at line 57, with recommendation four: "To promote transparency and build internal expertise, agencies should, when developing or selecting an algorithmic tool to support retrospective review, ensure that the source code for the tool is publicly available and interoperable with other government systems."
H
"If agencies use an algorithmic tool that is not open source, they should ensure that key information about the tool's development, operation, and use is available to agency personnel and the public." Okay, the floor is open. Let's see some hands, for suggestions of things to modify or tweak or change entirely.
H
Okay, I was not necessarily expecting that this would be one that was met with great silence, but it is being met with great silence. So perhaps we have some general agreement here, and we should move to line 63, for recommendation five. I guess I was just wondering whether the people who had suggested that maybe there are opportunities for explainability and interpretability in five... but I'm just making sure that people don't want to put anything in four.
I
You know, I'll just ask a question, I think, for four. I mean, the report insists on open source and interoperability, so we might just want to discuss, I understand, I've, I have a long history with ACUS, but we might just want to discuss why not insist on open source and interoperability. And it's not just me insisting: DOD insisted upon it; GSA and CMS insisted upon it. So there's support, you know, in the report for insisting upon it. Kathleen.
H
All right, all right. So you're not... I couldn't tell: you kept using the word "insist," and I couldn't tell whether you were saying that you wanted it as stronger than "ensure," or whether you just wanted to observe that "ensure" means "insist," and make sure everybody's okay with that. Yeah. So this is an important part of the report, so let's just, you know, Sean, let's see where folks are with this. Sean.
C
So if the agency is unable to, or for other reasons does not, select an open-source tool, then they should ensure that the key information about the tool's development, operation, and use is available. That seems to me like an important thing to include, because it may be the case that an agency ultimately selects a tool that's not open source, because it's efficient and already exists on the market, as opposed to building something themselves.
H
Okay, thank you for the clarification. Does "unable" capture the second part of what you were saying as well, or do you think that there's a need to say "unable, or for other reasons"?
H
I think that captures, Sean, that certainly captures, I think, what you were going for, and I'm not seeing really any objections. Kathy, are you okay with the overall "should" language here?
I
I'd be happy, yeah. I just, I mean, I think I heard a little bit from Sean. I personally would think that we would want to discuss why an agency would not use a tool that's
I
I guess I would just, for purposes of our discussion... I mean, the report mentions, at pages 47 to 49, you know, a danger that there could be vendor lock-in. For example, if you use a particular tool with proprietary technology, it will not be one that... for example, you know, DOD touted that they had Booz Allen Hamilton working on this, but it's open source, so they're not locked in with that particular vendor, and it can be used elsewhere, and they're able to share this tool, etc. So I just would, you know, we're making recommendations, so, I guess, I don't know, or I just would like to hear more, because it doesn't come sort of from the report, why we would not just be stopping there. We have the language "they should do this"; obviously, they can decide not to do that. So I'm not, I'm not sure why we would have that second sentence. I just want to raise that.
M
It just sounds like, in this conversation, that there's sort of the usual cost-benefit calculus. There are obviously benefits to open source, like preventing vendor lock-in, but, as Sean said, there might be costs to using it: a superior product might be proprietary. Agencies obviously use proprietary tools all the time, and we're using Zoom right now, which is proprietary software. So there might be good reasons, in certain circumstances,
M
I imagine, from an agency perspective, to use tools that are not necessarily open source. But it sounds like it's really just "consider the costs and the benefits of using open-source software." I don't know how that gets expressed, but it sounds, from the discussion, like that's what we're getting at.
C
Yeah, the reason I think the second sentence is important is just because I think, in fact, some agencies will ultimately not use open source, and in that event they should do as much as they can to provide transparency about how the tool is working, even if they can't produce the source code. Certainly not to diminish the report's findings; I do agree wholeheartedly that open source is better when possible.
H
Thanks for that, Kathy. Jeremy, is your hand up afresh, or is it old? Okay, all right. Okay, I think, then, I'm going to just pause for a minute to see if anybody else wants to weigh in on anything in recommendation four, again with the request to get some language in for our consideration, if you want to make any suggestions, and I will pause.
H
Okay, hearing none, let's move to recommendation five.
H
Okay, so can I first make sure: I see now that we've added, at my request, the first comment in the margin, about whether there are opportunities for explainability or interpretability here. I want to just ask the ACUS staff: what exactly is this referring to? Is it referring to recommendation five, or to something else?
H
That's for number five. That is for number five. Okay, great, just double-checking. Okay, so I'm going to give a minute, then, for people to read number five, but please also look at the two comments, about whether this is a moment to say something about explainability and/or interpretability.
H
Okay, so I think my question is narrower and also comes from less knowledge, so maybe we can deal with that quickly, because, truly, I am not a subject matter expert, an SME, in this area, Kathy. So is there something that needs to be said here, that should be said here, about explainability or interpretability? From the outside, I just threw this in as a potential area, because we were talking about all sorts of other things that agencies should do when they're talking about their reliance on, or use of, these tools.
J
This is good. It seems that if the algorithmic tool is used to develop evidence or justification for a new rulemaking, so, if there's a retrospective review that used an algorithmic tool, and that then becomes the basis for revising the old rule in a new rulemaking, then, you know, the basic principles of administrative law would require an explanation of the reasons for the revision in the rule. We could emphasize that here as well, but maybe number five is trying to focus more on the plans for retrospective reviews and the description of a specific retrospective review, irrespective of whether it's used as the basis for a new rule.
J
So in that case, it seems like this is fine — you know, disclose whether they used, and if so explain how they used, algorithmic tools. But maybe there could be more muscle in this recommendation — you know, to explain how the algorithmic tool influenced the findings of the retrospective review, as compared to not using it, or something else. Kathy may have a better suggestion.
H
So, Jonathan — at first I took you to be saying that if a new rulemaking grows out of what happens in a retrospective review using an algorithm, the new rulemaking should explain and disclose the genesis of it through this project. Is that what you were saying? Because the second thing you said made me think you were not saying that. Yes?
J
I was saying that, I think — okay. But then I was saying: perhaps recommendation five addresses a broader set, a larger set, of retrospective review plans and specific retrospective reviews, not all of which would result in a new rule. Yeah.
H
Okay. So this is then a question — maybe for you, Jonathan, but also maybe for ACUS staff — whether Jonathan's suggestion is already incorporated in some other forward-looking rulemaking, such that we either can, one, refer to it, or, two, don't need to have it here.
H
Do
you
know
what
I
mean
right
like
does
it
do?
We
already
have
some
recommendation
that
says
when
doing
a
new
rulemaking,
if
the
Genesis
of
the
rulemaking
relied
in
part
on
some
algorithmic
tools
that
should
be
disclosed,
because
if
not,
then
what
Jonathan
is
saying
is
actually
like.
It's
sort
of
a
new
and
important
point
in
this
recommendation
and
so
wondering
whether
we
should
either
incorporate
some
other
thing
or
just
flag
it
here,
yeah.
M
It's a great point. I don't think we've addressed it anywhere — I can't imagine where we would have addressed it — so probably worth considering here. Okay.
J
Yeah, either way would be fine. I think it could be a third sentence here in number five, and I liked, you know, your sentence — a short and simple sentence would be good. So something like — I think you said something like "when an algorithmic tool is used as the basis" — I think you might have said "genesis," but I'm not sure. If we —
J
Or maybe we need to say — because this report is about retrospective review — we need to say "when, in retrospective review, using an algorithmic tool..."
I
Yes — this is really just a question to Jonathan, because, I mean, that's the dictate of administrative law if it's the substance or basis, so I wouldn't think we would — I see your point, but I would be cautious: ACUS recommendations don't usually state, you know — like, we could cite cases for that proposition. And I thought that the point of this was to get at, for example: when HHS published their final rule on regulatory cleanup, they didn't disclose at all the use of the tool, because it was non-substantive, etc., and I thought this was encouraging kind of public disclosure of tools being used when they're not the substance or basis of the new rulemaking. But it's more a question — I have no objection to it, but — and maybe this is for the ACUS folks, right — we could cite cases; that's an administrative law proposition.
J
What may be different here is that the algorithmic tool may be used in a way that's not as transparent or not as visible. So I thought of this when looking at number five, because number five refers to Recommendation 2021-2 on periodic retrospective review, which is about statutes and executive orders — and even agencies' own policies — which call on agencies to do retrospective review periodically, such as every five years or so, or every two years, or eight years. And it is frequently, but not always, aimed at revising a past rule.
J
Updating a past rule. So recommendation five refers to that, but then limits itself to how that algorithmic tool was used in the retrospective review. And so I was just thinking: well, maybe it should also point to the potential use in the rule revision. But if Kathy or others think that doesn't really go here, or it's self-evident, then it's not crucial. But, I mean, recommendation number five seems to be a kind of bare minimum: say that you used —
J
If you used an algorithmic tool, say that you did. So I thought we could say a little more. Okay.
A
Yes — I think this is great to include. And, you know, the EPA doesn't contemplate the use of algorithmic tools, and so I think that this does expand it in a way that is a bit of a safeguard in terms of transparency of agency use of these tools. So that's my opinion — I think it's a good addition. Okay.
D
Yeah — hi. Is my —
So I'm really not sure that, you know, recommending to agencies that — oh, by the way, you're putting in your preamble, "well, in the course of our retrospective review of the regulation, we found that this needed to be changed, for whatever reason — and, by the way, we used artificial intelligence to generate this work product" — I just don't see it. I just don't see the point.
H
Thank you for that. So this sounds like you're harkening back to our earlier recommendation about the need for humans to still make the decision — absolutely, yeah — yeah, okay. So, Jonathan — maybe if I could call on you, since it's initially your sentence: do you have a response to Phil's concern here that it's irrelevant?
J
Of course it's a human decision — at least under our current system of government — and so I think that last sentence that we just suggested adding is also about a human decision maker. It's just saying: explain how the human decision maker used the tool, or how the tool contributed to the development of the new rule.
I
Yeah — I think that, the spirit of what Jonathan wants, we should talk more about, in terms of how to beef up what we're suggesting, you know, agencies include in their plans, etc. But by using language — and I don't think Jonathan disagrees — of "the basis for a new rulemaking," that is a legal administrative law determination.
I
I've talked to numerous agency folks who have differing views as to, you know, if it's used to target, is it the basis or not, etc. I don't know — I don't think we're doing what we think we might be doing with that second sentence. So it might be more productive to take out something that will ring like — everyone knows that; any empirical information, right — before AI, the EPA was using very sophisticated logistic regression methodology, etc., and there are sometimes where that's the basis for rulemaking, and it's all disclosed, and there's administrative law about that.
I
So I think maybe we should talk about, like, substantively, what more we want agencies to do, other than include information, and move away from making it sound — you know, I'm just not sure what that's — you know, I understand, as I said, the spirit of what you're trying to do, but I don't think it's the right way to go with that. James.
B
Hi — I just want to round out this discussion a little bit. I think it's way beyond the scope of what can go into paragraph five, let alone the rest of the document, but I think we should bear in mind an important distinction between right-hand-side and left-hand-side uses of mathematical tools. And — this is really oversimplifying — but on the right-hand side or left-hand side, you're really —
B
So when we say "how does the tool advance this goal," I wonder if there's a way we can distinguish, helpfully, between predictions — forecasts, right, what we think the tool is going to tell us about the future — on one hand, versus contributions of any mathematical tool to coming up with the etiology, the reason something happened — coming up with an explanation. And it might be helpful to craft even a split — you know, just a little phrase that's connected by an "and" or an "or" in the middle — that distinguishes between predictions and explanations.

H
James —
B
I'm indifferent to whether — I mean, five is long, yeah — and whether this is 5(a), and then it becomes six and you roll everything down. But I do think that there is a useful distinction between mathematical tools — algorithms, models, methods, whatever you want to call them, right; they're slightly different — but there is a meaningful distinction between the predictions of a tool, on one hand, on the left-hand side, and the explanations or interpretations that the tool may generate, on the right-hand side.
H
Do you think I could ask you to take a stab at writing a sentence that sort of says —
B
And while we're talking — do you want me to — how am I going to do it, besides putting it in the —
H
Chat — yeah, yeah, yeah. No, I know — I was just thinking: can you think about it for a second and, yeah, put it in the chat? Maybe — I'm going to accept that as a suggestion in the chat. I'll —
H
— just so that we can be looking at something specific, rather than just general. And while he's working on that — Kathy, do you have a view on whether the thing he's working on is part of five or not? It seems to me like it's maybe something different, but, again, I am not an SME here.
H
Have
to
see
it
okay,
great
Jonathan,
can
I
go
back
to
you
with
the
wordsmithing
question
on
the
on
the
suggestion
in
the
third
sentence
here.
Do
we
really
mean
that
agencies
should
explain
how
the
tool
contributed
to
the
development
of
or
how
the
tool
contributed
to
the
decision
to
develop
the
new
rule?
I
feel,
like
you
mean
the
decision
I
feel
like
we
mean
the
decision
based
on
our
our
conversation
right,
because
the
the
stuff
that
we're
talking
about
in
this
paragraph
is
only
about
identifying
the
rules.
H
It's not kind of deciding what to do with them; it's not, like, the substance of the rules. "Developing the new rules" sounds like we're talking about the substance of what should be in the new rule, and I think I heard you really talking about: we should just be disclosing that we relied in part on algorithmic tools to figure out that this would be something that we wanted to revisit. Does that better capture what you meant, or have I missed it?
J
That's fine, and I think that your suggestion refers more to the use of the tool in the retrospective review — so, how that spurred the new rulemaking, how it contributed to the decision to develop the new rule. That would be fine. But I also want to go back to Kathy's comment earlier, if I understood — that we should sort of discuss this more, and separately. So maybe that means this sentence should be separated out from number five into a separate paragraph, so it can really be critiqued and —
H
We do have another committee meeting in April. Can I actually also put one other thing on the table? I was going to ask the ACUS staff about this, which is: we do sometimes say "consistent with" blah blah blah — you know, some doctrine or whatever — and then we do a citation. Do we not do that, I mean? Is there some —
H
Is there some reason to? And the reason why I'm thinking that, I suppose, is that on the Implementation Council there's been a lot of discussion about who actually is the recipient of the recommendations, and who are all the people that we need to be seeing as our audience in agencies. And, I guess, my personal response to Kathy's concern about this being, like, a basic principle of administrative law —
H
Is
that
that
not
that
may
not
be
so
basic
and
obvious
to
people
who
are
not
administrative
lawyers,
but
who
nonetheless
are
part
of
who
we're
targeting
with
these
recommendations?
So
I
guess
that's.
My
kind
of
reaction
is
that,
even
if
it
is,
you
know,
kind
of
a
core
principle
that
we
all
get.
That
still
may
not
mean
that
we
shouldn't
have
it
here,
one
and
then
two.
If
we
do
have
it
here
Kathy,
would
you
be
more
comfortable
if
there
were
some,
of
course
or
as
consistent
with
or
with
some
footnote,
so
Kathy.
I
Yeah. So I find myself in the awkward situation of — for something like this, I would cite a law review article that I wrote in Belmont, "AI for Retrospective Review," pages 405 to 406, where I specifically talk about, you know, that the APA's notice-and-comment mandate has been interpreted to require that agencies make publicly available critical information underlying proposed rules. And I argue that we should go further, especially in light of "human in the loop" gaining kind of talismanic significance.
What's new about the AI technology in this space is that there are some people who draw bright-line rules between supportive AI technology and determinative, and I'm of the view that there's not such a bright line — and I'm all for subjecting it to more disclosure, and also notice and comment. But — so, at a minimum —
I
— I would separate it out, and then we should just know that we're entering — I know Jonathan knows — we're entering kind of a debate about how much further, beyond just existing —
I
You know, that statement basically just states, like, an existing administrative law legal proposition. But what it means as applied — with AI technologies, where you cross the line: where has the technology been used only to, you know, spot or identify, and where has it been used, really, to, you know, flag for overruling only by humans? Those are interesting things we could discuss.
H
Also — okay. All right. So my thought here, then, is that it seems like we have nothing else to add on five as is, because we were sort of moving beyond it, and then I think we've had a robust discussion of what is now six. And so my instinct would be, just as Jonathan said, to come back to it in the second meeting. Do I hear any concerns with this plan?
C
Thank you — and I agree with all the prior points: that this is completely consistent with the APA's requirements. It's also, you know — the need for the rulemaking is spelled out as being required in Executive Order 12866, right?
H
Okay, okay. So — ACUS folks, Kaja and/or Jeremy — we now have James's proposed sentence. Is there some sort of order of operations as between moving through the recommendations that are already here versus adding? Okay — if there isn't, because I see you shaking your head no — so then could I ask you to cut and paste James's sentence, just with — you know, just this thing in quotation marks — as maybe a recommendation?
H
Okay — I think there's a typo in "predictions," but thank you for that. So, I guess, let's talk about this substantively first, and then let's see if it fits here, if people want to go forward with it. So thank you, James, for crafting it. Can I hear some reactions to the substance of it?
I
So could I just understand — maybe, James, you could say some more about the purpose and the need? I mean, I just —
B
You know — sure. I just went ahead and tagged in the chat a couple of references. These are very standard, long-standing law review references — Finkelstein and Fisher — and the idea is: let's just use the example of a traditional linear regression — a multiple regression with, let's say, three or four different predictive variables. Nothing elaborate.
B
The predictive value of such a traditional regression model is to come up with fitted values — the "y-hat" on the left-hand side of the equation. And the explanatory value, or interpretive value — sometimes you hear it inferred; "causal inference" is a term used in connection with these regression models.
B
Causal inference, or effect size, would be inferred from the coefficients in front of each variable on the right-hand side, and it has been understood for half a century, at least, that there's a meaningful difference between these two uses of mathematical models in all lawmaking.
B
We think that the price of tin or copper should be this, and if it's not, this is evidence of manipulation, or price fixing, or predatory pricing, or something. But many uses in law aren't necessarily trying to find the prediction; it's trying to say — well, a classic example is: to what extent do we think that these differences in employment outcomes are attributable to sex discrimination, race discrimination, or some combination of the two? And that's where you would use the right-hand side, the explanatory component. So I'm packing a lot —
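The distinction James is drawing — fitted values as predictions, coefficients as explanations — can be sketched in a few lines. This is illustrative only: the data and variable names below are invented for the example, not drawn from the meeting.

```python
import numpy as np

# Synthetic data standing in for an outcome (e.g., a price) and two
# hypothetical explanatory variables.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)  # hypothetical factor 1
x2 = rng.normal(size=100)  # hypothetical factor 2
y = 2.0 * x1 - 1.0 * x2 + rng.normal(scale=0.1, size=100)

# Fit an ordinary least-squares regression: y ~ b0 + b1*x1 + b2*x2.
X = np.column_stack([np.ones_like(x1), x1, x2])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Left-hand side" use: the fitted values (y-hat) are the model's predictions.
y_hat = X @ coefs

# "Right-hand side" use: the coefficients are the explanatory component --
# how much each factor is estimated to contribute to the outcome.
print("predictions (first 3):", y_hat[:3])
print("estimated effect sizes:", coefs[1:])
```

The same fitted model serves both uses: `y_hat` answers "what value do we expect?", while the coefficients answer "how much did each factor contribute?" — the half-century-old distinction the discussion turns on.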
H
Okay. Following on that explanation, I'm wondering how that fits in. I understand all the examples you gave, and I'm wondering how that fits into — this is a narrow recommendation on retrospective rulemaking, and so the examples you offered weren't about identifying rules for retrospective review. And I guess I'm wondering if there are some applications in the particular con—
H
This is not, you know — it's not a general algorithmic-tool recommendation; it's a really specific one. And if there are some examples on retrospective review, could we consider having some parentheticals with examples for how, on the predictive side, and how, on the interpretation side? Sure.
B
I mean, you know, this is probably better directed to people who may have been involved in it — I just can't remember anything in 2021-2.
B
But examples would be, as I said: if you have anything that purports to give "this is what we thought a particular number would be" — whether it's pollution, it's a job outcome, it's a price, right — on one hand; versus a factor affecting one of those outcomes, such as "we think that, you know, stationary versus non-stationary sources contributed this much," right?
I
Happy to defer to Jonathan — I was just about to say that James's points are well taken. I have several degrees where I did lots of sophisticated multivariate regression analysis, but the specific technical details — I don't know, James, if you've seen: we have this appendix, too, that talks about the technical details of these AI-enabled tools, and it doesn't map onto that readily. So —
J
Okay, thank you. So I'm not sure that 2021-2, on periodic retrospective reviews, spoke to this. That was mainly a study of where agencies are instructed or guided by statute, executive order, or their own policies to review their policies every so often — sometimes at specific interval time periods — and whether they do or don't, and how they could do better. So we didn't really talk about the analytic methodology of, like, multiple regression analysis in that report.
J
That said, you know, I think the points that James is making here are very well taken. Maybe they go to — it might have been a prior recommendation, or maybe it's a subsequent one — about transparency in the use of machine learning, for example. We could say, more generally, algorithmic tools — and, as James says in the chat, it goes back to regression analysis, before AI.
J
So maybe it goes — I guess it's not the open-source-code paragraph, but do we have another paragraph about transparency in reporting what these algorithmic tools are finding? Maybe that's where it goes. But — okay.
H
So,
let's
before
we
think
about
where
to
move
it,
I
just
want
to
make
sure
that
I'm
understanding
the
nub
of
it
and
I
am
the
person
who's
going
to
have
to
explain
at
the
plenary
how
this
connects
to
the
recommendation
and
I
don't
understand
it.
So
I
think
I
need
in
order
to
put
myself
in
the
position
of
being
able
to
explain
to
the
council
and
the
and
the
conference.
I
need
parentheticals.
H
That
would
say,
whenever
agencies
use
algorithmic
tools
in
retrospect
to
to
develop
the
or
to
assess
potential
candidates
for
retrospective
review
or
whatever
the
right
phrase
is.
They
should
distinguish
between
the
predictions
of
such
tools.
Parents
such
as
comma,
in
this
context,
blah
blah
blah
and
any
contributions
by
those
tools
to
the
interpretation
or
explanation
of
those
predictions.
Parens
such
as
in
this
context
blah
blah
blah,
because
otherwise,
James
I
really
really
appreciate
your
effort
to
explain
to
me
not
technically
trained
in
this
world
in
this
field.
B
You may — oh, I went ahead and lowered my hand, thanks. So — I think we got misled a little bit. You wanted me to peg it to paragraph five, and the whole point of putting it outside paragraph five is, I think this is a more general problem in all algorithmic tools. As Jonathan says, this is going all the way back to pre-AI; this is foundational to all mathematical tools. I know you don't have that, and so saying "y-hat" or "coefficients" may not help.
B
I guess that's more in retrospection — were you trying to — all right. So let me just — and, look, let me preface all this by saying: if this is not appropriate to this section or this recommendation, I am more than happy to punt, and we can figure out some other way to work on this kind of thing on AI. This is not an issue that's going to go away.
B
No
one
recommendation
is
going
to
exhaust
aqueous
work
on
this
and
we'll
do
it
tomorrow
in
a
future,
in
a
future
future
recommendation
all
I'm,
suggesting
that
even
if
you're
looking
retrospectively
the
accuracy
of
the
predictions
that
were
generated
is
a
distinct
idea
that
needs
to
be
evaluated
apart
from
the
inputs
into
your
decision-making
process
and
I
think
that's
foundational
to
all
of
law.
H
Another time. Let me thank you so much for that further explanation, James — that's really helpful. Kathy, let me ask you to weigh in on this here. Is this something that you think we should have in this — this kind of thing, taking its value in general, you know, as your input before — is this the kind of recommendation that we should have in this particular project, or does this seem like a more general point that belongs better in a different project?
I
Yeah — I mean, with respect, James, my inclination is that it would belong in a different project. You know, I fully agree that many people do not understand the difference between making predictions — especially based on pattern-recognition technology — and making causal inferences, and I've done some of my own work about — work, for example, that the FDA is doing in decision making, where misunderstanding that is really critical to understanding the output. I don't think it has the kind of resonance for these particular uses.
H
So,
thank
you
for
that
Kathy.
So
my
inclination
is
to
ask
Kaja.
Could
you
please
cut
and
paste
James's
very
helpful
sentence
into
some
acous
ether,
where
it
cannot
be
lost
for
some
future
project,
for
which
it
would
be
relevant
and
I'll
just
sort
of
yeah
I
see
a
thumbs
up
and
James's
observation
is,
of
course,
completely
right
that
this
is
not
an
issue,
that's
going
away,
and
so
we
don't
want
to
lose
this.
H
We don't want to lose this thought, even if the sort of sense of the Zoom room is, maybe, that it belongs in a different recommendation rather than this one. So — okay. So thank you, Kaja, for keeping that. Okay, so I think that leads us to be at line 75, with "agencies should maintain their regulations in a format that facilitates the effective use of algorithmic tools in retrospective review, for example, by including relevant metadata."
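As a rough sketch of what "including relevant metadata" could enable — the field names, citations, and dates below are hypothetical, invented for illustration, not drawn from any agency's actual repository:

```python
# Each regulation carries explicit metadata fields that an algorithmic
# tool can filter on directly, instead of parsing free text.
regulations = [
    {"cite": "Part 10.1", "topic": "reporting", "last_reviewed": 2012,
     "affected_entities": ["hospitals"]},
    {"cite": "Part 10.2", "topic": "reporting", "last_reviewed": 2021,
     "affected_entities": ["insurers"]},
    {"cite": "Part 12.4", "topic": "licensing", "last_reviewed": 2009,
     "affected_entities": ["hospitals"]},
]

# With metadata in place, flagging candidates for retrospective review
# (here, anything not reviewed since 2015) is a one-line query.
candidates = [r["cite"] for r in regulations if r["last_reviewed"] < 2015]
print(candidates)  # ['Part 10.1', 'Part 12.4']
```

Without such fields, the same question would require text-mining the regulations themselves — the kind of NLP step the structured-format discussion below is about avoiding.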
H
But — okay, I'll put — I'll — thank — did somebody just raise a hand? Sean, your camera just came on. Is that, like, indicative of something? Yeah.
C
Yeah — I guess I'm a little confused. So are we talking about — is this the current number seven, "agencies should maintain the regulations in a format..."?
C
So I guess I'm a little — like, generally, the regulations are maintained in the Code of Federal Regulations, which is available online. It's pretty easy for tools to scrape the information there. We sometimes put out compliance guides that summarize regulations, but we don't tend to reprint our regulations or provide them on the web — we'd rather just point to the eCFR.
I
Yeah, okay — we can hear from Kaja and Jeremy, but I took this to be coming from the recommendation — the report, around page 50 — and actually, Sean, it referred to two aspects: that agencies should consider a more structured form to their rulemaking. So DOT's existing, more structured form to its rules — which we discussed before — obviated the need for certain sorts of sophisticated natural-language-processing tools to take an unstructured form and structure it.
I
And then the metadata point came from what CMS talked about: in their existing repository of regulations, being able to add important features that are sort of agency-specific. So I agree with you — the way this is phrased, I'm not sure; it's not really about maintaining. I think it should be about "should consider" — they can consider a more structured format for their rulemaking, and then, for their —
C
Yes — so, to me, I guess that's more about the readability of the regulations. When I looked at this recommendation, I thought we were talking about who —
A
Yeah — you're talking about "should consider drafting" — yes, yes. Yeah, I'll speak at least from — Jeremy might have something to add, but from my personal interpretation: I think I didn't really understand what "structured format" would mean, and I think Sean's point was well taken. To me, at least, that had to do with readability, and I don't associate, like, metadata with readability — I associate that with, like, searchability of rules by algorithmic tools. So I think that was our intention behind the phrasing of this.
I
I think it's a good recommendation — I think that they should consider drafting regulations in a more structured format. I guess there are two separate points: one point was the DOT example — their structured format obviated the need for one stage of the algorithmic-enabled tooling — and then there's the separate issue about the relevant metadata, which can be very helpful for leveraging the tools, which really comes —
I
And that would be for more — I mean, you would want to do that — again, that comes out of, like, the CMS recommendations. They have an existing body of "meta regs," and they talked about how it can't really leverage AI capabilities, because it doesn't have a knowledge-graph structure — metadata features that could enable those capabilities. So it's kind of like the first point: if you structure your rulemaking, you don't need to do a complicated process of having NLP structure your rulemaking; and the second is: if you're going to take advantage of these AI-enabled tools —
A
Kaja — yeah. I think I'm just confused as to what the bridge is between structured format and how that improves an algorithm's efficiency. Like, reorganizing something in a different way, to me, does not make an algorithm more or less likely to interpret something, whereas metadata makes it more likely that it will incorporate it into a search result, for example. So I'm having trouble just kind of bridging the gap here.
I
I think we're agreeing — there are two separate points. The structured rulemaking is like the DOT example: why didn't they have to use, as some other agencies have — like, let's take DOD and Game Changer, which has mountains of policy documents that are internally inconsistent, conflicting, etc. If those were in a very structured format, it would just obviate the need — not for using these tools entirely, but it would obviate the need to use natural-language-processing tools to map topics and subtopics using clustering algorithms.
I
Because it's sort of about: if our overarching goal, right, is to enable agencies to do retrospective review in the most efficient, cost-effective, and robust manner, there are different paths toward that. And so — you know, there are different models of how — and the less structured your rulemaking is, the more you're going to have to think about taking this whole repository and using tools to organize it. Or, to say, too — it's not like agencies are going to become something else.
I
CMS talked to us at length in some of the interviews about how some of what the pilot showed them was actually more about how, in the future, when they go about their rulemakings, they should follow a more structured format — just seeing, you know. So it is related, but I agree with you: those are two separate points. So — speaking of the metadata —
M
Just for clarity — because I imagine this will come up with the Council or with the Assembly: what does it mean to promulgate rules in a structured format? I mean, obviously, rules look a certain way — we all know what the CFR looks like — but what do you mean about structuring rules in a particular format?
I
So it gets a little bit into the weeds, right? The upshot of the report shows two very different approaches. The GSA–CMS pilot was using KRR, where they're basically trying to encode the regulations such that there's one explicit interpretation of the regulations — so they're making regulations into a very explicit structured form that has actors and duties and consequences, etc.
I
The other kind of way of thinking about it is: you take whatever exists in this totally unstructured format and try to use technology, like NLP, to structure it, and then, on top of that, you're going to do your analytic analysis. So really, it's just — I don't know if it's about — maybe it's just more that agencies should be aware that the effectiveness or ability of —
I
I don't know — I'd have to think about it, because it's more that there's an interrelationship between the types of technologies that you're going to need and this. And there's a way in which — what you could do, just to get back to your basic question: DOT rules, if you read them, are much more structured. But do you mean, like, with a, b, c — little i, little b, little — what's the topic?
I
What
are
so
if,
in
other
agencies
like
read
the
you
know,
the
report
about
Dot
Game
Changer
is
about
trying
to
figure
out
ways
from
multiple
sub-agencies
who
each
have
promulgated
multiple
guidances,
which
ones
affect
which
parties
that's
very
difficult
to
figure
out
without
the
use
of
the
game,
changer
technology,
but
if
the
rules
were
structured
in
a
way
that
it's
very
clear,
anytime,
you're,
promulgating,
a
rule
that
affects
this
entity.
You
know
this.
It's
it's
done
in
that
topical
way.
I
— you then don't have to use these kinds of techniques to do that at the beginning. So —
I
Let's just step back, right: retrospective review — part of what we're talking about in this whole thing is the use of technology to identify overlap, redundancies, etc. So your raw material — where you start — is going to determine how much effort and how much technology you need to get to that endpoint.
H
What is — can we think of a synonym for "more structured"? Is it "clear," "clarity," or — what? Can you define it? Can you come up with a synonym for "more structured"? Because, I mean, what we're really saying is: agencies should be aware, when they are drafting regulations, that drafting regulations that do blah will facilitate the subsequent ability of algorithmic tools to locate them for potential retrospective rulemaking. But I —
H
So
Kathy
I
was
asking
you:
if
you
had
a
synonym,
do
you
have
a
synonym
I
mean
I,
see
James's
hand
went
up
also
I'm
happy
to
call
on
him.
If
you
prefer.
I
I don't have a synonym, but I agree with the spirit — the way that you just — yeah.

E
Hi — yeah. So I think that the point this is getting at is that you need to be able to encode the regulation in a machine-readable format, so a machine can do some sort of logical steps — formal-logic propositions — on it.
E
So the example that we used in the report a lot was this format in KRR (knowledge representation and reasoning) called eFLINT, which effectively means that an agency is going to take its regulation and encode it into this one structured format that says something like: this regulated entity has a duty X, and this other entity has a duty Y, and the collection of agency duties is this thing. So when we say structured format, the point that we're trying to make is just: think about the way that a machine is going to understand this.
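The duty-based encoding described here can be sketched at toy scale. What follows is not actual eFLINT syntax; it is a minimal Python illustration of the same idea, with hypothetical entity names and duties:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Duty:
    """One normative proposition: the holder owes a duty to the claimant."""
    holder: str    # the regulated entity bound by the duty
    claimant: str  # the entity the duty is owed to
    action: str    # what the holder must do

# Hypothetical encoding of two provisions from a single (invented) rule.
rule = [
    Duty(holder="lender", claimant="borrower",
         action="disclose annual percentage rate"),
    Duty(holder="lender", claimant="agency",
         action="retain records for five years"),
]

def duties_of(duties, holder):
    """Machine-checkable query: all duties bound to one entity."""
    return [d for d in duties if d.holder == holder]
```

The point of the sketch is that once provisions are propositions rather than prose, a query like `duties_of` is trivial; asked of free text, the same question would require natural language processing first.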
H
Going for a little bit of a higher level of generality here, because what I heard from Cade was a little broader than the structured-format point. It was more like: just think downstream about how you may want to be involving algorithmic tools here. So, you can put your hands back up again in a second if you want to, but James, you were patient when I called on other people, so James, let's just see if you want to add something.
B
Yes, okay. So I suggest (and Cade, I assume you speak very literally about specific computer technology) that this is with the backdrop of saying that "structured" and "unstructured," as Kathy recognized, have a very particular term-of-art feel inside data science and computer science, and I'm trying to reduce this to a phrase that is ACUS-friendly and still digestible by
B
by people without computer science degrees. I think "digitized" and "vectorized" have meaning, especially in natural language processing, but the biggest, most general definition is to say it's machine-readable, okay, whether it's been optically converted or otherwise; there are any number of different ways of doing this. But if it is not in a machine-readable format, that's just this enormously difficult thing that makes the process a lot harder.
B
I think all we're trying to communicate here is that data in a format that machines can easily extract or harvest or scrape is different from data that is utterly unstructured and has to be whipped into shape before people can do their magic, and I hope that this short phrase captures what Cade has explained and what I think Kathy ultimately intends.
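The structured/unstructured contrast being discussed can be made concrete. A minimal sketch with an invented provision; the field names are illustrative, not any agency's actual schema:

```python
import json

# Unstructured: to a machine this is only a string of characters.
free_text = ("No later than 90 days after closing, the servicer "
             "shall notify the borrower of any transfer of servicing.")

# Structured: the same content, machine-readable without any NLP.
structured = {
    "actor": "servicer",
    "obligation": "notify borrower of transfer of servicing",
    "deadline_days": 90,
}

# A program can extract the deadline directly from the structured form...
deadline = structured["deadline_days"]

# ...whereas the free text must be parsed before that is possible.
serialized = json.dumps(structured)  # ready for exchange between systems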
M
I just want to make sure that it's clear to a lay agency official reading this that they know how to actually implement this recommendation. By the very act of being published in the Federal Register, aren't regulations already machine-readable, available in XML format? Is there something else that we're asking agencies to do to structure their data to facilitate retrospective review, on top of the mere act of publication in the Federal Register?
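On the question of whether Federal Register XML already gets agencies there: published XML can indeed be traversed mechanically, though the prose inside it remains unstructured. A minimal sketch using Python's standard library against a made-up fragment (the tag names are illustrative and do not match the actual Federal Register or CFR schemas):

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment loosely imitating regulation XML.
doc = """
<PART number="1026">
  <SECTION number="1026.19">
    <SUBJECT>Certain mortgage and variable-rate transactions</SUBJECT>
    <P>The creditor shall deliver the disclosures...</P>
  </SECTION>
  <SECTION number="1026.20">
    <SUBJECT>Disclosure requirements regarding post-consummation events</SUBJECT>
    <P>Upon refinancing, the creditor shall...</P>
  </SECTION>
</PART>
"""

root = ET.fromstring(doc)

# The XML gives document structure (parts, sections, headings) for free,
# but the normative content inside each <P> is still free-text prose.
sections = {
    s.get("number"): s.findtext("SUBJECT")
    for s in root.iter("SECTION")
}
```

This is roughly the distinction at issue: XML supplies structure *about* the document, while the obligations themselves remain unstructured text.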
M
I guess, for the room: I wish we had our FR person here, but as I understand it, the FR is certainly machine-readable, okay.
H
All right, so let's leave that question floating out there. I want to observe that we still don't know; we have a phrase now from James, but we don't have a sentence that we're embedding the phrase in, right? I was going up to this higher level of generality, just like: when you're drafting, think about retrospective review and algorithmic tools later. So that's the kind of spirit that we're heading towards, but we're still working on bringing it together. So, Dave, I'll recognize you.
F
Thanks. I've been listening to this conversation, and I keep coming back to whether or not we're getting way too far ahead of our skis for the level of expertise of anyone who reads this recommendation.
F
Twenty years ago, I was on T-Rex, which was, you know, building an XML platform for the Federal Register and the CFR; we were constantly going through this. I wonder whether what we really want to say is that the government as a whole, and its representatives in GPO and OFR, really should just keep their eyes on where this goes and keep operating a system that is cognizant of the available technologies.
F
But at the end of the day, I don't know that we could even advise any rule writers of what they could or should know in order to follow this recommendation, and anything that they, as laypersons, would do in a rule today would be obsolete by the time the rule was final. So I'm actually questioning the need for the recommendation at all.
I
We could; I don't feel strongly that we need to have this recommendation.
I
I disagree, though; I take the point, although I see how it was written. I'd missed this before: "maintaining regulations." This is not about the CFR. You know, let's put it this way; I'll be very specific.
I
Reg Explorer, from Deloitte, begins by scraping the CFR; that's the beginning point, and then it develops all sorts of different functionalities that are AI-enabled, many of which are very difficult to do because regulations have never been designed with clear thinking about how AI-assistive technology could help this labor-intensive human process. Maybe they should have been, and some agencies have done that. So that's the point; if we don't want to go beyond that, you know, it's an interesting question.
I
So it's just a different path, and then Cade's point is more about the road not taken. Right now, most agencies that are using AI-enabled technologies, DOD and HHS among them, are using natural language processing to come up with very sophisticated ways to figure out, from unstructured text, what their topics are, etc.
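The natural-language-processing route mentioned here, deriving topics from unstructured text, can be illustrated at toy scale. A crude sketch using only term frequencies; the rule text is invented, and real agency tools would use far more sophisticated NLP models:

```python
import re
from collections import Counter

STOPWORDS = {"the", "of", "a", "to", "and", "shall", "any", "in", "or", "be"}

def top_terms(text, n=3):
    """Crude topic signal: the most frequent non-stopword terms."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return [term for term, _ in counts.most_common(n)]

# Hypothetical regulatory prose, i.e. unstructured input.
rule_text = (
    "The servicer shall notify the borrower of any transfer of "
    "servicing. The servicer shall retain records of each transfer. "
    "A borrower may dispute a transfer in writing."
)
```

Calling `top_terms(rule_text)` surfaces "transfer," "servicer," and "borrower," a rough machine-derived hint of the rule's topic; this is the work agencies must do when the structure was never encoded up front.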
I
All of this is to say we probably want to be agnostic; we're not choosing which road should be taken. So I think I'd sooner take this recommendation out than try to parse it to make some kind of affirmative recommendation to agencies, now that I understand what the drafters thought they were doing. Okay.
K
Choose among these strategies, since there are some very interesting strategies that you've described.
F
I don't know at this point who that would be or how that would work.
H
Thanks, Dave. So I'm going to just note that we have a recommendation to OMB and a recommendation to GSA coming up below, which, you know, we're not going to have time to really discuss now. But Kathy, is either of those entities an entity to do the kind of "hey, agencies, when you're drafting, just think about this downstream stuff," along the lines of what Dave was suggesting in terms of best practices? Is that something that one of these entities could be recommended to do, or is there some...
F
Not to correct, but to provide some input on that: the recommendation with regard to OMB is directed at two different offices within OMB, so OFPP would be issuing acquisition guidance, which I was actually going to say
F
I don't know is necessary, given that the FAR is pretty complex in and of itself and it is a contracting exercise; OIRA, perhaps, on the use of algorithmic tools. But the group over at GSA would seem to be the place to have that kind of more detailed, technology-based discussion about the options available to agencies in terms of tools and techniques, rather than necessarily OIRA.
H
Okay, thanks for that clarification. Kathy, is that a view that you share, or do you have some other reaction to the kind of broader question I put on the table about who's the right entity to have this directed to?
I
You know, I think that if we wanted to keep something for purposes of discussion, I'm not sure it falls into any of those entities. I think it would be a more general point: agencies that are considering the use of AI-enabled technology, both the natural-language-processing ones and these KRR ones, you know, should consider structuring their rules
I
in a way that would be conducive to the use of those tools. But I also take the point that we don't really want to be recommending to agencies that they draft rules for machines. But I do think it was a very strong point that surfaced throughout: if agencies are inclined, because they identify a problem in retrospective review, specifically how human- and labor-intensive it is, and they want to use these tools, it behooves them to give some thought to this, namely the way in which they write.
H
Okay, so I think my inclination, then, is, and I'm looking to see if people want to raise their hands and weigh in on this, and I don't see any as I am looking around, but please know I am specifically welcoming your thoughts here, but I guess my instinct at this point would be that we delete the sentence with all the red that we've been working on; sorry, the "when agencies draft and maintain their regulations, they should" one.
H
And I guess my only slight concern with this more general sentence is whether we are missing a "for example," and I hesitate to say that, because I know we just got into a whole sticky thing about what we mean by metadata, what we mean by structure, what we mean by machine-readable. So I don't really know what to propose there, but I'm just observing: would a sentence that used all three of those things be out of line?
H
Ensuring that rules contain relevant metadata, comma; ensuring that rules are written in a structure that avoids the need for natural language processing, blah blah blah. I mean, this is not my area at all, so I am merely parroting back nouns that you all have been using. Comma, and there was one more word, I think, from James: we were with "machine-readable."
B
There's a long-standing ACUS tradition of the plenary-wide committee on style. There could be a committee-level subcommittee on style where, in principle, you want to give some examples, and I think the drafters, and Kathy in particular, should have the first crack at coming up with three or four generalized examples without getting into the weeds of XML. Okay.
H
So this does not sound like committee-on-style work to me, but it does sound very much like we could ask Jeremy and Kaja and Kathy, between now and the next committee meeting, whether we want to give a couple of specific examples in the second sentence, and if in fact we do, what those examples would be that are generally understandable. But it's not committee-of-style material yet, because it's still specific.
H
If we need to have a third meeting, I know that's always sort of part of what's in the committee's purview, if that is in fact required; though, obviously, ideally it's great if we could wrap this up at the second meeting. So, Kaja or Jeremy, is there anything I'm forgetting to add in this final farewell? No? Okay, great. Well then, let me just end by thanking everybody who's here, thanking Kathy, thanking the ACUS team, and thanking all of you for your great participation.