From YouTube: Duped by Bots: Why Some are Better Than Others at Detecting Fake Social Media Personas
Description
November 4th 2021
Presenter: Ryan Kenny
Ryan Kenny is a Lieutenant Colonel in the United States Army, serving in the Signal Corps. He has served in the 82nd Airborne Division, the Special Operations community, and the 516th Signal Brigade, and has three combat tours in Afghanistan. He received a BA in cognitive psychology from the University of Notre Dame in 2003 and an MA in national security and strategic studies from the U.S. Naval War College, Newport, RI, in 2015, and is pursuing his PhD in Engineering and Public Policy at Carnegie Mellon University, Pittsburgh, PA. His research interests include human-machine systems, artificial intelligence, and behavioral decision making.
A: All right. Well, you know, first I just want to start by thanking the working group for affording me the opportunity to present some of my findings and have a discussion around this topic.
I'll tell you, in a couple of seconds, a little bit about myself and my background. I'm a lieutenant colonel in the Army, branched in the Signal Corps, so most of my career has been spent delivering IT services, from tactical environments to sort of enterprise solutions.
A: Over the last few years I've been thinking a lot about the development of artificial intelligence and applications of machine learning as they pertain to national security concerns, and I was afforded this opportunity, through the Advanced Strategic Planning and Policy Program, to pursue my PhD in a topic that I could then bring back to sort of my day job when I return out of the academic setting. That led me to Carnegie Mellon, one, for its noted work in machine learning, but also for the Engineering and Public Policy program.
A: We sort of think deeply about what all this means in the context of supporting public policy decisions. And the other reason why I was sent out on this program wasn't just to gain some knowledge in the area and sort of hone some skills; it was also to build relationships and partner with folks that are engaged in that sort of public-private partnership.
A: So when I return next summer I'll take command of a signal battalion here at Fort Bragg, North Carolina, but for the remainder of my time in uniform, a lot of what I'll be doing will continue to look at these topics and look for opportunities to work with folks like yourselves. So again, I appreciate this opportunity to present.
A: My advisors at Carnegie Mellon are Dr. Baruch Fischhoff and Dr. Alex Davis; they're my primaries. But I think you all know Dr. Carley fairly well; she also supported this research and helped in sort of the formulation of the study itself. And then everything I present here, obviously, is non-attributional: it doesn't reflect the views of the Army, the DoD, or any other organization. It's my own independent research. So with that I'll begin. The topic of today is Duped by Bots.
B: And Ryan, before we begin, I just wanted to say thank you for making my job easier. I was going to introduce you; that's fine. The only thing that I would have added was: we are really excited to have you here. Thank you for making yourself available, and, as you'll see, some of the folks are joining in.
B: Typically what I usually do is invite folks to give a brief introduction about themselves, but since I see some folks are still trickling in (maybe it's a Zoom-glitch day today), what we can do is go over your talk first. So we can get started, but thank you again for making yourself available, and this is really cutting-edge work.
B
You
mentioned
very
briefly
about
your
background,
but
I
I've
had
the
opportunity
to
hear
you
and
I
believe,
also
meet
you
at
sbp
conference
one
of
the
years,
so
I'm
really
excited
to
to
have
you
in
the
working
group
and
I'm
sure
will
learn
from
your
work
and
how
your
work
is
bridging
the
technical
aspect
and
the
operational
aspect
of
this
problem.
So
please
go
ahead
and
we
are.
We
will
mute
ourselves
while
you're
speaking.
B
If
we
have
any
questions
folks,
please
enter
those
in
the
chat
window
and
we
will
have
a
q
a
session
after
dude
and
colonel
ryan's
talk,
and
then
we
will
I'm
sure
we
will
have
a
very
lively
discussion.
Thank
you
again,
ryan.
Please.
A: Go ahead! Yes, yes, thank you for that add-on and for orchestrating this endeavor. Okay, so here's the agenda. You guys have probably experienced this in the past, but if not, let's play a quick game.
A: Today, as they're encountering online users, people are conducting this kind of continuous Turing test, if you will, and they have to decide to what extent they think they're interacting with real individuals, fully automated social bots, some combination of the two, or what have you. So the motivation for my research kind of stems from that starting point.
A
If
you
go
back
to
sort
of
herbert
simon's
term
attention,
economy,
they're,
attracting
increasing
amounts
of
intention
and
amplifying
narratives
inside
those
environments
and
even
though
most
perform
sort
of
you
know
harmless
monday
and
advertising
tasks
trying
to
sell
you
a
good
or
service,
others
are
obviously
trying
to
deceive
other
users
and
to
believe
that
they're
actual
humans
in
order
to
gain
trust,
manipulate
social
media
discourse
and
disseminate
you
know
all
sorts
of
sort
of
bad
information,
whether
it
be
simply
misinformation,
sort
of
miss
people,
not
really
characterizing
understanding
what
the
truth
is-
or
you
know,
even
things
that
are
more
sort
of
nation,
state
or
or
malicious
sort
of
disinformation
campaigns.
A: So there's a lot of evidence out there, from various research groups, about the disproportionate role that these social bots are playing in the proliferation of this low-credibility information. So part of the motivation here, in terms of public policy, is that we think there's some sort of social cost beyond just creating noise inside those ecosystems.

A: Bot detection confidence amongst, you know, average citizens is relatively low, right? So three years ago Pew Research Center surveyed Americans, and even though they found most participants were aware that social bots were out there; you know, as one of the gentlemen said earlier when he logged in: "I probably have been duped."
A
You
know
only
seven
percent
reported
being
very
confident
that
they
could
detect
the
bots
and-
and
if
you
think
about
as
that
pertains
to
other
types
of
of
sort
of
false
information,
you
know
in
the
same
study,
84
respondents
felt
you
know
confident
they
could
detect
fake
news.
So
there's
obviously
a
difference
in
terms
of
the
content.
That's
that's
being
spread
versus
who's,
actually
spreading
it
and
people's
general
sense
of
how
well
they
would
would
do
on
the
task
itself.
A
So
the
central
research
question
of
the
of
the
study
that
I'm
going
to
talk
about
here
is
depends
in
part
on
trying
to
understand
people's
susceptibility
to
social
bots
right.
So
if
we're
going
to
sort
of
justify
any
sort
of
policy
intervention
to
reduce
social
bots
in
part,
we
need
to
understand
really
how
at
risk
our
people
to
being
duped
by
them.
So
that
was
in
part.
A
The
motivation
and
the
study
employed
a
signal
detection
task
to
examine
the
performance
of
humans
in
detecting
twitter,
social
bot,
personas
relative
to
normative
machine
learning,
assessments
of
the
same
personas.
So
one
is
to
understand
that
relative
human
performance
and
then
more
interestingly,
and
what
I
would
argue
is,
more
importantly,
to
start
to
understand
the
characteristics
of
individuals
that
may
either
improve
or
impoverish
their
ability
to
detect
social
bonds.
A
So
the
design
of
the
study
like
I'll
elaborate
a
little
bit
further
in
terms
of
signal
detection
and
in
this,
whether
you're
talking
about
a
human
judge
or
you're
talking
about
sort
of
an
automated
algorithm
that
the
concept
behind
these
is
is
the
same
really.
You
know
signal
detection
theory
assesses
two
aspects
of
performance:
first,
the
detector
sensitivity
to
differences
in
the
stimuli
and
and
then
the
decision-making
criteria
or
threshold
for
acting
on
those
beliefs.
Right
so
criterions
vary
by
individual
and
ultimately
are
determined
by
implications
of
the
decision.
A
So,
for
example,
if
I'm
a
radiologist
and
I'm
I'm
examining
figures
to
to
look
for
signs
and
signals
of
cancer,
I
might
have
a
very
liberal
criteria
where
the
consequences
of
missing
that
signal
are
very,
very
severe
and
therefore
I
might
have
an
increased
number
of
of
hits,
but
I
also
might
have
a
lot
of
false
alarms,
whereas
if
I'm,
you
know
a
general
motor
mechanic
and
I'm
doing
a
you
know,
sort
of
anal
annual
service
and
checkup
in
a
car,
I
know
my
customer
doesn't
want
to
pay
a
high
bill.
A
Maybe
I
have
a
more
conservative
criterion
and
it's
going
to
take
a
lot
for
me
to
really
say
something's
wrong
with
the
vehicle,
in
which
case
I'm
going
to
I'm
going
to
have
increased
misses,
but
I'm
also
going
to
have
a
lot
more
correct
rejection.
So
the
nice
thing
about
sigma
injection
theory
as
a
means
of
analysis.
Again,
you
can
differentiate
not
only
people's
ability
to
distinguish
between
signal
and
noise,
but
also
understand
how
that
criterion
affects
the
overall
performance
in
the
task.
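The radiologist-versus-mechanic trade-off above can be made concrete. Below is a minimal sketch (mine, not from the talk) of the standard equal-variance signal detection computation, with made-up counts; the +0.5/+1 log-linear correction is one common convention:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal detection measures from a confusion table."""
    # The log-linear correction keeps the rates away from 0 and 1,
    # where the z-transform (inverse normal CDF) would be infinite.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity: signal vs. noise
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # > 0 means a conservative responder
    return d_prime, criterion

# A cautious judge, like the mechanic: few false alarms, but many misses.
d, c = sdt_measures(hits=10, misses=15, false_alarms=2, correct_rejections=23)
```

With these counts d' comes out around 1.06 and the criterion is positive (around 0.77); the same discriminability could instead be paired with a liberal criterion, trading misses for false alarms.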
A
I
chose
twitter
as
as
my
primary
area
of
interest
in
part,
because
social
bots
are
designed
very
specifically
for
different
social
media
ecosystems
in
which
they
operate
so
they're
going
to
have
different
characteristics,
so
you
kind
of
have
to
pick
one
if
you're
going
to
do
any
one
given
study
relative
to
some
machine
learning
detection
as
well
as
human
performance
and
then
obviously
well,
maybe
not
obvious
to
you.
But
it's
pretty
well
known
in
in
research
that
bought
activity
is
pretty
pretty
widespread
on
twitter.
A
You
know
they've
disclosed
themselves
that
they
have
scooped
up
and
trying
to
take
care
of
a
number
of
fake
accounts
and
then
others
doing
their
own
analysis,
sort
of
through
random
sampling
and
then
statistical
inference
have
come
to
this
rough
range
of
anywhere.
From
you
know,
10
to
15
of
personas
may
actually
be
some
form
of
social
bot
when
you're,
when
you're
on
twitter.
For
those
that
don't
you
know,
interact
with
this
platform
regularly
I'll
kind
of
walk
you
through
the
stages
of
the
decision-making
task,
as
it
pertains
to
trying
to
figure
out.
A: Your next step, if you really want to figure out whether it's a bot or not, is probably going to be to click on the content they shared. And then, if you really want to know (okay, they shared this piece, I understand what it is, I've investigated the information), well, let me go investigate the source. That will drive you to a persona profile, which is, you know, clicked on; and when you get to that point, there's additional information about the user that you can investigate, so that you can investigate that source.
A
So
for
the
icon,
three
is
the
typical
human
user
investigating
a
source?
They
might
look
at.
You
know
pictures
that
the
profile
name,
how
long
a
user
has
been
on
on
the
service,
the
platform
etc,
but
if
they
want
to
go
further
than
that,
even
though
there
are
tools
to
do
that,
usually
it's
it's
researchers
and
folks,
like
dr
carly
and
others
like
the
folks
out
in
indiana
university
that
are
developing
algorithms
to
look
at.
A
You
know
the
number
of
tweets
that
a
person
is
producing
what
time
of
day
those
tweets
are
produced,
doing
network
analysis
who,
who
is
talking
to
whom
and
that's
sort
of
that
level,
four
level
of
analysis
that
most
typical
users
don't
have
access
to,
but
the
normative
machine
learning
systems
do
so.
In
this
study,
I'm
really
focused
at
trying
to
compare
it.
A: So the normative predictions that were used in the study were calculated using two different, independently developed machine learning tools. The first was Bot-Hunter, which was developed by Dave Beskow under Dr. Carley at CMU, and the other was Botometer, which is pretty well known, out of IU. We ran a series of personas, around 5,000, through both systems; they each were scored, and we used those scorings to come up with our normative assessment of any given particular stimulus, or persona, for the task itself in terms of discriminability.
A
What
we
had
was
a
range
of
stimulus
that
ranged
uniformly
at
unified
distribution,
basically
from
about
one
percent
up
to
98,
couldn't
find
any
any
persona
in
this
particular
batch.
That
was
above
98
probability
of
being
a
bot,
and
we,
we
interval
those
at
about
two
percent
all
the
way
through
that
range.
So
I
had
50
stimuli
in
total.
You
know
25.
A
They
fell
below
this
50
threshold,
25
that
fell
above
and
then
those
were
presented
in
a
random
order
to
any
given
participant
and
the
bot
probabilities
themselves
to
sort
of
get
some
confirmation,
because
not
every
every
every
system
has
its
flaws.
I
chose
stimuli
where
bot,
hunter
and
batometer
basically
had
the
same
probability.
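One way to assemble a stimulus set like that can be sketched as follows. This is a hypothetical reconstruction, not the study's actual procedure: `personas` is an assumed mapping from a persona id to its (Bot-Hunter, Botometer) score pair, and the agreement tolerance is invented for illustration.

```python
def select_stimuli(personas, n=50, lo=0.01, hi=0.98, tol=0.05):
    """Choose n personas whose two detector scores agree within `tol`,
    spread roughly evenly across the [lo, hi] probability range.

    The averaged score stands in for the talk's "bot indicator score".
    """
    pool = {pid: (bh + bm) / 2
            for pid, (bh, bm) in personas.items()
            if abs(bh - bm) <= tol}          # keep only agreeing personas
    step = (hi - lo) / (n - 1)
    chosen = []
    for i in range(n):
        target = lo + i * step               # evenly spaced target scores
        pid = min(pool, key=lambda p: abs(pool[p] - target))
        chosen.append(pid)
        del pool[pid]                        # sample without replacement
    return chosen
```

Per the talk, the resulting 50 stimuli would then be shuffled into a fresh random order for each participant (e.g. with `random.shuffle`).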
A
There
were
a
handful
of
those
and
they
most
upper
ranges
where
the
difference
between
the
two
percentages
was
a
little
bit
higher,
but
they
always
confirmed
in
terms
of
whether
or
not
the
system
was
saying
it's
likely
to
be
a
bot
or
not
in
terms
of
predictions,
always
step
back
so
participants
after
you
know
receiving
the
same
instructions
they
would.
They
would
then
go
through
and
judge
all
these
ta
all
these
stimulus.
A
It's
either
saying
that
if
they
thought
it
was
a
bot
or
not
and
they
provided
a
self-reported
confidence
score
and
the
predictor
variables
for
the
study
were
first
social
media
experience.
You
know
that's
one
of
the
user
characteristics
that
one
might
expect
to
increase
performance.
So
you
know
if
experts
focus
their
attention
on
information
sources,
most
relevant
for
the
through
objective,
and
if
experience
leads
to
expertise,
then,
when
I
initially
proposed
a
study,
I
expected
experience
if
anything
should
improve
sensitivity
and
not
decrease
in.
A
As
far
as
analytical
reasoning,
there
are
a
number
of
other
studies
that
have
looked
at
an
individual's
ability
to
resist
initial
impulses
and
making
a
judgment
have
a
cognitive
reflection
to
that
task
and
and
then
ultimately
see
how
that
performs
the
ability
to
overcome
erroneous
initial
impulses
and
make
a
better
decision.
So
again,
we
predicted
that
those
with
higher
analytical
reasoning
would
also
outperform
those
with
low
random.
A
Analytical
reasoning
and
analytical
reasoning
in
this
study
was
measured
by
the
cognitive
reflection,
a
task
something
developed
by
shane
frederick
years
ago,
very
popular
and
then
for
social
media
experience.
That
was
measured
through
a
survey
that
was
developed
a
few
years
ago
that
we
implemented
the
same
protocol
for,
and
it
measured
things
not
only
in
terms
of
how
long
you've
been
on
social
media,
the
number
of
accounts,
but
also
things
such
as
how
engaged
you
are
the
number
of
times
you
access
and
and
some
of
those
kind
of
pro
type
behaviors.
A
That
would
indicate
having
higher
social
media
experience
as
far
as
the
stimulus
attributes.
Obviously,
this
screamability
was
the
start
point
and
that
was
determined
by
that
bot
indicator
score.
So
the
bot
indicator
score
the
higher
the
bot
indicator
score.
The
more
probable
any
given
stimulus
was
to
be
a
bot,
the
lower
the
score,
the
more
likely
it
was
to
be
a
human
persona
and
therefore,
as
that
that
bought
indicator
score
increased
I.e.
A
The
signal
increased
we'd
expect
to
see
a
corresponding
increase
in
sensitivity
and
people
more
more
likely
to
respond
that
it
was
a
bot
and
then
finally,
initially
this
was
a
bit
of
an
afterthought
when
we
were
designing
the
study,
but
then
it
became
you
know
pretty
pretty
important
to
me
as
I
was
finalizing
the
design,
and
that
is
you
know,
the
batch
of
stimulus
that
were
selected
for
this
study
came
from
on
the
excuse
me,
the
2018
midterm
election
and
they
were
kind
of
highly
charged
in
in
some
of
their
content
and
pictures
and
messaging,
and
my
initial
take
was
well.
A
I
wonder
if
this
can
affect
people's
judgments
and
sure
enough,
there's
plenty
of
literature
to
suggest
that
it
would,
and
so
then
we
ensure
that
we
controlled
for
that
and
then
categorized
that
in
terms
of
my
side
bias
and
what
this
really
comes
to
in
terms
of
prediction,
is
you
know
if
I
self-reported
I'm
a
liberal
and
I'm
and
I'm
looking
at
someone?
That's
conservative
persona.
You
know
there's
there's
sort
of
polar
opposites
in
in
political
views.
A
Therefore,
if
there's
any
sort
of
my
side
bias
present
that
might
affect
either
my
criterion
shift,
I'm
suggesting
that
I
might
have
a
more
liberal
response
pattern
and
assume
that
there
are
more
social
bots
amongst
those
that
don't
share
my
political
views
and
or
it
might
affect
sensitivity.
Initially,
I
I
was
more
interested
or
suspected
there.
It
would
affect
criterion
more
than
more
than
sensitivity
per
se,
but
but
we
made
sure
we
included
that
as
well
as
predictor
variable
and
then
lastly,
we
had
a
couple
of
controls.
A
We
had
one
for
fatigue
so
again
stimulus,
stimulus,
presentation
order
was
randomly
assigned,
but
we
also,
then
you
know
added
that
in
our
model
just
to
ensure
that
fatigue
wasn't
occurring
and
then
we
had
task
engagement,
measured
in
the
form
of
various
attention
checks.
A
We
had
a
couple
after
the
initial
experimental
instructions,
and
then
we
had
two
embedded
in
the
task
and
the
two
that
were
embedded
were
well-known
political
figures
from
each
party,
elizabeth
warren
and
mike
pence
and
again
those
those
were
deemed
to
be
obviously
humans
and
we
and
we
use
those
to
see
if
people
were
paying
attention
during
the
task
all
right.
So
I
sort
of
already
went
through
the
methods,
but
I'll
briefly
remind
folks.
A
So
we
we
had
113
participants
taken
from
amazon,
mechanical
turk
workers
they
completed
72
70
excuse
me
52
experimental
trials,
two
of
which
were
the
attention
checks
and
then,
after
they
completed
the
task.
That's
when
we
collected
demographic
information
to
kind
of
characterize
the
population
not
as
predictive
variables
but
just
to
sort
of
think
about
representativeness.
A
We
captured
political
views,
the
social
media
experience
questionnaire
and
then
they
completed
the
crt,
and
this
this
is
the
this
is
like
case
in
point
what
what
a
participant
would
have
seen
when
they
were
doing
them.
The
signal
detection
task
all
right.
First,
I'm
going
to
go
with
the
findings
from
the
signal
detection
aspect.
So
to
do
the
analysis
we
used
general
linear
mixed
effects
models.
We
we
had
the
factors
of
the
stimulus,
the
task
individual
and
then
how
that
affected
their
probability,
responding
bot.
A
We
used
a
probit
link
function
which
converted
those
z
scores
from
the
model
into
probabilities,
responding
bot.
So
in
all
the
figures,
you're
going
to
see
on
the
y
axis
you'll
see
the
probability
of
responding
bot
is.
Is
that
outcome
variable
and
then
the
bot
indicator
scores
served
as
the
normative
stimulus
attribute
predicting
probability
of
a
given
stimulus
being
a
bond
in
in
interpreting
these
things,
I
think
just
to
kind
of
orient
you
to
to
the
output.
If
you
haven't
seen
an
analysis
like
this,
you
know.
A
If
we
hold
bot
the
bot
indicator
constant,
then
any
main
effect
is
going
to
represent
a
shift
in
the
criterion.
So,
regardless
of
what
that
body
indicator
score
is,
if
you
have
higher
analytical
reasoning
and
you
had
a
higher
probability,
responding
bot.
Well,
then,
that's
that's
a
shifting
criterion
that
doesn't
reflect
sensitivity,
whereas
any
interaction
with
the
bot
indicator
score
does
reflect
a
change
in
sensitivity.
So
if,
if
someone
was
more
sensitive
to
the
signal,
then
we'd
expect
to
see
the
slope
of
that
probability.
A
Responding
bot
increase
at
a
greater
rate
than
if
they
were
less
sensitive
and
and
in
the
figures
you'll
see
that
that's
how
that's
depicted
all
right.
So
that's
the
setup.
Let's
get
into
the
results,
so
the
first
predicted
variable
of
interest
again
was
social
media
experience
and
my
a
priori
hypothesis,
I
had
that.
A
As
part
of
my
sort
of
research
work
here
at
cmu,
I
thought
experience
would
lead
to
to
increase
sensitivity,
but
if,
if
anything,
it
was
the
opposite
of
that
right.
What
you're
seeing
here,
if
I
can
orient
you
again,
is
that
outcome
variable
is
a
probability.
Responding
bot
and
the
x
axis
is
the
bot
indicator,
which
is
another
just
called
the
bot
signal,
if
you
will
and
so
for
any
given
stimulus.
If
I
had
you
know,
a
body
indicator
score
that
was
was
increasing.
A
I
found
that
only
those
with
lower
social
media
experience
relative
to
average
or
high
social
media
experience
were
more
sensitive
to
that
signal,
and
in
this
figure
I
should
note
in
the
the
others
for
the
signal
detection.
This
is
the
predicted
outcome
of
those
of
those
models
that
that
were
proven
to
be
significant,
so
counter-intuitively,
we
see
the
opposite
effect.
If
anything
experiences
is
dampening
sensitivity.
A
Next,
if
we
take
that
same
figure,
but
now
I
plotted
against
the
political
difference
between
yourself
and
the
persona
you're
viewing.
We
have
five
different
figures
we
can
investigate.
The
first
is
where
the
political
view
of
the
the
participant
and
the
stimulus
are
the
same
and
then
on
the
bottom
right
hand
corner
you
have
the
polar
opposite
of
that
and
then
the
lines
plotted
or
broken
out
by
ones
given
political
views.
A
What
I
noticed
was
that
there
wasn't
there
was
an
asymmetric
pattern
between
whether
one
was
a
liberal
or
conservative
and
the
effects
of
my
side
bias,
and
let
me
kind
of
unpackage
this
for
you,
because
it
can
be
a
little
bit
confusing
at
first
blush
and
obviously
I've
looked
at
these
figures
for
quite
some
time
and
tried
to
reason
through
it.
So
the
first
thing
is:
if
we
start
with
on
the
top
left-
and
we
consider
you
know
a
liberal
which
is
the
red
line.
A
Looking
at
a
liberal
persona,
we
do
see
sensitivity
right.
So,
as
the
bot
indicator
score
increases,
there
is
a
positive
slope
and
we
see
that
liberals
viewing
liberals
are
still
able
to
distinguish
and
and
and
find
bots
appropriately,
as
that
indicator
score
increases.
However,
for
conservatives
viewing
conservatives
there's
no
sensitivity,
so
this
is
this
in
a
sense,
is
reflecting
some
inability
to
distinguish
amongst
conservatives
looking
at
other
conservatives,
whether
or
not
they're,
social
bots
or
not.
If
we
shift
to
polar
opposite,
though
we
see
a
couple
things
occur.
A
First,
is
that
liberals
in
general
tended
to
kind
of
go
high
into
the
right
in
terms
of
their
criterion,
meaning
they
were
much
more
liberal.
You
know
so
by
default
they
tended
to
sort
of
assume
anybody
that
was
conservative
was
you
know
more
likely
to
be
a
bot,
so
the
probability
of
their
responding
bot
increased
and
for
criterions.
It
doesn't
pop
out
in
this
figure,
but
from
the
analysis.
A
Conservatives
also
had
that
criterionship.
So
not
only
do
both
conservatives
and
liberals
have
a
greater
tendency
to
assume
the
other
party
is
a
social
bot
when
they're
viewing
the
polar
opposite
group,
but
we
see
this
asymmetry
then,
as
well
in
terms
of
sensitivity
with
liberals
tending
to
be
a
little
bit
more
default
and
not
showing
any
sensitivity,
because
they
kind
of
assume
all
were
social
bots,
whereas
conservatives
now
actually
did
have
sensitivity
when
they
were
looking
at
liberal
personas.
A: If you did get all three questions correct, then you had very high analytical reasoning, and that's your bottom right figure, relative to your top left. And again, as we look at the effects of analytical reasoning on signal detection, there's an asymmetry between conservatives and liberals, and this can't simply be explained by differences in analytical reasoning between the two groups; we looked at that, and there was no significant difference. But it does appear that how one is using one's analytical reasoning does affect one's judgment in terms of the probability of responding "bot".
A
So
for
those
with
the
greatest
analog
reasoning,
liberals
again
seem
to
benefit
in
terms
of
their
sensitivity,
and
we
see
that
interaction
again
with
the
bottom
indicator.
Score
such
that
as
bot
indicator
score
increases
the
probability
of
their
responding
bot
also
increases
again
representing
sensitivity,
whereas
for
conservatives
we
we
don't
see
that,
and
there
was
no
significant
forward
way-
interaction
between
all
of
these
variables
to
sort
of
tie
a
really
nice
bow
on
it,
but
for
interpretation.
A
What
I
would
suggest
here
is
that
in
the
literature
that
looks
at
behavioral
decision
making
and
considers
my
side
bias
they're
kind
of
two
two
opposing
camps,
one
that
says
we
use
our
analytical
reasoning
to
increase
reflection
and
to
buy
down
our
biases
and
and
sort
of
resist
that
impulse
to
to
go
with
a
heuristic
or
a
stereotype,
or
something
that
could
lead
to
a
poor
judgment
and
there
are
others
that
say
well,
actually,
no,
it's
it's
it's
kind
of
uglier
than
that
you're
going
to
use
your
analytical
reasoning
to
seek
out
confirmatory
evidence
and
scrutinize
to
an
even
greater
degree
and
to
con
continue
to
build
your
case,
despite
any
evidence
that
that
counters
your
views
right.
A
So
that's
that's
the
ongoing
debate
in
that
side
of
the
literature
and
in
this
set
of
findings.
I
think
this
speaks
to
it
in
in
an
interesting
way
and,
and
it
speaks
to
potentially
different
asymmetric
patterns
between
those
populations.
Liberals
versus
conservatives-
again,
probably
any
broad
conclusion
from
the
study-
goes
beyond
what
what
what
this
data
can
really
definitively
say.
But
it
does
suggest
that
there's
some
nuance
there
and
it's
probably
important
to
think
about
how
different
groups
may
employ
analytical
reasoning
differently.
It
may
not
be
a
black
or
white
decision.
A
Okay,
so
in
the
paper
that's
under
review
right
now,
that's
kind
of
where,
where
the
results
end,
but
I've
done
some
additional
analysis
on
the
confidence
data.
That's
unpublished,
but
I
thought
I
would
share
it
with
you,
because
it's
it's
interesting
as
well.
So
after
after
individuals
responded
with
their
with
their
bond
decision,
we
then
asked
them
to
rate
their
confidence
from
basically
50
to
100
in
the
sliding
scale.
A
We
assume
that
if
they
made
a
decision
they
they
were
at
least
beyond
a
certain
level
of
uncertainty
so
that
we
put
it
within
that
range,
and
so
what
you're?
Looking
at
in
this
figure
is.
We
ran
a
linear
mixed
effects
model
just
to
look
at
how
these
other
predictors
affected
confidence
and
because
participants
weren't,
given
any
feedback
on
their
accuracy
and
accuracy
itself,
is
somewhat
determined
arbitrarily
by
where
we
set
a
threshold
on
this
normative
model.
I'm
just
showing
you
the
the
outcome,
without
considering
accuracy
at
all.
A
That's
that's
a
separate
sub
analysis
which
gets
into
sort
of
calibration
and
under
confidence
and
overconfidence,
but
I
think
these
results
are
still
pretty
interesting.
Just
looking
at
the
confidence
group,
despite
not
presenting
the
accuracy
results
for
you
right
now.
So
the
first
thing
to
note
is
if
we
just
ask
people
how
confident
they
are
and-
and
we
look
at
that
bot
indicator
score-
the
nature
of
their
response
affected
their
confidence
and
because
the
the
slope
for
human
is
is
negative.
The
slope
for
bot
is
positive.
This.
A
This
is
a
good
sign
again
that
reinforces
that
there
is
some
sensitivity
amongst
participants
for
the
bot
indicator
score,
regardless
of
how
they
responded.
We'd
expect
that,
as
the
bot
indicator
score
was
increasing,
that
their
sort
of
confidence,
in
that
judgment
of
whether
or
not
they
they
thought
it
was
a
bot
or
not,
is
also
going
in
one
direction
and
human
it's
going
in
another.
A
I
I
forgot
to
mention
that
in
general,
across
all
participants,
there
was
a
tendency
to
avoid
responding
bot,
so
the
criterion
was
actually
very
conservative,
so
it
took
it
took
some
more
effort
or
mental
energy
for
people
to
actually
say
it
was
a
bot
and
and
that
criterion
shift
or
for
average
participants
was
something
that
that
then
people
had
to
move
from.
A
The
second
thing
which
goes
back
to
our
discussion
about
social
media
experiences,
the
confidence
of
those
with
higher
social
media
experience
the
x-axis.
In
this
case
I
didn't
normalize
it
I
apologize
for
that.
So
this
is
kind
of
the
raw
scores,
but
there
was
again
a
significant
linear
relationship
between
higher
social
media
experience
and
overall
reported
confidence,
regardless
of
the
of
the
given
stimulus
right.
So
this
has
nothing
to
do
with
any
given
judgment.
This
is
average
to
cross
judgments.
A
So
not
only
do
people
with
higher
social
media
experience
perform,
have
worse
sensitivity
and
performed,
you
know
less
well
as
a
result,
they
were
also
more
confident
in
their
responses,
and
I
I
don't
show
you
this
data,
but
when
I
look
at
calibration
they
were
also
more
miscalibrated.
They
tend
to
be
more
overconfident,
which
makes
sense
in
terms
of
that
performance
relationship.
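Over- and underconfidence of this kind is commonly summarized as mean confidence minus proportion correct. A small sketch with made-up judgments (not the study's data):

```python
def over_underconfidence(judgments):
    """Mean confidence minus proportion correct.

    `judgments` is a list of (confidence, was_correct) pairs, with
    confidence on the study's 0.5-1.0 response scale.  Positive values
    indicate overconfidence, negative values underconfidence.
    """
    mean_conf = sum(conf for conf, _ in judgments) / len(judgments)
    prop_correct = sum(ok for _, ok in judgments) / len(judgments)
    return mean_conf - prop_correct

# A judge averaging 85% confidence while correct on only 3 of 5 trials:
score = over_underconfidence(
    [(0.9, True), (0.8, False), (0.85, True), (0.85, False), (0.85, True)])
# score = 0.85 - 0.60 = +0.25 (overconfident)
```

A well-calibrated judge would score near zero; the high-experience group described above would land on the positive side.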
A
When
we
talk
about
analytical
reasoning,
if
I
can
kind
of
orient
you
to
this
graph
as
well,
we
see
this
nice
sort
of
pattern
which
matches
the
first
result.
So
if,
if
as
you're
responding
human,
the
bot
indicator
score
is
increasing,
then
I
would
expect
your
confidence
to
kind
of
go
down,
even
though
you
responded
human,
if
that
bot
indicator
score
is
rising
and
the
opposite
would
be
true
if
it's
increasing.
So
if
you
responded
but,
and
the
bot
indicator
score
is
increasing,
then
I'd
expect
your
confidence
to
go
up.
A
So
what
you
see
in
this
figure
is
this
broken
out
and
and
then
both
in
terms
of
the
nature
of
the
response.
But
then
you
have
the
different
levels
of
analytical
reasoning,
and
the
first
thing
to
note
is
those
with
higher
analytical
reasoning
in
general:
have
lower
confidence
overall.
But
when
we
look
at
the
data
and
look
at
that
sort
of
conditional
relationship
it,
it
appears
as
if
those
with
the
the
least
amount
of
medical
reasoning
tended
to
just
be.
A
You
know
in
that
75
percent
to
100
confident
on
every
response,
whereas
those
with
higher
reasoning
tend
to
use
more
of
the
range
when
they
were
assessing
their
confidence,
which
again
kind
of
speaks
to
their
ability
to
reflect
and
some
sensitivity
in
the
judgment
that
perhaps
those
with
lower
analytical
reasoning
didn't
possess.
But
again
you
see
that
relationship
play
out
in
terms
of
analytical
reasoning
and
the
confidence
in
the
judgments
that
that
differs
based
on
the
nature
of
the
response.
A
Lastly,
if
we
go
back
to
the
symmetry
between
political
views
and
analytical
reasoning
and
confidence,
here
again,
we
get
this
sort
of
asymmetric
pattern,
so,
looking
across
the
the
four
different
levels
of
analytical
reasoning
and
the
figures
and
then
considering
how
one's
political
views
affected
confidence
in
general
again,
this
has
nothing
to
do
with
the
type
of
response.
So
this
is
this
is
average
across
either
human
or
bot,
but
we're
just
looking
at
how
people
how
confident
they
were
in
their
responses.
A
You
see
that
conservatives
for,
for
the
most
part,
tended
to
be
a
little
bit
more,
more
confident
than
liberals,
but
that
really
sort
of
shifted
when
we
talked
about
that
higher
analytical
reasoning
group
so
for
for
liberals,
looking
at
bots
as
that
bot
indicator
score
increase
their
confidence
just
really
really
began
to
dive,
whereas
for
conservatives
it
it
increased
slightly
or
kind
of
maintain
that
that
that
positive
relationship.
So
again,
these
are
unpublished
results.
A
These
are
sort
of
a
set
of
a
pattern
that
also
tie
in
some
calibration
results,
which
I'll
show
you
here
in
a
sec,
but
I
think
it
adds
to
another
level
of
looking
at.
Not
only
the
nature
of
how
sensitive
people
were,
but
also
then
what
did
they
think
about
their
judgment?
How
people
talk
about
confidence?
A
This is over- and under-confidence, and we can come back to this during the discussion if you want, so maybe I won't go into it at great length, because it's a whole other set of analyses and results. But I would just say that when you look at the proportion correct as it pertains to confidence, that calibration score, it was really the human-type stimuli where people were best calibrated.
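Calibration in this sense is just the gap between mean confidence and proportion correct. A toy comparison, using invented judgments rather than the study's data, might look like this:

```python
def calibration(confidences, correct):
    """Mean confidence minus proportion correct.

    Positive = overconfidence, negative = underconfidence,
    near zero = well calibrated. Confidences are in [0, 1].
    """
    assert confidences and len(confidences) == len(correct)
    mean_conf = sum(confidences) / len(confidences)
    prop_correct = sum(correct) / len(correct)
    return mean_conf - prop_correct

# Hypothetical judgments of human-type stimuli: confidence tracks accuracy.
human_gap = calibration([0.9, 0.6, 0.8, 0.7], [1, 1, 1, 0])
# Hypothetical judgments of bot-type stimuli: confident but often wrong.
bot_gap = calibration([0.9, 0.8, 0.9, 0.8], [1, 0, 0, 1])
```

With these invented numbers the human-type judgments come out well calibrated (gap near zero) while the bot-type judgments come out overconfident, which is the shape of the pattern being described.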
A
I'm sorry, let me explain the lines. The red line is where stimuli fell below a 25 percent probability of being a bot; green is where the stimulus was somewhere between 25 and 75 percent, in that muddy area around the threshold where it could be either a human or a bot; and blue is where there was a pretty strong indication from the normative machine learning models that it was likely a bot. These data points are averaged across participants and binned based on their response.
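The three bands described for the figure amount to a simple binning rule; the 25 and 75 percent cut points are from the talk, while the scores below are invented:

```python
def band(bot_probability):
    """Assign a stimulus to the color band used in the figure."""
    if bot_probability < 0.25:
        return "red"      # model says: likely human
    if bot_probability <= 0.75:
        return "green"    # muddy area around the threshold
    return "blue"         # strong indication of a bot

# Hypothetical normative model scores for five stimuli.
scores = [0.05, 0.40, 0.92, 0.75, 0.24]
bands = [band(p) for p in scores]
```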
A
At this point I'd be speculating on that, but maybe we can get into it in the Q&A. All right, so just to wrap this up so we can get to the Q&A: we applied a signal detection task in this study to evaluate performance in a social bot detection task, and again, this was done using Twitter personas. Across the overall set of results, there was relatively low sensitivity when we averaged across all the other predictive variables, and the criterion shift suggested an aversion to mistaking bots for humans.
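The sensitivity and criterion measures summarized here are the standard signal detection quantities. A minimal sketch of how they are computed from response counts (the numbers below are hypothetical, not the study's data) might look like this:

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal detection sensitivity (d') and criterion (c).

    'Signal' here is a bot stimulus: a 'hit' is correctly calling a bot
    a bot, a 'false alarm' is calling a human a bot.
    """
    z = NormalDist().inv_cdf
    # Log-linear correction so rates of exactly 0 or 1 stay finite.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = z(hit_rate) - z(fa_rate)             # sensitivity
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical participant who rarely labels anything a bot.
d, c = sdt_measures(hits=30, misses=20, false_alarms=5, correct_rejections=45)
```

Which direction counts as "conservative" depends on which response is treated as the signal; this sketch treats "bot" as the signal, so a positive criterion means the participant demands stronger evidence before responding "bot".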
A
In the analysis for the signal detection portion, we used a generalized linear mixed-effects model to predict bot responses based on the characteristics of the participants and the personas, and in general we found poorer performance among participants who reported higher social media experience.
A
So some of the policy implications that this group might care about: one, these results do suggest that people have heuristics for detecting social bots, but they obviously still appear vulnerable to being duped by them. Moreover, measures of individual differences demonstrate that some may be more vulnerable than others, and we've already discussed that at length. So, in terms of conclusions for policy interventions: first, we think we can't just default to saying that people need more experience, that those who are online will figure this out.
A
After a while, either you have people going online and spending a lot of time there who are naturally not very good at this, or spending more time online leads to being less good at it. We can't claim causation here, but obviously there's a relationship. And then finally, those social media users interact primarily within echo chambers and are only looking at like-minded groups.
A
Some limitations that we're thinking about for the next study: obviously, any conclusions we draw about accuracy and detection here depend on the normative estimates that were used, and I can speak more on that during the Q&A. Perhaps another is that this is potentially an unnatural experimental condition: how do people act in the wild? I think our one response to that is, it's probably worse.
A
In this case, we expected people to have greater vigilance because they knew they were in a study, and yet they obviously didn't perform particularly well. In terms of the criterion shift, I think it's also notable that, if anything, you might have expected people to default to saying "bot" more often, because they expected that this was a study and that more of those things would be introduced than not; and yet we still saw the criterion shift to the more conservative side.
A
As for future work, for my dissertation I'm looking at a couple of different things. In the next study, I'm looking at the same type of decision task, but in a human-machine teaming setting, where we'll provide one experimental group with bot indicator scores, signals about the probability of a stimulus being a bot, and then ask to what extent they use those signals, both in their judgment and in their confidence. We're also interested in looking at interventions designed to improve either sensitivity or criterion, or both, and seeing what might be an effective approach for those trying to develop training aids and various intervention tools.
A
So I want to close by again thanking the folks on my committee and my advisors for their support, obviously the ASP3 folks for funding this endeavor, and then this group for allowing me to take your time and present my findings. With that, I'll open it up for Q&A. Thanks so much.
B
Thank you, Ryan. That was a fantastic talk, and it's great to hear about the exciting work you're doing. Let me start by asking a question, and then hopefully that will open up the floor for more discussion.
B
I also wanted to understand from you, from your experience and work: we know that bots evolve and become smarter and smarter. How does that evolution of bots present challenges or opportunities, the way you see it, for the work you're doing and, more broadly, for the information operations space and the overall disinformation and misinformation space?
A
Yeah, great question. So my first response is: it makes my life very difficult as a PhD student, because I constantly have to update my normative assessments and see what's going on in the industry, which is both exciting and means you have to stay on top of things, because your point is incredibly important.
A
From a military construct or a national security setting, this is a sort of classic security dilemma, in which you have opposing groups upgrading their systems as one gains new capabilities. So, on the one hand, as detection abilities come online and people gain access to them, those who are trying to circumvent those detection systems will try to come up with clever ways to still deceive humans and trick users. So, number one:
A
I think there are a lot of folks who are looking at and trying to characterize different social bot networks and the different ways in which these things are being used, and that is a way in which interventions are going to have to adapt and be studied. So I don't want to say that it's fruitless.
A
What I would say is that this is sort of job security for anybody in the business, because you're never going to get the perfect system, either from an automated detection algorithm or from any sort of training for humans, because it will become dated within years, if not months.
B
Absolutely. The same could be said for the spam filters in our email, right? They're getting smarter, but the spammers are also one step ahead. So it's almost like a cat-and-mouse game of catching up. Thank you for your response. Let's open up the floor for more Q&A; I'm sure there are folks here with questions. Please go ahead and unmute yourself and ask your question, or you can also use the chat window to ask questions of Lieutenant Colonel Ryan.
B
And while we're waiting for other folks to maybe type their questions, if I may ask another question: your study is looking at one of the most widely popular platforms, Twitter, and it is quite well known that it is infested with a lot of bots; you presented a statistic that up to almost 15 percent of Twitter users could be bots. But I want to take a step back and ask how you, or researchers in this area, have looked at bots in other domains.
B
Maybe it's Facebook or YouTube or other platforms that are probably equally, if not more, infested with bots. And then there are also other types of bots which are not just mimicking human behavior but actually creating content: bots on YouTube that could create fake videos, or bots on Instagram that could create fake images, what we call deepfakes, and so on. So what's your response to that line of research, and how can your work, and more broadly the work in this community, address the emerging threats from those platforms?
A
The first thing is that I think we're in an era where folks like myself are trying to gain a deep understanding of any given ecosystem, social media being one of those ecosystems, because each is nuanced and each has its own sort of platform-provided information access, like the JSON data that comes off the services that you can actually use for research, and then, obviously, the nature of the interactions varies. So to some extent the work is still very stove-piped, and the extent to which a social media platform is interested in partnering with an academic or researcher is probably the limiting factor right now in terms of where and what people are doing.
A
So, if you think about it: okay, I'm in Ohio and there's a coal-burning plant there. Well, that's not just affecting Ohio, so that's why you need a federal agency to govern it. We also see bots that are obviously taking content from Twitter and passing it on to Facebook, taking it from Facebook and passing it on to YouTube, etc., and I haven't seen a lot of people looking at that. So I think that would be a really great area.
A
That's if you want to get a true understanding, particularly from a national security perspective, of nation-state or really sophisticated information operations. And then the third thing, which is an even deeper AI and machine learning question: using really sophisticated tools to create content and really mimic users in ways that aren't simply automated messaging is its own bevy of Turing tests. That's in part where I initially came at this from, because I was interested in that, particularly in terms of natural language generation, and there was a question of, well, how do you study it, in what environment, and how do you actually get through what you need to get through to try to get a result?
A
And I think that area is one in which there's a lot of work in terms of competition versus collaboration versus really just exploitation of the tools, as ways to rapidly get ahead in producing any type of content that a human can produce. If you read people like Max Tegmark, looking at Life 3.0 and the future of digital, metaverse-type topics, I think that's its own bevy of research questions, and it's something we should anticipate.
B
Thank you so much for that elaborate response. We have a question from Dr. Shannon McKean. He thanks you, Lieutenant Colonel Ryan, and he asks: is there information on platforms policing bots and helping users? Are there platforms that are safer?
A
You know, if you watched what happened after January 6th, I think every platform had to react to concerns about not just automated tools but fake news, and then obviously this rise of extremism, and how all of it gets mixed together to affect the public perception of truth. At the time it seemed, and it still seems to be the case, that every platform is sort of begging policymakers in D.C. to come out and help them answer a question that they may not be completely comfortable answering.
A
So let me step back, because there are two ways of approaching this. One is platforms trying to buy down misinformation, which sounds a lot like information control, which runs up against freedom of information, and then there's the policy piece. I wrote a policy essay that looked at this challenge of society having to establish its own threshold in this signal detection task, meaning that if you're a society that values freedom of information, then you're going to want to have
A
really a very, very conservative criterion, so that you don't accidentally scoop up any genuine information, or anything that's true, in your efforts to buy down anything that's fake. Whereas if you're a society that's more autocratic, with more state-controlled information, then you might say: I don't want any of this content getting through, and I'm comfortable with a lot of content that's not harmful being scooped up as a side effect. So that's fundamentally the challenge that any policymaker is going to have to bring to this.
A
Society is going to have to decide what the social cost of limiting information is versus the social cost of being exposed to harmful information. That's a problem the platforms aren't really ready to figure out; they're kind of making it up on their own. The second is whether or not you're a real persona, and every platform sort of has it in its bylaws, in anything you sign, that says: I will use the platform appropriately. It says you have to be
A
representative of yourself. So if you come on board and you say, I am a bot and this is a bot account, you might be okay, so long as you say it was developed by me, here's my location, and I'm using it for this purpose. But if you do anything that's deceptive, the platforms will kick you off. And the reason why it's hard to find information about that, to be really more direct about your
A
question, is because to the extent that a platform reveals how many bots are on its platform, it buys down the credibility of its claim that it actually has users on there interacting with content, and that affects their marketing dollars. If you saw the news that just came out about Apple's privacy limitations and how they're affecting Facebook, and others that use those metrics to glean information and then sell you ads, it's having a big impact on advertising dollars inside those adjacent platforms. So there's a lot of hesitancy.
B
Very interesting, thank you again, Ryan. Let's see, do we have any questions from our audience?

Yes, I have a question. Hi Ryan, it was a great presentation; I really liked it. I'm doing some analysis using Botometer to understand some users on Twitter.
A
Yeah, so there are two ways of looking at that, and yes, yes, I did. What I did is we pulled out Botometer's probability, and this is from two or three years ago, and I know they update their models. But at the time, for the personas we used, we pulled out those probabilities based on its scoring, and then, as I mentioned, that was our secondary model; our primary was Bot Hunter, so we used that as an index.
A
It does the same type of thing, and then we married those, so that whatever Bot Hunter said, so long as Botometer also judged the persona the same way, we felt fairly confident about that normative score. But what I would share with you is that if you look at these models broadly, they vary in their distributions, in their skew: one had a bit of a positive skew.
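The "marrying" of the two models described here, keeping a normative label only when the secondary model agrees with the primary, can be sketched as follows; the function name and the scores are illustrative, not the study's actual pipeline:

```python
def married_label(bot_hunter_p, botometer_p, threshold=0.5):
    """Keep a normative label only when both models agree at the threshold.

    Returns 'bot', 'human', or None when the two models disagree
    (a disagreeing stimulus would be discarded or re-examined).
    """
    primary = bot_hunter_p > threshold     # primary model's verdict
    secondary = botometer_p > threshold    # secondary model's verdict
    if primary != secondary:
        return None
    return "bot" if primary else "human"

# Hypothetical persona scores from the two models.
agreed_bot = married_label(0.9, 0.8)      # both say bot
agreed_human = married_label(0.1, 0.2)    # both say human
disagreement = married_label(0.9, 0.3)    # models conflict
```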
A
The other had a bit of a negative skew, which kind of speaks to how they're looking at the base rate and their assumptions about it. And I know Botometer sort of uses its estimated base rate to affect where it establishes the threshold, which I think is somewhat problematic, and it's also why, in my study, I had to impose a threshold. I think always setting it at 50 percent poses some challenging issues for research, and you have to be purposeful.
B
Okay, good, because in the research that I'm doing, I set the threshold at 90 percent, and the number of bots that I could get was so low I was thinking maybe I should set it to another number. So you said you set it to 50 percent?
A
And that's just because this was a signal detection task, right? So if it's greater than 50 percent, we're assuming you've surpassed that threshold of 50/50 uncertainty. It goes back to broadly looking at signal detection theory and those methods; for your study it may be different. And the other thing I would really encourage you to do is consider having a second model.
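The trade-off the questioner is running into can be seen directly: raising the cutoff from 50 to 90 percent shrinks the set of accounts labeled as bots. A sketch with invented probabilities:

```python
def count_bots(probabilities, threshold):
    """Count accounts whose model probability exceeds the threshold."""
    return sum(p > threshold for p in probabilities)

# Hypothetical Botometer-style probabilities for a sample of accounts.
probs = [0.10, 0.35, 0.55, 0.62, 0.80, 0.91, 0.97]
at_50 = count_bots(probs, 0.50)   # more accounts pass the looser cutoff
at_90 = count_bots(probs, 0.90)   # only the strongest candidates remain
```

A stricter threshold buys precision at the cost of sample size, which is one reason the choice has to be purposeful and defensible.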
A
Look at those personas with it, because I have run into differences, and there's a third one out there too: Norton LifeLock, I think, is developing one, and I've been in discussions with Daniel Katz, the principal researcher for that. I haven't gotten under the hood of its machine learning model the way I've done with Botometer and Bot Hunter, where, because they're out there, you can read how they built the models. But again, you'll find that they vary.
A
One thing I didn't add in my study is that about a year afterwards, as we were doing the final write-up, I actually went back and looked up all the personas, and I looked to see whether each account had been suspended or not, because these were all live in the wild when they were initially judged. Then I also looked at changes in following and follower counts; I'm sorry I didn't have that slide prepared.
A
But what I would tell you is that among the stimuli we used that scored above a 75 percent probability, a huge number had been suspended; I want to say well over half. For those with a bot probability below 25 percent, none had been suspended. And similarly, above that 50 percent threshold, there were pretty large drops in following and follower percentages, by 10, 15, 20 percent.
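The forensic follow-up described here amounts to grouping personas by their original probability band and computing the suspension rate within each band. The data below are invented stand-ins, not the study's personas:

```python
def suspension_rate_by_band(personas):
    """personas: list of (bot_probability, was_suspended) pairs.

    Returns the suspension rate within each probability band,
    or None for a band with no personas.
    """
    bands = {"below_25": [], "middle": [], "above_75": []}
    for p, suspended in personas:
        key = "below_25" if p < 0.25 else "above_75" if p > 0.75 else "middle"
        bands[key].append(suspended)
    return {k: (sum(v) / len(v) if v else None) for k, v in bands.items()}

# Hypothetical personas: (model probability of being a bot, suspended a year later).
sample = [(0.10, False), (0.20, False), (0.50, True), (0.60, False),
          (0.80, True), (0.90, True), (0.95, False)]
rates = suspension_rate_by_band(sample)
```

A pattern like the one reported, high suspension rates only in the high-probability band, lends after-the-fact support to the normative labels without proving any individual persona was a bot.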
A
I suspect some of that is because this was on Twitter and it was after January 6th; they made a pretty public announcement of culling those that shared misinformation. So you can't say definitively whether a persona was a social bot just because it lost some followers, etc., but it does speak to them potentially being part of a larger network of bad influencers, if you will. So that's just another technique, some forensic techniques you might want to consider as you defend your choice of stimulus or approach.
B
All right, okay! Well, again, thank you so much, Ryan, for speaking to the working group. It was a pleasure, and I'm sure there will be folks who will reach out to you after this and ask for copies of your publications or your future work, so stay in touch, and thank you again. Thank you, everyone, for joining the working group this morning; we'll see you a month from now, and have a great rest of your day.