From YouTube: Dmitry Khovratovich - Factoring Integers
Description
Materials: https://drive.google.com/drive/u/0/folders/149nD7tGtm7WsKLbuUtxyvdqEWig1NeLa
So I'll talk today about modern methods of factoring. Well, unfortunately, not very modern, because the main advances in this direction were made in the 80s and the beginning of the 90s, and the recent factorization records that we have seen are largely an application of these methods to the increase in computing power — the power that is in the hands of researchers, academia and so on.
So I'll try to give some overview without going deep into the math — just a bit deep, so that if you later would like to learn a bit more, this material may serve as a basis. I will also publish some additional material on Google Drive, with some simple explanations of the number field sieve and similar methods, which can be helpful if we decide to pursue this in the future.
So feel free to ask questions in between.
So what actually is the problem of factoring, as it's formulated? Suppose we have a big number N, which has n bits. The task is to find its prime decomposition, that is, its factorization into prime powers. This factorization is known to be unique for integers, up to the order of the factors — there cannot be two different ones.
That is not a property of fields — every field element has an infinite number of representations as a product — but it is a property of the ring of integers, of the ring of polynomials, and of some other so-called Dedekind domains, which include some algebraic number fields: extension fields where you add roots of certain polynomials, or square roots of negative numbers, and so on. So there are some fancy sets where factorization is unique, and this is used to some extent in factoring algorithms, but not everywhere.
Let's go on. If a number cannot be factored, it's called a prime number, and primality testing is really easier: there exist primality tests, polynomial in the number of bits, that determine whether a number N is prime. Factoring is, of course, an NP problem, because you can quickly check that a factorization is correct, but it's not known to be NP-hard, I think.
Can we simply factor by brute force? We just go over all numbers up to the square root of N and, if a number is prime, we test whether it divides N. Of course, when we exhaust all the numbers up to the square root, we have exhausted all possible factors, and what remains can be bigger than the square root of N, but it must be prime — a single prime that remains. The complexity of this is about the square root of N.
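A minimal sketch of this brute-force approach (Python, for illustration):

```python
def trial_division(N):
    """Factor N in roughly sqrt(N) divisions."""
    factors = []
    d = 2
    while d * d <= N:          # only candidates up to sqrt(N) are needed
        while N % d == 0:      # divide out each factor completely
            factors.append(d)
            N //= d
        d += 1
    if N > 1:                  # whatever remains above sqrt(N) must be prime
        factors.append(N)
    return factors

print(trial_division(143))     # [11, 13]
```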
Now there is some interesting stuff. Suppose that for some numbers x_i bigger than the square root of N, it happens that when we compute the square of x_i and take it modulo N, we get a number z_i that factors into powers of two, three and five only — just two, three and five: some 2 to the power a_i, 3 to the power b_i, and 5 to the power c_i. We can actually check that: for every x_i we try, we compute z_i this way.
If it factors this way into two, three and five — and since we have three prime factors, I actually need four such relations. Why is that? Because if I collect four different relations like this, with sufficiently big x bigger than the square root, then I can compose a matrix whose entries are the powers of these prime factors, and then I look for a linear combination of rows that is zero modulo two. Why modulo two? Because if, when we sum up some rows, every column sums to 0 modulo 2, this basically means that those exponents sum to even numbers; and this means that if I multiply the corresponding z_i and z_j together, the exponents add up to even numbers, so the product becomes a square.
If we put this together, everything sums up: in the resulting vector we get some u_2 for the exponent of two, u_3 for three and u_5 for five, and if modulo 2 the result is all zeros, this means that we now have a product in which every prime appears to an even power. Instead of having 2 to the something, 3 to the something and 5 to the something, we have 4 to the something and so on — and this basically means that the product is a square.
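In symbols (a compact restatement of the argument above, using the notation of the talk):

$$x_i^2 \equiv 2^{a_i}\,3^{b_i}\,5^{c_i} \pmod{N},$$

and for a subset S of relations in which $\sum_{i\in S} a_i$, $\sum_{i\in S} b_i$ and $\sum_{i\in S} c_i$ are all even,

$$\Big(\prod_{i\in S} x_i\Big)^{2} \equiv \Big(2^{\frac{1}{2}\sum a_i}\,3^{\frac{1}{2}\sum b_i}\,5^{\frac{1}{2}\sum c_i}\Big)^{2} \pmod{N},$$

i.e. $X^2 \equiv Y^2 \pmod N$, so $\gcd(X - Y, N)$ has a good chance of being a nontrivial factor of N.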
For example, suppose that we would like to factor the number 143. I take the square root of 143, which is smaller than 12, and I compute squares of numbers of slightly increasing size, modulo 143. 14 squared, 15 squared and 16 squared are not divisible by only two, three and five; but 17 squared mod 143 is just 3, 19 squared is 75, and 21 squared is 441, which equals 12 mod 143 — that is, 2 squared times 3 — and so on. And if you look at this, then if we multiply the relations for 17 and 19, the exponents become even: (17 times 19) squared is congruent to 3 times 3 times 5 squared, which is 3 squared times 5 squared — so (17 times 19) squared is congruent to (3 times 5) squared, that is, 15 squared, modulo 143.
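A few lines of Python checking this worked example end to end (the gcd step at the end is the standard way to extract a factor from the congruence of squares):

```python
from math import gcd

N = 143
X, Y = 17 * 19, 3 * 5                 # (17*19)^2 = 3*3*5^2 = (3*5)^2 (mod 143)
assert (X * X - Y * Y) % N == 0       # X^2 = Y^2 (mod N)
print(gcd(X - Y, N), gcd(X + Y, N))   # 11 13 -- the prime factors of 143
```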
Okay, so this method is very nice, but the problem is that the fraction of numbers divisible only by 2, 3 and 5 is very small, and we have to adjust this method somehow to benefit from it. The matrix part, where we looked for a linear dependency, is very simple, because there are only three factors; but look at how many numbers we had to test: you see that I, for example, tested twenty numbers and only in four cases did I get a nice decomposition.
So the smarter methods of factoring introduced what's called smoothness. We call a number B-smooth if all its prime factors are smaller than B (or smaller than or equal to B). In our case, if we say it's strictly smaller than B, then clearly the previous case, where we considered only two, three and five, was 6-smoothness; but of course we can consider a bigger smoothness value. So how does this work? We take some x as before and test it.
If yes, we add it to our table, and this is called sieving. If we find about B such x (well, B over log B, if we want to be formal), then we have a matrix: if z is B-smooth, then in our table of exponents there are at most B columns, because every column corresponds to the exponent of some prime, and there are at most B primes below B. So, likewise, in our matrix...
...there will be about B columns (B over log B), and if we find about B such x, then after applying, for example, Gaussian elimination to the matrix, we can find some linear dependency, and it works in pretty much the same way as before: some product of squares gives you another square, and when we aggregate these relations and take them modulo N, we get X squared congruent to Y squared, and we are done.
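A minimal sketch of this linear-algebra step in Python (Gaussian elimination over GF(2), with the bookkeeping needed to recover which relations were combined; the demo vectors are the 143 relations from before):

```python
def find_dependency(rows):
    """rows: list of exponent vectors. Return indices of a subset whose
    vectors sum to zero modulo 2, or None if the rows are independent."""
    n = len(rows)
    # each row as a bitmask of its odd exponents; track combined relations
    masks = [sum((e & 1) << j for j, e in enumerate(r)) for r in rows]
    combos = [1 << i for i in range(n)]
    pivots = {}                          # column -> row holding the pivot
    for i in range(n):
        while masks[i]:
            col = masks[i].bit_length() - 1
            if col not in pivots:
                pivots[col] = i
                break
            j = pivots[col]
            masks[i] ^= masks[j]         # XOR = addition modulo 2
            combos[i] ^= combos[j]
        if masks[i] == 0:                # row reduced to zero: dependency
            return [k for k in range(n) if (combos[i] >> k) & 1]
    return None

# exponent vectors of 17^2 = 3 and 19^2 = 3 * 5^2 (mod 143) over base [2,3,5]
print(find_dependency([[0, 1, 0], [0, 1, 2]]))   # [0, 1]: combine both rows
```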
So what's the complexity of this, in principle? Consider the matrix step first. We work with a B-times-B matrix, but interestingly, we don't need B cubed operations to find a linear dependency — to find, basically, an element of the kernel — because the matrix is very sparse. There are special algorithms, called block Wiedemann and block Lanczos, for sparse matrices, that find...
...elements of the kernel of a sparse matrix, and these algorithms work in about B squared times the sparsity — so it's about B squared. This is one step that, of course, can be improved in hardware. There will be improvements to the sieving too, but for the matrix step I can already tell that there are several suggestions for how to make it much faster and much more efficient in hardware, so it can be considered independently of the sieving part.
The sieving is the really complex part of the number field sieve, but the matrix step is very simple and can be explored, I think, relatively easily — how fast it can be on custom hardware running these algorithms, or modifications of them that are suitable for multiple cores and the like. So what about the sieving step?
The probability of smoothness decreases significantly as z grows, and we have to take this into account. So the recipe for sieving is: we need B such numbers, and basically the complexity of finding one number is one over the fraction of B-smooth numbers among our outputs for a given z size, multiplied by the complexity of the test — how expensive it is to test that z is B-smooth. Actually, what's really interesting is that the modern sieving methods are quite fast in the smoothness test.
Of course, since the matrix step is about B squared, we would like this sieving cost to be about B squared too, so that both steps are balanced; and if we make the test complexity almost 1, or close to 1, then the trade-off sits where the smoothness probability is 1 over B. But it of course depends on how big z is: the smaller z is, the bigger we can take B, and the bigger the numbers we can basically factor. Questions so far?
A
Yeah
so
actually
I
like
to
stress
that
this
smoothness
property
is
very,
very
important
thing.
It's
actually,
why
factoring
is
much
faster
than
brute
force.
Factor
in
that
square
root
of
n
is
because
we
can
in
integers
there
is
this
exist,
this
notion
of
smoothness,
and
there
are
numbers
that
factor
that
this
way
so
in
in
other
sets.
We
don't
have
this
notion
and
because
of
that
problem,
similar
to
factoring
like
this
could
look
are
much
harder
there.
So
this
is
the
crucial
property.
So what about the sieving step? I'll go through how exactly sieving works in two algorithms — the quadratic sieve and the number field sieve — and then I will explain how hardware can help there. First, about smoothness: how do we actually get factoring that's faster than the square root? If B is a constant, like the 6 we have used, then the probability that a number is B-smooth is tiny — it decays almost as fast as one over N.
But if B is big, like the exponential of the square root of the logarithm — not the logarithm to the one half, but e to the square root of the logarithm — then the probability that a number is smooth is about one over the square root of B; and because of that, if we need to find B such numbers, we spend about B to the 3/2 time, while the matrix step is B squared. So by further tuning this bound we can arrange that numbers are factored with about this complexity. I will be a bit more precise in the next algorithm, which is called the quadratic sieve.
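For reference, the standard smoothness estimate underlying these counts (a known fact from analytic number theory, not spelled out in the talk): the fraction of B-smooth numbers below a bound z is approximately

$$\Pr\big[\,z' \le z \text{ is } B\text{-smooth}\,\big] \;\approx\; u^{-u}, \qquad u = \frac{\ln z}{\ln B},$$

which is why shrinking the candidates z (as the quadratic sieve does next) or raising B makes smooth numbers dramatically more frequent.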
So what's interesting about the quadratic sieve is that it can be explained relatively easily, compared to the number field sieve. What are we doing in the quadratic sieve? The first nice step is how we select our x: we select x close to the square root of N, so x equals the square root of N plus some epsilon (of course, the integer part of the square root).
Then, if we compute x squared modulo N, the N term drops out — this part becomes almost zero — and what remains is epsilon squared plus two epsilon times the square root of N. This makes our number z of size about the square root of N, and because of that it's much more likely to be B-smooth than a random residue modulo N: a number that has half the digits of N is much more likely to be B-smooth. So we are going to find about B (over log B) such x.
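A minimal sketch of this relation collection in Python (with a naive trial-division smoothness test; the actual sieve replaces that test, as described below). The tiny N and factor base are illustrative choices:

```python
from math import isqrt

def smooth_exponents(z, factor_base):
    """Exponent vector of z over factor_base, or None if z is not smooth."""
    exps = []
    for p in factor_base:
        e = 0
        while z > 0 and z % p == 0:
            z //= p
            e += 1
        exps.append(e)
    return exps if z == 1 else None

N = 143
factor_base = [2, 3, 5]
relations = []
for x in range(isqrt(N) + 1, isqrt(N) + 40):   # x just above sqrt(N)
    vec = smooth_exponents((x * x) % N, factor_base)
    if vec is not None:
        relations.append((x, vec))             # a row of the exponent matrix
print(relations)   # includes (17, [0, 1, 0]) and (19, [0, 1, 2])
```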
We construct our equality of squares and we are done. But how do we test smoothness? The smoothness test is where it gets interesting. We try x in sequence, so the arguments form an arithmetic progression: we take the integer square root of N plus 1, plus 2, and so on, up to plus B squared, and evaluate the polynomial at these points. And how we work with that is why it's called the quadratic sieve.
So remember, we constructed the polynomial Q(x) = x squared minus N, evaluated along an arithmetic progression. To find smooth values, we take one prime q and find the roots: we solve the equation x squared minus N congruent to 0 mod q for x at the beginning of the progression. We solve it, and then, suppose...
...this is true for some number x. Then, if Q(x) is divisible by q, we can add to the argument any multiple of q, and — along an arithmetic progression with equal gaps — the outputs of our polynomial will all be divisible by q, and we can divide them by q, just like in the sieve of Eratosthenes. Eventually, if we repeat this many, many times for different q, some of the numbers will boil down to 1, because we have factored out all their prime factors.
So the way this sieve works is that within a very long array we find some position to start with, and then every q-th number we divide by q — we know it will be divisible — and we repeat this for all the primes in the factor base. Eventually, what remains as ones — and there must be about B numbers out of the B squared — are exactly the B-smooth ones. Questions?
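A toy version of this sieving step (Python; the modulus, factor base and interval length are illustrative assumptions — 1649 is a classic textbook example where 41² − N = 2⁵ and 43² − N = 2³·5²):

```python
from math import isqrt

N, B = 1649, 5
factor_base = [2, 5]            # primes q <= B with x^2 = N (mod q) solvable
start = isqrt(N) + 1
length = B * B
vals = [x * x - N for x in range(start, start + length)]  # Q(x) on the interval

for q in factor_base:
    roots = [r for r in range(q) if (r * r - N) % q == 0]  # roots of x^2 = N mod q
    for r in roots:
        first = start + (r - start) % q   # first x >= start with x = r (mod q)
        for x in range(first, start + length, q):
            while vals[x - start] % q == 0:   # divide out q along the progression
                vals[x - start] //= q
smooth = [start + i for i, v in enumerate(vals) if v == 1]
print(smooth)   # [41, 43, 57]: the x with x^2 - N being 5-smooth
```

Combining 41 and 43 then gives (41·43)² ≡ (2⁴·5)² (mod 1649), and gcd(41·43 − 80, 1649) = 17 recovers a factor.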
It's the same as the sieve of Eratosthenes, but used in a bit different way. In the Eratosthenes sieve, if you have the prime 3, you cross out 3, 6, 9 and so on; but here you don't cross out 3, 6, 9 — you have to start from somewhere in between, from 47, for example: you figure out that Q(47) is divisible by 7, and then you divide Q(47), then Q(54), Q(61), Q(68), and so on. In equal gaps you find the values that are divisible by this prime, and you divide them.
We can attack much bigger numbers this way. Because if, in the naive method, you can spend, say, 2 to the 60 time, then for a random z modulo N the probability of being smooth is 2 to the minus 30, for example, and you can break, I don't know, 200-bit numbers with this complexity; but since in the new method the z's are of size square root of N, that basically means you can attack roughly twice bigger numbers.
(In response to a question:) So the sieve gives just an order-of-magnitude saving on the smoothness test cost. But is the test cost really that high? Well, if you do it naively, then you have B squared numbers — with smoothness bound B you test B squared candidates — and to test each one for smoothness you have to try B factors: you spend B trial divisions for this one, B trial divisions for that one, and so on. So eventually it costs about B cubed.
There are also methods that, if you know that your factor is small, let you find it — whatever it is — faster than by plain trial division, so you can optimize using that trick as well. There are, of course, quite many optimizations here. So how can you exploit this in hardware? Recall what I said about arithmetic progressions: we do the sieving along arithmetic progressions, so we access memory in a very predictable way. We take this starting position...
...then we step by the prime that we divide by, again and again. All these steps are pretty predictable, and it's possible to share this task among several cores. This is the quadratic sieve; the modern methods use what's called the lattice sieve, but they all share pretty much the same strategy, and the memory accesses here are very predictable. I think after you have generated these numbers, you can properly share them among your smaller computers, so that everyone can do their own task to filter out — to find — the smooth outputs.
Does that make sense? This is the second crucial point I wanted to make: the sieving step is very predictable. It requires a certain amount of memory, but it's very predictable, and you can even compute these values on the fly; and you can divide the work into segments and run the process independently for different segments. You lose a bit in the total complexity, but you can save a lot of memory for every single core.
OK, so we proceed. Interestingly, the optimal B is e to the square root of one half of log N times log log N — I will not compute it directly right here; because of the square roots, all these complexity estimates are a bit odd and not trivial to state. But the thing is, since B is of this shape, and the total complexity is B squared, when you square this you get...
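In the standard notation (a reconstruction of the formula being described, following the usual quadratic-sieve analysis):

$$B = \exp\!\Big(\tfrac{1}{2}\sqrt{\ln N \,\ln\ln N}\Big), \qquad \text{cost} \approx B^2 = \exp\!\Big(\sqrt{\ln N \,\ln\ln N}\Big) = L_N\big[\tfrac12,\,1\big].$$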
...the complexity of the quadratic sieve. The next, more advanced method is called the number field sieve, and the number field sieve uses very sophisticated algebraic tricks to make this number z even smaller. All the rest is almost the same, but they manage to make the z's even smaller than the square root of N: some root of N which is not the square root, but something in between that and the cubic root.
Let me just state the main points here; maybe later, if you would like to understand this a bit better, you can return to it. What's important here is that the algebraic part plays almost no role when computing this in hardware: the code doesn't change significantly from the quadratic sieve, and the properties of the architecture needed to break a number with the number field sieve...
...are almost the same as for the quadratic sieve. So you can have an understanding of the quadratic sieve and build an ASIC for the quadratic sieve, and it will be quite efficient for the number field sieve as well. So how does the number field sieve work? First, they take some polynomial of small degree — for current numbers, about five or six — such that it has a root m that we know modulo N; and this polynomial can actually be derived rather trivially.
We can see why: if the degree is five, then clearly this number m should be about the fifth root of N, and if we take the digit representation of N in base m — divide it into five segments — then we can read off such a polynomial. We are always able to find one, and there are many of them. Suppose also that, when we consider the polynomial not modulo N but just over the integers, it has some root, which we denote by alpha.
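A minimal sketch of this base-m polynomial selection (the simplest textbook method; production NFS code searches much harder for good polynomials):

```python
def base_m_polynomial(N, d=5):
    """Return (coeffs, m) with f(x) = sum(coeffs[i] * x^i) satisfying f(m) = N."""
    m = round(N ** (1.0 / d))   # m is about the d-th root of N (float is fine
                                # for demo sizes; use an integer root for big N)
    coeffs, n = [], N
    for _ in range(d + 1):      # the digits of N written in base m
        coeffs.append(n % m)
        n //= m
    return coeffs, m

coeffs, m = base_m_polynomial(10**20 + 39)
assert sum(c * m**i for i, c in enumerate(coeffs)) == 10**20 + 39
print(coeffs, m)                # f(m) = N, so m is a root of f modulo N
```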
By default, it probably has no rational roots — and actually, if it does, then we can probably find a factor of N trivially — so we assume it doesn't. Then what we can do is this: we consider elements which are polynomials built over alpha as a variable, and if we substitute the number m for alpha, we actually get a homomorphism from this set to the integers modulo N; and this nice homomorphism has very nice properties. Basically...
...in parallel we do the same sieving, but in the algebraic number field: we try to find smooth numbers of this kind, and if they are smooth, then by using the same linear algebra we can find a product that is equal to some beta squared — a square that we know exists, even if we haven't computed it yet — and then it's possible to proceed.
And what's nice here is that we search for pairs a and b such that the integer a + b·m is smooth and a + b·alpha is smooth as an algebraic number, and we can make both of them rather small. That's the main trick: if a and b are small enough, then — remember that m is some small root of N — this guy is small, and this guy is also rather small.
So even though we need both of them to be smooth, the probability is much higher than for the z's that we sieved before; and if both of them are smooth, then after we efficiently find sufficiently many such pairs a and b, using the same linear-algebra trick we find X squared congruent to Y squared. So that's basically the core of the number field sieve. To summarize — even if you don't understand everything, like I didn't the first ten times I read about it...
(Answering a question:) I think we can view them as polynomials in alpha with integer coefficients, because these coefficients are always integers in our case, I believe. Actually, here the homomorphism is stated over Q, but I think in all the further explanations they work over Z adjoined with alpha, yeah.
Thank you for that. So basically, finding smooth elements of this set Z[alpha] is indeed not trivial, because we cannot divide that easily in that field; but what we can do is map each such number to the integers using a simple formula — we substitute it into what is called the norm — and fortunately the norm is multiplicative, so if this element is a square, then so is its norm.
So what we basically do is find elements whose norm is B-smooth, and after composing sufficiently many norms we find a product of elements whose norm is a square; and because of that, there is a very high chance that the product of the algebraic elements themselves is also a square. Oh yeah, I think I'll skip these details.
Yes, there are some explanations of why these numbers are small: basically, they end up around N to the 0.4, and this reduction is sufficient to give us the increase in the length of the numbers we can factor. We can translate these ideas into a concrete complexity, and there is a bit of a weird thing here.
If n is the number of bits, then the formula is: we take the cubic root of the number of bits, we also take the logarithm of the number of bits raised to the power one over 1.5 (that is, two thirds), multiply all of this by 2.4, and deduct 16. This gives you the complexity as a power of two, and the square root of this is the smoothness bound. So, practically, for the recent factorizations the smoothness bound is about 2 to the 30, and the cost is about 2 to the 60 to 2 to the 64.
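Written out (a reconstruction of the rule of thumb just described; the constants are consistent with the standard NFS estimate $L_N[1/3, (64/9)^{1/3}]$ converted to bit lengths, with the −16 acting as a calibration term):

$$\log_2(\text{cost}) \;\approx\; 2.4\, n^{1/3} (\ln n)^{2/3} - 16, \qquad \log_2 B \;\approx\; \tfrac{1}{2}\log_2(\text{cost}).$$

For the 795-bit RSA-240, for instance, this gives a cost in the low 60s of bits, with a smoothness bound around $2^{30}$, matching the numbers quoted above.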
That's of course a good question, but if we translate it into core-years using the recent factorization records, then we see that each elementary operation here is about 50 CPU cycles — some kind of basic big-integer operation, which takes about 50 cycles. So this is the complexity of the number field sieve. But this is just the computational complexity, the number of operations; it doesn't tell us, so far, what the complexity of breaking it is in dollars, and that's actually, of course, very interesting for us.
What's the actual complexity of breaking this thing in dollars? We know from the recent factorization records how many core-years they have spent, but these core-years are very weird core-years: these are not GPU-years, these are mainly cluster-years — clusters of different sizes, in different countries, run by different people and so on. So these numbers are an aggregation of something, and this is not very precise; but what's interesting is that the architecture here is x86 to a big extent — just normal clusters.
What we can do is try to translate the cost of these 900 core-years into electricity cost. How do we do this? We take the number of years, we calculate how many hours that is, we calculate how much energy one core consumes, and we calculate how much one kilowatt-hour costs; and basically what I get is that this 795-bit number cost only eight thousand dollars' worth of electricity.
Eight thousand dollars' worth of electricity — so electricity is not the dominating cost here. But we know from mining that, for very big computational efforts, electricity comes to dominate, and this actually means that from the electricity perspective these numbers are very small: this one costs about eight thousand dollars of electricity, and that one about thirty thousand.
So basically, I take the number 900 — this is 2 to the 9.8. Then, how many kilo-hours are there in a year (kilo-hours meaning thousands of hours)? It's about 8.8, because we have 24 hours per day, 365 days, and so on. Then I take a rough estimate that one core in a cluster consumes about 30 to 50 watts — this is, of course, not very precise; it can go up and down a bit, maybe a factor of two — and then I...
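The arithmetic, spelled out (assuming roughly 40 W per core and industrial electricity around \$0.025 per kWh, consistent with the figures above):

$$900 \text{ core-years} \times 8{,}760\,\tfrac{\text{h}}{\text{yr}} \times 0.04\,\text{kW} \approx 315{,}000\ \text{kWh}, \qquad 315{,}000 \times \$0.025/\text{kWh} \approx \$8{,}000.$$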
Okay, so what can we expect from the number field sieve on ASICs? There are two main points here. One thing is about sieving: we test all of these B squared integers for B-smoothness, and there are methods that do this using O(B) memory and O(B squared) time. We need the memory because we store the resulting B numbers, but fortunately we don't have to store much more than that, using the aforementioned segment-by-segment processing.
Yes, and also another thing is that we now have extrapolations from this amount of core-years; but the thing is, this is not the actual amount of computation they have done — it's how much time they have spent. Probably a big fraction of these core-years was spent on I/O, you know, cache misses or memory accesses or whatever. So it doesn't mean that the number of operations is exactly this number; maybe, because of the x86 architecture...
...this number can be reduced significantly in terms of the number of operations actually being done. The estimate here is based on the assumption that these core-years were spent entirely on computation; so if, say, only a thousandth of them went to computation, there should be a subtraction of another 10 bits here when we calculate the cost.
This basically means that we can increase B to balance these steps again: if this step becomes much cheaper for some reason, then we can rebalance by increasing the smoothness bound, and with an increased smoothness bound we can break bigger integers even using the same algorithms, just by properly balancing things together. Do you see the point?
So currently both steps take B squared; but if this one takes not B squared but B to the 1.5, then clearly we can rebalance them and use some different B so that they again cost the same — maybe we should test more integers, with a different smoothness probability, so that both steps again spend the same work.
The 1.5 comes from parallel algorithms that work with sparse matrices: there are suggestions by DJB, by Daniel Bernstein, who proposed parallel versions of these block Wiedemann and block Lanczos algorithms, which are by themselves not very parallel. His suggestions are mostly theoretical ones, but I think there are others which can be made practical. So it's very interesting.
Yes, there are static and dynamic RAM, and it's of course a very interesting question which one should be used here. I think it depends on the algorithm and on which kind of memory it uses; I think the one that you pay for only when you use it — you can turn it on and off — is bigger, and by itself it requires a bit more space on the chip, and so on.
(Interlocutor:) I mean, I don't think that's such a big deal, because even with DRAM, the cost when you don't use it is actually not that high — you have to refresh it, but that is very cheap. I mean, a good laptop doesn't really significantly drain its battery in standby, and it still refreshes its DRAM, right?
In my example, I had it so that I needed only 3 and 5, so I didn't need more. But the chance of that happening with real numbers is not so high, and it's actually much more likely that you will get some spurious solutions that you have to filter out; so, as far as I know, people even create matrices with many more rows than needed, because otherwise they get some solutions that turn out to be useless.
Okay, so what else is interesting: I made some estimates. In one case I just extrapolated from the current CPU factorization record, and in other cases I tried to figure out what the maximum ASIC advantage could be. I got that, if we are talking about cores that spend about 30 watts, then the biggest advantage would be about a thousand — an electricity advantage. There are some more detailed calculations in my paper, in my report.
I will send the link soon. I also computed a somewhat conservative scenario, where I added some small advances — like a factor of 2 to 3 from algorithm improvements, and some from Moore's law and so on. The results don't differ significantly.
There is also a super-conservative scenario, where I analyzed the case when we have found this B-to-the-1.5-time algorithm for the matrices, and then I tried to estimate the security in this setting.
How did I do that? I took an 80-bit-security key and tried to figure out: if you take the current Bitcoin hash power — if we ran not SHA-256 there, but the sieve — how much would it cost, in terms of electricity, to break 80 bits? And I actually got that you would have to spend about $50,000 for this using current mining power — well, at the Bitcoin price of a month ago, before the fall. And then, 128-bit security...
For 128-bit security, it's reasonable to define it as the cost of a 128-bit key recovery on ASICs, which comes out to something like 2 to the 67 US dollars — of course far beyond anyone's capabilities. And what is 256-bit security? I say that a system has 256-bit security if it remains 128-bit secure when you cut out half of the key, or if all algorithms get a quadratic speed-up — that's actually the same thing. Under these definitions, I analyzed the perspective of different modulus sizes, and here are the numbers.
So you see that in this metric, if we stick to CPUs — to the clusters on which the most recent numbers were broken — then 80-bit security is a 950-bit modulus and 128-bit security is 2,850 bits. But if you consider the conservative scenario, where we have some advances in ASICs bringing a factor-1000 reduction in the electricity cost, then 80 bits is not a 950-bit number but about 1,500 bits, and correspondingly each security level requires about 1,000 bits more for the number to be broken.
(On discrete logarithms:) If the number is B-smooth, then with the same sieving principle — once you find sufficiently many such B-smooth numbers — you can pose the same kind of linear system, and basically you get the discrete logarithm of every number in the factor base: the logarithm of every prime number smaller than B. And after you've done that, how do you compute the discrete logarithm of a given element? You take different...
...if you find B such relations, then you have the same matrix, actually, but you compute your linear dependency not modulo 2 but modulo p minus 1 — if the prime is p, this basically means that all the exponents wrap around modulo p minus 1. And if you have about B B-smooth relations with different exponents, then basically you can solve the linear system; the unknowns of this linear system will be exactly these discrete logarithms, and you find them by solving it.
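In symbols, this index-calculus variant rests on the fact that, for a generator $g$ of $\mathbb{Z}_p^*$ (writing $\log$ for $\log_g$):

$$g^k \equiv \prod_i p_i^{e_i} \pmod p \quad\Longrightarrow\quad k \equiv \sum_i e_i \log p_i \pmod{p-1},$$

so each B-smooth relation is one linear equation modulo $p-1$ in the unknowns $\log p_i$.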
For example, I have calculated myself that for Equihash — which Zcash uses, and which requires about two hundred megabytes of RAM — if you take an ASIC for it, then again you need two hundred megabytes; and basically what you do with the RAM is sorting: you just sort your RAM several times, that's how it works in general. And the advantage over a regular CPU, over a laptop, is about one thousand. Okay.
(Interlocutor:) The question is: does it all have to be in the same memory? Because the problem is, if you need high bandwidth, then you probably have quite a bit of power consumption; if you can lower your bandwidth requirements, that would be less. What if you could split it across 1,000 different memories, each with much lower bandwidth requirements, and merge it all together — like, for example, do all the sieving steps for different primes on different hardware and merge the results together at a later step? Is that possible? Yeah.
In that case you can probably lower your bandwidth requirements extremely and go with much lower power requirements for the memory, and I think you will end up with something very low already — like in a normal computer, the memory isn't actively cooled, right? You know it doesn't really consume that much power.
Good point. I think, as far as I understand, in the factoring of integers not much progress has been made in the last 20 years; but in discrete logarithms there have been many different approaches, because there are these different algorithms for prime fields, for extensions of prime fields — which are important for pairing-based cryptography — ...
A
There
are
algorithms
for
fields
of
characteristic
2
and
so
on.
So
there
are
like
several
groups
of
these
columns
and
they
all
follow.
This
number
should
see
if
and
one
another
guard
so
I
think
that
if
they
have
found
some
something
really
really
important,
that
would
be
translated,
probably
to
the
number
field
safe.
But
on
the
other
hand,
this
the
algorithm
is
itself
pretty
complex,
I.
Think
it's
the
fact
why
it
works.
It
requires
significant
algebraic
background,
so
the
code
itself
is
not
that
that
difficult.
But, as far as I understand the authors, they spent several years just to make all the assumptions realistic: they basically had to add a few more components to the procedure before it started working for all integers. In the beginning it worked only for very specific types of integers, like 2 to the 2-to-the-m plus 1, or something like this, and to make it work for all integers they had to go through sufficiently many obstacles. So it's a really difficult problem, yeah.
That was around '90 to '93, and one of the prominent guys in this research was Leonard Adleman — you may remember him as the 'A' in RSA: Rivest, Shamir and Adleman. Some people say that Adleman did the minimal amount of work but was still included in the RSA list of authors; but as far as I can tell, this has been very well compensated by his contribution to factoring — he is one of the very few people who made this NFS really possible.
(Interlocutor:) ...We are kind of at the edge, and there isn't that much more improvement possible; and the other theory is that the bar set by NFS is so high that it would be crazy for a young cryptographer now to go into factoring, because they would have to spend so much time just getting there, and it wouldn't be that likely that they find a new algorithm. I wonder what your assessment of that is. Yeah.
I think that, actually, it's not a cryptographic problem, in fact; it's much more a mathematical problem, because there is almost no cryptography involved in it, and there is some deep mathematics involved. When you, for example, try to read about these imaginary quadratic groups, it's pretty much similar: you would have to have a really deep understanding of algebra before you figure out how all this works. I think that just proves that it's really an ambitious task. And secondly...
A
Don't
have
that
many
mathematicians
this
days
right.
So
as
far
as
I
understand,
the
number
of
working
mathematicians
is
decreasing,
so
many
people
go
into
more
applied
science,
whereas
this
factoring
he
is
it's
actually
more
theoretical.
So
my
gut
feeling
is,
we
shouldn't
expect
real
advances
from
the
theoretical
side,
but
we
definitely
should
expect
some
advances
in
practical
side.
So
when
I
see
this
as
far
as
linear
algorithms
and
this
see
being
done
on
a
regular
CPUs
I
feel
that
it
can
be
much
much
more
sped
up.
A
B
A
Yeah, yeah, something like this. I think people who are closer to the discrete logarithm research could give a better answer, because there have been some advances there — maybe not in this particular case, but in some sister algorithms there were advances. Maybe they can tell a bit better about the unexplored area and the potential to use it.
Yeah, so it's, I don't know, 30,000 cores — what is that, three million dollars? Maybe even cheaper. So of course that's more expensive than the cost of electricity. But when we talk about custom hardware: if we take a chip of some moderate size — I don't know, ten by ten centimeters or something — and imagine constructing, I don't know, a million of these chips for factoring, I am pretty sure that their cost would be smaller than the cost of running them over time.
(Interlocutor:) I mean, I think asymptotically you are certainly right, depending — if we assume that you are okay with factoring it in one year. I would say, if you want to factor it in one week or one month, like ASAP, and you only have one number to factor, then very likely the cost of hardware will actually dominate, because you're basically only running all your hardware for that amount of time. But...
That's a good question. I think it is — well, we all have these modular reductions, right? So I think the modulus — this number — is used in the modular reductions, and if the modular-reduction circuits can be optimized for a particular number, then we can win there. In all these factorization records they of course knew which number they were trying to break, but I think they couldn't really exploit that, because they just used regular CPUs and no custom hardware.
(Interlocutor:) I mean, another way to look at these security assumptions is — because, you know, we talk in terms of dollars, but maybe the bottleneck is actually just how much electricity the world can produce, right? You know, how much is the world producing, and if one percent of the electricity were dedicated to this problem, how much time would it take to factor? And if it's more than ten years, then, you know, we can definitely say: well...