Description
Slides: https://drive.google.com/file/d/1XfvTReuBgQZE95_RSefO-YhhhGh5Emhj
Today, what I wanted to do is basically go through the basics of how fully homomorphic encryption works. Fully homomorphic encryption, as I'll get into a bit later, is a kind of encryption that allows you to run computations on encrypted values, and there are a lot of things that are really interesting about this: you can do privacy-preserving computation really easily, and it's also an important building block in some multi-party computation protocols.
It's an important building block in a lot of the proposed obfuscation protocols that people are starting to come up with. And the interesting thing with fully homomorphic encryption is that it sounds scary: there's a common impression that it's this very complex form of cryptography that was only figured out a couple of years ago, and so it must be really complicated to understand. It turns out it's completely not like that.
It turns out that fully homomorphic encryption protocols are literally, I would say, simpler to fully understand than even a lot of elliptic curve based constructions. Now, the reason we're not using fully homomorphic encryption for everything is basically that it's not very efficient: it has high overhead, and it has ciphertexts that are very large. I'll get into why this is the case as we go as well.
So, to start off, let's talk about what fully homomorphic encryption is. The goal of fully homomorphic encryption is basically to have an encryption scheme where you can compute on encrypted data. So if you have an encryption of x, you want to be able to turn that into an encryption of f(x), for some function f, without being able to decrypt: without being able to determine x or determine f(x). You can compare this to the partially homomorphic protocols that we know about already.
So, if you think about even elliptic curve points (now, this doesn't technically satisfy the definition of fully homomorphic encryption, because it's not an encryption scheme, it's just a group that has a homomorphism): if you take some number x and you multiply it by the elliptic curve point that's the generator, and then you add the generator multiplied by y, you get the generator multiplied by x + y.
This is a really nice property of elliptic curve points, basically that they do follow this kind of distributive property, and there are a lot of protocols that make use of it. So in the case of blockchains, for example, there are deterministic wallets with master public keys that rely on this elliptic curve homomorphism, and even zero-knowledge proof protocols based on elliptic curves, things like Bulletproofs.
A
Even
just
signatures
rely
on
this
room
on
this
kind
of
additive
homomorphism,
so
it
already
provides
a
lot
of
value
right
and
if,
instead
of
adding
you
want
to
be
able
to
multiply,
then
you
can
just
use
RSA
right.
If
you
just
use
kind
of
simple
RSA
encryption,
so
X
^
year,
then
X
y
^
e
equals
x
^.
U
times
y
^
yi,
and
so,
if
you
have
the
ciphertext
for
X
and
the
ciphertext
for
Y,
you
can
create
the
ciphertext
for
as
x1
right.
So we have this family of schemes that gives us additive homomorphism, and we have this family of schemes that gives us multiplicative homomorphism. Now the question is: can we come up with a scheme that lets us do both? Can we come up with some way of making a ciphertext so that you can do both adding and multiplying on ciphertexts? And if you can add and multiply, then you basically have general-purpose computation.
If you restrict your plaintexts to just being zeros and ones, then you can do pretty much every logic gate. If you want to do AND, then x AND y is just x times y. If you want to do OR, well, OR is basically: you can invert x, invert y, then do an AND, then invert again, or you have this simpler formula where you just say x + y - x*y. So if both inputs are 0, then this is obviously 0.
A
If
X
is
1,
then
you
know
you
have
a
1
here.
You
have
a
0
here,
I
mean
you
subtract
a
0
here,
so
you
get
1.
If
Y
is
1
and
X
is
0
its
symmetric,
so
you
also
get
a
1
and
then,
if
x
and
y
are
both
1,
then
you
get
1
plus
1
minus
1,
so
we
also
get
1
for
X
or
you
can
do
either
just
X
plus
y
and
that's
if
you
don't
care
about
the
numbers
staying
with
them
and
kind
of
the
range
0
&
1.
and just care about them being even or odd, or, if you do care about the numbers being within the range {0, 1}, then you can just do x + y - 2*x*y. So if x and y are both 1, then it's 1 plus 1 minus 2, and you get 0. So if you can add and multiply, then you can do pretty much whatever logic gates you want, and you can do arbitrary computation.
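The gate identities above can be checked directly with ordinary integer arithmetic. This is a plain sanity check of the formulas, not part of any encryption scheme:

```python
# Logic gates expressed as plain arithmetic on bits, as described above.
def AND(x, y):
    return x * y

def OR(x, y):
    return x + y - x * y

def XOR(x, y):
    return x + y - 2 * x * y

# Exhaustively check all four input combinations against Python's bit operators.
for x in (0, 1):
    for y in (0, 1):
        assert AND(x, y) == (x & y)
        assert OR(x, y) == (x | y)
        assert XOR(x, y) == (x ^ y)
```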
Now, it's not going to be efficient arbitrary computation, because if you imagine wanting to do some math on numbers, then you'll pretty much have to decompose those numbers into bits. Then you'll have to turn your addition and your other operations into circuits, and you have to evaluate those circuits, and so you can see how the complexity multiplies.
So imagine your private key is a big prime p; think of it as a really big prime number. Then, to encrypt a message, where the message is restricted to either zero or one, what you do is basically: you take a big random number, you multiply it by p, then you add a small random number multiplied by two, and you add your message.
So what we have here is basically a multiple of p that's offset by some noise, with the message hidden in the least significant bit of the noise. And if you want to decrypt, then you decrypt the ciphertext with the key by basically computing the ciphertext mod p, which kicks out the multiple-of-p term, and then you just do mod 2, which kicks out the noise term, and so you have your message.
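As a concrete illustration of the scheme just described, here is a toy sketch in Python. The parameter sizes are made up purely for readability and are nowhere near secure; a real instantiation needs much larger, carefully chosen parameters:

```python
import secrets

P = 1_000_003  # the secret key: a big prime (toy-sized here)

def encrypt(m, p=P):
    # ciphertext = (big random) * p + 2 * (small random noise) + message bit
    k = secrets.randbelow(2**32)
    r = secrets.randbelow(100)
    return k * p + 2 * r + m

def decrypt(c, p=P):
    # mod p strips the multiple of p, then mod 2 strips the even noise
    return (c % p) % 2

assert decrypt(encrypt(0)) == 0
assert decrypt(encrypt(1)) == 1
```

Note that decryption only works because the noise 2r + m stays well below p.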
By the way, if people have questions at any point, please feel free to ask; I'm happy to answer. So why is this scheme additively homomorphic? Basically: why does this approach, where you encrypt a message by taking a big multiple of your key, adding some noise, and then hiding your message in a bit of the noise, why does this work? Well,
A
Basically,
if
you
imagine,
you
have
two
cipher
texts:
each
one
of
these
cipher
text
is
going
to
be
a
multiple
of
P,
plus
some
noise
blossom
message:
multiple
of
P,
plus
some
noise
plus
a
message,
and
these
things
just
kind
of
naturally
distribute
distribute
right.
So
basically
the
CT
one
plus
CT
two
actually
is
a
ciphertext.
It's
a
ciphertext
in
the
correct
format,
because
you
just
have
a
multiple
of
P
that
is
k1
plus
k2.
Then
you
have
your
noise
and
the
even
part
of
the
noise.
A
It
also
adds
up
so
you
have
2
or
1
&,
2
or
2
over
here,
and
then
it's
just
2
of
our
1
plus
or
2.
And
then
your
messages
add
up
right.
So
m1
and
m2
basically
have
m1
plus
m2
over
here
and
and
note
that
this
is
addition
mod
2
right.
So
if
your
message
here
is
1-
and
your
message
here
is
1,
then
this
is
2,
but
the
2
over
here
is
indistinguishable
from
just
being
part
of
the
noise,
and
so
it
just
becomes
the
same
as
0.
A
But
if
we
just
want
to
do
binary
circuits-
and
this
is
totally
fine
right-
so
you
can
add
together,
two
ciphertext
and
you
get
a
ciphertext
that
is
of
the
crew
format,
of
the
addition,
or
rather
of
the
XOR
of
the
message
bits
now.
Why
is
this
multiplicative
way?
Homomorphic?
So
let's
say
you
multiply
together
your
two
cipher
texts.
A
So,
let's
figure
out
what
the
multiples
of
P
are
cuz.
We
don't
have
to
care
about
them.
Basically,
you
have
team,
K,
1,
P,
multiplied
by
all
three
of
these,
then
you
have
K
2
P
multiplied
by
all
three
of
these.
So
these
five
terms
go
over
here
now
you
have
two
times
so
now
we
have
to
figure
out
which
of
the
remaining
terms
that
are
not
multiples
of
P
are
even-
and
it's
basically
2
R
1
multiplied
by
this,
and
you
have
2
or
2
multiplied
by
this.
So you have a bunch of terms that get multiplied by 2, and then you have this one thing over here, which is not multiplied by p and not multiplied by 2, which is just the product of the messages. So if you take this expression and then you do mod p and then mod 2, the mod p is going to kill these five terms, the mod 2 is going to kill these three terms, and so you just have this one term left. So we see that this is additively homomorphic.
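Continuing the toy sketch from before, both homomorphic properties can be checked numerically. Same caveats apply: the parameters are illustrative only, and this only works while the noise stays well below p:

```python
import secrets

P = 1_000_003  # toy secret prime

def encrypt(m):
    # multiple of P + even noise + message bit
    return secrets.randbelow(2**32) * P + 2 * secrets.randbelow(100) + m

def decrypt(c):
    return (c % P) % 2

for m1 in (0, 1):
    for m2 in (0, 1):
        c1, c2 = encrypt(m1), encrypt(m2)
        assert decrypt(c1 + c2) == m1 ^ m2   # sum decrypts to XOR
        assert decrypt(c1 * c2) == m1 & m2   # product decrypts to AND
```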
We see this is multiplicatively homomorphic. Now, why does it have any level of security? Basically, it's the approximate GCD problem. If you did not have noise, so if we did not have this two-times-a-small-random term, then you would just have a bunch of multiples of p plus 0 or 1, and a bunch of them are going to be exact multiples of p, whenever the message is 0.
A
Then
you
can
just
use
any
GCD
algorithm
and
you
just
keep
by
just
the
Euclidean
algorithm
mean
in
just
a
GC
de
all
of
these
multiples
together
and
you
get
PE.
And
then
you
can
kind
of
break
this
game
right,
but
as
it
turns
out,
if
you
just
have
approximate
multiples,
then
figuring
out
what
P
has
becomes
much
harder
right.
So
the
security
of
this
cryptographic
scheme
is
based
on
under
the
hardness
of
computing
approximate
GCD,
as
opposed
to
computing,
and
if
exact
GCD
is,
which
is
easy.
There's a more general principle here, which is basically that solving systems of equations becomes much more difficult when those equations have errors, or what you can call noise. If you add some noise into your equations, then suddenly figuring out the hundred parameters that are parts of those equations, which you could normally figure out easily, becomes much harder. And this is pretty much how you make the scheme public-key as well.
You basically just provide a bunch of encryptions of 0, and then you can publicly encrypt 0 by just computing a random linear combination of these (and you can add some more error). And if you want to encrypt one, then you can just take these encryptions of 0 and add one. If you want something more generic, then into your public key you can add some encryptions of 1, and then, to encrypt one,
A
You
add
a
bunch
of
encryption,
so
0
and
then
an
odd
number
of
encryptions
of
1,
and
you
have
a
new
encryption
of
what
right
so,
certainly
any
fully
or
for
morphic
scheme
that
is
private
to
you
can
start
into
a
public
key
one
pretty
is
aware:
that's
not
a
problem
now.
Why
doesn't
this
work
already
right?
So
there's
two
problems.
One
of
them
is
that
multiplication
just
doubles
the
length
of
the
ciphertext
right.
A
So
you
can't
do
more
than
a
logarithmic
number
of
multiplications
period,
but
there
is
another
problem
which
is
overflow
of
the
errors.
Great.
So
if
you
remember
the
sum
of
the
cipher
texts,
kind
of
turns
into
a
new
cipher
text
over
this
form
and
then
the
product
of
the
ciphertext
turns
into
a
new
cipher
text
of
this
format.
We
can
look
at
what
happens
to
the
even
kind
of
error
terms
here
right.
A
When
you
add
the
error
roughly
double,
is
you
take
the
sum
of
the
errors
when
you
multiply
the
error
squares
right
so
over
when
you
had
r1
and
r2?
Here
you
have
a
2,
r1
r2,
and
so
the
bit
length
of
the
error
doubles,
and
so
once
again
and
the
maximum
a
multiplicative
depth
is
going
to
be
the
logarithm
of
the
number
of
bits
in
P
so
or
the
log
of
log
of
P
right.
The first category of solutions, basically, is that we do clever tricks to make multiplying only increase the error by a constant factor, and so instead of having log log p (the log of the number of bits of p) of multiplicative depth, you actually do have log p multiplicative depth: the multiplicative depth is proportional to the number of bits in p.
So, if you imagine we can make a protocol where every multiplication increases the error by some fixed factor, say a factor of a thousand, then that's only ten bits, and so, if your p, or rather your modulus, has a bit length of, say, 10,000, then your multiplicative depth goes up to 1,000, which is already huge. So this is one solution, and the second kind of solution is bootstrapping.
So bootstrapping, basically, is this kind of switching mechanism. Imagine that you have some ciphertext that has some noise, and you want to convert it into a fresh ciphertext that has less noise. What we'll do is basically take the circuit that represents decrypting x under some secret key, and we're going to evaluate it homomorphically.
So, first of all, you can encrypt the bits of x under some new key s2, but also we provide the bits of s encrypted under s2. What this basically means is that you have the regular public key for s2, and you also have an encryption of s under s2, and so, given the value of x in the clear, you convert that into its bits.
Then you convert those into encryptions of the bits of x. And then you have encryptions of the bits of x, you have encryptions of the bits of s, and you just compute the decryption function homomorphically: basically, you just compute this decryption process using these inputs that we have. And what this gives you: the decryption process returns one bit, and so you get one bit, which is the decryption of x. So it is the same bit that x represents, except it's encrypted under the key s2.
Fresh ciphertexts start at the minimum error level, kind of depth one, and this decryption circuit has some constant depth, and so the amount of error that's going to be in the result is actually going to be constant, even if there was a lot of error in the ciphertext before. So before I continue, maybe I should offer an opportunity for some questions, because bootstrapping is a little bit complicated to understand. So maybe just think for a bit and make sure that you understand what's going on here.
So bootstrapping just basically means that you're evaluating the decryption circuit. The place where there is error involved is basically that the process of homomorphically encrypting these bits is going to involve taking values from the public key, and those values already have some noise inside of them. So the process of executing the circuit after you have those values doesn't require you to add any extra randomness, but the bootstrapping key already has noise in it.
The reason you, the person evaluating the circuit, cannot see those bits is that you do not have s. Computing the decryption requires you to have s, and in your bootstrapping key you're not going to be given s; you're going to be given an encryption of the bits of s under s2, and so the only thing you can do is evaluate the decryption circuit in a way that gives you just encrypted output.
Exactly, so they have s encrypted under s2, they have x in the clear, and so they just encrypt x under s2. And Dec is public: it's just a circuit and it's part of the algorithm, so everyone has it. And so you just evaluate Dec homomorphically using bits encrypted under s2, and so you get 0 or 1 encrypted under s2.
No, I think the problem is that low error plus high error equals high error, and generally, for any function, if you throw in a low-error thing and a high-error thing, you get a high-error thing out. So, because the errors are just randomly generated, you're not going to be able to make them cancel in some way.
One subtle technical point: this x is encrypted under s, and here we're saying we encrypt it under s2. Technically, you could just set s equal to s2. So you can technically just encrypt the bits of x under s, and then you can have the bootstrapping key be the bits of s encrypted under s, and this makes things a lot easier, because instead of having lots of keys, or an entire chain of keys (that is, a chain of bootstrapping keys),
You
can
just
have
one
key,
so
why,
like
in
standard
descriptions
of
bootstrapping
until
people,
do
that?
Basically
for
for
weird
technical
reasons,
you
can't
prove
a
kind
of
a
security
reduction
straight
to
the
lattices,
to
the
lattice
assumptions,
and
so
there's
this
kind
of
special
term.
It's
called
circular
security
and
depending
on
how
whether
or
not
you're
one
of
these
cryptographers,
who
is
like
no,
you
have
to
have
proofs
versus
one
of
these
cryptographer
goes
like
yeah,
whatever
it's
fine
dog
like
it's,
your
choice,
whether
or
not
to
trust
or
to
our
security.
E
Good question, and I'm about to get to it in the next slide. So the main problem is the multiplicative circuit depth of this decryption: it involves a modular reduction, and x mod p has a circuit depth of around log log x, which is bigger than log log p, and this is higher than what the FHE scheme can support.
So we're going to skip over a few things. Craig Gentry came up with some really clever tricks in 2009, involving subset sums and other things I don't understand, to try to get around this problem, so to try to reduce the circuit depth of bootstrapping to the point where we actually can bootstrap in fairly simple FHE schemes. And Craig Gentry is great and we love Craig Gentry, and he's going to come back in the story:
he is one of the people who created the matrix-based FHE protocol. But we're going to go straight to Zvika Brakerski and Vinod Vaikuntanathan's work in 2011, which is the first scheme that really managed to get around this problem. So the schemes that I'm going to show from here on don't depend on approximate GCD; they depend on a slightly different but kind of similar-seeming assumption, which is the learning with errors assumption.
The learning with errors assumption basically says: if you have a system of equations, so you have a bunch of variables and a bunch of equations, normally you can solve them using Gaussian elimination and it's very easy. But if you just have approximate equations, so you have a whole bunch of equations (even if the system is really over-specified) that have a small additive error, then finding any of the variables becomes very hard.
So solving systems of linear equations where the outputs have errors in them is something which is hard, and which, as far as we know, is even quantum hard. All these mechanisms also reduce to the hardness of lattice problems, the shortest vector problem, and all of these other problems that are fairly well studied in mathematics. So basically, the schemes that I'm going to get into from here on are not going to depend on approximate GCD.
Let's get to BV 2011. The key is going to be a vector, so it's going to be a vector of numbers, s1, s2, up to sn, and the ciphertext is going to be a vector a that satisfies the property that the dot product a · s is the message plus an even error. And this is all going to be done modulo some q; it doesn't have to be prime, but it has to be odd.
So basically, you generate an a such that a1·s1 + a2·s2 + ... + an·sn = m + 2e, where e is your error. The connection between this and the learning with errors problem is basically that you can think of each of these ciphertexts as being an equation, and your secret key,
s1, s2, s3, s4 and so on, as the variables. The coefficients in the equation are going to be the a's, and the output is going to be basically some small number, which is 0 or 1 depending on what your message is, plus some error, and that error basically perturbs the values, and that makes it hard to extract s. So your ciphertext, if you're comfortable thinking in terms of vectors and dot products,
is just a vector a such that a · s equals the message plus 2 times an error, or you can think of it as this linear equation. One optimization is that you can set the first number in your secret key to 1, and that just makes it very easy to construct these a's: basically, you just generate all your other ai's randomly, and then you set your a1 to compensate for whatever the rest of the dot product gives you.
So, as I mentioned, an alternative interpretation of this vector interpretation is that a ciphertext is a noisy equation where the variables are your secret key. Addition is easy: basically, if you have an aL such that aL · s = m1 + 2·e1, and an aR such that aR · s = m2 + 2·e2, then dot products are additive, and so (aL + aR) · s
A
S
is
just
gonna,
be
the
sum
of
your
messages,
plus
the
sum
of
your
errors
and
the
way
that
you
decrypt,
obviously,
is
just
you
actually
compute
a
dot
s,
and
you
take
the
last
bit,
and
so
you
can
decrypt
the
cipher,
the
addition,
and
if
you
decrypt,
Li
the
addition,
you
get
twice
the
even
error,
and
then
you
have
the
kind
of
the
sum,
or
rather
the
XOR.
Your
two
messages
again
right.
So
addition
here
is
simple.
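Here's a toy numeric sketch of this vector ciphertext format and its additive homomorphism. The dimension, modulus, and noise bound are made-up illustrative values, and it uses the optimization above of fixing the first secret-key entry to 1 so that a[0] can absorb the slack:

```python
import secrets

Q = 2**31 - 1   # odd modulus (toy value)
N = 8           # key dimension (toy value)

# Secret key: first entry fixed to 1, the rest random mod Q.
s = [1] + [secrets.randbelow(Q) for _ in range(N - 1)]

def encrypt(m):
    # Pick random coefficients, then choose a[0] so that <a, s> = m + 2e (mod Q).
    a = [0] + [secrets.randbelow(Q) for _ in range(N - 1)]
    e = secrets.randbelow(50)  # small noise
    rest = sum(ai * si for ai, si in zip(a, s)) % Q
    a[0] = (m + 2 * e - rest) % Q  # works because s[0] == 1
    return a

def decrypt(a):
    # <a, s> mod Q equals m + 2e (small and positive here), so mod 2 recovers m.
    return (sum(ai * si for ai, si in zip(a, s)) % Q) % 2

assert decrypt(encrypt(0)) == 0 and decrypt(encrypt(1)) == 1

# Additive homomorphism: component-wise sum decrypts to the XOR of the bits.
c1, c2 = encrypt(1), encrypt(1)
assert decrypt([(x + y) % Q for x, y in zip(c1, c2)]) == 0
```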
Multiplication is harder. The problem here is that in the scheme I showed you before, you're just operating with single numbers, and single numbers can be added and multiplied. Here, you're operating over vectors, and vectors can't just be multiplied with each other. Well, they kind of can: you have this notion of an exterior product, where a vector multiplied by another vector is just a big vector that contains all of the products of the elements.
So if you imagine aL is a vector with ten elements, and aR is a vector of ten elements, the exterior product is going to be a vector of a hundred elements. That's the first times the first, the first times the second, the first times the third, all the way up to the first times the tenth; then you have the second times the first, and so on, all the way to the tenth times the tenth. Now, you have this
A
If
algebraic
identity
that
basically
says
hey
al
exterior
prod,
they
are
dot
product
it
with
s.
Exterior
product
test
equals
this
dot
product
multiplied
by
this
dot
product
rate,
and
so
remember
that
Al
was
one
ciphertext
AR
is
one
ciphertext
and
the
way
the
product
of
the
decryptions
so
m1
times.
M2
is
just
going
to
be
this
dot
product
times
this
dot
product.
It's
so
because
of
this
algebraic
identity
that
equals
the
same
thing
as
al
exterior
product
AR,
dot,
producted
with
s
exterior
product
s.
So if you take two ciphertexts and you just exterior-product them, so basically you just turn them into this big vector that contains all possible products of their elements, then that becomes a valid ciphertext of m1 times m2 under the key s exterior-product s. So you can multiply, but the length of your secret key grows quadratically, and so the challenge is, you need to have a way of going back, of going back to linear.
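The algebraic identity driving this multiplication, that the exterior product of the two ciphertext vectors, dotted with the exterior product of the key with itself, equals the product of the two original dot products, is easy to verify numerically. This is a bare-arithmetic check with made-up example vectors, not an encryption scheme:

```python
def exterior(u, v):
    # all pairwise products u[i] * v[j], flattened into one long vector
    return [ui * vj for ui in u for vj in v]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

aL = [3, 1, 4]
aR = [2, 7, 1]
s  = [5, 9, 2]

# (aL x aR) . (s x s) == (aL . s) * (aR . s)
assert dot(exterior(aL, aR), exterior(s, s)) == dot(aL, s) * dot(aR, s)
```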
What we're going to do is have this procedure that we call relinearization. The procedure for relinearization feels a bit like bootstrapping, basically, because what you do is you have encryptions of s under your new key, and then you just evaluate things under your new key, so it's kind of similar to that. But it's doing something slightly different: relinearization does not solve the error problem.
A
It
just
solves
the
problem
of
taking
a
ciphertext
under
the
key
s
times
s
or
s
exterior
product
s
and
turning
it
into
an
encryption
under
s.
So
the
realization
key
is
going
to
contain
a
kind
of
encryptions
of
s
is
j
times
2
to
the
D.
So
it's
going
to
contain.
Basically
s
is
J
for
all
I
and
J,
so
all
elements
of
s
exterior
product
s
and
not
just
those
elements.
it's also going to contain those elements times two, those elements times four, those elements times eight, and so on and so forth, going up pretty much all the way, for every power of two that's smaller than your modulus. And so what this lets us do is compute an encryption of the dot product of aL exterior-product aR with s exterior-product s, as a linear combination of the si·sj·2^d.
A
So,
if
you
imagine
this
expression
like
just
as
we
did
in
bootstrapping
we're
not
gonna
kind
of
evaluate
it
in
a
clear
because,
well
you
don't
want
the
other
person
executing
the
the
circuit
to
be
able
to
figure
out
the
output
in
the
clear.
So
instead
we're
going
to
evaluate
at
homomorphic
ly
under
either
some
new
Kia
or,
if
you're
willing
to
assume
sorts
of
our
security.
You
can
provide
under
s
if
these
encryptions
are
addressed
right,
so
we're
gonna
evaluate
this
whole
thing.
Homomorphic
we
just
as
a
linear
combination
of
all
these
powers
right.
Basically, what you're going to do is say, well, you're going to have 3 multiplied by s1·s1, 4 multiplied by s1·s2, 6 multiplied by s2·s1, and then 8 multiplied by s2·s2. And then, to avoid having to multiply ciphertexts (because multiplying ciphertexts makes the error blow up), you have these powers of two, and so the multiplication you can just express as a linear combination of these powers. So to express 3 times s1·s1, you're going to take this encryption of s1·s1
A
1
x
to
the
0
+
of
the
encryption
of
s,
1
s
1
x,
to
the
1,
and
so
this
gives
us
s
1
s
1
times
a
3
encrypted
under
your
new
key,
then
for
for
you,
just
have
s
1
s
so
here
for
is
the
second
element
in
al
exterior
yard.
So
we're
going
to
multiply
it
with
the
second
elements
of
s
exterior
X,
which
is
s1
s2.
A
So
we'll
use
the
encryption
of
s
minus
2
times,
2
squared
then
4
6,
once
6
is
2
to
the
1
plus
2
to
the
2,
and
so
we're
gonna.
Add
the
encryption
of
s
to
s
once
I've
seen
the
one
with
the
encryption
of
s2
s1
times
the
other
two
and
then
here
we're
gonna,
add
the
encryption
of
s,
2
s
2
times
2
to
the
3,
and
so
you,
basically,
if
you
add
all
these
encryptions,
then
you
have
a
vector
which
is
the
encryption
of,
or
rather
a
vector,
which
kind
of.
So basically, what we've done is we've created the sum of a bunch of ciphertexts such that, if you were to decrypt it, so if you were to dot-product it with the new key, then you would actually get a value which equals the evaluation of this expression, plus some more error. And so if you were to evaluate it and cancel the error out, then, because this gives you m1·m2 plus some error, plus some more error on top of that,
you would get m1·m2. So this preserves the message, and, because it's an addition of a bunch of linear-size ciphertexts, the ciphertext size and the key size also go back to linear. So I'll stop again and wait for questions, because relinearization is also not very intuitive.
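The core bookkeeping in relinearization, writing each coefficient as a sum of powers of two so the product becomes a plain linear combination of pre-published encryptions, can be sketched as below. The coefficients 3, 4, 6, 8 are the hypothetical example values used above:

```python
# Write a coefficient as a sum of powers of two, so that "coefficient times
# encrypted si*sj" becomes a sum of the pre-published encryptions of
# si*sj * 2^d, with no ciphertext-by-ciphertext multiplication needed.
def power_of_two_terms(c):
    # indices d with bit d of c set, so that c == sum(2**d for d in result)
    return [d for d in range(c.bit_length()) if (c >> d) & 1]

for c in (3, 4, 6, 8):
    assert sum(2**d for d in power_of_two_terms(c)) == c

assert power_of_two_terms(3) == [0, 1]   # 3 = 2^0 + 2^1
assert power_of_two_terms(6) == [1, 2]   # 6 = 2^1 + 2^2
```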
Yes, very good question; I was actually about to answer this myself. So the encryption scheme, as I presented it, can only encrypt 0 and 1, because it has some even error. So you might ask: well, what does it mean to encrypt this thing, which is obviously going to be much bigger than 0 and 1?
It is basically a vector where, if you dot-product that vector with the key, you get si·sj·2^d plus an even error. And the reason why this is still useful is that, if you add up these ciphertexts, then you still get aL exterior-product aR, dot-producted with s exterior-product s, plus a whole bunch of even errors. But these even errors just add together with the even errors that were already inside, and so whoever ends up ultimately decrypting still just sees an even error on top of the message.
So here is the magic trick to make your error blow up less quickly. First, kind of a fun fact: we can switch ciphertexts between what you can think of as being two perspectives. So one perspective is: your ciphertext is a, and a dot-producted with s equals m plus an even error, where m is either 0 or 1. Now, if you just modularly divide a by 2 (so this is all mod q),
A
That's
of
this
form,
and
then
you
divide
your
a
by
2,
and
then
you
convert
it
into
a
ciphertext
that
has
this
format
where,
instead
of
your
message
being
in
the
lowest
bit,
your
message
becomes
and
if
in
the
high
order
bits
now,
this
perspective
is
better
for
multiplication,
because
if
your
message
is
and
low
order
bits,
then
when
you
multiply
your
two
messages
together,
your
message:
I,
is
still
kind
of
preserved
right.
Your
messages
in
the
kind
of
the
field
of
two
elements
or-
and
you
guess
what
you
can't
divide.
A
So
the
ring
of
two
elements
might
be
a
little
more
precise
right.
But
basically,
if
your
message
is
in
the
low
order
bits,
then
everything
that
happens
with
the
error
kind
of
happens
on
top
of
the
message
in
the
message
just
kind
of
flies
on
during
a
duck
and
it
doesn't
get
affected.
But
in
this
perspective
the
message
is
at
the
top,
and
so
everything
interferes
well.
So
in
this
perspective,
you
can't
multiply,
but
this
perspective
has
a
key
advantage,
which
is
that
it
makes
the
noise
less
structured
right.
A
So
here
the
noise
has
to
be
even
here.
The
message
just
becomes
small,
and
so
by
cleverly
switching
between
these
perspectives,
you
can
without
having
the
secret
Kea
change
the
module,
SMI
ciphertext.
So
here's
what
we
do
right.
We
start
off
with
a
such
that
a
dot
product
s
equals
met
your
zero
or
one
plus
two
times
the
error.
Then
you
do
a
perspective
switch,
and
so
you
have
an
e
prime,
which
is
a
modular
divided
by
two,
which
is
equal
to
small
error,
plus
either
zero
or
half
your
modulus.
Now, what we're going to do is switch it from mod q to mod p. So what we're going to do is take this ciphertext, take the a', and we're going to multiply it by p and then integer-divide it by q. And so, if you do this, then what you're going to get is a value which, if you then dot it with s... so you rescale a, but you do not rescale s. This is a big point of importance.
q over two becomes roughly the ceiling of p over two; the small error becomes a small error, except the error itself also gets downscaled: it goes from being some small fraction times q to the same small fraction times p, and so the actual magnitude of the error goes down. And then you basically do a reverse perspective switch, and so you go from this perspective, where it's either zero or the ceiling of p over two,
Basically, if you just think of this as being just a multiplication: if it was an encryption of zero, then this is just going to give you a multiple of q plus a small error, and if it was an encryption of one, then this gives you a multiple of q plus half of q. And so, if you multiply this by p over q, then multiples of q turn into multiples of p, and half-multiples of q turn into half-multiples of p.
A
But
if
you
instead
had
say
55,000
then
55,000
times
P
over
Q
is
gonna,
be
5500
and
so
module
1,000
I.
Guess,
though,
is
gonna,
be
500,
and
the
error
that
gets
introduced
by
kind
of
the
fourth
division,
not
being
perfect,
is
still
a
small
error,
so
it
just
kind
of
mixes
together
with
a
smaller
that
already
exists.
So.
A
So, just to go back to this example, right: let's say your a prime dot s, as an integer dot product, was equal to — instead of 55,000 — fifty-five thousand and thirty, so your error is thirty. Then, if you multiply by p over q, 55,030 turns into 5,503, and so modulo one thousand it's going to be five hundred and three. Five hundred is p over 2, and so your new error — it was 30 before — is now 3.
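The rescaling step can be sketched in a few lines of Python. This is a toy sketch using the talk's hypothetical numbers q = 10,000 and p = 1,000, with `dot` standing in for the integer dot product a′ · s:

```python
def mod_switch(value, q, p):
    # Rescale a mod-q value to mod p: multiply by p, integer-divide by q.
    return value * p // q

q, p = 10_000, 1_000
dot = 55_030                      # = 5*q + q//2 + 30: message bit 1, error 30
switched = mod_switch(dot, q, p)
print(switched)                   # 5503
print(switched % p)               # 503 = p//2 + 3: same message, error now 3
```

The same message bit survives (the value is still near p/2 rather than near 0), while the error shrinks from 30 to 3.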
A
So, I guess, right now: are people satisfied that if you have a ciphertext whose error got bigger, you can turn it into a ciphertext whose error, as a number, is bounded by some constant — it's just that the modulus it's taken with respect to is being reduced as well? So this is kind of the lesson here.
A
So here is why we switch, right. There are two reasons to switch, actually. The first reason to switch is that this makes bootstrapping actually practical: bootstrapping involves basically computing this dot product with s mod q, and computing the dot product with s mod q has a circuit depth of order log q. By reducing the modulus, what we can do is basically turn something which, to decrypt it, requires a circuit depth of log q, into something that has a circuit depth of log p.
A
So in the bootstrapping key, basically, the way that we do it — so bootstrapping is just computing this dot product mod p as a circuit — and to make this computable, you're basically going to provide the s_i times 2^d values in binary representation, right. So for every s_i times 2^d, and this is modulo p, you provide a binary representation, in the sense that you have n log p numbers in binary representation.
A
So for every one of these terms you have your s_i, and basically, to compute a_i times s_i, you take every one bit that's in a_i — because you have a_i in the clear — and you multiply it by the corresponding bit representation of the power of s_i. And so you just have a bunch of these numbers, and these are numbers in binary representation, right — they're not single ciphertexts, they're log p ciphertexts — and then you just add a whole bunch of these numbers together, and then you take the result modulo p.
A
The best choice of p is going to be 2^k minus 1, and that makes the modulo really easy, because you just make your addition circuits wrap around. So how do we implement these addition circuits, right? So basically, you have these n log p numbers, and you just have to add together these n log p numbers, and then take the modulo.
A
Basically, in your addition circuits, instead of carrying out onto even higher bits, every time you carry past the size of p you just wrap around, right, and then that just gives you the modulo 2^k minus 1 for free. So if your modulus was a thousand and twenty-three, then your bits are 128, 256, 512 — and 1024 is congruent to 1, so you just wrap that around to the ones place. So the modulo is really easy.
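A small Python sketch of why p = 2^k − 1 is convenient (k = 10, so p = 1023 as in the example): any bits carried past the top position can simply be folded back onto the low bits, because 2^k ≡ 1 (mod 2^k − 1).

```python
def mod_mersenne(x, k):
    # Reduce x modulo p = 2**k - 1 by wrapping carried-out high bits around.
    p = (1 << k) - 1
    while x > p:
        x = (x >> k) + (x & p)   # fold the high bits back onto the low bits
    return 0 if x == p else x

print(mod_mersenne(1024, 10))                         # 1: wraps to the ones place
print(mod_mersenne(123_456, 10) == 123_456 % 1023)    # True
```

In the homomorphic setting this wrap-around is exactly the carry wire of the addition circuit being routed back to the bottom, so the modular reduction costs nothing extra.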
A
So your setting is: you just have to add together a whole bunch of numbers, right. And the first step is that you can reduce three numbers to two numbers. So you can have a three-to-two adder that has multiplicative depth one. And the way that this works is basically that every bit of P is going to be just the XOR of the corresponding bits of A and B and C, and every bit of Q is going to be the two-of-three majority function of A and B and C.
A
So this is zero if a plus b plus c is 0 or 1, and one if a plus b plus c is 2 or 3, and you can think of this as being the carry digit, right. And the Q you also multiply by 2, which basically means you just left-shift Q by 1. So the reason why this works is basically that every bit in A and B and C gets represented inside of P.
A
But then, in the specific case where you have two bits that are turned on, inside of P they get turned into 0 — but you flip the corresponding bit of Q from a 0 to a 1, and then the Q gets moved to the left by 1, and so that multiplies it by 2. And so basically every individual bit in A, B, or C gets correctly represented in P plus Q, right.
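The three-to-two adder can be sketched over little-endian bit lists. Plain Python bits stand in for encrypted bits here; in the homomorphic setting each AND is a depth-one multiplication and each XOR is cheap:

```python
def three_to_two(a, b, c):
    # P: XOR of the three bit columns.  Q: two-of-three majority, shifted left.
    p = [x ^ y ^ z for x, y, z in zip(a, b, c)]
    q = [(x & y) | (y & z) | (x & z) for x, y, z in zip(a, b, c)]
    return p, [0] + q          # the [0] prefix left-shifts Q, multiplying it by 2

def to_bits(n, width):
    return [(n >> i) & 1 for i in range(width)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

p, q = three_to_two(to_bits(5, 4), to_bits(6, 4), to_bits(3, 4))
print(from_bits(p) + from_bits(q))   # 14 == 5 + 6 + 3
```

Three numbers go in, two come out, and their sums agree, with only one layer of multiplications.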
A
So this is a three-to-two adder, and this has multiplicative depth one. Adding two numbers, x plus y, does require logarithmic multiplicative depth, right. So if you're going from n numbers down to two numbers, that requires a multiplicative depth which is basically logarithmic in n — order log of the number of numbers, and we don't care here about their size — but then adding the last two does require multiplicative depth log log of x plus y, and here x and y are mod p.
A
So basically, that's saying log log p, or log of the number of digits of p, right. So if p is a thousand and twenty-three, then p has ten binary digits, the log of 10 rounds up to a whole 4, and so your multiplicative depth is just going to be four. Now, the reason why adding circuits have this non-trivial logarithmic multiplicative depth is basically that you have to worry about threading the carry bits through the number — and I'll show this in the code later.
A
So here's optimization number two, right. I mentioned back when we were over here that there are two tricks to overcome overflow, where one is bootstrapping and the other is clever tricks to make your multiplication only increase your error by a constant factor. So this is the clever trick that I'm going to talk about. Basically, imagine if, instead of just doing a modulus switch before bootstrapping, we do a modulus switch after every multiplication.
A
So if you don't do this, right — if you imagine a circuit that has some multiplicative depth — then your error is going to square every time you do a multiplication. So imagine your first ciphertext: let's say the modulus is 10^16 minus 1, and your ciphertext has error 10 squared. If you square it, the error is going to be 10 to the 4 — actually it'll be 10 to the 4 times some constant, but to keep it simple let's say it's 10 to the 4.
A
Then the third time you multiply, the error is 10 to the 8, and the fourth time you multiply, the error is 10 to the 16, and then the error overwhelms the ciphertext and decryption starts failing. So if you just do this fully homomorphic — or really, somewhat homomorphic — encryption naively, then after logarithmically many multiplications you're stuck. Now, what happens if you do modulus reduction? So you have x, which has an error of 10 squared, modulo 10^16 minus 1.
A
If you square this again, then your x squared goes up to x to the 4, your error of 10 squared goes back up to being an error of 10 to the 4, and your modulus is the same. Then you reduce it again, and now your error goes down to 10 squared and your modulus hops down again. Then you square it again.
A
The error goes up from 10 squared to 10 to the 4, and then you just reduce the modulus again. And so notice that, instead of the error squaring every time, we have the modulus just very nicely and orderly stepping down by a constant factor every time, right. So the clever trick here is basically that when you multiply the ciphertexts, the errors get multiplied, and so if you just make sure that the errors as numbers stay small, then squaring the error is always just going to be a constant-factor multiplication.
A
What matters is the maximum multiplicative depth that you can support, right. So if you notice, in the example at the top here, your error blows up to the point where it overwhelms the modulus after three multiplications, because each time you're squaring. Here, at the bottom, we're keeping the error under control.
A
The error is just always going from 10 to the 4 down to 10 squared, up to 10 to the 4, down to 10 squared, up to 10 to the 4, down to 10 squared — except every time, the modulus just steps down by a constant factor of 10 squared. And so here, instead of doing three multiplications, you can do eight — you can do many more steps, because of how cheap these steps are.
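The two regimes can be compared with a quick numeric sketch, using the talk's numbers: starting error 10², modulus 10¹⁶ − 1, and a modulus switch that divides both the error and the modulus by 10² after each multiplication.

```python
err_naive = err_scaled = 100          # starting error 10**2
mod_naive = mod_scaled = 10**16 - 1

for _ in range(4):
    err_naive = err_naive ** 2             # naive: error squares every multiply
    err_scaled = err_scaled ** 2 // 100    # square, then modulus-switch the error down
    mod_scaled //= 100                     # the modulus steps down by 10**2 each time

print(err_naive > mod_naive)               # True: error has overwhelmed the modulus
print(err_scaled, err_scaled < mod_scaled) # 100 True: still well under control
```

In the naive run the error overtakes the fixed modulus after a few squarings; with the switch, the error stays constant while the modulus drains slowly, so the budget lasts for many more multiplications.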
A
So the problem that matrix FHE solves is basically: can we try to find something that's kind of more natural and more efficient than relinearization, right? So relinearization is expensive, basically, because relinearization requires you to add this really huge number of ciphertexts, and if you imagine a large modulus, you're talking about hundreds of ciphertexts for each element. So can we try to make something that's somewhat nicer, and can we also try to create a protocol
A
that's somewhat simpler, where adding and multiplying ciphertexts is actually done by adding and multiplying elements in some kind of naturally reasonable array — so, matrices, yay. So here is the context, right: to encrypt zero with our secret key, what we're going to do is come up with a matrix A such that the secret key multiplied by the matrix is an error, and to encrypt one,
A
we're going to have a matrix A such that the secret key multiplied by A gives us the secret key plus an error, right. So basically, what we're doing is hiding an approximate eigenvector, where if the message is zero the eigenvalue is zero, and if it's one the eigenvalue is one — and if you really want, you can potentially encrypt other values as eigenvalues too. The eigenvalue becomes the place where the plaintext gets hidden, and it's only an approximate
A
eigenvalue, and so you can't extract it using one of the standard eigenvector techniques, because learning-with-errors makes it hard. So addition is easy, right: you basically just add together two matrices. So if you add together a matrix such that s times A_L equals something, with a matrix where s times A_R equals something else, then s times A_L plus A_R is just going to be whatever the value is for the first one, plus whatever the value is for the second one.
A
So the decrypt of A_L plus A_R becomes decrypt of A_L plus decrypt of A_R, which is going to be your left message times s, plus your left error, plus your right message times s, plus your right error. And so you have the sum of your messages times s, plus the sum of the errors, right. This is basically the same thing that we had before, except that instead of dealing with vectors we're dealing with matrices — and matrices can actually be multiplied.
A
Then the decryption of the multiplication of the matrices is going to be s multiplied by A_L times A_R, and remember, you can think of matrices as being linear maps, and so you're going to pass s through A_L, and then you're going to pass that through A_R, and so the eigenvalues get multiplied as well. And you can show how this expands out: first you have s passing through A_L,
A
then A_R. So you have s times A_L times A_R, and here multiplying by A_R basically means your m_L times s is going to turn into m_L times m_R times s, because A_R times s by itself is going to give you m_R times s, right. And so if s times A_R gives you m_R times s, then m_L times s times A_R is going to give you m_L times m_R times s. So the decryption of the product is going to give you the product of the messages here, and then over here
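A noise-free sketch of this hidden-eigenvector algebra in pure Python (2×2 matrices, q = 2¹⁶). The outer-product term built from `u` stands in for the noise — a real GSW-style ciphertext would use LWE samples, and the eigenvector is only approximate — but the way eigenvalues add and multiply under matrix add and multiply is the same:

```python
q, n = 2**16, 2
s = [12345, 6789]        # toy secret; s[0] is odd, so it's invertible mod 2**16
u = [s[1], q - s[0]]     # orthogonal to s mod q: s.u = s0*s1 - s1*s0 = 0

def encrypt(m, v):
    # A = m*I + outer(u, v): then s @ A == m*s (mod q), so m is a hidden eigenvalue.
    return [[(m * (i == j) + u[i] * v[j]) % q for j in range(n)] for i in range(n)]

def vecmat(x, A):
    return [sum(x[i] * A[i][j] for i in range(n)) % q for j in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % q for j in range(n)]
            for i in range(n)]

def matadd(A, B):
    return [[(A[i][j] + B[i][j]) % q for j in range(n)] for i in range(n)]

def decrypt(A):
    # s @ A == m*s, so divide the first coordinate by s[0] mod q.
    return vecmat(s, A)[0] * pow(s[0], -1, q) % q

A0, A1 = encrypt(0, [3, 7]), encrypt(1, [9, 2])
print(decrypt(matadd(A0, A1)))   # 1: eigenvalues add
print(decrypt(matmul(A1, A1)))   # 1: eigenvalues multiply
print(decrypt(matmul(A0, A1)))   # 0
```

The function names and the particular construction of `u` are just for this illustration; without the error term the scheme is of course completely insecure, which is exactly why the real scheme perturbs A and only keeps an approximate eigenvector.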
A
what you have is the two error terms, right. So one of the error terms is going to be the left error times A_R, and then here you have the right error times the left message. So you can add, you can multiply — but there is a problem, right, and the problem is that the error is not just being multiplied by the other error. You're not just getting e_L times e_R; instead, your error gets multiplied by the magnitude of the matrix elements.
A
So if you start with matrices that have a fairly small magnitude, then every time you do a multiplication, the magnitude of your matrix elements just keeps on doubling, and so once again you're limited to your depth being something like log q. By the way, just one other reason why this matrix approach is nice: basically, from here on, we don't actually care about the modulus being odd, and we don't care about it being prime.
A
We don't care about the modulus having any special properties, and so to make the math easier, we're just going to set the modulus equal to some power of two, 2^k, and that makes the math really easy, and it makes bootstrapping really easy as well, because you can just forget the overflow bits.
A
So the problem is, right, that you have this e_L times A_R, and these A_R values themselves get big — they get bigger every time you multiply — and so how do you stop the error from just blowing up? So the fix to this problem is this really strange and clever bit-splitting trick. And the bit-splitting trick basically says that if we treat x as being a vector, then we're going to define these two functions. One is called powers-of-two,
A
the other is called bitify, where powers-of-two basically says: for every element in x, you just concatenate the powers of two of that element; and bitify of y just turns y into a bit representation. Then you have this identity, that the dot product of powers-of-two of x, together with bitify of y, equals the dot product of x and y. And so, to visually show this, I have an example, right.
A
So if you have x as (1, 2, 3) and Y is (6, 5, 4), then powers-of-two of x: the one turns into 1, 2, 4; the two turns into two times (1, 2, 4), so 2, 4, 8; and then the three turns into 3, 6, 12. And then Y turns into its bit representation: 6 is 110, 5 is 101, 4 is 100. And then the dot product —
A
well, here it's 1 times 6, and here we're basically doing powers times bit representations, and so you have 2 times 1 and then 4 times 1, so this adds up to being 6. Then over here, 2 times 5: basically the 5 turns into 101, and so you add 2 times 1, and then you add 2 times 4 times 1, to compensate for this.
A
So you have 12, right. So I guess I'll also pause here — just stare at this and verify for yourselves that this works. So basically, converting x into powers of two and converting y into a bit representation is a dot-product-preserving operation.
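The identity is easy to check in a few lines of Python (little-endian bits, three bits per entry, matching the example above):

```python
K = 3  # bits per entry; enough here, since every value of y is below 8

def powers_of_two(xs):
    # (1, 2, 3) -> (1, 2, 4,  2, 4, 8,  3, 6, 12)
    return [x << j for x in xs for j in range(K)]

def bitify(ys):
    # (6, 5, 4) -> little-endian bits (0,1,1,  1,0,1,  0,0,1)
    return [(y >> j) & 1 for y in ys for j in range(K)]

x, y = [1, 2, 3], [6, 5, 4]
lhs = sum(a * b for a, b in zip(powers_of_two(x), bitify(y)))
rhs = sum(a * b for a, b in zip(x, y))
print(lhs, rhs)   # 28 28: the dot product is preserved
```

Each entry of y gets split into bits, and the matching entry of x gets pre-multiplied by the corresponding power of two, so every bit contributes exactly what it would have contributed in the plain dot product.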
A
Yes, the theory is very similar, except this is just explicitly representing relinearization using vector methods — it's exactly the same sort of stuff. So the bit-splitting technique also applies to matrices, because matrix math is basically just batched vector math, right. And so powers-of-two of s times bitify of A equals s times A — actually, this shouldn't be a dot product, this should be actual matrix multiplication. And so what we're going to do, instead of doing A_L times A_R, is we're
A
going to do A_L times bitify of A_R, right, and we're going to set our ciphertexts, instead of being n by n, as being n log p by n log p. Basically, the idea is that whatever is on the left side, we're going to keep permanently in this powers-of-two representation, and so bitify is going to
A
give us an n log p by n log p bitified matrix, and so the dimensions match up. But instead of multiplying A_L times A_R, we're going to multiply A_L times bitify of A_R, and the reason why we do this is basically because, if you notice here, you have these two terms —
A
these two error terms: one has e_L times A_R, and one has e_R times m_L. And so if A_R's elements are always going to be zero and one, then here you just have e_L, and then here you just have e_R multiplied by the message, which is just going to be zero or one, and so the error blow-up is small, right. The error blow-up is just going to be from adding a whole bunch of these terms that are proportional to the original error.
A
How do I describe this? The intuitive idea, right, is that the A_L's are kind of permanently in this powers-of-two form, and then these A_R's are permanently bitified, and so their dot product is going to be the same as if you just had a regular A_L together with just a regular A_R. And so the eigenvector here — well, here, it's not fully an eigenvector.
A
Instead of being s — instead of turning s into s — it's actually going to turn powers-of-two of s into powers-of-two of s. But this works, and you can also just experimentally verify that it works, right. And the benefit is that, as I mentioned, instead of the error blowing up by big A_R values, you just guarantee that these A_R values are always 0 and 1, and so the error blow-up is smaller.
A
Right — by the way, I have code here, and the code is actually not long: the homomorphic encryption here is about three hundred lines of Python, and this is under two hundred lines of Python. So if you want to kind of see for yourself, if you want to play around with the matrices yourself and verify things for yourself, I highly recommend looking through the code as well. So, optimizations, right — there's a bunch of ways to optimize this.
A
If you want to perform some kind of operation, it often actually makes sense to perform that operation asymmetrically. Say you want to do a batch of additions: then you just fold them all into the same sum, one after the other. The error-balance criterion becomes more complicated — it's not just max multiplicative depth, it's not just max polynomial degree, it becomes a complicated thing. Other optimizations: you can decompose in other bases, so you don't need to do things in base two.
A
You can pack multiple bits into a ciphertext, and you can do addition and multiplication on a lot of bits at the same time. There's also this thing called ring LWE, where basically, instead of just thinking about a whole bunch of independent equations, you can represent those equations as being one equation in a polynomial ring — and I don't really have time to get into this, but this also lets you decrease ciphertext sizes a lot.
A
So this is still basically how modern fully homomorphic encryption schemes work, except there's this whole suite of optimizations that people have been slowly coming up with. So the main reason why there is a big overhead is basically that the ciphertexts are matrices, right, and in order to satisfy these LWE assumptions, the ciphertexts have to have a pretty substantial length. And so you can think of these matrices as being of size something like
A
a hundred by a hundred, or whatever. And then here we're going to store them in powers-of-two form, and we're going to bitify them, and so the 100 potentially blows up into being ten thousand, at least temporarily, and so matrix multiplication becomes a really big procedure. And if you want to be able to process circuits of substantial size, then these numbers have to have a fairly substantial bit length.
A
So potentially we're talking over a hundred bits. So there's a bunch of factors that do end up conspiring to make this a fairly inherent, large blow-up in circuit sizes, and this is the reason why this is all less efficient and, in some ways, less clean than things like elliptic curve cryptography, for example.
A
There are definitely a bunch of libraries, with various different names. And there is a survey, or a book, by Zvika Brakerski, the inventor of the 2011 protocol. Let me see if I can look this up right now — here you have "Fundamentals of Fully Homomorphic Encryption", so basically just search "fundamentals of fully homomorphic encryption" with Zvika Brakerski, and then what you get is from two thousand—
A
—or the matrix approach; they have different pros and cons, and then you just apply a whole bunch of optimizations on top. So ring LWE is one of them. Another one is the various schemes for being able to stuff a whole bunch of messages into the same ciphertext. So one of the ways that you can do this: you can imagine a ciphertext where, instead of A just satisfying one dot-product equation for one message, you might have one dot product give one message and another dot product give another message, and then you can do SIMD operations — if you add and multiply, that adds and multiplies all the plaintexts simultaneously — and you can do rotations, you can do permutations. And so there's this kind of growing body of clever tricks, the same way that the zero-knowledge-proof space has a growing body of clever tricks.
A
I was just going to mention that you can also look up BV 2011. There's also another protocol, from Brakerski in 2012, which avoids the need to do modulus switching completely — basically by kind of pretending that the ciphertexts are fractional, but still representing them as integers. So feel free to look up the papers for all of this as well, and then hopefully they'll be more understandable after this.
A
So basically, the main idea here is that bitify is guaranteed to always contain zeros and ones, right. I think I might have a good intuitive answer. So, first of all, the powers-of-two operation and bitify are kind of opposites in some sense, right. So if you bitify something, then you can multiply by a matrix which converts it back into the pre-bitified form. But if you take a matrix
A
that's not bitified, then the multiplication itself, when you do the modulo — you can basically think of the result as being an un-bitified version of a bitified thing. And what happens is that this inverse of bitifying is kind of inherently baked in here.
A
It has this property that if you apply it to things that are not bits — so a bitified matrix multiplied by another bitified matrix is not necessarily going to give you bits, because matrix multiplication adds a whole bunch of bits together — you're still going to get a matrix, and because you have modular reduction, it's still going to reduce modulo the modulus. So basically, the modular reduction just kind of magically makes the higher-order terms go away, and because you're only doing the matrix multiplication while the matrices are bitified, the process of matrix multiplication never multiplies any particular value by more than the width of the matrix, and so the error only gets multiplied by a small amount, right.
A
It's n log p times n log p. Think of A_L as being the un-bitification matrix multiplied by bitify of A_L. So you take bitify of A_L and bitify of A_R, you multiply them together, and then, to actually see what this means, you have your un-bitification matrix multiplied by the product of the bitified matrices.
A
Right, so basically you have these matrices that have the property that if you multiply them by powers-of-two of s, then you get powers-of-two of s — so you have these eigenvector-preserving matrices. And so, if you multiply by s, you get powers-of-two of s, and then, if you take A_L and expand it out and see it as a bitified thing, then the bitified things are—
A
Yeah — I'd recommend not even just looking at the source, but even just opening up a Python console and playing around with bitify and multiplying by that, and just seeing what property the ciphertexts and the bitified ciphertexts have with respect to s, and with multiplying by powers-of-two of s.
A
Right, right — so another way to think about this is that this approach is just an optimization. Another way of thinking about it is that you can make a less efficient, but easier to understand, protocol if your ciphertexts are n log p by n log p matrices. So imagine your ciphertexts are bitified: they're only big matrices that contain only zeros and ones, and they have the property that powers-of-two of s is an eigenvector, right.
A
So we agree that if you just multiply A_L times bitify of A_R, then powers-of-two of s is going to be an eigenvector of the product, right. But then what this operation does — basically, we want to preserve the invariant that your ciphertexts are just zeros and ones, right. After you multiply two matrices together, they're not necessarily zeros and ones, and so what we're going to do is basically do an inverse-bitify operation — squash it — because this squashing preserves the eigenvector.
A
So basically, before, it turns powers-of-two of s into powers-of-two of s, and after squashing it still does. So you just squash it, and then you bitify it again, and the squashing and the re-bitifying basically preserve the eigenvector, but force the values of the ciphertext to just continue being zeros and ones.
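The squash-then-re-bitify round trip can be sketched like this — a toy with k = 4 bits per entry, where `unbitify` plays the role of the squash (the function names are just for this illustration). The dot product with powers-of-two of s is preserved mod q, while the entries are forced back to being bits:

```python
k = 4
q = 1 << k

def powers_of_two(xs):
    return [(x << j) % q for x in xs for j in range(k)]

def bitify(xs):
    return [(x >> j) & 1 for x in xs for j in range(k)]

def unbitify(v):
    # The "squash": collapse each group of k entries back into one number mod q.
    return [sum(v[i + j] << j for j in range(k)) % q for i in range(0, len(v), k)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % q

s = [3, 5]
v = [2, 0, 1, 3, 1, 2, 0, 1]      # a "ciphertext" whose entries are NOT all bits
w = bitify(unbitify(v))           # squashed and re-bitified: all zeros and ones
print(dot(powers_of_two(s), v) == dot(powers_of_two(s), w))   # True
print(set(w) <= {0, 1})                                       # True
```

So the invariant — ciphertext entries stay in {0, 1} — can be restored after every multiplication without changing what the ciphertext decrypts to.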
A
Basically, if your messages are not zeros and ones, then you'd see error blow-up, right. So you could potentially, instead of working in the ring modulo two, work in the ring modulo some small number — like maybe up to 100 or something — but working over things that are not bits just seems intrinsically hard, because multiplying by things that are not small numbers causes the error to blow up very quickly.
A
So, I mean, the trivial way to do any of this — sorry, so in the e_R times m_L term: if the individual m_L values are bits, just zero or one, then multiplying by them doesn't blow anything up, but multiplying by, say, three would, right?
A
It's easiest for the messages to always be zero and one, but there are optimizations that have to do with computing more complicated gates in a way that ensures the output is still zero or one, without having to do as much work as if you just did it naively with simple gates. So there's nothing kind of super mathematically elegant, but there are these bags of tricks, and that's the way you'll optimize a lot, sometimes.
A
So, functional encryption, right — in terms of the definition, it basically says: instead of being able to go from Enc(x) to Enc(f(x)) for arbitrary f, you can go from Enc(x) to just having f(x), but only for one specific f. So it's a different, and potentially more powerful, primitive. In terms of how these functional encryption protocols work, I'll admit I haven't gotten to fully figure this out yet — they're definitely considerably more complicated than FHE.
A
There are a lot of different paths to try to get to obfuscation, and none of them fully satisfy people yet, and they do end up having overhead. One really nice thing is that if you have an obfuscator that works for circuits of constant size, then you can turn that into an obfuscator that works for circuits of arbitrary size, and the way that you do that is to obfuscate a big program—
A
—well, you can make proofs that nicely work together with the homomorphic encryption scheme, so you have less blow-up. So basically, what you do is you take your encryption of f(x), and you take your proof that you actually computed f and not some different function, and then you have a fixed-size program that checks that the proof is correct, and you have your f.
B
So I think pretty much the exact same technique works for something called correlation-intractable hashing, and I think the idea there is that you can have a provably secure Fiat–Shamir. And then, if you have this hash function, which has this very special property, you can boost it to arbitrary circuits, I think.
A
Yeah — so I've kind of taken you through the journey of those years where homomorphic encryption was just getting massively better, with new discoveries every two years, right. So over here, you have only partially homomorphic, bounded-degree somewhat-homomorphic schemes being possible; then you have this really complicated—