Description
Introducing QODA: The Platform for Hybrid Quantum Computing
Presents QODA and QGANs
Zohim Chandani
Previously I was at Rigetti Computing, thinking about designing various quantum algorithms and how to implement them on superconducting quantum computers, but today I'm here to talk to you about QODA.
So, let's get started. Quantum computing today is focused on relatively small-scale algorithmic development, and to that extent a number of Pythonic frameworks have been developed. You may have heard of some of them, or actually used some of them.
In the tutorials we've seen Cirq by the team at Google, Qiskit from IBM, and PennyLane by the team at Xanadu. If we think of academic institutions, there's the QuEST simulator developed by the team at the University of Oxford. These are all critical; without them we're very unlikely to discover quantum algorithms for quantum advantage in the near future. They are so critical that NVIDIA's first foray into the quantum computing space was to build cuQuantum, which accelerates a number of these Pythonic frameworks.
But what we then noticed was that there's a big gap between experimenting with these algorithms and the fact that many of today's GPU-accelerated scientific computing applications are the most likely candidates for future quantum-accelerated applications. That transition, from small-scale algorithm development by quantum physicists to application development by domain scientists, is why we needed a platform that delivers high performance, delivers interoperability with existing applications and programming paradigms, and is familiar to domain scientists.
And if you think back to the time before 2006, NVIDIA has some experience in this space. Before the launch of CUDA, there were some domain scientists leveraging GPUs to accelerate their work, but very few. You may ask why, and that's because they had to program in graphics shader APIs or GPU assembly. From this perspective, what NVIDIA did by launching CUDA was usher in a revolution in accessibility for a lot of people, and particularly for domain scientists.
So, to that extent, what we worked on was QODA, the Quantum Optimized Device Architecture, which I'd like to introduce to you today. And what is QODA? It's a hybrid quantum-classical compute platform which addresses the challenges facing application developers like myself who are looking to incorporate quantum acceleration into some of their workflows, whether that's through a quantum simulator or a quantum processor, both of which QODA supports natively.
Secondly, QODA allows one to move quickly between running all or some parts of an application on classical hardware, quantum-emulated hardware, or quantum hardware. It includes a kernel-based programming model with C++ and Python implementations. Behind the scenes it has a compiler toolchain built in, plus a standard library, of which I'll show a couple of examples, covering commonly used quantum algorithmic primitives like QAOA and VQE.
QODA also builds on the language-level parallelism most of you may be aware of from OpenMP, OpenACC, and CUDA, for example, to allow users to incrementally add quantum acceleration where it makes sense in their existing applications.
QODA is QPU-agnostic, and from the very beginning we've been working with hardware providers from across the qubit technology spectrum: neutral atoms, trapped ions, superconducting quantum architectures, and photonics, for example. We've also been collaborating with a number of software companies, Zapata for example, to tightly integrate QODA into building the algorithms of the future, and of course with supercomputing centers, like NERSC itself, to test and deploy QODA to thousands of scientific computing developers around the world.
Now I want to take some time to dig into some of the technical design primitives, starting with how one uses a QODA quantum kernel. QODA follows an annotated-kernel approach with typed function objects, like lambdas, which lets users define functions that are quite generic and can be reused.
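To make that concrete, here is a minimal sketch in Python using the `cudaq` package (QODA's publicly released form, later renamed CUDA-Q); the exact syntax on the slide may differ, so treat this as illustrative rather than the slide's code.

```python
import cudaq

# A quantum kernel is an annotated, typed function object; the compiler
# toolchain lowers it to either an emulated or a physical QPU backend.
@cudaq.kernel
def ghz(num_qubits: int):
    qubits = cudaq.qvector(num_qubits)
    h(qubits[0])                          # superposition on the first qubit
    for i in range(num_qubits - 1):
        x.ctrl(qubits[i], qubits[i + 1])  # entangle the chain
    mz(qubits)                            # measure all qubits

# The same generic kernel is reused with different arguments and backends.
counts = cudaq.sample(ghz, 5)
print(counts)
```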
Here we see an example with a variational quantum algorithm, a VQE, where the programmer can define a variational quantum circuit and use it as an input to algorithmic libraries, specifically this vqe call right at the bottom here.
We also see how easy it is to construct the built-in spin_op type, with which you define the Hamiltonian you want to variationally minimize, which is exactly the purpose of the variational quantum eigensolver algorithm. And the overall conciseness of this program, at most
six or seven lines as you can see here, allows a scientist to go from a parameterized ansatz, to a Hamiltonian they want to minimize, to running the VQE algorithm on a DGX platform or on a physical QPU of their choice. That's what really stands out about the QODA approach, and the snippet I'm sharing here truly demonstrates the underlying philosophy of QODA, and why we've been working so hard on it: to provide concepts that describe these quantum code expressions, VQE and QAOA for example, and then the ability to execute them on whatever platform you desire; a rough sketch of that workflow follows below.
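As a rough illustration of that handful-of-lines workflow, here is a hedged sketch against today's CUDA-Q Python API; the Hamiltonian is the two-qubit example from the CUDA-Q documentation, not necessarily the one on the slide.

```python
import cudaq
from cudaq import spin

# Parameterized ansatz: a typed kernel taking one rotation angle.
@cudaq.kernel
def ansatz(theta: float):
    q = cudaq.qvector(2)
    x(q[0])
    ry(theta, q[1])
    x.ctrl(q[1], q[0])

# Hamiltonian built from the spin_op type discussed in the talk.
hamiltonian = 5.907 - 2.1433 * spin.x(0) * spin.x(1) \
    - 2.1433 * spin.y(0) * spin.y(1) \
    + 0.21829 * spin.z(0) - 6.125 * spin.z(1)

# Hand the kernel and Hamiltonian to the library's VQE primitive.
energy, params = cudaq.vqe(kernel=ansatz,
                           spin_operator=hamiltonian,
                           optimizer=cudaq.optimizers.COBYLA(),
                           parameter_count=1)
print(f"Minimized energy: {energy}")
```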
We've seen some code, so let's look at some numbers. In the tutorial that Jin-Sung was sharing, we wanted to show how GPU acceleration speeds things up by 100x or so; here, what we're looking at is some preliminary results from implementing a VQE algorithm in QODA, in comparison to a leading Pythonic framework also running on an A100 GPU. So what we're doing here is comparing GPU to GPU, running in an emulated environment.
Here is another example, a QITE algorithm, quantum imaginary time evolution. This algorithm is intrinsically hybrid and iterative; by hybrid I mean it moves between CPU/GPU and QPU, with each iteration depending on solving a linear system from the previous iteration. So clearly this is an opportunity for GPU-QPU interoperability, and the code snippet here demonstrates just that: we use QODA to figure out the expectation values for a set of Pauli operators, then use those as the input to cuSOLVER, which solves the linear system for us and outputs a set of parameters, which we then feed into the next iteration of our QITE algorithm.
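The slide's QITE code isn't reproduced here, but the shape of that hybrid loop is roughly the following sketch; the Pauli operators, metric matrix, and update rule are placeholders, and NumPy's `solve` stands in for the cuSOLVER call.

```python
import numpy as np
import cudaq
from cudaq import spin

# Parameterized ansatz whose angles are refined each QITE iteration.
@cudaq.kernel
def ansatz(thetas: list[float]):
    q = cudaq.qvector(2)
    ry(thetas[0], q[0])
    ry(thetas[1], q[1])
    x.ctrl(q[0], q[1])

# Placeholder Pauli operators for a toy two-qubit problem.
paulis = [spin.x(0) * spin.x(1), spin.z(0) * spin.z(1)]

thetas = [0.1, 0.1]
for step in range(10):
    # QPU (or emulated QPU) side: expectation values of the Pauli set.
    b = np.array([cudaq.observe(ansatz, p, thetas).expectation()
                  for p in paulis])
    # CPU/GPU side: solve the linear system from this iteration
    # (np.linalg.solve standing in for cuSOLVER).
    A = np.eye(2)  # placeholder metric; in real QITE this too is measured
    dtheta = np.linalg.solve(A, b)
    # Feed the updated parameters into the next iteration.
    thetas = [t + 0.1 * d for t, d in zip(thetas, dtheta)]
```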
You can also intrinsically, asynchronously execute tasks on all available QPUs, if that's your backend of choice, or on various GPUs if you wanted to do that as well, and we're releasing a lot of multi-GPU, multi-node support in the near future, which was already spoken about; the asynchronous pattern is sketched below.
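In today's CUDA-Q, that asynchronous pattern looks roughly like this sketch, assuming the multi-QPU simulation target (`nvidia-mqpu`), under which each available GPU behaves as one emulated QPU.

```python
import cudaq

# Each available GPU acts as one emulated QPU under this target.
cudaq.set_target('nvidia-mqpu')

@cudaq.kernel
def bell():
    q = cudaq.qvector(2)
    h(q[0])
    x.ctrl(q[0], q[1])
    mz(q)

num_qpus = cudaq.get_target().num_qpus()

# Launch one asynchronous sampling task per emulated QPU; the calls
# return futures immediately, so the tasks run in parallel.
futures = [cudaq.sample_async(bell, qpu_id=i) for i in range(num_qpus)]
for f in futures:
    print(f.get())  # block only when the result is actually needed
```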
But in this example, what you can see is that we're simulating a hydrogen chain using a variational quantum eigensolver, and what this slide shows is that we have almost perfect strong scaling up to around four simulated QPUs of 28 qubits each, on the DGX A100 box that we've been using. This kind of work just scratches the surface of the things we've been exploring with QODA, and we're excited to continue this development further. Now I want to change direction a little and talk about some of the work I've been doing recently with quantum GANs.
Cool, so I've been using QODA recently to test out quantum generative adversarial networks, but before I do that, I want to introduce what GANs, generative adversarial networks, are. This is an algorithm from classical machine learning, and if I were to draw a very rough analogy for what a GAN does: imagine you have some real source of data, where the task, in this case, is to plot the Wigner function of optical quantum states, for example. So the real source of data is this Fock state.
The job of the generator is to generate some source of data; at the start of training, it generates random data. The job of the discriminator is to discriminate whether the data it's presented with has come from the real source or the fake source. As you allow training to progress between the generator and the discriminator, the generator gets better and better at the distribution it generates, and eventually it is able to maximally perplex
the discriminator, to the point where the discriminator can't tell whether the data it's presented with came from the real source or the fake source. That's where you terminate your GAN training: when the probability of the discriminator discriminating correctly is exactly one half. And this is the neural-network picture of what a GAN is. You input some random noise
at the start, and you have two neural networks in particular, a generator and a discriminator, pitted as adversaries against each other: the generator tries to create statistics that mimic the true data distribution, while the discriminator tries to determine whether it's presented with the real or the fake distribution. They continue this game until a Nash equilibrium is reached.
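For reference, the standard objective behind this game, from the original GAN formulation rather than the slide, is the minimax value function

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big] \; + \; \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big],$$

whose equilibrium is exactly the termination point described above: the discriminator outputs $D(x) = \tfrac{1}{2}$ everywhere.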
So a couple of years ago, in 2019 I think, Seth Lloyd from MIT and the group over at Xanadu had the idea of quantizing GANs, introducing these QGANs. What they observed was that in classical learning with GANs, where you have neural-network architectures, the aim is to adjust the weights and biases of the neurons to find an optimal learning strategy at which learning can terminate. In the quantum case,
instead of the generator and the discriminator being neural networks, we can have parameterized quantum circuits, and with these parameterized quantum circuits we can optimally tune their rotation angles, that is, their parameters. The reason this is somewhat of a good idea is that we know that within quantum information processing we have the ability to perform manipulations of sparse, low-rank matrices in time better than what you could achieve with classical resources.
So there is some implication here that a quantum GAN can exhibit a potential advantage over a classical GAN at figuring out a distribution, where the object of the game is to reproduce statistics from making a large number of measurements on very high-dimensional data sets. So what have we been using quantum GANs for?
I've been exploring this idea of generative modeling, and the motivation here is to learn a particular probability distribution from a finite set of samples that we have. You can think of one reason why you might want to do this: imagine you want to do supervised learning and you're limited by the amount of training data you have, and you want to generate more data. You can then attach a GAN to your training algorithm, your workflow, or your pipeline, which generates samples that look like the real probability distribution.
Okay, if you're interested in the math, the loss functions are at the bottom. This work is heavily inspired by a paper that came out, again in 2019, where a group of researchers at IBM focused particularly on this problem I'm talking about: learning probability distributions and loading them into quantum states. But I wanted to show you some of the algorithmic primitives and how the algorithm looks in QODA.
So you have this ansatz, which you can define by a number of qubits and layers, and it's as easy to add gates as it is in whatever quantum computing framework you want to use, whether that's Qiskit, PennyLane, whatever. You can see the kind of quantum circuits we're working with; again, this is a very small-scale demonstration, so we're using three qubits here. At the end is the QODA call that you make: the algorithmic primitive that's predefined for you, get_fake_data, and you just provide it with the arguments you've defined, the ansatz, the backend you want to run on, and so forth; a sketch of this pattern is shown below.
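Here is a hedged sketch of that pattern in the CUDA-Q Python API; `get_fake_data` is the talk's predefined helper whose signature isn't shown, so the direct `cudaq.sample` call below is an assumption standing in for it.

```python
import numpy as np
import cudaq

# Layered generator ansatz defined, as in the talk, by a qubit count,
# a layer count, and a list of rotation angles.
@cudaq.kernel
def generator(num_qubits: int, num_layers: int, thetas: list[float]):
    q = cudaq.qvector(num_qubits)
    for layer in range(num_layers):
        for i in range(num_qubits):
            ry(thetas[layer * num_qubits + i], q[i])  # parameterized rotations
        for i in range(num_qubits - 1):
            x.ctrl(q[i], q[i + 1])                    # entangling layer
    mz(q)

# Three qubits, two layers, as in the demonstration.
thetas = np.random.uniform(0, 2 * np.pi, 3 * 2).tolist()
counts = cudaq.sample(generator, 3, 2, thetas)  # "fake" samples for the discriminator
print(counts)
```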
And this is what the results look like.
So these are the hyperparameters: we have three qubits, two layers, and we're using a particular ansatz. In this case, we're not substituting the discriminator with a parameterized quantum circuit; the discriminator stays classical, a classical neural network, while the generator is a parameterized quantum circuit. So you can have hybrid QGANs as well, where you have some classical and some quantum components within the setup. And you can see here that if your initial source of data is normally distributed around some mean with a given standard deviation, then as learning progresses, as shown by the number of epochs here,
the generated distribution starts to capture more and more of the true data distribution. One can define certain metrics, like the Wasserstein distance or the relative entropy between the two distributions, to determine how close the true distribution is to the generated distribution; a small example of computing both is sketched below.
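As a small self-contained example, with synthetic NumPy data rather than the talk's results, both metrics can be computed like this:

```python
import numpy as np
from scipy.stats import entropy, wasserstein_distance

# Toy stand-ins for the true and generated distributions.
true_samples = np.random.normal(loc=0.0, scale=1.0, size=10_000)
fake_samples = np.random.normal(loc=0.3, scale=1.2, size=10_000)

# The Wasserstein distance works directly on raw samples.
print("Wasserstein:", wasserstein_distance(true_samples, fake_samples))

# Relative entropy (KL divergence) needs binned probability estimates.
bins = np.linspace(-5.0, 5.0, 50)
p, _ = np.histogram(true_samples, bins=bins, density=True)
q, _ = np.histogram(fake_samples, bins=bins, density=True)
eps = 1e-12  # avoid log(0) in empty bins
print("Relative entropy:", entropy(p + eps, q + eps))
```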
And you can see here that this clearly isn't quite what we would hope for, and that's because this work is unfinished.
It's still continuing; I'm facing a number of problems, with my algorithm getting stuck in barren plateaus and the generator's loss not decreasing, for example, plus the fact that this is only a three-qubit problem. If I wanted to make it richer, adding extra gates that can explore the full Hilbert space, rather than being restricted to a particular part of it, would potentially alleviate the problem.
So hopefully I can share some interesting results by the next time I speak at this symposium. But yeah, I hope you found our work on QODA, and some of the work I've been doing, interesting, and hopefully we can get QODA into your hands pretty soon, so you can also start working with it, tell us what you think, and give us some feedback. Thank you for listening; I welcome any questions that you may have.