Quantum Computing Systems and Results for Hybrid Quantum-Classical Algorithms
Matt Reagor
Thanks so much for having me. I'll be presenting work I've done at Rigetti, trying to summarize a number of, I think, interesting applications results that may be of interest to NERSC users, as well as some ideas around system design that we're working on. The work that I'm showing is a huge collaboration, in particular with a scientist from LBL who's now at Rigetti, who contributed significantly to the content.
One person worth calling out is Maxime Dupont, with the arXiv references as you see here. Great. So at Rigetti, we're building out quantum computers in order to solve the important problems that are out there.
We really see this transition to a useful quantum computing industry as proceeding in a couple of different steps. Right now, as we see it together with our partners, we're in this era of exploring use cases, with emerging potential for quantum advantage, where a lot of our work today is oriented towards prototypes and benchmarks. The shift towards narrow quantum advantage is one that we're
very focused on today, where we're looking at improvements to accuracy, speed, or cost as the key benchmarks. A lot of this is based on heuristics, but a lot of it, I think, also has the potential to be guided by collaborations like the one we have ongoing with Perlmutter, and we'll share a little bit about that as we go. And then, finally, I think the North Star continues to be this fault-tolerant era.
I won't be talking about any of our work on quantum error correction today, but again, we see this as a continuous transition: from these benchmarking prototypes, to actually seeing some advantage against those benchmarks, as we work to underpin things with real quantum error correction.
So, Rigetti, for those of you who don't know: we do superconducting quantum processors. We make our own chips down in Fremont, and then we integrate those in our data center in Berkeley, just down the hill from NERSC. Our public roadmap is shown here. Just last year we transitioned from a single-chip 40-qubit system to our first multi-chip platform, so our 80-qubit processor that's online today has two nominally identical copies of our earlier-generation
40-qubit chip. We're currently looking at early 2023 for changing the architecture to a larger tileable unit cell that we anticipate will have significantly higher fidelity than our previous generation.
That chip, just like the prior one, scales in this tileable pattern. Our target by the end of next year is to have four of these chips integrated into a modular processor. And here are their names: we have Ankaa, which is a star name, and Lyra, which is a constellation in the sky.
Again, we have almost all of our processing steps in-house, and this expansion is really an opportunity to accelerate the hardware advancements that we're planning. A couple of the results that I'll be sharing are based on this large national QIS research center called SQMS that we're part of. Norm spoke earlier; he's one of our algorithms thrust
A
Leaders
I'm,
the
CTO
of
the
center
of
bulk
of
the
program
is,
is
really
about
addressing
decoherence
in
Material
Science
and
superconducting
qubits
and,
like
confirm,
said,
there's
a
lot
of
opportunity,
I
think
to
to
think
about
how
HPC
can
be
brought
to
bear
on
those
problems,
and
you
know
definitely
reach
out
if
you
have
good
ideas
there.
But it's a multi-institution program, and we'll be sharing some highlights from Norm's work and others in the following slides. Great. Another programmatic, high-level thing to call out: Rigetti processors are available in multiple public clouds, including AWS Braket and Microsoft Azure, and we also have a partnership with Oak Ridge that makes these resources available to researchers who have ongoing funding from DOE. And there's a growing quantum community
A
You
know
opportunities
to
think
about
how
to
you
know,
develop
that
murder
model
further,
but
it's
based
on
a
long
going
collaboration
where
you
know
some
of
our
good
friends.
There
were
some
of
the
first
people
to
to
use
their
quantum
computers
for
science.
A
One
thing
that
we
really
pride
ourselves
at
regettion
is
speed,
so
based
on
you
know,
published
benchmarks.
To
date,
our
systems
have
the
highest
circuit
layer
operations
per
second,
which
is
a
benchmark
that
are
very
good
friends
at
IBM.
Put
out
it's
it's
really
exciting.
You
know
that
we
we
didn't
develop
that
Benchmark,
but
the
effort
that
we've
been
making
around
building
out
hybrid
computational
engines.
You
know
carried
forth.
You
know
to
that
Benchmark.
So, as we look to develop generic benchmarks, I think this notion of trying to fold in speed is really important. A lot of what you'll be seeing in this talk is these variational algorithms that very much depend on that turnaround. This pie chart is from a paper, I think it's Austin's, cited down there. It's not up to date, I think it's from 2018, but I wanted to share it
just because I think there's a broad set of users here today. I wanted to call out the places where we have active algorithms work at Rigetti right now, so this isn't even capturing or reflecting the broader user base of people who are just using our computers over the cloud.
These are really places where Rigetti is working in co-design with algorithms and end-user experts, and I was happy to see that it's more than half the pie, and that sounds delicious.
So I'll give a couple of highlights from some of these, just to give you a sense of the kinds of prototype applications, scales, and benchmarks that people are seeing, going around the pie. One of the flagship SQMS algorithms being developed is applying the variational quantum eigensolver to study the Kitaev model. This is a topological spin lattice, and there have been, I
A
Think
three
papers
out
of
spms
I
put
the
most
recent
here,
an
observation
that
was
made
as
part
of
the
like
coming
together
of
field
theorists
and
Hardware
people
was
that
there
was
a
way
to
map
this
spin
model
onto
regetti's,
Hardware
topology.
A
So
you
know
double
use
of
the
word
topology,
but
you
know
the
idea
is
that
the
spin
couplings
that
we
want
to
simulate
are
represented
by
Hardware
native
to
keep
it
operations.
So
we
don't
have
a
whole
lot
of
overhead
for
swap
networks
and
this
kind
of
thing,
which
in
the
kind
of
nisky
era,
is
really
important.
A
So
you
know
the
group
is,
is
looking
at
topological
properties
of
this
model,
something
that's
you
know
exciting
within
the
centers
that
they're
also
testing.
You
know
combinations
of
error
mitigation
and
looking
at
how
these
systems
scale
under
realistic
noise.
It's pretty fun. Obviously Fermilab is the lead institution in the center, so there's a lot of high-energy physics going on, and an algorithm that's been developed and is under test right now looks at D4 gauge theory. Here's a picture of it. I'm not a high-energy physicist, so I'm not going to explain the picture, but there are matter fields and gauge fields and some interesting phasing between occupied states and unoccupied states.
One thing that I have picked up from this is that the theory comes with a set of group operators that can be represented by quantum circuits, and the team, which is at Fermilab and a number of our institutional participants in SQMS, has benchmarked the basic building blocks of those group operators on hardware. So this is a building block towards larger-scale simulations.
These gauge theories are interesting because there are properties, such as coupling constants and dynamics, that are very, very hard to calculate because of the sign problem, and, as I understand it, even our best numerical techniques have something like 100% uncertainty. So there's really an opportunity here, I think, with respect to co-design, to really understand these models better. Something maybe surprising to folks: one of the slices of pie was climate and Earth modeling.
This is not a quantum mechanical problem; this is a classical data problem. However, it's really complex data, and that's actually pretty interesting from a quantum algorithms development perspective. The concept we were working with in this climate modeling work is a problem in generative modeling, which is: given some highly available data sets, such as overhead satellite imagery and data from numerical models,
can you infer what the weather pattern actually is? So over the course of a year or so, we developed this hybrid quantum-classical classification model. The basic idea is that you embed your classical data into a kind of quantum vector space, or Hilbert space, which is a much larger vector space. It's a very common trick in machine learning to raise your problem into a higher-dimensional space where it's easier to solve, and that was the inspiration.
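As a concrete picture of that embedding step, here is a minimal NumPy sketch of one common encoding (an illustrative angle encoding on my part, not the specific feature map the team used): each classical feature sets one qubit's rotation, so an n-feature vector lands in a 2^n-dimensional Hilbert space.

```python
import numpy as np

def angle_embed(x):
    """Map an n-feature classical vector into the 2^n-dimensional
    Hilbert space of n qubits: each feature is an RY angle on |0>."""
    state = np.array([1.0])
    for xi in x:
        qubit = np.array([np.cos(xi / 2), np.sin(xi / 2)])  # RY(xi)|0>
        state = np.kron(state, qubit)
    return state

x = np.array([0.3, 1.2, -0.7, 2.0])
psi = angle_embed(x)
print(len(x), "->", psi.size)  # 4 classical features -> 16 amplitudes
```

The dimension doubles with every feature, which is exactly the "much larger vector space" being exploited.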
This column here, labeled QNN, is the output of our model, and again, as we see with a lot of these empirical benchmarks in QML, the results weren't strictly better than classical, but they were different and open up some interesting directions. One thing that we're really excited about with respect to this data set involves some extensions to quantum kernel methods.
A
So,
just
to
you
know
quickly
give
a
give
a
highlight
here
and
I'll
give
a
shout
out
to
the
Penny
Lane
tutorials
that
were
mentioned
in
the
last
talk.
It's
definitely
the
One-Stop
shop,
for
you
know,
learning
all
this
kind
of
stuff
and
there's
I,
think
numerous
tutorials
on
on
kernel
methods
and
Quantum
that
are
super
duper
handy,
so
definitely
pop
over
to
Penny
Lane
to
check
that
out.
But
the
basic
concept
in
a
Quantum
kernel
calculation
is
to
try
to
estimate
the
similarity
between
two
data
vectors.
A
So
you
have
two
data
vectors
x,
sub,
I
and
x,
sub
J
and
you
encode
those
data
vectors
into
your
Quantum
circuit
instruction
and
the
the
way
this
works
conceptually
is
that
you
build
up
a
large,
highly
dimensional.
You
know
unitary
out
of
that
parameterized
data
input
and
then
you
unwind
that
unitary.
So
you
see
there's
kind
of
a
mirror
in
time
here
and
you
you
dagger,
should
give
you
the
identity
if
x
I
and
XJ
are
equivalent.
A
This
is
analogous
to
the
kind
of
classical
support,
Vector
machine,
and
what
this
gives.
You
is
a
way
of
measuring
distance
between
two
vectors.
In
some
kind
of
quantum
model,
what's
very
exciting
about
this-
is
that
Huang?
It
all
showed
just
last
year
that
there
are
geometrical
measures
that
you
can
apply
to
the
data
vectors
that
we
were
just
talking
about,
as
well
as
the
the
task
associated
with
with
that
kernel
estimator
and
what
they
give.
A
That
was
actually
easy
enough
that
you
know
a
classical
computer
is,
is
probably
the
right
resource
for
it,
but
it's
these
hints
that
you
know
real
world
data
may
actually
have
interesting
properties
to
try
to
investigate,
with
with
Quantum
machine
learning,
so
just
to
give
a
sense
of
of
how
we're
thinking
about
these
these
benchmarks.
A
So
we
have
these
empirical
tests
by
building
out
models,
building
out
benchmarks
coming
at
this
from
a
simulation
perspective,
we're
also
interested
in
regions
where,
with
increasing
qubit
count,
we
can
start
to
chart
out
regions
where
it's
actually
really
difficult
to
simulate.
The
output
of
the
quantum
computer
and
this
feeds
into
in
our
opinion
kind
of
you
know
they're
kind
of
guideposts
for
Quantum
Advantage.
A
So,
basically,
if
we
can
still
simulate
the
system
on
a
laptop
or
on
promoter
very
efficiently,
you
know
it's
unlikely
that
that
problem
complexity
and
the
problem
construction
is
a
path
towards
Quantum
Advantage.
So
we're
using
this
as
as
input
at
rigetti
to
how
we,
you
know,
develop
benchmarks
internally
for
our
Hardware,
as
well
as
for
applications
writ
large.
A
So
there's
you
know
the
kind
of
gold
standard
of
of
kind
of
state
Vector
simulations
that
give
you
everything
with
noisy
near-term
quantum
computers,
that
noise
is
often
represented
with
more
efficient
approximate
methods
such
as
finding
path,
integrals
or
kind
of
approximate
simulations
thereof.
A
And
so
you
know,
one
of
the
important
things
to
for
us
to
try
to
understand
is
what
is
the
role
of
noise
in
understanding
the
the
difficulty
of
of
reproducing
the
output
of
our
Quantum
Hardware
with?
Within
the
you
know,
techniques
of
tensor
networks,
as
as
Maxime
has
taught
us,
is
that
the
difficulty
of
simulating
you
know
perfect.
A
Quantum
computers
is
exponential
in
this
control
parameter,
which
is
the
bond
dimension
of
your
tensor
Network
simulation,
where
you
know
in
a
perfect
quantum
computer,
it's
exponential
in
in
the
size
of
the
Hilbert
space.
However,
under
increasing
noise,
we
expect
that
the
Fidelity
is
actually
lower,
and
so
you
can
represent
the
output
of
the
quantum
circuit
with
a
smaller
and
smaller
Bond
Dimension.
A
This
is
kind
of
a
pictorial
view.
Is
that
you
know
you
kind
of
have
this
fuzzing
of
a
fixture
if
we're
using
these
kind
of
matrix
product
states
to
compress
the
entanglement.
A
So
what
we're
looking
at
here
is
is
kind
of
you
know.
Can
we
calibrate
an
approximate
simulator
to
represent
the
heart,
Quantum
Hardware
noise
and
to
put
that
quantitatively
like
what
is
the
bond
Dimension
that
we
associate
with
the
output
of
one
of
our
circuits?
A
You
may
need
a
different
tensor,
Network
representation
depending
on
you
know
exactly
the
the
algorithm
that
you're
running,
but
just
as
a
test
case
for
us
we're
looking
at
these
graph
problems
on
Max
cut
there
there's
a
cost
function,
which
is
represented
by
this
kind
of
sum
of
zzs,
and
you
know
don't
need
to
worry
about
too
much
how
the
algorithm
works,
but
its
structure
has
these
layers
associated
with
it
that
involve
the
application
of
a
cost
hamiltonian,
which
is
the
exponentiation
of
the
CD
ZZ.
That's
where
all
the
entanglement
happens.
A
It's
also
where
a
lot
of
the
noise
happens,
and
you
know
we're
measuring
in
the
Z
basis.
A
Here's
a
view
of
what
the
output
of
this
circuit
onsats
looks
like
in
an
approximate
simulator,
where
we're
tuning
the
bond
Dimension.
Basically,
you
see
that
the
contrast
is
is
decreasing
here.
There's
there's
an
exact
score
for
your
QA
output
of
4.2,
but
under
the
kind
of
noise
it's
it's
decreasing.
What
we
can
see
is
that
we
can
kind
of
fit
the
output
of
our
Hardware
to
such
a
onsats
and
and
give
it
an
effective
Fidelity.
A
But
what's
really
exciting
is
that
we
can,
with
with
the
resources
that
were
awarded
to
us,
to
the
qis
program
at
promoluter,
able
to
extend
this
up
to
large
linuses,
with
under
approximate
simulation
and
figure
out
under
under
kind
of
somewhat
realistic
models
of
Hardware
noise.
How
much
Fidelity
do
we
need
in
order
to
kind
of
saturate
the
bond
Dimensions
efficiently
large
in
order
to
kind
of
put
us
into
this
Advantage
possible?
A
You
know,
region
and
I
would
just
say
there
are
much
better
techniques
for
solving
these
graph
problems
than
an
exact
simulation
of
qaoa,
and
you
know
we
definitely
recognize
that.
But
what
we're
trying
to
understand
is
you
know
how
how
how
much
Fidelity
do
we
need,
from
a
hardware
perspective,
to
push
the
sampling
problem
into
into
this
kind
of
you
know.
A
Quantum
Advantage,
possible
regime
and
yeah
I
would
say:
we've
learned
a
lot
from
that
check
out
this
paper
and
it
gives
some
some
quantitative
goal
posts
for
that
I'm
at
time.
But one thing that I just want to conclude on is that these are very exciting times. This conversation and others raised a bunch of questions; here are some of what we're thinking about. Taking an exploratory approach, building out multiple different kinds of systems with different types of integration, and doing a lot of quick-turn prototyping is kind of in our blood. So I'll stop there. I'm happy to take questions.