From YouTube: VASP Workshop at NERSC: Basics: DFT, plane waves, PAW method, electronic minimization, Part 2
Description
Presented by Martijn Marsman, University of Vienna
Published on December 18, 2016
Slides are available here http://www.nersc.gov/assets/Uploads/VASP-lecture-Basics.pdf
Presented at the 3-day VASP workshop at NERSC, November 9-11, 2016
So, like I said, these local functions are solutions to the atomic problem, represented on radial logarithmic grids, and there is a pseudization procedure to compute, out of these all-electron partial waves, the pseudized partial waves and the pseudized local potential, and then there are procedures to construct these projector functions. The only real requirement is that they are dual to the pseudized partial waves, and in terms of these quantities we can rewrite our eigenvalue problem in the way we saw before. That is:
Oops, sorry. That is the counterpart to this particular equation, the all-electron atomic problem: in terms of these pseudized quantities one can write this particular equation, and, as we have seen now, you see these projectors popping up. The solutions to this equation are now not the all-electron partial waves but the pseudized partial waves, and we see a few additional quantities emerging: these D_ij matrices and Q_ij, and those are the so-called PAW strength parameters and augmentation charges.
So these equations should have an identical eigenvalue spectrum, and in scattering theory we then look at what is called the logarithmic derivative. Those are these quantities, and they should be essentially the same over a whole range of these energies; then we have a good — then we have generated a good pseudopotential. So this is, well, just standard pseudopotential theory, and there we see the solid lines.
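The logarithmic derivative mentioned here can be written out; this is the standard scattering-theory definition (the matching radius \(r_c\) and the symbols are my notation, not taken from the slides):

```latex
% Logarithmic derivative of the partial wave \psi_\ell at radius r_c,
% as a function of the reference energy \epsilon:
D_\ell(\epsilon) \;=\; r_c \,\frac{\mathrm{d}}{\mathrm{d}r}\,
    \ln \psi_\ell(r;\epsilon)\,\Big|_{r = r_c}
  \;=\; r_c\,\frac{\psi_\ell'(r_c;\epsilon)}{\psi_\ell(r_c;\epsilon)}
```

A good pseudopotential reproduces the all-electron \(D_\ell(\epsilon)\) over a wide window of energies \(\epsilon\), which is exactly the transferability criterion described here.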
Yes, so what do these functions look like? For instance, in the PAW method we include two functions for each valence channel in the standard potentials, where we choose one of the eigenenergies at a bound state of the atomic valence and one of them at a non-bound state, and that means that we have additional degrees of freedom in our local basis. So the use of the frozen-core approximation is also at the level of this PAW, right.
And this is maybe a point that is nice to make; it already came up in a conversation at the coffee break. In the PAW method, in terms of our local functions, we can express the charge density in such a way inside of the PAW spheres; we can do the same in terms of the pseudized partial waves, and those particular densities are then used to represent the local potential on these radial grids, and that is where we are.
For instance, the PAW method differs from the rest of the pseudopotentials in how we compute these strength parameters: in another pseudopotential method you would use the atomic density for these two terms, whereas in the PAW method you use the actual density, decomposed in these partial waves inside of the spheres, in these strength parameters.
So, three contributions, like we said: one plane-wave part — that lives in the whole cell, right, the plane waves extend over the whole simulation box; then there's a part that we subtract, inside of the PAW spheres, in terms of these pseudized partial waves; and the part that we add to it, in terms of these all-electron partial waves, inside of these PAW spheres. And how much of those things is admixed to our plane-wave solution is given by this particular projection — this particular decomposition of the wave function in three parts: a pseudo plane-wave part, a pseudized radial part, and an all-electron radial part.
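This three-part decomposition is the PAW transformation; in the notation usually used in the PAW literature (tildes mark pseudized quantities, and \(i\) runs over the partial waves of all atoms):

```latex
|\psi_n\rangle \;=\; |\tilde\psi_n\rangle
  \;+\; \sum_i \Big( |\phi_i\rangle - |\tilde\phi_i\rangle \Big)
        \langle \tilde p_i | \tilde\psi_n \rangle
```

That is: the plane-wave pseudo orbital, minus the pseudized on-site part, plus the all-electron on-site part, with the admixture fixed by the projections \(\langle \tilde p_i | \tilde\psi_n \rangle\).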
This sort of carries over to all the quantities that we will compute: not only orbitals but also densities and energies will decompose into these kinds of contributions, and that is written here for the kinetic energy, for instance.
This is an expression that we already saw before, and now we substitute, for the all-electron wave function, our PAW approximation. There we assumed completeness of these local bases — and that is always only approximately given, but for this decomposition we assume it to be complete — and then, for this kinetic energy, we end up with three contributions, of which one is of purely plane-wave nature.
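Written out for the kinetic energy, in the standard PAW form (with occupations \(f_n\) and the on-site density matrix \(\rho_{ij}\) built from the projections):

```latex
E_{\mathrm{kin}} \;=\;
  \sum_n f_n \,\langle \tilde\psi_n | -\tfrac{1}{2}\Delta | \tilde\psi_n \rangle
  \;+\; \sum_{ij} \rho_{ij} \Big(
        \langle \phi_i | -\tfrac{1}{2}\Delta | \phi_j \rangle
      - \langle \tilde\phi_i | -\tfrac{1}{2}\Delta | \tilde\phi_j \rangle \Big),
\qquad
\rho_{ij} \;=\; \sum_n f_n \,\langle \tilde\psi_n | \tilde p_i \rangle
                             \langle \tilde p_j | \tilde\psi_n \rangle
```

The first term is the purely plane-wave contribution; the on-site terms are evaluated on the radial grids inside the spheres.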
The plane-wave part of this all-electron wave function is actually the variational part. So that's the part that our computation — that VASP — tries to compute, because you see these guys, these ones and these ones and the projectors: they are given; they're precomputed and specified in the POTCAR file. The only thing that remains to be determined is this plane-wave part, so that's the variational part; that is here. And there's another example of this.
For the density operator, you see that it separates again into three of these terms: a plane-wave part of the density, a part expressed in pseudized partial waves, and one expressed in all-electron partial waves, where the last two live only inside of these PAW spheres, on logarithmic radial grids. Non-local operators are a bit more complicated, and I won't go into the details.
The thing is, there is an additional trick employed there so that we can separate the expectation value of a non-local operator again into these three parts. Because the first non-local thing that we have is already the Hartree potential, right: it doesn't depend only on the position, but it depends on r and r'. So in that sense it's non-local, and what is done there to achieve this again is this:
A
Why
the
density
inside
the
sphere
is
the
correct
one,
because
if
you
have
the
moments
inside
of
a
sphere
correct,
then
the
potential
outside
will
be
the
true
one
yeah.
So
that
is
done
and
these
soft
compensation
charges.
Why
is
it
soft
or
soft
in
the
sense
that
I
have
to
be
presentable
in
terms
of
plane,
waves
yeah?
So
so
that
means
that
the
interstitial,
the
potential
in
the
interstitial
is
correct.
A
By
adding
these
charges
and
then
the
fact
that
we
have
messed
with
this
charge
density
is
then
compensated
by
fact
that
we
add
these
these
compensation
charges
on
the
radial
grids
as
well,
where
they
are
subtracted
out
right.
So
we
have
the
long
range
interactions
between
between
charge
density
is
corrected
on
the
plane
wave
grid
and
the
fact
that
we
have
messed
with
this
fluid
our
charge
density
is
corrected
on
one
of
the
radial
gates
and
then
again
for,
for
instance,
in
the
Hartree
energy.
A
This
means
that
we
again
separate
the
energy
contributions
into
a
part
that
is
evaluated
completely
on
the
plane,
wave
grid
and
two
parts
that
are
evaluated
on
on
the
radial
grids.
So
in
that
sense
you
can
for
non
local
operators
with
with
these
with
these
tricks
with
compensation
charges,
you
can
make
sure
that
you
can
that
again,
we
can
decompose
it
in
these
three
contributions,
where
there's
no
crosstalk
between
a
plane,
wave
crease
between
the
regular
grid
and
the
radial
grids
yeah.
So, and here — well, this is a point that I've already made a few times. What is good? What is good is that — and that is what you saw — the strength of the PAW method lies in the fact that we represent the scattering properties of the atomic problem over a wide range, so the potentials are highly transferable.
Another strength is that, due to the fact that we have these local functions — the atomic ones — and that we make this reconstruction of the nodal features, we implicitly work with the complete all-electron one-electron function. That means that we remain orthogonal to the core, and so — many other pseudopotential methods suffer from the fact that you have a certain amount of non-orthogonality to the frozen core states. That is not happening in the PAW method.
So that's two things: high transferability, and orthogonality between core and valence states is sort of guaranteed. And the essence is that we never put quantities on one common grid: we never try to put radial functions onto the plane-wave grid, or vice versa; otherwise the method would immediately break down. We get a lot of questions because — and I can understand this — people would like to visualize the wave function, say: "but can I not visualize the PAW wave function?"
Actually, you cannot, because of the way the PAW method works. You cannot visualize the wave function, because to visualize it would mean to bring everything onto a common grid. Reconstructing this wave function is never explicitly done, and so this object that we saw here is never explicitly calculated. If there were a need to do that, then the whole method — you could throw it away; you could work with the common grid immediately, yeah. So.
Okay, so the pseudo orbitals — the plane-wave pseudo parts — are the variational quantities, and this is just a recast of the previous equation. By the way, you see: here, before, we had these pseudized partial waves, but obviously, in real life, we're not trying to determine those particular ones; we want to compute the plane-wave part of our PAW function. So that's the actual equation.
So, is it accurate? Yes, it is very accurate. This is a recent publication where a lot of work has been put into comparing the accuracy of PAW potential sets with all-electron calculations — sort of benchmarking them — and, yeah, if you're interested: it is by Lejaeghere and coworkers, published in Science some time ago, and essentially what they compute is a measure.
So this is the volume-versus-energy curve of one code — let's say our code, or any other PAW code — and of an all-electron code, and the area where they are different is computed in terms of energy, and this is the measure of the average deviation you would find for elemental solids of all these elements — and that is truly, truly tiny. Actually, for a long time, the hardest part was for the all-electron methods to agree on a common answer.
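The Δ measure described here — the area between the equations of state of two codes, expressed as an energy — can be sketched numerically. This is an illustrative root-mean-square version over a ±6 % volume window, not the exact published protocol:

```python
import numpy as np

def delta_gauge(e_a, e_b, v0, window=0.06, npts=1001):
    """RMS energy difference (eV/atom) between two E(V) curves over a
    +/-6% volume window around the equilibrium volume v0 -- a sketch of
    the Delta measure used to compare DFT codes."""
    v = np.linspace((1 - window) * v0, (1 + window) * v0, npts)
    diff = e_a(v) - e_b(v)
    return np.sqrt(np.mean(diff ** 2))

# Two hypothetical codes: same curvature, constant 0.5 meV/atom offset.
e_code1 = lambda v: 0.05 * (v - 10.0) ** 2
e_code2 = lambda v: 0.05 * (v - 10.0) ** 2 + 5e-4
print(delta_gauge(e_code1, e_code2, v0=10.0))  # 0.0005 (eV/atom)
```

Values of this size — fractions of a milli-electron-volt — are exactly the order of magnitude quoted on the slide.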
So these are the individual contributions to this average value. For instance, for boron, the delta — the area in the difference between these two curves — is 0.2 milli-electron-volt if you compare the calculations. Yes, well, Europeans like to use commas sometimes; it means the same thing.
Yes, yeah — no, this is zero point two, yeah. So this is a comparison — I already spoke about this before — against Gaussian basis set calculations, GTOs, Gaussian-type orbitals, yeah; anyway, for the G2 test set, which is a set of small molecules. This was actually work where we were benchmarking our hybrid functional code, so this is for PBE and for PBE0: it's the difference between a Gaussian calculation and a PAW calculation, and in almost all cases—
—it's smaller than 1 kcal per mole, the energy differences of these atomization energies of these small molecules. And 1 kcal per mole, that's something that will pop up on several occasions: that is what is commonly called chemical accuracy, so any method that reaches this is the winner — and, of course, with respect to experiment, right, because that is sort of the accuracy that they can reach in an experiment, and that would mean, if your method guarantees this, you wouldn't need to do the experiment anymore. This is, of course, agreement of two DFT calculations, right.
So this is not close to experiment, but they agree amongst each other, at least. So let's — yes, let's do this: electronic minimization. So here, unfortunately, I think I've left out the fact that we work on pseudo orbitals, so let's forget the PAW formulation for a while, right. So these are the orbitals that we want to determine, and we said okay, we have to solve these equations, and there are essentially two ways that one can do this.
So, one is direct minimization: we could start with a set of trial orbitals and minimize the total energy by following the gradient on the orbitals in the direction of decreasing total energy. And the gradient on the orbitals is actually something like this, yeah: this is the action of the Hamiltonian on the current orbital, minus its current approximate eigenvalue times the orbital, and this is what you would call the gradient — sometimes also called the residual. We saw this before, yeah.
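The residual just described can be written out; in the form usually quoted for generalized eigenproblems (with an overlap operator \(\hat S\), which is nontrivial in the PAW case; the notation is mine, not from the slides):

```latex
|R_n\rangle \;=\; \big( \hat H - \epsilon_{\mathrm{app}}\,\hat S \big)\,|\tilde\psi_n\rangle,
\qquad
\epsilon_{\mathrm{app}} \;=\;
  \frac{\langle \tilde\psi_n | \hat H | \tilde\psi_n \rangle}
       {\langle \tilde\psi_n | \hat S | \tilde\psi_n \rangle}
```

The residual vanishes exactly when \(|\tilde\psi_n\rangle\) is an eigenstate, which is what makes it usable both as a gradient and as a convergence measure.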
Admixing parts of this to the current orbitals means that you can — well, you can use conjugate gradient or any direct minimization method to follow this into a local minimum of the total energy. That is, for instance, done in some other codes. The thing that we commonly do is called the self-consistency cycle, and that's slightly different.
A
We
don't
start
per
se
with
the
trial
orbitals,
but
you
can
start
with
trial
density
and
then
you
construct
the
Hamiltonian
in
accordance
with
this
density
and
then
well,
essentially
diagonalize
it
so
I
solve
this
particular
equation
and
that
will
give
you
a
new
set
of
orbitals
and
a
new
density.
And
then
you
mix
this
new
density
with
the
old
one
and
it
defines
a
new
Hamiltonian
that
you
again
diagonalized,
and
so
that
is
different
here.
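The loop just described — density → Hamiltonian → diagonalize → new density → mix → repeat — can be sketched for a toy model Hamiltonian (illustrative only; VASP diagonalizes iteratively and uses a far more sophisticated mixer):

```python
import numpy as np

def scf_cycle(h0, interaction, n_occ, alpha=0.4, tol=1e-10, max_iter=200):
    """Toy self-consistency cycle (illustrative only; not the VASP algorithm):
    build H from the current density, diagonalize it, form the new density
    from the occupied states, and mix it only partly into the old density."""
    dim = h0.shape[0]
    rho = np.full(dim, n_occ / dim)             # trial density
    for _ in range(max_iter):
        h = h0 + np.diag(interaction * rho)     # Hamiltonian for this density
        _, vecs = np.linalg.eigh(h)             # diagonalize (exactly, for a toy)
        rho_new = (np.abs(vecs[:, :n_occ]) ** 2).sum(axis=1)
        if np.max(np.abs(rho_new - rho)) < tol:
            return rho_new
        rho = rho + alpha * (rho_new - rho)     # damped density mixing
    return rho

h0 = np.diag([0.0, 1.0, 2.0, 3.0]) + 0.1 * (np.ones((4, 4)) - np.eye(4))
rho = scf_cycle(h0, interaction=0.5, n_occ=2)
print(rho.sum())  # the total charge (2 "electrons") is conserved
```

The damping factor `alpha` plays the role of the density mixing discussed later: taking the full new density (`alpha = 1`) is exactly what invites instabilities.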
There you work directly on the orbitals; here we do a diagonalization and work with the density, and that's the self-consistency cycle. And that was actually, well, shown to be a bit more efficient than the direct optimization, especially if you go to metallic systems, and that's a comparison that is shown here. So here we have convergence of the total energy in the self-consistency cycle, and here we have it for several different system sizes — sort of longish, disordered diamond cells of four different sizes — and we have that here in the direct method.
If you then go to a metallic system, you see that the self-consistency cycle still manages to converge in a reasonable number of iterations. But if this cell becomes larger in a certain direction, then this direct minimization will have huge trouble finding the ground state, so we tend to rely on the self-consistency cycle and charge density mixing. The problem with this situation — especially with this situation in the direct method — is called charge sloshing, and that is something that I've tried to explain here. So, the gradient:
This is a Hamiltonian matrix element between two states that we currently have, and this is their occupation, and so this part exists only between parts of the spectrum of unoccupied states and occupied states, right. So it's a rotation — an admixture of unoccupied into occupied states — that is part of the gradient. So consider two states very close together, and that is what you would have in a metal or in a small-gap system.
That is something that often pops up. We try to counter this with clever mixing of the charge densities. That is why sometimes your systems will not converge: because you have this charge sloshing — very small, long-wavelength changes in your charge density yield huge responses in your potential, and your iterations will become unstable.
Now, because of this, the largest stable step that you could take would be connected to — so this change, this wave vector, is connected to the size of the cell. That is why I had a small q: this is like a particle in a box, and it's connected to the inverse of the length of your cell.
So if you have, for instance — and maybe you have noticed this — a system that is long in one direction, it's harder to converge than if it were not, and that's because of these effects. So if you have a surface that is long in a certain direction, because you have this vacuum, this would possibly immediately pop up. Yes, so the smallest stable step size that you would have here would be inversely proportional to the square of the longest dimension of your cell, in terms of charge sloshing.
If you have a large gap, that doesn't hurt you so much, because then this will be very, very small, but small-gap systems will be strongly affected by this. So what we do to counter this — what the self-consistency cycle counters this with — is the following: we set up a Hamiltonian for a certain density and we compute these wave functions, where I've written here—
—iterative refinement of the wave functions; we'll come back to this, but essentially we do a diagonalization, and that yields wave functions and a new charge density. If I would use this new charge density in its totality, then charge sloshing might kick in immediately — but we don't do that. Instead, one mixes it only partly into the previous charge density, and that should dampen out these effects of charge sloshing.
So a clever mixer will be able to dampen out these effects, and this mixing will give a new charge density, and this whole thing is taken for another spin. So, two things that are important: one is iterative diagonalization — because, as we said before, we diagonalize this Hamiltonian, but we don't want to diagonalize it exactly; we'll come back to this, but we do this iteratively — and another key point is density mixing, to keep effects like charge sloshing under control.
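One well-known way to damp charge sloshing is Kerker-style mixing, which suppresses the long-wavelength (small |G|) components of the density change, since those produce the largest Hartree response (∝ 4π/G²). A minimal sketch of the idea (illustrative only; VASP's default mixer is a Broyden scheme with a Kerker-like metric):

```python
import numpy as np

def kerker_mix(rho_in_g, rho_out_g, g2, alpha=0.8, q0=1.5):
    """Kerker-style density mixing in reciprocal space (a sketch of the idea
    only).  Long-wavelength (small |G|) components of the density change are
    damped, because they produce the largest Hartree response (~ 4*pi/G^2)."""
    damp = alpha * g2 / (g2 + q0 ** 2)   # -> 0 for G -> 0, -> alpha for large G
    return rho_in_g + damp * (rho_out_g - rho_in_g)

g2 = np.array([0.01, 1.0, 100.0])        # |G|^2 of three Fourier components
step = kerker_mix(np.zeros(3), np.ones(3), g2)
print(step)  # the smallest-G component moves least
```

The screening length 1/q0 here is a free parameter of the sketch; in practice it relates to the Thomas-Fermi screening of the system.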
The first one is commonly constructed from the atomic charges. So the atomic charge density is also information that is carried in the POTCAR file, and the program will take all the atomic positions and put atomic charge densities on the respective positions, and that will really be your initial charge density. So that is the standard behavior — you don't have to do anything for that — and that is quite a good common choice. If you then look at what happens in the program:
So, if you start from scratch — if you don't have wave functions from a previous calculation that you can use to restart, and so on — then it will start from atomic charge densities, and there will be a few steps where the charge density is kept fixed. So, depending on the algorithm you use, for the first few steps — the first five or ten steps—
A
The
charge
density
is
kept
fixed
because
the
wave
functions
that
you
use,
they
are
initialized
with
random
numbers,
so
there
might
be
pretty
bad
because
we
do
this
iterative
refinement
of
the
wave
function.
So
we
do
not
exactly
diagonalize
the
hamiltonian
only
on
its
way
the
wave
functions
are
improved,
but
starting
from
random
numbers.
The
the
charge
density
you
would
get
out
of
this
first
step
would
be
really
really
bad.
So
you
don't
want
to
mix
that
to
something
that
is
pretty
reasonable,
because
atomic
charges
are
pretty
reasonable.
So why do we not do a direct, exact diagonalization of the Hamiltonian? Because you could envision doing that: you set up a Hamiltonian, express it in plane waves, and simply diagonalize it, and you end up with a bunch of eigenstates and eigenenergies. So this would be a matrix of the size of the number of plane waves in our FFT grid times the number of plane waves in our FFT grid, and you diagonalize it.
There is software to do this — libraries — but you end up with an FFT grid's worth of eigenfunctions of your system, and you don't need so many. For instance, imagine the worst possible situation: that would be a small molecule in a large box. The FFT grid might then be 50,000 points, for instance, but for the small molecule you need only four states to put the electrons in. You only want to occupy a few states; you're not interested in 50,000 eigenstates of this system — you only want four of them.
And diagonalization commonly scales cubically with system size, so you pay a heavy price for an exact diagonalization of this matrix — and you have to store it. So you don't want to do this. An iterative diagonalization is a way around it: by means of iterative diagonalization techniques like blocked Davidson or RMM-DIIS, you can avoid this.
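The point about only needing a few of the lowest eigenstates can be illustrated with SciPy's iterative Lanczos-type solver (this is not the blocked-Davidson or RMM-DIIS implementation in VASP, just the same idea): it only ever applies H to vectors and returns just the requested lowest states.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(0)
n = 400                                    # stand-in for the FFT-grid dimension
h = np.diag(np.arange(n, dtype=float))     # dominant "kinetic" diagonal ...
coupling = 0.01 * rng.standard_normal((n, n))
h += coupling + coupling.T                 # ... plus a small symmetric coupling

# Only the 4 lowest eigenpairs, found iteratively -- no O(n^3) full solve,
# and no need to compute or store the other n-4 eigenvectors.
vals, vecs = eigsh(h, k=4, which='SA')
print(np.sort(vals))
```

For the 50,000-dimensional case mentioned above, the full dense matrix would not even fit comfortably in memory, while an iterative solver only ever needs matrix-vector products.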
Okay, so yes, that is the reason that we use all these nice methods like RMM-DIIS or blocked Davidson, yeah. So, another key ingredient is a subspace diagonalization, and that's one of the things that we have spoken about before. So, the subspace is the space spanned by our current orbitals.
So, essentially, the thing that we do in iterative matrix diagonalization methods is a Rayleigh-Ritz diagonalization in this iterative subspace, yeah, and that is only of the size of the number of bands that we want to compute — so, let's say, of the order of the number of electrons in our unit cell — the number of bands times the number of bands. So these are small diagonalization problems, much different in that sense from this one, which would be a huge one, right. So, iterative matrix diagonalization depends on — yeah.
Yes, this is sometimes a bit complicated, but I want to give you a taste of what happens and mention some of the buzzwords connected to it. So, yes — oh, sorry, yeah — blocked Davidson. What would you do in a block, right? It's an algorithm — one of these iterative diagonalization methods — by which we compute the so-and-so-many lowest eigenstates of the eigenspectrum of the Hamiltonian. So what you would do is take a subset of bands; in blocked Davidson—
—we can quickly forget this, but essentially this gives us the gradient on our orbitals. So we construct a subspace out of our current orbitals and the gradient on them, and then do a diagonalization — a Rayleigh-Ritz diagonalization — in this particular subspace, and that gives us, well, eigenfunctions in this subspace; and then we apply the Hamiltonian again to these eigenfunctions, and that gives us another extension of our subspace.
So it grows a bit, but only slowly: for instance, you would commonly take four bands and then compute the gradient, and you have a subspace of size 8; and then you might diagonalize it and then compute the gradient again on those orbitals, and then you're at size 12. So the basis in which we do this diagonalization is growing, but it's very, very small, so it's not really a big problem.
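The grow-diagonalize-extend loop just described can be sketched in a few lines. This is a minimal illustration of the idea only — VASP's implementation adds preconditioning and works per k-point and per block — and the matrix used here is a made-up test case:

```python
import numpy as np

def block_davidson(h, n_bands=4, n_expand=3):
    """Minimal blocked-Davidson-style sketch: the search space starts from
    a block of trial vectors and is repeatedly extended with the residuals,
    with a Rayleigh-Ritz diagonalization in the growing subspace."""
    n = h.shape[0]
    v = np.eye(n, n_bands)                         # trial block
    for _ in range(n_expand):
        v, _ = np.linalg.qr(v)                     # orthonormalize the subspace
        theta, s = np.linalg.eigh(v.T @ h @ v)     # Rayleigh-Ritz step
        ritz = v @ s[:, :n_bands]                  # current best eigenvectors
        resid = h @ ritz - ritz * theta[:n_bands]  # gradients / residuals
        v = np.hstack([ritz, resid])               # extend: 4 -> 8 columns
    return theta[:n_bands]

h = np.diag(np.arange(20.0)) + 0.05 * (np.ones((20, 20)) - np.eye(20))
print(block_davidson(h))  # close to the 4 lowest exact eigenvalues
```

The subspace diagonalizations here are tiny (8×8 at most), which is exactly the point made above about the cost compared to a full diagonalization.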
So after a few of those steps, where the space is growing — those spaces are commonly called Krylov spaces; I don't know if you are familiar with this, it's a Krylov-type method — this search space, as you add H·ψ to your search space, brings you closer and closer to these eigenfunctions. So after a few steps — and commonly we don't do more than three or four — you take—
—so-and-so-many eigenfunctions and eigenvalues of our Hamiltonian — and that is, obviously, still the Hamiltonian at constant density here. So you do a few steps of this iterative refinement, and then an orthogonalization, and then you compute a new density and mix that one, and then it goes back into this guy. So that is essentially how this self-consistency cycle works. And on the density side, there we said:
Okay, we don't want to use the complete new density, because then the whole thing will be unstable — and I don't want to really go too much into this. But what is essentially done in programs like VASP — and other programs use this as well — is Broyden mixing, and Broyden mixing uses a model for the dielectric function. So the dielectric function tells you—
—at some point, the mixer stores information for this, and that is expensive to carry around indefinitely, so at some point the mixer will get reset — and that is why you might sometimes see that, all of a sudden, your convergence behavior worsens again: because then the mixer gets reset. Yes, anyway, that is, in a nutshell, what the Broyden mixer will try to do, and these are the parameters that sort of give you the shape of your dielectric function, yeah. Well, there are defaults for them, but they would not work equally—
—well always. I mean, the defaults are chosen such that it works well in many, many cases, but that is one of the things that you might want to try to play with when you see that your system is not converging as rapidly as you would like, yeah, or not converging at all. Mostly — well, where does this pose a problem? For instance, in magnetic systems we often see this.
So it's constructing a mixing function that is very good at predicting which new charge density to choose with respect to electrostatic interactions, but the relaxation modes of the magnetic system are often of a completely different order of magnitude in the total energy. So this function that gets constructed will not be very effective at mixing the magnetization density.
So if you do magnetic systems, it might help to do, like, 20 steps of self-consistency, then stop the calculation after the twenty steps and restart: you will restart with a fresh mixer, and for that particular mixer the magnetic modes will be much more visible at the beginning, because the responses at that point will be more strongly dominated by the magnetic modes. For instance — those are some of those things, yes. And then there are horrible systems that will simply refuse to converge, yeah.
So they're notoriously difficult, but, even worse — well, not the most effort went into creating those pseudopotentials, because they're not so commonly used. So these PAW datasets, they have grown organically, so the ones that are used very, very often are of the highest quality, in most cases.
So I guess that this was an old calculation, actually, because normally Georg, when he sees numbers like this, starts to work on the potential. As a side issue, one should be aware of the fact that, with this delta value — so it looks like most of them are very small, but a delta value of two is already almost perfect agreement. So three and a half is definitely something where you might look, but it's not a red light, yeah.
Well, there are people that do this, obviously, with crystallography, and they would mostly use Bader analysis and then compare the spectra that can come out of this: so they decompose the Bader charges on the atoms and compare these to diffraction experiments — that I know of.
But the thing is, this is DFT against DFT, right, and as soon as you say "experiment", then things change. I mean, here you would expect, if everybody does their job, that this should be zero, sort of, right. But between DFT and experiment — or hybrid functionals, or what have you not — there will always be a difference. Yes, so there it would not necessarily mean that something is bad; it's simply that your approximation is limited.
Sorry, why don't I find this now? Yes, so what we need is enough functions to be able to represent the density, right, yeah. So the density is constructed — well, here it's a sum over this n, the number of bands — and in each of these one-electron orbitals we can put either one electron or two, depending on whether we consider spin polarization or not, yeah. So let's forget about spin polarization; then in each of these orbitals we can put two electrons.
So if I have a unit cell which contains 50 electrons, I would need 25 orbitals to be able to put them in, and then I can compute the density. So in that sense we only need a limited number of the lowest eigenstates, because if we look at our total energy expressions, we see that — I don't know if I have that one here, but essentially you can compute the total energy as a sum—
—well, it's not this expression, sorry, but you can represent the total energy as a sum over the eigenenergies. So, if you would fill your states with electrons, you would fill the lowest ones first, because that would end up with the lowest total energy, right. So, yes, in that sense we only need a limited number of the lowest eigenstates of our Hamiltonian, yep.
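The counting rule described here — 50 electrons, 25 doubly occupied orbitals — is just ceiling division; a small illustrative helper (not a VASP routine, and ignoring any extra empty bands one would add in practice):

```python
def min_bands(n_electrons, spin_polarized=False):
    """Minimum number of one-electron orbitals needed to hold all the
    electrons: 2 per orbital without spin polarization, 1 with."""
    per_band = 1 if spin_polarized else 2
    return -(-n_electrons // per_band)   # ceiling division

print(min_bands(50))        # 25 orbitals for 50 electrons
print(min_bands(51))        # 26: the last orbital is half filled
```

In a real calculation one requests somewhat more bands than this minimum, so that partially occupied and empty states near the Fermi level are available.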
You should be able to do an LDA calculation with the PBE potential — and that is exactly what happens — or do a hybrid functional calculation with a PBE potential, because we do not generate potentials that are used in hybrid functionals with hybrid functionals, right. So there you are actually forced to use either an LDA or a PBE one, and it should not matter a lot: the potential should be transferable enough that it should be able to deal with this.
So the fact that we have different versions is sort of a historical thing, and it's limited to the fact that we can generate these potentials only for LDA and for PBE — and, I think, for a revised PBE, something that does not get done anymore; even so, we do still generate them for LDA and for PBE. For a long time we were doing GW calculations with LDA potentials, because they were only generated for LDA. Now that has been taken care of; you can use PBE ones there. I standardly use PBE potentials—
—unless I do an actual LDA calculation; there I might use the one that has been generated with LDA, because I have it — not because I think it is so important, but since I have it, I would use it. Then there are differences between the potentials that are not related per se to the functional with which they were generated. So there are — not always, but there are — versions that are called underscore h; those are harder ones, and they generally have a smaller core radius.
So if you have, for instance, some bonding situations with extremely short bonds, then it might be that you would need a harder potential, with a smaller pseudopotential core, so that those cores do not overlap so strongly. Typical situations are the oxygen dimer, for instance — there's an O potential and an O underscore h potential — so if you're in some situation where you have very short bonds between nitrogen and oxygen, or what have you not.
—taking care of the fact that the scattering properties are truly represented up to a much higher energy within the spectrum. And there are also these underscore GW potentials; they're slightly more expensive, because the one-center basis is larger, and maybe slightly harder, but not necessarily, so you could use them for ground-state calculations as well. But if you are interested in lots of unoccupied states, for whatever reason, then those are the kind of potentials one should use.
B
A
B
A
B
A
Unfortunately. But if we do the action of the Hamiltonian on a wave function, or compute its expectation value: the Bloch wave vector of this entity and of the one we use on the other side is the same. It is diagonal in reciprocal space, right? So the k-points only couple through the density, unless you use hybrid functionals, where things are different. But when you compute the total density, there is a sum over the k-vectors.
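The point that different k-points meet only through the density can be sketched with toy numbers: the total density is the k-point-weighted sum of the occupied-orbital densities, n(r) = Σ_k w_k Σ_n f_nk |ψ_nk(r)|². All shapes, weights, and the random "orbitals" below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
nk, nbands, npts = 4, 3, 8
wk = np.full(nk, 1.0 / nk)                      # k-point weights (sum to 1)
occ = np.ones((nk, nbands))                     # occupations f_nk
psi = rng.standard_normal((nk, nbands, npts))   # real-valued toy orbitals

# Accumulate the density over k-points and bands in one contraction:
# each k-point contributes independently; they only "meet" here.
density = np.einsum("k,kn,knr->r", wk, occ, psi**2)
```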
C

A
No, no, because these are, in essence, plane waves. Well, you could construct stars out of them, but there is no need, no real need, to do that here, yeah. It is done, obviously, in the step where we symmetrize the charge density, because symmetrization of the charge density is done in reciprocal space, and there we do construct these stars out of them. Yeah.
C
A
You would not have to extrapolate per se, but yes, we have done this, for instance, and it was a nice one, for a chlorine dimer. It turns out, well, it is on one of the other slides that you will get to see, that you have to go to a huge basis set and use a sufficient number of plane waves at the same time. And then you see that the results agree, but only if you crank up both methods completely.
A
Is it so important? I mean, yeah: if you want to get to agreement of one kcal per mole, it is important; for many situations it would probably not be so important, and you would use a smaller basis set. I don't know Gaussian well enough to be able to say "for this you should use that, or for this...", but I think that is pretty well known and pretty well characterized, especially because this G2-1 test set was obviously chosen because it is one of the... That's right, yeah.
A
We have a recommendation on the website, in the manual, so I would start with the recommended ones. Then, for instance, yes, for GW I would use the recommended GW one. And then there is nothing that one can... there are no guarantees, let's put it like this, right? So if you have the feeling that...
A
That maybe it would pay off, for this particular compound that you are studying, to include more electrons in the valence, then you might look whether there is a potential that includes a part of the core in the valence. There are all kinds of these varieties, because any recommendation is only a general one, and the particulars of where people will use it we cannot predict beforehand. So one always has to test.
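That "always test" advice usually reduces to a small convergence loop: recompute the quantity of interest at increasing cutoffs and stop when successive values agree. A minimal sketch, assuming OUTCAR's standard "free  energy   TOTEN" line; the helper names and the 1 meV tolerance are illustrative choices, not from the talk:

```python
import re

def final_energy(outcar_text):
    """Return the last 'free  energy   TOTEN' value (eV) found in the
    text of an OUTCAR file."""
    matches = re.findall(r"free  energy   TOTEN\s*=\s*(-?[\d.]+)", outcar_text)
    return float(matches[-1])

def converged_cutoff(cutoffs_ev, energies_ev, tol_ev=1e-3):
    """Return the first cutoff whose total energy differs from the
    previous one by less than tol_ev, or None if never converged."""
    for i in range(1, len(cutoffs_ev)):
        if abs(energies_ev[i] - energies_ev[i - 1]) < tol_ev:
            return cutoffs_ev[i]
    return None
```

One would run the calculation once per cutoff, collect `final_energy` from each OUTCAR, and feed the list to `converged_cutoff`.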
B

A
That is so. If you are lucky, then you can put the 4f states in the core. So if they are strongly localized and do not contribute to the binding, then there are varieties of potentials, sort of ionic potentials, where you put the 4f's into the core, and that would be very good, for instance, for 3+ rare earths in ionic systems, or something like that. As soon as the 4f...
A
As soon as the f electrons start to build bands, or if you have metallic states, then you would have to include them in the valence and hope for the best. I mean, the problem is that those electrons are often strongly localized, and then it is not so much a problem of the PAW method; actually the PAW method can deal with this, but it's DFT. DFT is not good for these strongly localized states.
A
It wants to delocalize them too strongly; density functional theory, or most density functionals, let's put it more specifically. And then it might end up that your calculation does not have a lot to do with the physics, with the physical reality, right. So yes, they are notoriously problematic systems.
D
A
Would it be nice to use other potentials? I don't think there is a real need. I mean, this study has shown that there are a few good sets of potentials out there that one can use, and we are among them. So I don't know what the need would be to use another one; but a direct comparison between our code and another one is not so much in our interest, right.
A
Oh yeah, okay. Well, that is one thing which I should probably explicitly mention: there are potentials where that stuff has been put into the core; those are available. If you have particular needs, then I really suggest you rely on us to construct a potential.
A
We don't distribute the software to construct these potentials; we don't think that would be wise, actually. I mean, the construction of pseudopotentials, unless you really know what you are doing, is where, sort of, the magic happens. It is still a sort of alchemy: you turn a bit here, you turn a bit there, and it's a huge parameter space, and people's brains, combined with experience, can find a way through this parameter space.
A
If you don't know what you are doing, it is very, very easy to make a bad potential, so we don't actually encourage this at all. I would not try my hand at it, and I have already been in this field for a very long time. If I need something, I go one room over and say: please, can you make this, or can you make this ghost state disappear? Yeah, so tinkering with it is not something I would necessarily...
B

A
So there is a quantity that is called NGX (or Y, or Z), and it is in the INCAR file, and there is one that is called NGXF (YF, ZF). Why is that so? If this one is two times that one, then the grid used to represent the charge density should be free of aliasing; it should be at least two times that, so that would be... But if you set this...
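The factor of two comes from the charge density containing Fourier components up to twice the wavefunction cutoff G_cut. A rough sketch of the grid sizes this implies, assuming the textbook relation G_cut = sqrt(2mE_cut)/ħ (with ħ²/2mₑ ≈ 3.81 eV·Å²); VASP's actual default rounding may differ slightly:

```python
import math

HBAR2_OVER_2M = 3.81  # eV * Angstrom^2, hbar^2/(2 m_e) for an electron

def min_fft_grid(encut_ev, a_ang):
    """Estimate FFT grid points along a lattice vector of length a_ang
    (Angstrom) for a plane-wave cutoff encut_ev (eV): the wavefunction
    grid must span -G_cut..+G_cut, and the fine (density) grid must be
    twice that to represent |psi|^2 without aliasing."""
    g_cut = math.sqrt(encut_ev / HBAR2_OVER_2M)        # Angstrom^-1
    ng = math.ceil(2 * g_cut * a_ang / (2 * math.pi))  # NGX-like estimate
    ngf = 2 * ng                                       # NGXF-like, alias-free
    return ng, ngf
```

For example, `min_fft_grid(400.0, 10.0)` gives a wavefunction grid of 33 points and a fine grid of 66 along a 10 Å lattice vector at ENCUT = 400 eV.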