From YouTube: CHIPS Alliance - Analog Working Group - 2021-06-29
A
Yeah, these are some general slides, just an overview of OpenRAM. I wanted to get some feedback from people on this. It's more of an overview of how I envision the project, what it's for, and how it's moving forward. As you mentioned, there are tons of challenges with memory generation, and the more I get involved with it, the more I see.

A
Every year I keep finding that the number of challenges is actually even greater than I thought, so it's probably good that I jumped into it naive; now I'm figuring out everything and how it all has to work. The reason I started the project was basically that no one else wanted to do it, right? No one else was making an open-source memory compiler. In grad school it was always a challenge to get even standard cells.

A
And design kits. We're now seeing high-level languages, and you can get IP in terms of processors and such, but memories are really that one component, along with things like PLLs and other really process-specific blocks, that is hard to get without licensing from a particular vendor. That's why I wanted to address this problem. This part is more for a general audience; I think most people here know what a memory compiler is.
A
A memory is a component in your design flow: you can treat it as a black box from a design perspective and use it in your place-and-route flow. In terms of actually designing memories, it's not quite pure analog design, but it has a lot of analog aspects, so I show the trade-offs from the Razavi analog design book, where they talk about amplifiers and the like: noise, gain, and so on.

A
You have a lot of those same trade-offs, this trade-off octagon, with memories, except your main concern is probably area, or memory density, in addition to power and all the other issues. It's not just "the digital logic works or it doesn't"; you have to worry about many other complexities in the design of memories. For that reason there aren't really tool flows to design memories very easily; there's a lot of customization involved.
A
Those are all on the roadmap. I'm actually working with a student in Turkey to make a cache generator using OpenRAM, but that's a higher-level optimization. There are a couple of other goals I have for OpenRAM just at the bank level. I had this frustration when I started looking at this: memory compilers are not a new thing. Every company that does high-performance designs has its own; Intel, AMD, and so on.
A
They may not actually be compilers per se, in that they may generate a couple of memories that they then use, and those are highly optimized. There are also companies that sell compiler infrastructure.

A
Arm is well known for selling IP, and I think the other big three have it too: Cadence, Synopsys, and probably Mentor Graphics as well. Actually, one of my students from OpenRAM went to work for Mentor in their memory unit. So they all have things to help with memory design, but there was nothing open source, and that's one point: these all require expensive licenses and so on. Now, there were some academic efforts.
A
Most of those were not complete memory compilers; they did what they were good at. I don't even cite CACTI here, the famous architectural simulator for estimating the power and performance of a cache.
A
The main thing with those is that they were limited in that they weren't complete: they didn't have control logic (in the case of pop still), or they were for one technology, or they relied on commercial tools to do the design, like with SKILL and Cadence. So we wanted to address that. Now, I always like to think of this in terms of being an optimist and a pessimist, the glass half full.
A
Technically, memories have a very regular structure. If you ask a software designer why a memory compiler is hard, the answer is often "it's just a nested for loop, right? You make the rows and columns and you're done." Well, that is true, but then there's the control logic and all the peripheral circuitry, which is actually very complex. And open cell libraries and PDKs are available, but not memory IP.
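The "nested for loop" intuition is roughly right for the array core itself. A toy sketch of that part might look like the following; the function, names, and pitches are hypothetical illustrations, not OpenRAM's actual API:

```python
# Toy sketch of the "nested for loop" view of a bitcell array:
# place rows x cols bitcell instances on a fixed pitch.
def place_bitcell_array(rows, cols, cell_w=1.0, cell_h=1.0):
    placements = []
    for r in range(rows):
        for c in range(cols):
            # Real arrays mirror alternate rows/columns to share
            # supply rails and well contacts; that detail is omitted.
            placements.append({"name": "bit_r%d_c%d" % (r, c),
                               "x": c * cell_w,
                               "y": r * cell_h})
    return placements

array = place_bitcell_array(4, 8)   # 32 placed cells
```

Everything beyond this, the decoders, sense amplifiers, and control logic, is where the real complexity lives.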
A
So those are the benefits, the reasons we actually want memory compilers. Then you get to the process and circuit people, and I already alluded to a couple of those issues, and they say, "Well, you can't have an open-source memory compiler. It's just not possible. Bit cells are proprietary; the lithography alone, you can't see it, they won't tell you what it is."
A
The cells are a black box. We're seeing changes now, with the SkyWater technology for example, where we actually have the real bit cells with lithography enhancement from SkyWater. So we support foundry bit cells, which you can treat as a black box, or you can make user-design-rule memory cells and so on. But the second issue is that to make memories and make sure they work, you actually have to tape them out. Unless you have silicon-proven IP, memory people don't really...
A
They don't believe that your memories are good designs, and that's realistic, right? You have to make sure they work, and so we're working on that now with a number of technologies.
A
Also, design rules are typically not accessible, so it's hard to access that information and then to distribute it.
A
People want a memory of a given size, with a given data width, and the thing we're trying to do with OpenRAM is keep it flexible so we can support more customization. Memory designers, or I should say system designers, often had to compromise in their designs based on what memories were available from existing solutions. We want to offer more options so they can have a more customized solution for their designs, and that comes with benefits but also disadvantages.
A
So, the principles of OpenRAM: we want OpenRAM to be a glue framework that holds together things around memory. We want it to be usable by hardware engineers, so they can make designs, and extensible by both hardware and software engineers. We want hardware engineers to be able to customize which types of memories they want, and we want software engineers to be able to use it in larger infrastructures, such as machine-learning chips where you may want a unique memory architecture.
A
We also want it to be maintainable, and I think we take a lot of principles from software engineering in that regard: we have a lot of unit tests to make sure things don't break across different technologies, and we're also starting to track the quality of results of the memories as we improve the design. Number four...
A
...where you might say: just do the easiest thing rather than the thing that gets you the most density, if area doesn't matter. The example I like to point out is the control logic, where you use a couple of NAND gates. We don't need the area of those NAND gates to be highly optimized, because you use maybe three of them, so they don't really affect the overall area of your memory.
A
We know that companies rely on commercial tools for their designs, and we want them to be able to use OpenRAM, but we also want to support the open-source flows so that anyone can download everything and use it right away. I think this gives us flexibility for both parties in terms of the open features of OpenRAM.
A
I added CHIPS Alliance to this slide, since we are now officially an associate member (UC Santa Cruz is, rather, but the OpenRAM project too). We're working with a number of these open-source consortiums and so on.
A
We provide three reference designs: FreePDK45, scalable CMOS, and SkyWater (I'm still working with Tim at Google to open up the SkyWater repo, but it's basically there), and we're generating single-port and dual-port memories. We are taping out a bunch of example designs right now for that. Then we provide a number of things that wrap around memory design: a characterization methodology for memories and functional verification. This uses back-end SPICE tools.
A
I've recently upgraded this to use Xyce, the highly parallel SPICE simulator, with the support of Eric Keiter, and that's actually pretty exciting; it's very fast.
A
We also handle a lot of the general file formats (GDS, SPICE, Verilog, reports, and so on), and we provide wrapper scripts for the different verification steps (LVS, DRC, and simulation) around both commercial and open-source tools, so that we don't depend on particular features of any one tool to do the verification. We know, for example, that every SPICE simulator is somewhat unique in the way it runs simulations.
A
Each one has a custom command or whatnot, so we try to abstract those away so we can support all the different tools. The one I'm adding very shortly is KLayout: I have a number of pull requests in for KLayout DRC and LVS, and that should be in there probably within the next month.
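The wrapper idea described here can be sketched as a small dispatch layer. The class below is a hypothetical stand-in, not OpenRAM's actual code; the KLayout batch flags (`-b`, `-rd`) and Magic flags are real command-line options, but the recipes are simplified:

```python
import subprocess

class DRCRunner:
    """Hide tool-specific DRC command lines behind one interface."""

    def __init__(self, tool="magic"):
        self.tool = tool

    def command(self, gds_file):
        # Each supported tool gets its own command-line recipe.
        if self.tool == "magic":
            return ["magic", "-dnull", "-noconsole", gds_file]
        elif self.tool == "klayout":
            # -b runs KLayout in batch mode; -rd defines a script variable.
            return ["klayout", "-b", "-rd", "input=" + gds_file]
        raise ValueError("unsupported DRC tool: " + self.tool)

    def run(self, gds_file):
        return subprocess.run(self.command(gds_file)).returncode
```

The rest of the flow only ever calls `run()`, so adding a new tool means adding one recipe, not touching every caller.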
A
Yeah, and if you have questions, feel free to interrupt.
B
Sorry, Matt. Are there any challenges in interfacing with commercial tools, or in ensuring that nothing proprietary from the commercial tools ends up in the data that is generated? I know both Cadence and Mentor (now Siemens) have some peculiarities around these things. With respect to DRC, for instance, Mentor is particularly protective of that language, as I painfully learned in my earlier life.
A
Yeah, we don't actually include any of that stuff in OpenRAM. We rely on pointers to your PDK to provide those rules, so we keep that all separate. The only things we may use are the public APIs of a simulator or tool, or command-line options. And that brings up a good point: there are a number of simulators that I don't use myself. I have a user-contributed...
A
I forget what the simulator is. XA, from Synopsys? It's another circuit simulator, yes, right. I have support for that, but I don't run it, because someone else contributed the support. So there are a number of them, and I guess maintaining those is an issue when I'm not actively maintaining all of them, but we're relying on user feedback for that. (Sure, sure, that makes sense.)
A
Okay. And of course we don't do any benchmarking against commercial tools, because we're not allowed to do that under our licenses. And that goes back to the point that I want to play well with commercial tools too: I'm not trying to get rid of them, I'm trying to augment their flows to use an open-source memory compiler. Now, in terms of software maintainability: we provide a really detailed regression suite that uses the Python unittest framework and coverage metrics.
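A regression test in the style described, using Python's unittest as mentioned, might look like this minimal sketch; `generate_sram` is a hypothetical stand-in for a compiler call, not OpenRAM's real API:

```python
import unittest

def generate_sram(word_size, num_words):
    # Stand-in for a compiler invocation; returns a tiny netlist summary.
    return {"words": num_words,
            "bits": word_size,
            "cells": word_size * num_words}

class SramRegressionTest(unittest.TestCase):
    """One configuration checked per test, as a regression suite would."""

    def test_16x8_configuration(self):
        sram = generate_sram(word_size=8, num_words=16)
        # The bitcell count must match rows x columns.
        self.assertEqual(sram["cells"], 128)

# Run with: python -m unittest <this module>
```

A real suite would run such cases across every supported technology and also invoke DRC/LVS on the result.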
A
I run through this flow to make sure a change doesn't break other things, and it covers physical design, timing simulation, functional simulation, multiple technologies, and so on. I think that's one of the main things: we started OpenRAM by writing the tests first, really, as we built OpenRAM. So I think that's one of its unique features.
A
And I'm still learning about release schedules; I think that's one of the things I need to work on a little with OpenRAM. I have a stable branch and a development branch: I push frequently to the development branch, and every quarter or so I copy it over to the stable branch. I need to start doing some stricter versioning, distinguishing, say, minor API changes from minor revisions.
A
I know some of the early users were probably frustrated by some API changes, but we're settling on more stability now. In terms of usage: as I mentioned, it's a wrapper around a lot of the simulation and verification tools.
A
We take GDS and SPICE directly, and we have a GDS representation inside of OpenRAM.
A
We take a subset of design rules in a Python file (a conservative subset, mostly just to generate the control logic), and then we take a configuration script. We also allow users to override any module, and this is one of the other challenges with OpenRAM: we've built it so that modules can be imported dynamically. Say you want to make your own customized decoder...
A
...that does something special. Someone asked me for a serial, or sequential, address decoder that just cycles through memory addresses. You can easily implement that by overriding the decoder and making your own. We offer that option for everything from the cell primitives, the actual bit cells, up through the high-level architectural components of OpenRAM. The output views are the standard output views that you would use in any tool.
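The dynamic-import override mechanism just described could look roughly like this. The `load_class` helper, the default module path, and the user paths are hypothetical (a stdlib class stands in for a user decoder so the sketch actually runs):

```python
import importlib

# Default implementation for each component kind (paths are illustrative).
DEFAULTS = {"decoder": "modules.hierarchical_decoder.hierarchical_decoder"}

def load_class(kind, overrides=None):
    """Import and return the class named by an override, else the default."""
    path = (overrides or {}).get(kind, DEFAULTS[kind])
    mod_name, _, cls_name = path.rpartition(".")
    return getattr(importlib.import_module(mod_name), cls_name)

# A user config would name a replacement, e.g.:
#   decoder = "my_designs.sequential_decoder.sequential_decoder"
# Demonstrated here with a stdlib class so the import succeeds:
Decoder = load_class("decoder", {"decoder": "collections.OrderedDict"})
```

Because the framework only resolves the class at run time, a user-supplied sequential decoder drops in without modifying the compiler itself.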
A
Now, there are two modes for OpenRAM: a front-end mode and a back-end mode. The front-end mode is intended for users who are system designers or architects.
A
This is where you want to generate memories to get an idea of their size and performance, but they may not be complete in terms of verification: there is no final DRC/LVS, and the power is estimated analytically. We're actually adding another option right now; we currently have a simple, CACTI-like model for estimating power and delay.
A
That estimation mostly uses Elmore-delay-style models. We're also adding a machine-learning regression model, where we use a set of generated memories to predict which options will result in what power and performance. That should be released; it's in there now, but not turned on by default. And the more data you have about a technology, the more accurate you can get in the back-end mode.
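A first-order Elmore estimate of the kind mentioned can be computed directly. This is a generic textbook sketch, not OpenRAM's actual model, and the segment values are illustrative:

```python
# Elmore-delay estimate for a uniform RC ladder, e.g. a word line
# modeled as n identical wire segments.
def elmore_delay(r_seg, c_seg, n):
    # Delay to the far end of n identical RC segments:
    #   tau = sum_{k=1}^{n} (k * r_seg) * c_seg
    #       = r_seg * c_seg * n * (n + 1) / 2
    return r_seg * c_seg * n * (n + 1) / 2

# Example: 64 segments at 10 ohms and 1 fF each -> tau = 2.08e-11 s.
tau = elmore_delay(10.0, 1e-15, 64)
```

Models like this are fast enough to evaluate many candidate configurations, which is exactly what a front-end estimation mode needs.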
A
As for bit cells, we support any number of ports in our schematic model, but we only support up to two ports in physical design right now. That's just for ease of placement of the decoders and read circuitry; there's nothing that fundamentally prevents more ports. We do standard 6T cells, but nothing relies on the cell being 6T; you could add in an 8T cell or something like that.
A
Here's an example in the SkyWater technology. On the left I have the actual bit cells from that technology; on the right is a DFF from the Oklahoma State library for SkyWater (I forget how many tracks tall). We see roughly a five to six times smaller SRAM bit cell compared to a DFF.
A
One of the challenges we found with SkyWater is that, as I mentioned, it does have the lithography correction, the OPC, in there, and it also has the strap cells, the cells that tie together word lines and tap the n-well and so on. The OPC is actually made with certain restrictions on the placement of the tap cells, so what you end up having to do is use the cells with the taps in most cases.
A
So you don't end up with the minimal single-port bit cell; it's a little bigger with the overhead of these other parts, but it's still much, much smaller than a DFF. You still get a 5x reduction in area for the array. And you can see here the OPC that comes with it, which they do for the single-port bit cell.
A
This is actually... no, this is the single-port bit cell. They do OPC on the poly, LI, and diffusion layers, and it's a simple rule-based OPC where they just do some expansion of edges. I actually have those rules (they've publicly given them to me), so we could do some things with those for new designs.
A
So you'll have three different control logic options in OpenRAM, and I think that's going to be an exciting addition to the project. For configuration files we just use Python; the designers tell us what options they want for the memory, and also what corners and scenarios to characterize. You can do that...
A
Or
you
can
do
it
per
technology
like
the
bit
cell,
like
if
you
want
to
have
a
customized
bit
cell
for
a
particular
technology
that
can
be
part
of
every
memory
as
well,
and
so
we
allow
both
of
those
options,
and
we
provide
a
number
of
examples
with
the
the
design,
including
a
number
of
memories
that
are
common
for
risk.
Five.
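A configuration file in that style is just a set of Python assignments. The option names below follow OpenRAM's documented configuration format, but treat the exact set as version-dependent and check the project docs:

```python
# Memory options.
word_size = 8              # bits per word
num_words = 256            # 256 x 8 = 2 kilobit macro
tech_name = "scn4m_subm"   # scalable CMOS reference technology

# Corners and scenarios to characterize.
process_corners = ["TT"]
supply_voltages = [3.3]
temperatures = [25]

# Output locations.
output_path = "temp"
output_name = "sram_{0}_{1}_{2}".format(word_size, num_words, tech_name)
```

The compiler would load this file, generate the macro, and characterize it at each listed corner.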
A
It's
it's
all
python.
Okay,
everything
is
in
python,
it
has.
The
gds
stuff
is
actually
a
python
library
that
was
originally
from
some
people.
I
know
at
michigan
it's
called
gdsmill,
they
don't
really
maintain
it
anymore,
and
so
we
kind
of
maintain
it
inside
of
openram
and
but
everything
else
is
just
python
and
rail.
Okay,
yep.
A
You
know
if
I
went
back,
I
would
probably
think
about
using
some
of
these
open
databases
now,
but
those
were
not
available
like
not
open
db.
I
think,
for
example,
as
part
of
the
the
darpa
project
through
san
diego
right.
Those
were
those
were
not
available
at
the
time.
So
martin
did
you
have
a
question.
C
Yeah, interesting. So if you're using, you're saying, something in lieu of GDS2, how would that work for integrating with other GDS designs? I mean, are there...
A
And then internally we have a data structure that holds the netlist information and the layout information.
A
It's by no means good, but it basically gets the design done. One of the obvious challenges is on-grid versus off-grid pins, and that's something we need to address moving forward. I handle off-grid pins by just making the grid size of the router bigger, which eliminates the off-grid pin problem. That may not be the best solution, but it's what we're doing right now.
D
Hey Matt, this is Warren Nicholas from Berkeley, a quick question. When looking at Sky130-produced layouts, we see perhaps a bit more use of a router than we would expect in a finished memory array. Usually you expect memory arrays to be...
A
Basically, you have to have power stripes to connect to their power routing. We used to have a gridded power router, where we would just put a grid over everything and connect everything to it, and a lot of the commercial tools would hook up to that fine.
A
But in the OpenLane/OpenROAD flow we basically had to add a power ring around it, and it can only connect to the power ring. In the first tape-out they actually did kind of a hack: they took the OpenRAM output, added stripes on the side, and connected the power to those stripes by hand, because they didn't tell me it wasn't working nicely with the tools, so I didn't know I had to patch it. Does that answer your question?
A
Well, because this has to be portable to many technologies, a lot of things are connected just by placement. For example, the array is connected, the decoder is connected; the address part, the data part, the array, and the control logic are each connected on their own. But then at the chip level, or the memory level, I guess, we need to connect the power.
A
Yeah, so I mentioned the replica bitline timing approach. This is basically what we do: we have a column with special replica cells, then an extra row with dummy cells (in blue) that don't affect the bit lines, and we turn on the replica row to start sensing from the replica column (in red), as well as the word line we're actually accessing, and then this gets sensed.
A
This isn't a new thing; it's basically the Horowitz approach, and it's what we've been using so far. As I mentioned, we're going to replace it with a non-replica approach and also an asynchronous approach moving forward, hopefully by the end of the summer, before the next SkyWater tape-out. Yes?
D
I think Sky130 uses a slightly different trigger; it's a falling edge, right? Or, I don't know what you mean by Sky130: the memories, the designs I think OpenRAM generated, and somebody who has played with them should correct me if I'm wrong. I think it starts the decode on the rising edge and fires the sense amp on the falling edge.
A
Yeah, you're actually kind of correct. It does the precharge during the high phase of the clock, and then it does the access after the negative edge. So this all happens on the negative edge.
A
Thanks. We just did that for simplicity in the first iteration, and that's why replacing it with a delay-line, pulse-based approach would remove that need. It does put a lot of the delay in the second half of the clock cycle, but it works, and it was straightforward to do.
A
And
if
you
have
have
fixes
we're
well
we're
it's
open
source,
we
will
take
suggestions,
so
that
is
that's.
Why
we're
here?
We
want
to
improve
collaboration
with
people.
A
Yeah,
so
we've
also
added
a
hierarchical
word
line
stuff
with
the
local
arrays.
It's
actually
interesting,
where
we
do
arrays
in
a
way
where
we
can.
Actually,
you
can
kind
of
configure
how
big
you
want
the
local
arrays
to
be,
and
so
we've
had
this
working
for
a
while
and
and
this
isn't
working
in
sky
130.
Yet,
but
it's
working
in
the
other
technologies
and
well
it
just
has
to
be
debugged
in
sky
130..
A
You
can
actually
configure
it
anywhere
from
a
bit
right,
select
all
the
way
up
to
full
word
right,
select
and
so
obviously
risk
five
likes
to
use
byte
right
asking,
and
so
we
have
done
a
lot
with
that,
and
you
can
see
the
implementation
here
and
then
in
terms
of
designs,
so
we've
done
a
few
different
versions.
This
was
the
version
we
taped
out
last
year.
A
This
was
a
early
prototype
with
just
not
a
lot
of
time
to
get
it
done,
and
so
we
put
a
risk
five
on
two
chips.
One
is
with
a
risc-v
core.
A
But
then
we
actually
had
a
separate
memory
chip
as
well-
and
you
can
see
here
why
we
have
the
power
routing,
because,
basically,
when
we
pack
together
the
memory,
the
control
logic
is
in
the
lower
left.
Here
you
can
see
there's
a
little
bit
of
unused
space
where
we
do
want
to
improve
and
do
some
kind
of
better
floor
planning.
But
it's
made
to
work
with
a
large
number
of
configurations.
I
think
that's
the
thing
you're
going
to
find
with
a
lot
of
memory.
A
You
can
see
a
little
blank
space
in
here
in
the
bottom.
That
was
an
unoptimized
channel
route.
The
metal
was
turned
off
here
because
we
also
had
the
power
grid
over
this
that
so
you
couldn't
see
anything
with
metal
three
and
four
on,
and
so
that's
turned
off
in
this
view,
and
so
we,
this
was
the
initial
thing
just
for
prototyping
the
goal
was
functionality
and
it
works.
A
We
were
doing
some
additional
testing
on
it,
just
to
get
some
kind
of
voltage,
and
you
know
different
corner
analysis
and
stuff
and
that's
being
that's
ongoing,
the
newer
memories,
so
here's
some
newer
ones
that
we've
done
we've
done
a
power
ring
in
this.
A
Otherwise, everything about this is quite similar to the previous one. This is a dual port as well; not much else to note. We're taping out a number of memories with this now. Here, we're doing a tape-out that officially closed last Friday, but we're still pushing updates to it, with efabless and the OpenLane flow. You can see the Caravel multi-project wafer chip on the left, which has a RISC-V processor.
A
It just shows six memories of the same configuration, because we were only testing the infrastructure at the time, and then we packed in a number of memories, single port and dual port with a variety of configurations, for testing moving forward. Then we'll have some silicon-proven designs. We've also done work with commercial tools.
A
Here
you
can
see
some
of
the
earlier
work
that
one
of
my
master's
students
did
with
again
a
pico
rv
32
and
a
45
nanometer
process
in
design,
compiler
and
cadence
innovas.
A
You
can
see
left
was
the
flip-flop
implementation
of
the
beaco
rv
32
and
on
the
right
we
replaced
the
both
the
data
memory
and
the
register
file
with
open
ram
memories
and
got
considerable
savings.
A
You
know
and
here's
the
key
there
were
some
optimizations
as
well
since
then,
so
you
can
see
some
wasted
space
in
the
memories,
but
you
can
see
that
you
know,
even
though
the
memory
array
is
not
the
most
efficient,
it's
improvement
over
flip-flop
based
is
still
huge
and
I
think
that's.
The
kind
of
the
thing
to
look
at
now
is
technology
portability
versus.
A
If
you
want
the
most
optimal
area,
you
should
customize
on
your
memory
by
hand.
If
you
want
a
technology,
portable
solution,
that's
easily
usable,
you
can
do
something
with
openram
and
that'll.
Give
you
a
pretty
good
solution,
but
not
the
best
solution,
and
you
know
so
in
general,
we
basically
got
this
flow
up
and
working.
We
generate.
You
know
layout
functional
models
in
the
circuit.
It's
open
source.
You
can
download
it
now
we're
actively
porting
it
to
different
technologies,
including
probably
a
90
nanometer
in
the
fall
and
we're
continually
adding
new
features.
A
We
offer
a
lot
of
flexibility
that
other
compilers
don't
other
compilers
will
generally
only
allow
a
limited
set
of
configurations
where
we
allow
a
very
large
number
of
configurations
and,
along
with
that,
comes
a
little
bit
more
responsibility
of
designers
that
they
have
to
do
verification
to
make
sure
it
works,
but
it's
kind
of
a
trade-off
and
we're.
We
are
also
basically
offering
a
library
of
ip
for
people
that
don't
want
to
run
open
ramps.
C
Just a practical question about using OpenRAM. You mentioned you're providing a lot of libraries which you've already validated. In terms of time investment, how long does it take to use OpenRAM, in terms of manpower and validation? How heavy is that compared to using a commercial tool?
A
Now
I
I
can
comment
on,
you
know:
porting
openram,
to
new
technologies,
so
we've
done
basically
four
technologies.
Now
the
three
I
mentioned
and
also
another
proprietary
technology,
it's
getting
easier.
The
as
I
mentioned
every
technology
has
a
new
kind
of
hiccup.
I
think
skywater
had
a
lot
of
unique
features.
F
A very quick next question: there are a couple of things, probably in that "surprise" camp, pending for the ongoing SkyWater tape-out. Would you mind a note or two on what those are and how the outlook looks?

A
Wait, what do you mean? Can you repeat that?

F
I think, at least as far as the pre-published material goes, you were still working on a few tweaks to enable the ongoing SkyWater tape-out.
A
I think I have to go to a sub-array to actually get it to work. We also have single-port designs that are a bit different in the way we design them.
A
The
way
we
did
the
dual
port
to
get
it
done.
Quick
was
we
took
the
skywater
cell
and
the
kind
of
strap
cell
and
made
kind
of
a
custom
bit
cell
that
we
used,
whereas
in
the
single
port
we
actually
decided
to
write
custom
modules
for
the
array
generation
in
skywater,
and
so
we
kind
of
went
through
that
infrastructure
to
show.
A
You
know
that
you
can
have
just
custom
code
per
process
to
make
a
different
array
layout
as
an
example,
and
so
we
have
that
working
with
single
port
as
well,
which,
because
we
were
testing
that
it
took
a
little
longer
than
we
had
expected.
But
so
we
have
a
number
of
single
port
memories
on
there
as
well
you're,
ranging
from
I
think,
one
1k
to
4k
and
up
to
like
a
64-bit
data
word
so.
B
Hey
matt,
would
you
be
able
to
describe
the
software
architecture
a
little
bit
of
open
ram,
I'm
just
kind
of
curious
as
to
what
the
main
pieces
are
of
it,
and
also
then
just
to
get
a
sense
of
how
easy
or
hard
it
would
be
for
the
community
to
add
different
things
to
it.
A
So
we
have
auxiliary
functions
to
basically
add
wires,
pad
ports,
add
pins
and
so
there's
a
lot
of
kind
of
helper
functions
to
basically
do
a
lot
of
that
stuff.
Okay-
and
you
know
the
same
class
is
used
to
implement
everything
from
the
bit
cell
up
through
the
memory.
A
You
know
where
it's
a
module
with
ports
and
so
on,
and
so
it's
a
full
hierarchical
database
view,
okay
and
yeah,
and
it's
basically
the
responsibility
of
every
module
to
define.
You
know
what
is
its
interface,
what
is
its
boundary
and
that
lets
other
things
put
it
together,
compose
larger.
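The single-hierarchical-module idea could be sketched like this; the class and attribute names are hypothetical illustrations, not OpenRAM's actual base class:

```python
class DesignModule:
    """One base class for everything from a bit cell to a full SRAM."""

    def __init__(self, name):
        self.name = name
        self.pins = []          # interface: the module's port names
        self.instances = []     # children as (submodule, offset) pairs
        self.size = (0.0, 0.0)  # boundary: width, height

    def add_pin(self, pin):
        self.pins.append(pin)

    def add_instance(self, child, offset):
        # Composition only needs the child's pins and boundary,
        # never its internal layout details.
        self.instances.append((child, offset))

bitcell = DesignModule("bitcell")
bitcell.size = (1.2, 0.8)
array = DesignModule("bitcell_array")
array.add_instance(bitcell, (0.0, 0.0))
```

Because every level honors the same interface-plus-boundary contract, a parent module can place and wire children without knowing whether they are leaf cells or whole sub-arrays.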
A
So
we
so
what
we
do
is
we
dump
out
the
entire
thing
and
basically
don't
dump
out
certain
cells.
So
we
can
flag
cells
as
kind
of
trimmable
cells,
okay
and
it's
up
to
each
module
to
decide
what
what
it
can
trim
or
not.
A
I think for the leaf cells, like the bit cells, you could actually just say "its leakage is this," and it will scale up.
A
...the resistance and capacitance of wires and devices, to calibrate the model. We haven't done a lot of that in SkyWater yet. I actually think the regression model is more useful, where you just give it a bunch of memories and have it fit a model statistically, and it says, "this is what we think the power and delay will be," at least for a high-level user.
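The statistical approach described can be illustrated with a one-variable least-squares fit. The training data below is made up, and a real model would use many more features (word size, banking, technology parameters) and a richer regressor:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical training data: (num_words, access delay in ns).
sizes = [256, 512, 1024, 2048]
delays = [1.1, 1.4, 2.0, 3.2]
a, b = fit_line(sizes, delays)
predicted_4k = a * 4096 + b  # extrapolate to a 4096-word memory
```

The appeal is that characterizing a handful of generated memories is much cheaper than running full SPICE on every candidate configuration.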
B
What's the most advanced technology node that OpenRAM has been tried on? Do you have a sense of that?
A
We haven't released that. The Yale people we're working with also did a 65-nanometer, and a 28-nanometer was actually done early on in OpenRAM, though I have not maintained that. We also looked at doing FreePDK15, but from looking at it, the design rules and such are actually kind of limited; it's not a really good representation of a 15-nanometer process.
A
I
see
okay,
and
then
I
was
talking
with
what
was
his
name
clark
about
asap,
seven,
which
is
another
predictive
model,
but
they
actually
had
some
issues
with
the
licensing
where
you
couldn't
redistribute
anything.
D
We are seeing quite a few challenges in trying to use it in Sky130, and Sky130, and later Sky90, will be our priorities for getting good-quality output in these technologies. So the question is: how can we help get this memory compiler to produce results that are comparable to what you would see in the commercial world?
A
Yeah
I
mean
we've
said
that
you've
been
the
most
that's
a
good.
That's
a
good
question.
You
said:
you've
been
using
it
and
having
challenges.
What
are
those
challenges
I
haven't
heard
about
them,
so
I
guess
sharing
and
filing
and
kind
of
following
up
with
things
that
would
be
useful.
D
Yeah,
then,
do
you
have
a
quick
list
or
should
we
set
up
it's
it's
nine.
Should
we
set
up
a
separate
call.
B
I think that would be great. So, we are almost out of time; are there any closing questions or comments?
B
Well, Matt, I want to thank you for an excellent presentation today. I thought it was very informative, and it is an important area of work. I know from my own experience that many companies do in fact roll their own, so hopefully this will provide a foundation that the industry can build on, and also provide additional research and help for the community.