From YouTube: CHIPS Alliance Workshop - March 30, 2021
Description
The CHIPS Alliance held its Spring 2021 workshop to share milestones, progress, updates, and more.
Slides are available at: https://events.linuxfoundation.org/chips-alliance-spring-workshop/program/schedule/
A
So let me start by providing a brief introduction. I am relatively new to CHIPS Alliance; I just joined in early January, and it's my pleasure to be a part of this exciting community of folks participating in it. I have quite a bit of industry experience going back through IBM, Sun Microsystems, and Oracle.
A
Everything was done in-house, whether that be architecture, RTL, silicon processes, or EDA. The whole thing was right there, so it was very convenient from the work perspective, but of course that is a very high overhead for any particular company to maintain.
A
So what that has evolved to is more of what I call a supply chain management type of thing, which you might see, for example, at Apple or AMD, or what I saw at Oracle and Sun, where you rely upon different suppliers to provide you different things. That might be EDA tooling, it might be particular IP that you license from an IP developer, and of course it includes your foundry as well, that being one of the major players that we all work with.
A
And what we're seeing happen now is that we're going more towards an open collaboration type of environment, where different companies may come together and collaborate on something, or, as we are now promoting in CHIPS Alliance, open source collaboration in the hardware space. That really will provide an entirely new ecosystem to bring forward the best ideas and IP that is shared and developed by the open source community and that can be used to build interesting products.
A
The development and verification, the implementation, and of course the manufacturing: it just takes a lot of time and money, or NRE, to get a product to the point where you tape it out, and when you do tape it out, of course, you want to make sure that it is correct. And in terms of innovation, we're really moving beyond the CPU and GPU type of environment. We're moving to specialty accelerators for machine learning, image processing, or database accelerators, as an example.
A
So what CHIPS Alliance is really promoting, or proposing, is an open design ecosystem that moves all the way from specification, RTL, open process design kits, tooling, and physical implementation through standards. And there are many potential contributors to this type of environment: individuals, universities, foundations, and companies. What we're really trying to do is bring forward the best minds on very complex problems and to build a platform of innovation that all can contribute to and benefit from.
A
If we look at the CHIPS Alliance, it's really an arena of pillars of interest and opportunities. It could be architecture, it could be specification, it could be implementation, it can be tooling, it can be verification, and there will be others too as time goes forward. So it's providing a venue for collaboration.
A
If we look at the open source model, it's really about IP and innovation and then building your product. So we could start with an open IP catalog that's in the open source community, and then your innovation on top of that, or in conjunction with it, can yield a market-differentiating product. We already have a number of areas where we have had successful open collaboration, whether that be the successful RISC-V implementation with the SweRV core; it could be
A
the OmniXtend cache coherency protocol, which you'll hear more about today during one of the talks; or it could be AIB 2.0 chiplets, which provides a standard mechanism for how to break your complex architecture down into smaller pieces that are better for manufacturing.
A
We also recently formally formed an Analog Interest Group, which is of interest to many different participants; analog has characteristically been one of the most difficult arenas of chip design and is often the long pole in the development of any particular piece of silicon. In terms of goals of what we want to try to do in CHIPS Alliance, we're interested in increasing the membership; that's certainly a key goal. We want to get folks participating in it, taking advantage of what we offer, and also to have them contribute different ideas and extend the ecosystem.
A
I'd like to see increased collaboration in areas such as artificial intelligence, automotive, and 5G. Those are some new arenas that we'd be happy to participate in. Of course, getting more IP contributions to the Alliance would be a benefit too. I think that's useful for everyone and really provides a menu of choices for innovation, so that anyone can pick and pull different items from it to help build differentiated products.
A
We also have quite a bit of interest in open tooling, as I mentioned earlier, so I just generically call it open EDA. That could be architectural simulation, RTL development, verification, analog, digital, or FPGA-type tooling, so there's a lot of opportunity there. The other thing that I want to formulate this year is a university mentorship program, and also a contest for students to participate in, to create excitement for the Alliance and also for the different participants, in terms of both learning from a career perspective and also academically as well.
A
So I'd just like to wrap up and say: look forward to today's talks. I hope they create excitement for you, and also interest in joining the Alliance and in what it can help do for you and your venue.
A
So with that, I will introduce our first speaker, who is Professor Borivoje Nikolić at the University of California, Berkeley. He is going to talk about Chipyard. Professor Nikolić has many interesting activities under his research and is an expert in solid-state circuit design. So with that, I'd like to turn it over to you, Professor Nikolić.
C
To start, I would like to highlight that we are interested in designing generators rather than specific instances of IP blocks. Generators enable us to specialize the design and lower the design cost through reuse.
C
We have two types of generators, digital and analog, and most of this talk will cover the digital generators; I'll touch on the analog ones towards the end. Digital generators in our world are written in a language called Chisel, which stands for Constructing Hardware In a Scala Embedded Language. A Chisel design is essentially a piece of software that gets executed and generates a dataflow graph that is a representation of the RTL. It is not high-level synthesis; it is essentially writing HDL code in a bit more powerful host language.
C
In this case, Chisel version 3.4 is current. Chisel translates to its intermediate form, which is called FIRRTL, and then from FIRRTL it emits Verilog and various kinds of design and verification collateral. Chisel originated at Berkeley, but over the past year it has moved to CHIPS Alliance, and the repository is actually hosted by the CHIPS Alliance. On the other hand, BAG is an open source Python-based framework for generating analog blocks.
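The "generator, not instance" idea can be illustrated with a small sketch. Real Chisel generators are Scala programs that elaborate to a FIRRTL dataflow graph before Verilog is emitted; the Python below is only a hypothetical stand-in for that idea, emitting Verilog text directly from a parameter.

```python
# Hypothetical sketch of a hardware generator: one parameterized
# program that emits arbitrarily many specialized RTL instances.
# (Not Chisel's actual API; Chisel is embedded in Scala.)

def adder_generator(width: int, name: str = "adder") -> str:
    """Return Verilog for a `width`-bit adder with carry-out."""
    return (
        f"module {name} (\n"
        f"  input  [{width - 1}:0] a,\n"
        f"  input  [{width - 1}:0] b,\n"
        f"  output [{width}:0] sum\n"
        f");\n"
        f"  assign sum = a + b;\n"
        f"endmodule\n"
    )

# The same generator specializes into different instances, which is
# how generators lower design cost through reuse:
narrow = adder_generator(8, "adder8")
wide = adder_generator(64, "adder64")
```

The point of the sketch: the design investment goes into the generator once, and each instance is just a parameter choice.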
C
Everything we do is based on RISC-V, which is an open and license-free ISA that most people are familiar with. It's important to keep in mind that the ISA is different from the IP, so there are many research and commercial implementations of RISC-V cores out there. For example, there are the well-known PULP cores, and there are the Rocket cores that came from Berkeley. The Rocket core is an example of a generator written in Chisel.
C
These open source components include Chisel, which I mentioned, and its intermediate form FIRRTL; the RISC-V ISA; BAG, the Berkeley Analog Generator; various cores like the BOOM core and the Rocket core, and a good number of accelerators to go with them. There are caches that were designed at Berkeley and then contributed by SiFive; TileLink, also developed at Berkeley and then improved by SiFive; Diplomacy from SiFive; a number of peripherals from Berkeley and other places; a configuration system; and our flow tool, which is called Hammer.
C
There is our FPGA simulation environment, which is called FireSim. And it became obvious that this whole thing is pretty disorganized and lives in a very large number of independent repositories that move independently of each other. It had become hard, even for our own students, to put systems together from these different components, which did not necessarily always fit with each other.
C
Chipyard is a framework that enables us to put together many things that exist in the open source, primarily from Berkeley, but from many others as well. They're grouped into sections that involve tooling, which is the languages and the tools for bringing up RISC-V systems; then things associated with the Rocket Chip; and the design flows that target software RTL simulation, FPGA simulation, or VLSI flows.
C
One can take various parameters and change them to their liking, or replace pieces of the generator with their own IP, including a separate core. Typical customizations involve, as I said, changing the parameters, adding coprocessors, or adding IP inside the Rocket Chip. There is also a library of reusable SoC components, where you'll find memory protocol converters, various kinds of arbiters and crossbar generators, clock crossings, and asynchronous queues. It is the largest open source Chisel code base, it has been used in industry, and SiFive is maintaining it.
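Chipyard expresses this kind of customization through Rocket Chip's Scala-based Config system, where small config fragments are layered to override parameters. Purely as an illustrative stand-in (the names and mechanism below are hypothetical, not the real API), the layering pattern looks like this:

```python
# Sketch of layered parameter overrides, loosely modeled on how
# Rocket Chip / Chipyard configs compose. Later layers win.

class Config:
    def __init__(self, *layers: dict):
        self.layers = list(layers)

    def __call__(self, key: str):
        # Search from the most recently applied override backwards.
        for layer in reversed(self.layers):
            if key in layer:
                return layer[key]
        raise KeyError(key)

# Hypothetical fragments: a base design plus two overrides.
BaseRocket = {"core": "rocket", "n_cores": 1, "l2_kib": 512}
WithBoom = {"core": "boom"}       # swap the in-order core for BOOM
WithQuadCore = {"n_cores": 4}     # replicate the tile

cfg = Config(BaseRocket, WithBoom, WithQuadCore)
```

The value of this pattern is that each customization is a small, reusable fragment, and a full SoC configuration is just a stack of them.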
C
It was developed at Berkeley, but now it is primarily maintained by CHIPS Alliance and SiFive, and it has been used in SiFive's products. When we look at what is inside a Rocket Chip SoC, we'll find tiles, which are the units of replication for a core. In our case we can have CPUs that are either Rocket or the Berkeley Out-of-Order Machine, or, as an example of integrating Verilog source code, we have integrated ETH Zurich's Ariane, which is now called CVA6.
C
It has a two-level cache system. There is a front bus that connects to the DMA devices, and there is a control bus that connects to the core complex devices: the interrupt controllers, boot ROM, and the JTAG unit. The periphery bus connects to the other devices, and the system bus ties everything together.
C
In order to integrate many of these generators, you have to have the right version that works with each of them, and we have built and tested many configurations that work together. For example, we have the cores that I mentioned, and then there is a number of accelerators that we have put out in the open source or adopted from the open source. We have Hwacha, which is a custom, non-standard RISC-V vector accelerator, and Gemmini, which is a machine learning matrix multiplication accelerator.
C
There is an educational accelerator that people can use to quickly run through the exercise of integrating an accelerator into their design, and we have taken NVIDIA's NVDLA and integrated it with our design. The memory system includes SiFive's last-level cache, and there is a number of peripherals out there.
C
So when we customize this, there are many ways of customizing components and the entire SoC. For example, one of the good exercises that people can do to bring themselves up to speed is to configure a minimal configuration of the Rocket core, to build it as a controller core, a power management unit, or a system management unit.
C
Then there are simple RoCC accelerators; RoCC is a standard coprocessor interface in RISC-V. And then we can add more complex accelerators, like Hwacha and Gemmini that I mentioned before, and many other kinds of things. On the bottom here there is a fairly complex SoC that has multiple cores with heterogeneous coprocessors added to it.
C
Now, it is important to have tooling that enables us to evaluate these kinds of systems, so we have simulation and implementation targets. We can target FPGAs in the cloud through our FireSim flow, we can run software RTL simulation through behavioral Verilog, and we can emit VLSI Verilog to drive the automated VLSI flow.
C
So we start with our custom SoC configuration, customized for our target application. We put together the RTL generators, and then we go through the build process that targets any of these three possible flows.
C
It's important to mention the thing that ties all of this together: it's RISC-V, and all of this would be meaningless without the software that runs on it. So it is important to have compatible, standard RISC-V tools and versions that are packaged with Chipyard.
C
So in this case we start with that, and then we add our own ESP tools as a non-standard equivalent software tools package that enables our custom accelerators to shine, for example with vector extensions or machine learning workloads. Then there is libgloss for bare-metal C tests, and FireMarshal is our tool that manages workloads and Linux distributions. As for the updates on what has been happening over the past six to nine months:
C
Chipyard has been bumped to version 1.4, and here are a few highlights that may be of interest to people who are using Chipyard. First, OpenSBI has been integrated; that is, I believe, in CHIPS Alliance, and it originates from Western Digital. It is an open source implementation of the RISC-V Supervisor Binary Interface.
C
That basically replaces Berkeley's bootloader. This is something that has much broader community support than BBL, but BBL is still an option for legacy implementations and legacy workloads. We implemented something called harness binders, which are generally replacing something called IOBinders, which were the generalized abstraction for IO interfaces from the chip perspective. IOBinders have primarily been targeting test chips, but we have found that many people are interested not just in test
C
chips; they're interested in FPGA simulations and wanted to have a unified perspective on that as well. So harness binders generalize the abstraction for IO interfaces from the test environment perspective, basically driving the chip from the outside world and making it work within its environment.
C
We have pushed a few versions that enable local FPGA prototyping, besides using this within FireSim, which works with cloud FPGAs in Amazon AWS.
C
A lot of people have been interested in trying things out on a local FPGA, so we are supporting a very inexpensive FPGA, the Arty A7, and a very big FPGA, the VCU118. We have based a lot of this on SiFive's FPGA shells, but we've added additional components to that, including the harness binders. We use this generally for simulation, but also for test chip bring-up, and we're looking into adding PCIe-based interfaces and related support in the future. Multi-clock operation has been dramatically improved; it now supports diplomatic clocking, and essentially the entire experience of building a multi-clock SoC has become a lot smoother.
C
So there was a number of relatively convenient features added there. Also, more of our VLSI-side tool flow has become visible by putting a lot of it into the open source. We have a largely automated, Makefile-based flow that takes the target Verilog and passes it through a set of commercial tools, as well as some of the OpenROAD tools, to generate a DRC- and LVS-clean GDS file.
C
As I said, it is becoming more obvious what we are doing. Most of that had been abstracted away, and there had been a lot of omissions, because we had been targeting commercial tools and NDA-protected design kits. But now, with OpenROAD and SkyWater, we can actually publish most of it; it is in the open source, and there is quite a bit of added documentation, because we use it in classes.
C
As we were rolling out Chipyard, we bumped a lot of these generators. Generally, Chipyard does not follow the bleeding edge of a generator; more mature generator versions, probably a few months old, are integrated. So it is bound to Chisel 3.4 and FIRRTL 1.4.
C
It has the educational Sodor cores, but one of the big updates was adding the Gemmini 0.5 version, which is used for accelerating machine learning workloads.
C
To start wrapping up: we would like to enable complete mixed-signal SoC integration. We have been building mixed-signal SoCs based on Chipyard, but the integration of mixed-signal blocks has not been automated so far; it has been mostly manual integration of automatically generated blocks.
C
So we have these tools that I mentioned early on, BAG2 and BAG3, which are our ways to generate mixed-signal IP blocks. The majority of the existing generators are in the BAG2 version of the Berkeley Analog Generator, and we have some fairly complex designs that include time-interleaved ADCs,
C
high-speed SerDes, and PLLs, and they target commercial technologies like TSMC 16 and 28, Intel 22, or GlobalFoundries 14. We do have a BAG3 migration path; BAG3 is not backwards compatible with BAG2, but over the next few weeks people may enjoy seeing some of these generators published completely in the open source on Sky130, and there will be equivalent versions targeting state-of-the-art technology kits in Intel 22 or GlobalFoundries 12.
C
We are supporting the open source EDA ecosystem and tooling, so we are now targeting a SkyWater 130 flow that supports a complete open source generator-to-GDSII flow, and Hammer is there and is ready to accept any contributions that come up in this community.
C
So, in summary, we have already enjoyed quite a bit of community support. We were very pleased to see, at the RISC-V Summit in December, a couple of talks that presented their own innovations built by using Chipyard. In education, we're using it successfully: we used it in three classes in spring 2020, and we're using it in four classes during spring 2021.
C
One team took Chipyard, took components from Chipyard, and added their custom blocks, an AES accelerator and a Bluetooth transceiver, and that whole thing is going to be taped out in a month if they persist with it. With that, we have quite a bit of documentation. It's not as rich as it could be, but there are about 140 pages of Chipyard documentation that enable people to get started. There is a number of tutorials, some of which have been recorded and can be replayed.
C
The one coming up soon is at ISCA 2021. The mailing list has 200 threads, and that is basically it: everything that we have is in the open source. We are happy for people to use it, and we are even happier if you put up a PR with an improvement that you would like us to adopt. To wrap it up: we have made it easy to design specialized SoCs.
C
With OpenROAD as well. And I haven't done almost any of this work; it has been done by a number of our students. That leaves a few minutes for questions.
A
Thank you so much, Professor Nikolić, for an excellent presentation; I really do appreciate it. I have one question so far on the Q&A line, which I'll ask first, and then I'll ask a question myself too, and I'd appreciate any others. So the question is: can local prototyping be done by distributing the target design over multiple FPGAs, as occurs in FireSim on the cloud?
C
We are not supporting that yet, so we don't support distributing and dividing up a design across multiple FPGAs, but we are looking into that.
A
Okay, I had another question just come in, which is: does using Chipyard mean that other EDA tools from different companies will vanish soon?
C
Chipyard, if you look at it, is mostly a way to organize open source source code.
C
What we do with that source code is up to us. We can go through commercial tool flows, and that's what we do. There are no completely open source FPGA tools that enable us to run through the FPGA flow in a completely open source way.
C
So we will generally use, say, Xilinx's closed source tools. In the VLSI flow we have two options: we can go with the Cadence flow, which is what we mostly support, and target NDA-protected technologies, but we are now also supporting the open source tools from OpenROAD and targeting the completely open source Sky130 kit. It is not completely stable; there are gaps in those flows, but I think other speakers are going to address that and tell us
C
when we are going to see a stable release out there, so that anybody can take the source code for an SoC and, in a couple of days, make their own GDS.
A
C
So we have that in the works. There are SystemVerilog-style assertions being added into the Chisel language, so stay tuned and you will see more of that. That will enable us to do, for example, coverage analysis.
A
Okay, and the final question is: what steps or approaches are being taken to introduce Chisel into the industry? As a recent Chisel learner, I'm really impressed with the flexibility Chisel offers, but whenever I recommend that someone learn it, they are hesitant, saying Verilog or SystemVerilog are the industry standard.
C
Yeah, that is true. We do have a boot camp that people can take over five days, and it is sitting in the open source; I can point people to that. People can take the boot camp and teach themselves Chisel. If you talk to our students, the students that are 20 years old really don't like Verilog; that's how they feel.
A
So with that, we will move to our next speaker. The topic is the RISC-V design verification workgroup update, and it will be presented by Matt Cockrell and co-authored by Tao Liu, both of Google.
F
The obligatory antitrust policy, and then we get to the agenda. So, good morning or good evening, depending on where you are. My name is Matt Cockrell, and I'm joined today by Tao Liu and Udi Jangalada. We are pleased to give you an update on the RISC-V DV workgroup. We'll begin with an overview of the group itself.
F
Next, we'll go through an exciting update on the continuous integration platform that we've put together, then provide an update on the Python-based instruction generator. Tao introduced this in the last group update, and today I'll give an overview of where the tool is at this point and what we plan to do in 2021.
F
Finally, we'll wrap up with a brief description of a feature we plan to add this year: support for the enhanced PMP, or what they call the ePMP extension.
F
But before we get to all that, I want to describe what the workgroup is aiming to accomplish. When we first started putting the workgroup together last year, we put together a bit of a mission statement to help guide us on what we're trying to accomplish.
F
It goes: we create a forum for discussing CHIPS Alliance contributions that will provide or enhance verification platforms. The workgroup should strive to accept and support platforms that demonstrate compliance, functionality, and perhaps eventually performance. So, in short, the current focus of the workgroup is the RISCV-DV tool that Tao has created, but in general we want to foster discussions of all design verification related tools and
F
platforms. Okay, so what's been happening in the workgroup? Up to this point we've had 13 meetings. We first started last spring, right when all the pandemic craziness started, and we average about 10 to 25 attendees in each of the meetings. We have
F
participants from various parts of the industry. Simon from Imperas joins us on a regular basis, and he gives us great insights on the tool from an ISS perspective, and on what some of Imperas's customers are looking for in the tool. We have numerous participants from Google, myself, Tao, and Uday being part of them. The tool is also used pretty heavily in OpenTitan, in the verification of the Ibex core, so we have participants on that front too who come from Google.
F
We have participants from Western Digital; they've used the tool for their RISC-V cores as well. Andes has been helping to develop support for the vector extension, so they have a lot to offer in that discussion. Doug and Amy from Metrics attend on a regular basis; they've had quite a bit to do with helping get the continuous integration platform up and running. And PerfectVIPs has been a large contributor to the Python-based flow.
F
They consistently provide insight on development of the Python-based generator, as well as on the RISCV-DV tool and enhancements that can be made. In addition to general discussion of RISCV-DV and other associated topics, we've had some great presentations from various presenters. A few of them:
F
For instance, Olof Kindgren came in, and he gave a great overview of Edalize and FuseSoC, which was great in the context of integrating RISCV-DV into different flows and automating those flows to make it easier to use in regressions.
F
We also had the OpenHW Group come in and give us some great feedback on their use of the tool; they use RISCV-DV for validating one of their cores, the CV32E40P.
F
As I mentioned, we also set up a CI platform, which has been great for maintaining the health of the tool; I'll describe that a bit later. And we spun off another bi-weekly discussion on the Python-based flow, and I'll go into that in a bit more detail later as well.
F
So, the evolution: this is an updated version of the slide that Tao presented last September. As you can see, the tool has had a healthy evolution over the past few years. As we move into 2021, we want to continue the progress that we've had up to this point: continue having the workgroup meet on a regular basis, and grow the attendance so we can gather valuable feedback.
F
One of the top priorities for 2021 is bringing the tool up to compliance with the vector extension version 1.0; we've gotten feedback that there are users who would like to have that capability in the tool. And then we're going to add support for the enhanced PMP extension that's been proposed; I'll talk about that a bit more later.
F
For those of you who aren't familiar with the tool, this is a high-level overview of how it works. The tool takes in various inputs, such as the processor configuration and the supported instructions, whether that be integer, multiply, compressed, and so forth, and it generates a program that can be fed into an ISS and executed in a UVM SystemVerilog simulation environment. The outputs of the ISS and the simulation can be compared for correctness.
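That generate-then-compare loop can be sketched very roughly. Everything below is a toy stand-in: the instruction subset, the register-file "ISS", and the comparison are drastically simplified versions of what RISCV-DV actually does, meant only to show the shape of the flow.

```python
import random

# Toy sketch of the RISCV-DV flow: generate a random program from a
# configured instruction set, execute it on a reference model, and
# compare final state between two runs (standing in for ISS vs RTL).

RV32I_SUBSET = ["addi", "add", "sub"]  # illustrative subset only

def generate_program(n: int, seed: int) -> list:
    """Produce n random (op, rd, rs1, rs2, imm) tuples."""
    rng = random.Random(seed)
    prog = []
    for _ in range(n):
        op = rng.choice(RV32I_SUBSET)
        rd, rs1, rs2 = rng.randrange(1, 8), rng.randrange(8), rng.randrange(8)
        imm = rng.randrange(-2048, 2048)  # 12-bit signed immediate range
        prog.append((op, rd, rs1, rs2, imm))
    return prog

def run(prog):
    """Execute on 8 registers; x0 stays hardwired to zero."""
    x = [0] * 8
    for op, rd, rs1, rs2, imm in prog:
        if op == "addi":
            x[rd] = (x[rs1] + imm) & 0xFFFFFFFF
        elif op == "add":
            x[rd] = (x[rs1] + x[rs2]) & 0xFFFFFFFF
        elif op == "sub":
            x[rd] = (x[rs1] - x[rs2]) & 0xFFFFFFFF
        x[0] = 0
    return x

prog = generate_program(100, seed=42)
# In the real flow one side is the ISS and the other is the UVM RTL
# simulation; any divergence in architectural state flags a bug.
final_state = run(prog)
```

In RISCV-DV the comparison is done on execution traces rather than only final state, and the generated program is a real assembly test, but the structure is the same.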
F
If you have more interest in how the tool actually works, Tao has a great talk that you can view on YouTube. Also, there's a pretty straightforward
F
tutorial on the GitHub that will guide you in getting the tool up and running if you want to experiment with it. But if you have questions beyond that, I encourage you to join the workgroup; you can voice your questions there, and you'll certainly get some sort of answer.
F
So, the continuous integration platform. In the workgroup discussion we determined it would be valuable to have a CI flow that would maintain the stability and health of the tool against the toolchain, and make sure all of the contributions keep things intact and don't break anything. When we first shared this idea with the larger CHIPS Alliance forum, we got some feedback on how that should look, and ideally it would be totally free and open source, hosted on, say, a Travis
F
CI platform. But given that the tool needs an industry-standard SystemVerilog parser that can support UVM, that didn't seem practical. We needed an industry-standard tool on the back end to make the CI flow work properly.
F
So we at Google teamed up with Metrics to offer a cloud platform.
F
Currently, the CI is only visible to a few administrators who run it. Metrics has planned support for different types of visibility for their simulations and regressions, so we're hoping in the future that we'll be able to share that with the rest of the community, to show the health of the tool as it progresses and contributions are made.
F
Now, an update on the Python-based instruction generator. In one of our first workgroup meetings, the question came up: how do I use this RISCV-DV tool if I don't have a license from one of the large EDA vendors, Cadence, Synopsys, or another? And as I mentioned earlier, the UVM-based tool requires those types of tools from those vendors.
F
So, to address this, we decided to continue a previous effort we had at Google, which was to create a Python-based tool to enable a completely free and open source option for RISC-V processor verification.
F
Our approach includes using PyVSC as the randomization and functional coverage framework. It's an open source package for generating random stimulus and collecting functional coverage. The syntax is familiar; it looks like SystemVerilog randomization constraints and functional coverage syntax. We are also using the same class structure as in the original tool, and we have the same randomization flow as we did with the SystemVerilog UVM tool.
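The core idea, randomizing a stimulus object subject to declared constraints, can be sketched with the standard library alone. Note this is not PyVSC's API: PyVSC expresses constraints declaratively with decorators and a real constraint solver, whereas the naive rejection-sampling loop below only illustrates what "constrained randomization" means.

```python
import random

# Stdlib-only sketch of constrained randomization. The class name and
# constraints are hypothetical; PyVSC would express the same intent
# declaratively and solve it rather than sample-and-reject.

class LoadStoreItem:
    """A memory access randomized under simple constraints."""

    def randomize(self, rng: random.Random):
        while True:
            self.addr = rng.randrange(0, 1 << 16)
            self.length = rng.choice([1, 2, 4])
            # Constraints: naturally aligned, and within the 64 KiB range.
            if self.addr % self.length == 0 and self.addr + self.length <= (1 << 16):
                return self

rng = random.Random(7)
items = [LoadStoreItem().randomize(rng) for _ in range(100)]
```

A solver-based framework scales to constraints where rejection sampling would almost never succeed, which is why the real flow uses PyVSC instead of a loop like this.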
F
This is an updated version of a slide that Tao gave in September, which illustrates the collaboration between PerfectVIPs and Google. Essentially, in parallel, Google did the PyVSC functional coverage definition,
F
then the RV32I functional coverage implementation, and PerfectVIPs did the PyVSC instruction class framework, then a basic implementation of the generator itself, and then they were able to generate a random program that fed right into our end-to-end flow. So now any other enhancements made to the PyGen, the RISCV-DV Python flow, can be fed into our end-to-end integration flow to make sure it still works properly.
F
Where are we at now? As of now, we've implemented the instruction class and generator framework. Constrained-random program generation is done, with the ability to mix in random and directed instruction streams for RV32I and the M, C, F, D, and A extensions. Multi-hart program execution and most directed streams are also supported.
F
We've implemented the RV32I coverage model for 32-bit instructions, and it has also been integrated into the end-to-end flow. Future work for this year includes support for 64-bit instructions, extending the functional coverage support to all instructions, support for different privilege levels (user, supervisor), and eventually we want to reach full parity with the SystemVerilog-based generator.
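A functional coverage model of the kind described boils down to named bins that are sampled as instructions are generated, with coverage reported as the fraction of bins hit. As a minimal stdlib sketch (not the actual PyVSC covergroup syntax, and with an illustrative bin set):

```python
# Minimal sketch of a functional coverage model: one covergroup with
# value bins, sampled per generated instruction. Real RISCV-DV
# coverage is written with PyVSC covergroups; names here are
# illustrative only.

class Covergroup:
    def __init__(self, name: str, bins: list):
        self.name = name
        self.hits = {b: 0 for b in bins}

    def sample(self, value):
        # Count a hit if the sampled value falls in a defined bin.
        if value in self.hits:
            self.hits[value] += 1

    def coverage(self) -> float:
        hit = sum(1 for n in self.hits.values() if n > 0)
        return 100.0 * hit / len(self.hits)

cg_opcode = Covergroup("rv32i_opcode", ["add", "sub", "addi", "lw", "sw"])
for op in ["add", "addi", "add", "lw"]:
    cg_opcode.sample(op)
```

Holes in the report (here, `sub` and `sw` are never hit) tell the generator's constraints where to steer next, which is the feedback loop coverage-driven verification relies on.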
F
Perfect. So, the PMP enhancement, or ePMP. For those of you who are not familiar with the proposal: the original PMP is meant to protect internal memory ranges, i.e., memory accesses are checked for access by a particular user or lesser-privileged mode. I'm summarizing from the proposal here, but with the current spec, suppose we want a PMP rule to be enforced for a non-machine mode and denied for machine mode, so that we can allow access to a memory region only by a less-privileged mode.
F
The current PMP spec doesn't have that option, so the enhanced PMP, or ePMP, proposes a new CSR that can bypass, update or enforce existing PMP rules. We plan to implement support for this register in RISCV-DV. We want this support to be modular so that we can maintain backwards compatibility with the original PMP spec, and we have an immediate use case to test this with: the Ibex core for the OpenTitan root of trust chip.
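The behavior being described — a region accessible to a less-privileged mode but denied to machine mode, which classic PMP cannot express — can be captured in a toy access check. This is a deliberate simplification of the ePMP proposal (the real spec defines precise rule encodings and a machine-mode lockdown bit in a new CSR), not the actual rule table:

```python
def pmp_check(mode, addr, regions, mml=False):
    """Toy PMP lookup. Each region is (lo, hi, allow_user, allow_machine).
    `mml` models an ePMP-style machine-mode-lockdown bit: when set,
    machine mode loses its default allow-all behavior for unmatched
    addresses. A simplification, not the full ePMP semantics."""
    for lo, hi, allow_user, allow_machine in regions:
        if lo <= addr < hi:
            return allow_machine if mode == "M" else allow_user
    # No matching rule: classic PMP lets M-mode through by default;
    # with lockdown enabled, unmatched M-mode accesses are denied too.
    return mode == "M" and not mml

regions = [
    # U-mode may access this range while M-mode is explicitly denied --
    # exactly the case the original PMP spec cannot express.
    (0x8000_0000, 0x8000_1000, True, False),
]
```

With these rules, a user-mode access inside the range is allowed, a machine-mode access inside the range is denied, and the lockdown flag controls what happens to machine-mode accesses that match no rule.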
F
In conclusion, we've made some great progress in the working group on the RISCV-DV tool. We added a fully open-source-capable solution in RISCV-DV PyGen in 2020, and we want to continue that into 2021.
F
The quality and value of the contributions is a function of the feedback and guidance we get, in particular from the working group discussions, so if you have anything to add, please join us. There's a link to the mailing group here; that's what we use for correspondence and agendas. If we have to cancel or postpone a meeting, we share that information through the mailing group. There's a link in the slides, and our next meeting is this Friday.
A
Thank you so much, I really appreciate it — a very informative presentation, Matt. We do have a couple of minutes for questions. I don't have any from the audience yet, but I will kick off with one: I was curious how continuous integration has worked in practice for you, and also whether it leverages cloud computing.
F
It does leverage cloud computing; it's actually hosted on Google Cloud instances, and it works pretty well. We've had a few hiccups, and most of those were really just related to getting the proper access set up for GitHub, so that Metrics could pull the top-of-tree version and fold it into the test bench we were using. So it's worked pretty well.
F
It's quite accommodating for access and things like that, but yes, it's had a few hiccups; we've worked through them, and it's been a bit of a learning experience.
A
I appreciate that. We will be publishing the slides after the event, so for those who wish to refer to them, they will be out there. And I have a question from the audience, which is: is there any direct method to transform and execute programs written as CPU-based Python programs on RISC-V CPUs?
H
Yeah — if I'm understanding the question correctly: the Python-based generator is just able to generate an assembly program as well, and that can be fed directly into an ISS, or compiled and loaded into a Verilated test bench. So the same execution methods are still available, but you don't have to rely on having an EDA simulator that requires expensive licenses and so on to actually use this. The same methods of simulation are available as well.
H
I'll tack onto this one also: as of right now we're actually experiencing some performance hiccups due to the PyVSC library we're using, which is developed by an external GitHub contributor. There are some issues with how it implements its randomization constraint solver, which is causing a lot of performance slowdowns.
A
Our next presentation is on open source flows and ASIC and FPGA tool development. Michael Gielda of Antmicro will be presenting. So, Michael, over to you.
I
Okay, yes. Today I'm going to tell you a few words about open source flows for ASICs and FPGAs, and about how to get there — because of course it's not all possible today, but we're certainly going to make it happen with CHIPS Alliance. First, a few words about my company.
I
We are effectively an open source business that flourishes by building new workflows and new design methodologies based on open source, and we then use them to build products for customers in various fields. We span a wide range of hardware and software work, but today we're going to focus specifically on FPGAs and ASICs, as well as tooling, which is of course a big chunk of our work for our clients.
I
So how can you actually enable faster innovation, and more of it, in this pretty complex space? CHIPS Alliance is trying to achieve that with various activities, and I chose to break them down into five groups of things. How do we enable those open source flows? How do we help people scale into real-life projects in ASIC and FPGA development?
I
First up, I'm going to talk about enabling scalable and hybrid cloud computation, which is necessary, of course, just to make these very heavy flows work. Secondly, broadening the outreach to cover more people who can be attracted to this very interesting space, which is currently a little bit niche compared to other things. Thirdly, bridging things that already exist but perhaps haven't really been benefiting much from open source innovation.
I
Then it's about breaking down complexity — making things easier to partition into smaller chunks, so that you can handle smaller problems instead of having to tackle everything at once. And the last part will be about enabling collaboration; of course, everything we do is related to enabling more collaboration, but here I mean specifically the tools and projects that focus on that part. CHIPS Alliance is trying to approach this problem in a very comprehensive way and address all of those challenges.
I
At the same time, starting with enabling computation at scale for EDA development: we've been working with Google and others to enable the workflows that are needed in the cloud to build complex designs. As you know, EDA-related flows take up a lot of time and complexity, and they involve a lot of tools, many of which are not open source. So the challenge here is that you need to combine various things coming from various worlds, and you need to make sure that these things scale.
I
I'm going to talk a little bit about one of the things we've been doing related to enabling hybrid setups, which allow you to do public CI that also employs internal compute resources in your own data center. This is based on GitHub Actions, which is of course the GitHub CI system, and we've enabled custom runners in GitHub Actions, which basically allows you to run some work internally and then make it publicly available as part of CI, with the results being fully open. This was created out of the need that our ASIC and FPGA flows have been generating, because whatever is offered to you by GitHub by default isn't always enough.

I
The flexibility offered by custom runners is really useful. What we've done is enable containerized builds with virtualization inside our own cloud, and we've also made it work with Google Cloud, because it's a collaborative project with them. So you can essentially manage jobs across the default runners that you get from GitHub, on-premise runners, and Google Cloud-based runners. We've also integrated with Google's Build Event Server, which I'm going to talk about in the next slide, and we're working on a number of other improvements to make it possible to scale out the development of ASICs and FPGAs using cloud technology.
I
Of course, all those heavy flows need some ability to present and analyze whatever you've done, and we've been implementing the bits and pieces needed to integrate with systems such as the Google Build Event Server. So you can basically take the results of your CI — an internal CI, or perhaps the GitHub Actions CI with the custom runner integration — and push them to whatever sink gathers the results and enables other people to view them.
I
We can enable custom visualizations, like resource usage or viewing artifacts in specific ways. We can have unlimited storage, both in terms of the amount of storage and how long we can store results for. Generally speaking, this is a very vibrant space; the Build Event Protocol itself was created for Bazel, which is also a very interesting tool.
I
Another thing I want to talk about in this chunk of my presentation is containerization and packaging. In order to enable people to use the tools we're working on, you need stable, reproducible builds — stuff you can just download and start working with. So we've been working on packaging things into Docker containers and into Conda.
I
It's of course a very large endeavor that takes a lot of people, and it's a collaboration: we're working on some aspects of this with Google and Efabless, and people like Unai Martinez-Corral are helping out. Generally speaking, it's a very vibrant community that we're building there with the help of CHIPS, and you can take a look at the hdl organization on GitHub — that's where you'll find much of our packaging-related work. We've also been working a lot on Edalize, with Olof Kindgren and the FOSSi Foundation, of course.
I
If anyone doesn't know, Edalize is a Python-based framework for interfacing with various tools for ASICs and FPGAs — basically a nice way to abstract out your flow, which is a very important thing if you're going to make that flow at least partially open source, and potentially fully open source in the future.
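The idea of abstracting a flow over interchangeable tool backends — which is what Edalize provides — can be sketched roughly like this. This is a hypothetical mini-framework, not the real Edalize API, and the generated command lines are only illustrative:

```python
class Backend:
    """Base class: turns one tool-neutral design description
    into tool-specific command lines."""
    def commands(self, design):
        raise NotImplementedError

class IcarusBackend(Backend):
    name = "icarus"
    def commands(self, design):
        srcs = " ".join(design["files"])
        return [f"iverilog -o {design['name']}.vvp {srcs}",
                f"vvp {design['name']}.vvp"]

class VerilatorBackend(Backend):
    name = "verilator"
    def commands(self, design):
        srcs = " ".join(design["files"])
        return [f"verilator --cc --exe --top-module "
                f"{design['toplevel']} {srcs}"]

BACKENDS = {b.name: b for b in (IcarusBackend(), VerilatorBackend())}

def run_flow(design, tool):
    # The same design description drives any registered backend,
    # which is what makes the flow vendor-neutral.
    return BACKENDS[tool].commands(design)

design = {"name": "blinky", "toplevel": "top", "files": ["top.v"]}
```

Swapping `tool` swaps the whole backend while the design description stays untouched — that separation is what lets parts of a flow be opened up independently.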
I
The next thing you need to do to actually enable all those open source flows is make sure you invite the right people to the table — that you broaden the outreach. Basically, we were looking at how to really make open source silicon design happen, and, ironically, part of that is that you need to make ASICs more like software, because software is the very vibrant space from which you want to attract talent, innovation and new ways of thinking about design.
I
FPGAs are one thing that enables that; new HDLs and new methodologies are another. And FPGAs are actually an important entry point into ASICs: a lot of people who get interested in ASICs can't really build an ASIC at home, so instead they go into FPGAs. But the flows in FPGA land are also closed, and it's a fairly niche technology anyway, compared to, say, systems on chip.
I
So we're trying to change that, and we're trying to make sure that FPGAs are also open, because open source FPGA design and open source development boards are key to generating enthusiasm about ASICs. You can take a look at the RISC-V contest that we organized a few years ago as an example of how you can make that happen by going into ASIC through FPGA.
I
SymbiFlow, of course, is one of the key projects in this space. It's a fully open source toolchain for FPGA development that aims to provide an end-to-end and vendor-neutral flow, because you want to pull those engineers away from all the islands of platforms they're working on. We want to make FPGA development more software-centric and more collaborative with SymbiFlow, and we've already accomplished a lot of cool stuff.

I
We want to make FPGAs something you just use — configurable silicon, because why not? One of the successes we've had in this space is the collaboration with QuickLogic, where we've developed an open source toolchain with their support, which is a first in the industry. The toolchain, of course, was the key part, but we also enabled a lot of other things in that ecosystem and other communities: we enabled Zephyr RTOS, ported open IPs, created Renode support, and so on and so on.
I
There's a very important, comprehensive aspect to this that eventually enabled a lot of excitement and a lot of projects built on top of it, and there are some great things coming up that I'm not going to describe today — watch this space with interest. (Just under five minutes left.)
I
Okay, another thing that needs to be mentioned is FPGA-related tools — related both to programming FPGAs and to creating new FPGA silicon, with things like OpenFPGA and Verilog-to-Routing.
I
We want to make FPGAs just much more widespread. The last thing I want to mention here is Chisel, of course, and all the related work on new HDLs. All those ideas you can borrow from mainstream software development are channeled through things like Chisel into other areas. In CHIPS Alliance in general we have a working group for Chisel, which is a vibrant space for this kind of development; a lot of this has already been described in an earlier talk.
I
So, okay: we have new methodologies and we want to enable new people — but how do we also reuse whatever has been done before? We need to bridge existing things and make it easy to combine and remix stuff that already exists. And we saw a problem here: there are a lot of great open source tools that could bring value to ASIC design, but unfortunately it wasn't so easy to interface them with, for example, SystemVerilog, which is broadly used in the industry.
I
So we've been tackling that problem, and we have great progress here. Why would you need open source SystemVerilog support? Well, a lot of groups use SystemVerilog: OpenTitan, CHIPS Alliance, Western Digital designs like SweRV, even Ibex are in SystemVerilog, so we just need to somehow make it happen — and the RISCV-DV update earlier probably made this clear.
I
So we basically took the approach of measuring first: we've created a test suite for SystemVerilog tools, so that we can measure the current support for SystemVerilog in various open source tools. You can go to sv-tests — the links are here — to see a dashboard that we generate to show the current support. But of course, that's just the dashboard.
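The measure-first idea — running one language test suite across many tools and tabulating per-tool support — might be aggregated along these lines. This is illustrative only, not how sv-tests itself is implemented, and the tool and test names are made up:

```python
def support_matrix(results):
    """results: {tool: {test_name: passed_bool}}.
    Returns {tool: (passed, total, percent)} for a dashboard table."""
    matrix = {}
    for tool, outcomes in results.items():
        total = len(outcomes)
        passed = sum(1 for ok in outcomes.values() if ok)
        matrix[tool] = (passed, total, round(100.0 * passed / total, 1))
    return matrix

# Hypothetical run: the same tests applied to two parsers.
results = {
    "tool_a": {"always_ff": True, "interfaces": True, "classes": False},
    "tool_b": {"always_ff": True, "interfaces": False, "classes": False},
}
matrix = support_matrix(results)
```

Running the same suite after each tool improvement turns "how good is SystemVerilog support?" into a number you can track over time, which is the point of the dashboard.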
I
But of course, we've also been working on improving the situation, and together with Google and the lowRISC and OpenTitan teams we've been working on projects such as Surelog and UHDM, where we have some really great successes: we've been able to simulate and synthesize the OpenTitan designs with fully open source tools. You can take a look at that on GitHub; that's the diagram of how it works, using the UHDM library.
I
Also, if you want to do linting and formatting of SystemVerilog code, you can actually do it with open source tools now, using Verible — Google's open source linter and formatter for SystemVerilog, which we've also been very actively developing — and you can generate very interesting things from that using tools like Kythe. So there's a lot of activity in the space.
I
The last thing in this section is open source UVM, which we're trying to enable together with Western Digital and CHIPS Alliance in general. We're implementing dynamic scheduling in Verilator, which is the first step to getting partial UVM support, and we've made very good progress — so watch this space as well for updates very soon about being able to do some simple UVM things.
I
You also need, of course, to break down complexity, to make sure that people can actually get into the space easily, and one of the ways you do that is by breaking designs into chunks and eliminating vendor lock-in.
I
So we've been working on projects that enable rapid innovation in this space. There's still a lot happening here, but not so much in the way of open source, actual implementations — which I think interfaces like AIB and the frameworks around it are trying to change. So great kudos to Intel for releasing their AIB chiplet specification, as well as an implementation, into CHIPS Alliance. There's going to be a talk about this later today, but it's a great high-performance chiplet interface that's being adopted.

I
One of the things we've been doing ourselves is a simulator called Renode, which enables people to create designs from building blocks, do various interesting things, and simulate entire SoCs before actually implementing them. We've been enabling things like co-simulation with Verilator to quickly prototype different solutions, and of course we've also been implementing co-simulation with physical FPGAs.
I
That's also a way to break down the complexity, bring software developers into the space, and make sure the problems we're trying to tackle are easier. The last part — and I'll be wrapping up this talk with it now — is how we enable collaboration, which I think is of course the most important part. RISC-V has demonstrated that ASIC design can be collaborative, but in order to make that happen, we need a lot more than just cores.
I
One of the things you'll also hear about today is the open source PDK. It's a wide collaboration between multiple entities, and as the first open source PDK — even though it's 130 nanometer — this is a very, very important thing to happen, because it enables open sourcing other things. Wherever you have tools interfacing with all of that stuff, it's typically very closed and very proprietary, and without an open source PDK it's very hard to create examples that can actually be used to manufacture chips and be published online without problems.
I
The SkyWater PDK is solving that, and watch out — there's going to be more in that space: an upcoming 90 nanometer PDK, and better things after that, so definitely listen to Tim Ansell's talk later. And on top of the open source PDK there's an open shuttle program: people can make chips, and it's free for them. There have been dozens of participants, including of course Antmicro, building various RISC-V designs.
I
And of course this talk wouldn't be complete without mentioning OpenROAD, a fully open flow that you're also going to hear more about today from Andrew Kahng and Tom Spyrou, so I won't get into details. And of course you need open source IP, but that's perhaps a topic for yet another talk. One thing that happened very recently, and I think is worth sharing, is that we've just established an analog working group — so we're not just about digital design, it's also mixed signal.
I
This is led by the University of Michigan, as well as the University of Virginia and Arm. So this is a fully open flow — I mean, as open as it gets, perhaps, at this point — with a focus on digital but also analog, and we want more analog-related tools in CHIPS Alliance to make sure that the entire space is covered.
I
So, summarizing: there's a lot of work to be done, but a lot of it is going really well. We need this collaboration between the ASIC and FPGA design spaces, and CHIPS Alliance is playing a key role in it. You can also get support in building real things in this space — if you want to do that, come and talk to us. We can help you get commercial support, integrate tools and implement real-life use cases, and through this we can of course improve the ecosystem. Happy to talk to you afterwards.
A
Thank you so much, Michael, I really appreciate the informative talk. I would encourage folks who might have questions to follow up with Michael offline, as I want to make sure we try to stay on schedule here — but thank you again for sharing, your talk was very, very good.
A
The next talk is in fact about the AIB chiplet ecosystem, which Dave Kehlet will be sharing from Intel. I think this is a very interesting technology that they have put into the open source community, and I think it really opens up a lot of opportunity for innovation. So with that, Dave, take it away.
G
Fifty-six years ago, Gordon Moore wrote: "It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected." This was in the same paper where he articulated what's known as Moore's law, so the chiplet idea has been developing for a long time. In the last few years, technology has reached a point where chiplets are practical solutions.
G
In my presentation today, I'm going to talk about how chiplets are being used, just as Gordon Moore predicted. I'm going to cover what we're doing with the SHIP project that we have with the US government, especially about AIB 2.0 and a new version we call AIB-O. We're planning an open source release of protocol IP to run on top of AIB for the purpose of increasing interoperability, and then I will wrap it up.
G
Multiple companies are building chiplets. A recent article in Semiconductor Engineering listed 20 AIB chiplets in production or in development, coming from seven different organizations in addition to Intel. The standardization effort on AIB that DARPA started is enabling interoperability, and with that base, we're now starting on some second-generation concepts like security.
G
You may have heard last fall that Intel was awarded the second phase of the State-of-the-Art Heterogeneous Integration Prototype program, called SHIP. The project is enabling the US government to access the state-of-the-art packaging capabilities at Intel. Much of the effort with SHIP is about opening up Intel's advanced packaging and manufacturing; my part is about chiplet security, interface standards and protocols.
G
The net result of this is that we can hit over seven terabits per second per interface, even at a relaxed 4 gigabits per second per wire. Now, with all this bandwidth, we need to be concerned about power. This is addressed by using low-swing drivers to hit a target energy per bit of half a picojoule. And with the AIB silicon portfolio from Intel and other manufacturers, compatibility with AIB 1.0 is a must-have.
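The figures quoted here are easy to sanity-check. At 4 Gb/s per wire, the quoted 7.68 Tb/s aggregate implies just under 2,000 data wires per direction, and at 0.5 pJ/bit that bandwidth costs only a few watts. (The wire count below is back-calculated from the quoted totals for illustration, not taken from the AIB spec itself.)

```python
# Figures as quoted in the talk.
rate_per_wire = 4e9        # 4 Gb/s per wire (relaxed rate)
energy_per_bit = 0.5e-12   # target: 0.5 pJ per bit
total_bw = 7.68e12         # AIB 2.0 aggregate, bits per second

wires = total_bw / rate_per_wire   # implied wires per direction: 1920
power = total_bw * energy_per_bit  # watts at full rate: ~3.84 W
```

So the interface moves multiple terabits per second for roughly the power budget of a single small chip — which is why the low-swing drivers matter.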
G
So we have an open spec and an open draft process; you can read the open spec at the link on the screen here. We're developing an AIB 2.0 hard macro in Intel 22FFL silicon technology with a company called Blue Cheetah Analog, and one of the first things people need in order to use AIB is the logic simulation model. We just posted an open source AIB 2.0 simulation model under the CHIPS Alliance GitHub repository.
G
Today I want to introduce a version of AIB that's optimized for standard packaging. We call it AIB-O: AIB for standard organic substrate packaging. Now, AIB-O is a subset of the AIB 2.0 capability, using the bump spacing that standard packaging allows. The reason to do this is that standard packaging substrates are more broadly available than advanced packaging.
G
Also, you have many choices for package assembly with OSATs. Three years ago, I gave a talk about how you might design differently when wires are free, which is almost what advanced packaging gives you — but with standard packaging, wires are definitely not free, and we still have to conserve them. I think there are applications that don't need the highest capability, and for these, standard packaging density will work just fine.
G
With AIB-O, we have the same concept of channels that we have with AIB; we just drop the number of data bits in each direction from 40 down to 10, so the array is approximately the same area on the die as it was before. Now, since we're using standard packaging, the density drops by a factor of 4 — so instead of AIB 2.0's 7.68 terabits per second, we get a quarter of that: 1.92 terabits per second. And otherwise it's just like AIB 2.0.
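The AIB-O scaling works out directly from the quoted numbers: dropping from 40 data bits per direction to 10 cuts the per-channel width, and hence the aggregate bandwidth, by a factor of 4. A quick check:

```python
# Quoted AIB 2.0 aggregate and per-direction data widths.
aib2_bw = 7.68e12          # bits per second
bits_aib2, bits_aibo = 40, 10

scale = bits_aibo / bits_aib2      # 10/40 = 0.25
aibo_bw = aib2_bw * scale          # 1.92e12 bits per second
```

The same channel count and die area deliver a quarter of the bandwidth, which is the trade for using widely available organic substrates instead of advanced packaging.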
G
Now, each side needs to know how the protocol is mapped to the AIB interface, so we're providing the protocol adapters as a pair to users. They see the familiar protocol interfaces that people have used for years, like AXI4-Stream and AXI4, and by providing the protocol IP on both sides, we make sure the mapping onto the AIB PHY is understood.
G
We originally thought about using the PCI Express PIPE interface, but it makes more sense to use the forward-looking LPIF interface as CXL starts to ramp up. Now, the really important part is that, as part of the SHIP project, the protocol adapter code will be released as open source under our CHIPS Alliance GitHub repository. We'll use Apache 2.0 licensing, so this is freely available to anyone.

G
So if your silicon interoperates with my silicon, together we can meet a broader set of customer requirements. I thought I'd kick off the Q&A with a question I received on Friday — it was kind of direct: why should I use AIB over one of the other die-to-die interface ideas? I'd say there are three reasons. The first one is that AIB is proven in silicon; it's in production.
A
I do have one question. Thank you very much for an excellent update, I appreciate it. The question I have from the audience is: could you take a moment to explain the rationale behind the current race to make chips smaller and smaller — seven nanometer, down to three nanometer prototypes — while until recently 130 nanometer was all right? This has caused a rush to one or two fab facilities in the world.
A
What is achieved by this condensation? Are there benefits in exact proportion — would a 200 nanometer chip use 100 times as much electricity as a two nanometer chip, and have a hundred times as much latency? How important is it to aim to make all processors in sub-10 nanometer architectures?
G
Wow — okay, let me see if I can unpack some of that. The first thing is that the basis behind Moore's law — increasing transistor density for the same cost — is still there.
G
There are still more things you could put on a chip than we can fit; we'd still like to put more processing on there, more capability. But what was also in that description is that heterogeneous integration is extremely valuable: you can mix something that does need to be super dense — like a leading-edge Xeon CPU or a leading-edge FPGA, where there's always a desire for more — with other things where density isn't necessarily the most important factor. One example I'll mention is the Ayar Labs photonics optical chiplet that was talked about publicly at Hot Chips earlier; that uses a much more relaxed semiconductor process node than the most recent ones, because they're doing some very special IP with the photonics.
G
That node is just well suited for that application, and it doesn't need to be a seven nanometer or five nanometer process. So I think heterogeneous integration is extremely valuable, and it has a great place for using a span of process nodes — all the way from the most recent, highest-density ones to things with specialty functions or I/O that can use a much more relaxed geometry.
A
Thank you, Dave. I'll just ask one final question: could you comment on the maturity of the design methodology required to implement what you described today with chiplets, and also on the corresponding EDA ecosystem, whether that be commercial solutions or perhaps open source?
G
Okay, so on the maturity of the design process: the Stratix 10 powered on in 2016, and that used AIB 1.0 and an Intel advanced packaging technology, so we've got five years of experience with that. There are other advanced packaging technologies, such as interposers, and there's a lot of interesting work being done in RDL or fan-out technologies to reduce the cost of very high density packaging.
G
The tool flow that we've used is a standard commercial set of tools. You do have to do a link simulation between your chiplets to ensure that you can successfully signal across them.
G
So there's some additional engineering to be done there, but the maturity of it is well understood, and we — and many other folks — have done that kind of work on data interfaces.
A
Our next speaker is Dejan Vucinic, who is with Western Digital, and his talk is on the OmniXtend architecture and milestone updates.
J
Hello everyone, thanks for having me here today. I'll try to give you a very brief update on the state of OmniXtend, TileLink, and coherence in the RISC-V ecosystem in general.
J
Just to refresh everyone's memory: in 2018, Western Digital was faced with a dearth of high-performance interconnects that would allow us to connect very low latency, high-performing kinds of memories — like magnetic RAM — to the prevailing compute architectures in a data center, and it was a strategic problem for us. Faced with this, amid the rapid rise of the RISC-V ecosystem, we proposed a new standard interconnect — which wasn't all that new, right?
J
Our proposal was, in the spirit of the open-sourceness of the RISC-V ecosystem, to also use a completely open fabric, such as Ethernet, for transporting coherence. In other words, we wanted a completely unencumbered, modern, high bandwidth, low latency bus to connect not just CPUs via socket-to-socket buses, but also all these other kinds of devices — storage, in our case, but also GPUs, FPGAs, accelerators and other kinds of ISAs.
J
And to do it in a way that is not asymmetric, like peripheral buses such as PCI Express, CXL and so forth. In other words, the idea is that one CPU would not own the architecture of your data center; you'd be allowed to create a more heterogeneous architecture out of disparate components, assembled in a way that fits the algorithm and the purpose.
J
This is usually called a memory-centric architecture, or memory fabric, and we hasten to explain what it is and what it isn't, because most folks, when they hear "memory-centric," think of a DMA-like system where you have a CPU with some cache and DRAM.
J
Then you have a network interface card, which is a DMA engine, and typically some sort of software that manages the so-called fabric. This fabric exposes some large amount of memory, and basically you have to run your local software to fetch blocks, pages, or whatever you want to call them; your NIC does the data transfer, the DMA engine stuffs it into DRAM, and then you have to load it through the cache into your compute pipeline.
J
What started this whole project in our group, in Western Digital Research in 2012, is what this little lightning bolt here represents: a context switch. We had several memory technologies in the pipeline that were so fast that the context switch cost was an order of magnitude more than the cost to access memory over the fabric. The fabrics have gotten so fast that, typically, the latency to fetch a block over the fabric is actually lower than what one context switch costs on Linux on modern CPUs.
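The trade-off being described — when a fabric round trip is cheaper than a context switch, issuing a blocking load beats the DMA-then-reschedule pattern — can be captured in a toy cost model. The nanosecond figures below are placeholders for illustration only, not measurements from the talk:

```python
def better_to_block(fabric_fetch_ns, context_switch_ns):
    """If a remote cache-line fetch completes in less time than one
    context switch, descheduling the waiting thread can only add
    latency -- the CPU should just stall on the load."""
    return fabric_fetch_ns < context_switch_ns

# Placeholder figures, chosen only to illustrate the regime described:
fabric_fetch_ns = 1_000      # round trip to fetch a line over the fabric
context_switch_ns = 5_000    # one OS context switch
```

In this regime, the DMA-style architecture pays for a context switch it no longer needs, which is the motivation for carrying the coherence protocol itself over the fabric.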
J
The sort of intermediate solution that some folks played with — in particular Gen-Z — was the idea that you have some sort of inline translation that happens very fast, and then you go with loads and stores into the fabric to fetch and store your data. So, just to be clear: what we proposed was actually a full cache coherence protocol, like the protocols that go between sockets on many computer architectures today, but going straight into the fabric — in this case, Ethernet.
J
Other contenders were InfiniBand and other high-performance fabrics. You basically fetch cache lines straight into your cache, so your local CPU wouldn't even have to have its own memory controller or local point of coherence. This is illustrated by this "PT" here, which indicates page tables: the idea was that, whatever paging mechanism your CPU or your OS uses, the page tables might be sitting on the other side of the fabric, so your whole fabric would behave like one single computer.
J
Now, what we envisioned, like I said, was a completely asymmetric, sorry, a completely symmetric interconnect. These little doodles represent SoCs that a lot of you folks are taping out, where you have the uncore with a bunch of local facilities, and then you have a bunch of cores that talk through some sort of local on-chip coherence bus, which in the RISC-V world was traditionally TileLink. The idea was that you could connect two of these socket to socket via this OmniXtend Ethernet fabric.
J
Or you could connect one of these to a peripheral that would contain, for instance, a non-volatile memory, a high-performing memory that would appear as the lowest point of coherence. Basically, you plug this into your SoC, and the SoC will just see DRAM, or something that looks like DRAM, right. But more to the point.
J
What we wanted to achieve with this new generation of smart switches was to try to get the switch to behave like an endpoint for the coherence protocol, and then allow you to extend these systems, either by plugging a lot more sockets together, like thousands or tens of thousands of sockets, or to have completely, you know, freely designed systems where you have a large amount of memory and a few sockets, or a large number of sockets with a little bit of memory, and then attach machine learning accelerators, FPGAs, other kinds of compute.
J
So to bring you up to speed: we published a protocol called OmniXtend. There was a version 0.1 that was shown off in December 2018 at a trade show, and it was based on TileLink 1.7. Since then we've actually fixed some bugs in TileLink in collaboration with SiFive, and TileLink 1.8.1, I believe, is the current version that's available under a Creative Commons license online. This is transported over OmniXtend version 1.0.3, which is also under a Creative Commons license on GitHub.
J
I want to point out that we started out with a protocol that was relying on these custom switches that would allow you to reprogram the Ethernet header for better performance and so on. But since then there have been some mergers and movements in the industry, and we've decided to revert back to a fully Ethernet L2-compliant header, so you can use any switch: any off-the-shelf switch, like a Broadcom switch, or even these very inexpensive switches you can get off Amazon for 100 bucks.
J
The protocol has changed to incorporate multiple messages per frame, so it's a fairly high-performance, high-bandwidth-utilization protocol.
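As a rough sketch of why packing multiple messages per frame raises bandwidth utilization, the per-frame Ethernet overhead gets amortized across messages. The sizes below are assumed round numbers for illustration, not values from the OmniXtend specification.

```python
# Illustrative frame-utilization arithmetic; sizes are assumed, not spec values.

ETH_OVERHEAD = 38   # assumed per-frame bytes: preamble, header, FCS, interframe gap
MSG_BYTES = 80      # assumed coherence message carrying a 64-byte cache line

def utilization(messages_per_frame):
    payload = messages_per_frame * MSG_BYTES
    return payload / (payload + ETH_OVERHEAD)

print(utilization(1))   # one message per frame: overhead paid on every message
print(utilization(10))  # ten messages per frame: overhead amortized
```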
J
So, just to point out again: you can download this for the VCU118. It runs very stably in the lab, and it's an FPGA implementation at 100 megahertz. It's a SiFive U5 RISC-V core that was based off of the Rocket Chip developments. All right, these are some performance numbers. Keep in mind this is FPGA, so this is very high latency in general, because the clock rate is very low.
J
As you can see, we've been running a single node with local DRAM, two nodes with a direct connection or a connection through a switch, and then up to four nodes with local and remote DRAM. You see, you know, the L1, L2, and L3 caches work as designed, and then you go out to these latencies of around five microseconds, for systems that can extend, through one switch, to about 64 or 256 sockets.
J
And these are just your standard NUMA systems running a single instance of Linux. You see here 16 sockets visible, and you can run pretty much all the standard software as if it was a single computer.
J
All right. So I wanted to use the rest of the time to bring you up to date on the developments with OmniXtend, and also on the discussions of the future TileLink and the future OmniXtend. The first thing that is going to be interesting, to those of you folks who are hoping to attach accelerators to OmniXtend-enabled SoCs, is that we have developed the lowest-point-of-coherence protocol as RTL, and that is going to be fully open sourced.
J
This, at the moment, is based on a standard board from Xilinx, the Alveo U50. It's an HBM board with 8 gigabytes of HBM, and you will be able to run a storage device simulation, a storage device which attaches directly to OmniXtend. It essentially shows up as the lowest point of coherence, and you simply get a memory region that is, at this time, backed by HBM.
J
But this RTL is, like I said, going to be fully open source. Then you will be able to run simulations of different kinds of future memory technologies by simply slowing down the memory, or introducing algorithms to capture the idiosyncrasies of memories such as flash, magnetic RAM, resistive RAM, and so on.
J
The other thing that we're working on hard is what we call beyond-Intel architectures, with apologies to our co-op editor. So these are things that you cannot do with current off-the-shelf hardware, right. One of the things that's been generally difficult to achieve with non-exotic hardware is more than four sockets, off the shelf, in the RISC-V world.
J
So far there aren't really multiple sockets, so OmniXtend is your only game in town if you want to play with scale-out RISC-V, but we're hoping to show pretty soon systems with up to 64 sockets off one switch. And then there are efforts to understand the capabilities of these smart switches, to have the switch terminate the protocol and then have multi-switch systems that enable scale-out to an
J
unlimited number of sockets; in practice it's limited by financing, right. And then another thing that is happening: I briefly mentioned we have a boot protocol defined now, defined online on GitHub. The boot protocol allows you to configure the endpoints to behave as any kind of identity, right.
J
So I think the thrust of the scale-out efforts at the moment is on these independent-OS systems, simply because the changes required to, say, the Linux kernel and the infrastructure, the software infrastructure, right, the signaling to the user process of failures and fault alerts, and so on, are fairly daunting. So we will probably start out with independent OSes that handle, or try to handle, failures gracefully.
J
It is based off of the P4 QEMU emulations that were available in the networking world, and so the hope is to publish pretty soon a version of this that interfaces with hardware. In other words, you'll be able to have a system for development of your own custom coherence protocols or endpoints, where you can have an actual hardware endpoint, right now the FPGA, in the future an SoC in silicon, and connect it with an actual software endpoint.
J
I just wanted to briefly mention the discussions that have commenced about the 2.0, what we call 2.0. So a lot of folks have complaints about TileLink: that it doesn't perform, or it doesn't do this, or doesn't do that. Some of those complaints are well founded, and there have been discussions among us here in Western Digital Research, and with SiFive, and with many other groups, about what this should look like going forward in the future.
J
So I just wanted to point out that the current focus of us in Western Digital Research is actually on scale-out and fault tolerance, right. We're not necessarily looking at raw benchmark performance of multi-socket systems in RISC-V. We think there are a lot of you who are interested in this, and there are probably going to be some developments here.
J
We're a peripherals company, and our main interest was in having a high-performance attach for storage devices, which doesn't really care about, you know, forwarding protocols, three-hop protocols, two-hop protocols, and so on. But we would like to be able to offer you, say, a system with OmniXtend main memory.
J
So the main thrust of these efforts is basically on having many, many storage devices attached to a single coherence domain, and then fault tolerance when they wear out, when they fail, when somebody unplugs the cable, and so on. So that's why we're kind of focusing on the independent-OS model right now, and on figuring out how to communicate node loss to user processes, so you don't have to bring the whole cluster down when one device fails or wears out. And so I think I'm just about out of time.
A
Thank you so much, Dejan, a very good talk, I appreciate it. One question I just had from my side, which was in terms of architectural modeling with OmniXtend: is there any capability of being able to model two different pieces, CPUs or accelerators, together? Or how is that done, any comments you might have?
J
Right, so there is a gap in modeling from software to the entire system. Right now this is a fairly daunting task, I should point out, right.
J
To model the entire CPU pipeline, the entire behavior, all the caches in the system, the serialization, the quirks of the fabric, and then the behavior of the whole system: we're not quite there yet, right. We're making strides piece by piece, and one thing that was sorely missing was the network part. This is where we made the biggest strides, and this is the QEMU framework for emulating the actual protocol communication and so on.
J
So there's this whole framework for emulating P4 switches and simulating loss of packets and so on, and you have an entire software switch that's available for free; it's completely open source. We've leveraged this to introduce, you know, coherence endpoint emulation on it, and I'm not sure if the entire thing is open source yet, but we're planning to open source all this stuff.
J
It's built on open-source infrastructure, so you will be able to basically take, you know, frozen traces of coherence commands coming out of your pipeline, or out of your, you know, last-level cache, and just introduce them into this network simulation system, and then introduce packet loss and see what happens.
J
The question was: can you get from software to there? That is a much more difficult problem, where you sort of need to simulate the entire cache hierarchy, and there we don't really have much to offer. There are many of you who have much more elaborate infrastructure for simulating caches and state machines, and so on.
J
A
J
So let me try to parse this: a unified memory standard, right. The word memory is perhaps overloaded, right, so we are proposing a unified coherence protocol standard. Now, coherence protocol is an overloaded term too: it contains commands, or rather transaction types, that include memory access and also non-memory access. So there are MMIO transactions and coherent transactions, and MMIO is everything that's non-coherent, including access to other sockets', you know, interrupt controllers and so forth. So in that sense, OmniXtend is already a standard.
J
It is, it is, you know, given to the world as a Creative Commons standard, and I think we're in the process of formally recognizing it as a standard in a CHIPS Alliance workgroup. So that is the status of that, I guess. I'm not sure, is that the answer to the question, or was the question, I...
A
I think, I believe it is, yes. Thank you, thank you for doing that. And that concludes the time we have, so thank you, Dejan, for your update, really appreciate it.
K
Hello, I think there...
K
I should click to start. All right, I should be able to control now, excellent. Okay, so let me start my timer. All right, hi everyone. My name is Jack Koenig. I am a staff engineer at SiFive and, as you can see from the nice little double acronym there, I am a member of the Chisel Working Group Technical Advisory Committee. As a CHIPS Alliance project, we have a technical advisory committee to run Chisel and related projects, and so I'm going to talk a bit about those projects.
K
Kind of letting you know what I'm going to talk about: I'm going to introduce what Chisel is, and related projects. I know you heard a little bit from Professor Nikolić and from Michael previously, but I'll talk a bit about that as well. We'll provide some highlights from the last six months, since the last CHIPS Alliance workshop, and I'll talk a bit about what we're doing going forward, what to look forward to. So first of all: what is the Chisel Working Group?
K
Well, the first question there is: what is Chisel? As you heard earlier during the Chipyard talk, Chisel stands for Constructing Hardware In a Scala Embedded Language. It's a domain-specific language where the domain is digital design. That's a lot of words, but, you know, Verilog is a domain-specific language where the domain is digital design, and Chisel is as well.
K
Chisel just differs in that, rather than being a standalone language, it's actually embedded in another language: it's embedded in Scala, which is a general-purpose programming language. As Professor Nikolić mentioned, it is not high-level synthesis, nor is it behavioral synthesis. What it really is, is you're writing a Scala program where you construct hardware objects and build your hardware design that way. We like doing this in Scala because Scala provides modern programming language features like parameterized types, object-oriented programming, functional programming, and static typing.
K
All these features are things that we think help you write, you know, reusable and maintainable code, and it's intended for writing reusable hardware generators, like Chipyard and all the other related projects mentioned there, as well as, you know, others. So, to give a brief overview of what Chisel really is: there's no loss of expressibility, so long as you're writing synchronous digital designs. So this is a moving-average FIR filter that looks pretty similar to what you might write in Verilog: you have some inputs.
K
So what you really want to be able to do is write a generic filter, not just one specific filter, but maybe a whole family of filters, and Chisel makes it really easy to write a parameterized FIR filter. So I'm not going to dwell on this code here; please check out the Chisel Bootcamp if you want to really understand it. But you write something that kind of looks software-y, when in reality you're just manipulating the exact hardware constructs you want; there's no synthesis going on here.
K
You are in control of every flop and every, you know, logical thing you want to do. And so this is parameterized both by the bit width and by the coefficients: not only what those coefficients are, but how many of them there are, which affects,
K
you know, how much in the time domain you're stretching your filter. And, you know, this is metaprogramming, where you're writing a program that is then constructing another, quote, program. Now, that second program is the hardware itself, so not quite the same, but you're really reasoning about constructing hardware when you do this. And so, using this generic filter, we're able to create that same moving-average filter just by parameterizing this one; we're able to create,
K
you know, other types of filters: a simple one-cycle delay filter, or perhaps a triangular filter with five points. The point is that we wrote this one filter and now we can do all these different things, and so that's really what Chisel is about. That's what Chipyard is built on; that's what SiFive uses in a lot of our technology. So this is really what Chisel is about.
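The generator idea can be illustrated with a plain-software analogue. This Python sketch is not Chisel (Chisel elaborates hardware from Scala); it only mirrors how one parameterized FIR description yields the moving-average, delay, and triangular variants mentioned here, with coefficient values assumed for illustration.

```python
# Software analogue of a parameterized FIR generator: one definition,
# many filters, selected purely by the coefficient parameter.

def make_fir(coeffs):
    """Return a stateful function computing a FIR over the given taps."""
    taps = [0] * len(coeffs)
    def step(x):
        taps.insert(0, x)   # shift register: newest sample first
        taps.pop()
        return sum(c * t for c, t in zip(coeffs, taps))
    return step

moving_sum = make_fir([1, 1, 1])        # 3-point moving-average style filter
delay      = make_fir([0, 1])           # simple one-cycle delay
triangular = make_fir([1, 2, 3, 2, 1])  # 5-point triangular filter

print([moving_sum(x) for x in [3, 6, 9]])  # [3, 9, 18]
print([delay(x) for x in [7, 8]])          # [0, 7]
```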
K
So I mentioned previously that Chisel is about writing reusable generators, but as anyone who's done any hardware design knows, that's really difficult, right. As soon as you have some specific platform, you end up doing platform-specific changes, you know, if you're doing something in IBM 45 versus TSMC 28 or a more modern technology.
K
You end up, you know, needing to do SRAM macros or other very specialized things in your physical design. Similarly, FPGAs have a whole different, a whole host of other features, and so it really breaks reusability when you try to write something that's reusable but you have to leak in, you know, platform-specific stuff. So in order to specialize our designs for different platforms, we realized we needed a,
K
you know, a hardware stack, similar to what you have in software, right: when you write C++ you don't care if you're writing it for, you know, x86 or Arm or RISC-V, and we want the same thing in hardware. So what we did is: Chisel 2 was pretty monolithic, and we replaced that with Chisel 3, which is Chisel as a language, really just kind of this front end, along with the FIRRTL compiler underneath it.
K
Now, this is a gross simplification, because I'm kind of leaving out a lot of details about what goes on underneath, since we just emit Verilog. So I want to give a shout-out to all the other projects that do the real heavy lifting here, especially, you know, OpenROAD, Verilator, and OpenFPGA. So FIRRTL, which is, you know, one of the other big projects in the Chisel Working Group, is, you know, a compiler that allows you to...
K
Let me see, I should be able to use a pointer. So it has, you know, kind of a... your design is an abstract syntax tree, with core transformations that lower it to Verilog or do other specializations. But what's really critical is it allows you to insert custom transformations to specialize the flow. So this is really important in Chipyard and in FireSim as well.
K
There's a lot of different customizations that you may want to do, based on your own custom flow, your physical design, or, you know, just what technology you're targeting. So a lot of that's handled by the robust metadata and annotation support, which is really critical for that extra collateral for physical design or verification. And so there are several projects in the Chisel Working Group.
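The compiler-with-custom-transforms idea can be sketched in miniature. This toy pass is invented for illustration and is nothing like real FIRRTL's IR; it only shows the shape of the approach: the design is a tree, and a transform rewrites it before emission.

```python
# Toy "transform over a design tree", echoing the FIRRTL idea described above.
# The tuple IR and the constant-folding pass are invented for illustration.

def const_fold(node):
    """Recursively fold ('add', const, const) subtrees into constants."""
    if isinstance(node, tuple) and node and node[0] == "add":
        lhs, rhs = const_fold(node[1]), const_fold(node[2])
        if isinstance(lhs, int) and isinstance(rhs, int):
            return lhs + rhs
        return ("add", lhs, rhs)
    return node  # leaves: constants or named signals pass through

design = ("add", ("add", 2, 3), "io_in")
print(const_fold(design))  # ('add', 5, 'io_in')
```

A custom transform in this picture is just another function from tree to tree, inserted into the sequence of passes before emission.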
K
I really talked about Chisel and FIRRTL, but there's a few others, in particular ChiselTest, which is our testing framework to write unit tests, and Treadle, which is our simulator for FIRRTL. But there's several others, and I'll just note that we're currently a CHIPS Alliance sandbox project, which is like the first step of the CHIPS Alliance life cycle of projects, but we intend to graduate in the near future.
K
Okay. So now I'll talk a bit about what's been going on over the last six months. If you want to see what happened in the year before that, please check out my previous talk, which is part of the previous CHIPS Alliance workshop on YouTube. So, first of all, we actually moved the projects over to the CHIPS Alliance GitHub, so you can find them there. There's still a few, a couple more to move, but at least the major ones are moved over.
K
We've been further improving our website, which is, you know, important for documentation. So we have, you know, the API docs, for more advanced users to see everything that's available. We have more project-specific documentation; that Chisel link is especially useful, there's a lot of documentation there. We have a community page where you can see all the cool stuff that's going on, and find links to, you know, our Stack Overflow or Gitter, where people chat. And we now have the search bar, which is very exciting.
K
One of the things I'm most excited about on the website is that our documentation examples actually compile and run. So historically, documentation gets out of date, but now we are always making sure that our examples run against the most recent version, which is very exciting.
K
One thing I'm really proud of is that we have a robust backporting methodology for bug fixes, such that even though we're currently on 3.4.2 and about to release 3.4.3, we still, as you can see, release bug fixes for 3.2 and 3.3, and that allows people who, you know, may be stuck on older versions to keep getting important bug fixes. And we are also publishing snapshots of all the stable branches, and the unstable branch actually, so for people who want to try the bleeding edge, it's pretty easy to do now.
K
This is from last time, so I'm going to go through it pretty quickly, but starting in Chisel 3.4 we did a lot of work on improving naming. So, you know, you're writing a program to generate hardware, so you can of course call functions, and in this case it used to be that we were not very good at naming these signals, but now we actually can name,
K
you know, x and y, as well as give them a prefix from the result value that they're returned to. This is really important because you could call this function multiple times from multiple places, so it allows you to kind of scope the signals that are created based on your actual call stack, and you have some control over that. But what I want to really get into is that in the last six months we kind of refined this naming.
K
So this example is a little specific, so it may, you know, work well for people who are more familiar with Chisel, and maybe not so well for others, but I'll just try. So, you know, say you have some module with an optional tap, and let's just say this tap is for debugging a signal, so based on a parameter it may or may not exist. An Option is just something that may be set or may not be.
K
You'll provide a width, and if the width is set, then we'll create a port. So the question is: what should the name of this port be? And this is actually a pretty subtle question. You know, one option is port, because when you construct the port, that's the value you assign it to. One option is tap_port because, as I talked about, we have this prefixing, right, and so port is scoped inside the tap value.
K
Another option is tap, because in Scala the last expression is the return value: you return the port to tap, and so this top-level tap value is the port. Or it could be tap_port, because later we have another value that refers to the same port. And so, what is the name that you want? In 3.4.0, when we first introduced this, it was getting tap_port, which, it turned out, is not what our users want.
K
But I'm pretty proud of the point we've come to, and I think the strategy we use for naming here is a pretty good one across the board. We've been using it for five months, and people are finally deleting all their little .suggestName calls, which is how you can try to add more control to the naming in Chisel, but people are finding that the default names are working really well for them.
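The prefixing behavior described here can be imitated in plain Python. This sketch is an invented analogue of the idea (signals created inside a call get prefixed by the value the result is assigned to), not Chisel's actual implementation.

```python
# Plain-Python imitation of call-scoped signal-name prefixing.

prefix_stack = []

def prefixed(name):
    # Build a name scoped by every enclosing prefix.
    return "_".join(prefix_stack + [name])

def with_prefix(prefix, fn):
    # Run fn with an extra prefix pushed, as if its result were assigned
    # to a value named `prefix`.
    prefix_stack.append(prefix)
    try:
        return fn()
    finally:
        prefix_stack.pop()

def make_signals():
    return [prefixed("x"), prefixed("y")]

# The same helper called from two "assignments" yields distinct, scoped names.
print(with_prefix("filt0", make_signals))  # ['filt0_x', 'filt0_y']
print(with_prefix("filt1", make_signals))  # ['filt1_x', 'filt1_y']
```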
K
Now, just other shout-outs, you know, to recent tape-outs. A lot of times, you know, you kind of want to say the proof is in the pudding. On the Google SkyWater efabless shuttle that Tim will talk about later, there were at least three designs including Chisel, which I have the GDS plots for. Berkeley: this tape-out was a little while ago, but they just recently presented it at ISSCC, which is a pretty beefy core here, with eight tiles, four L2 caches and an L3 cache, and so on.
K
This is, you know, a real chip, where basically everything in the middle here is implemented with Chisel. And then, you know, from my own company, SiFive: our recent board, the HiFive Unmatched. You know, most of our design is in Chisel.
K
So, you know, we have these pretty quick dual-issue in-order cores, the memory hierarchy, a lot of stuff going on there, all implemented in Chisel. And there are others; if I missed any, please let me know. And there are others that I wish I could talk about, but can't. Just letting you know that there's a lot going on in real, you know, real silicon.
K
So now, with... I have about five minutes. Five minutes, perfect. So I'll talk a bit about what we're working on and what's coming. So historically, Chisel's been used primarily for ASICs, as I just kind of showed, but that means it's been difficult to take advantage of a lot of FPGA-only features, and for all of you who kind of work in that open-source FPGA realm, that's been very frustrating.
K
So there's been some recent work on this to improve our support out of the box for FPGAs, and we'll have a new option, target FPGA, to help with that. As I mentioned, the next version is 3.4.3, so this will not make that boat, but it should make the next minor version. So I just wanted to let you know that that's coming.
K
This is very much more in the weeds, but my Chisel users will probably recognize what I'm talking about here and be happy. So cloneType is just this implementation detail that leaks into the user API, and that's never a good thing, right: you don't want, you know, implementation stuff leaking. But when I say implementation detail, I don't mean physical design.
K
I mean the implementation of Chisel itself. So this has just been a huge wart in the language for a long time. We had auto-cloneType one, as you can guess by there being a number two, but it had three important limitations: it required you to do some boilerplate, it worked in most cases but not all, and, importantly, it was actually quite slow.
K
You wouldn't notice this for most designs, but when you're starting to get to a large, you know, many-tile SoC, it really starts to add up. And so now, in the next version of Chisel, in 3.4.3, Chisel will be able to do this much better, and I'm very excited about this feature.
K
Another feature, that's a little more half-baked, but it's something we're working on, is what we're calling DataView. So often, when designing hardware, you may have some hardware value that you wish you could reason about or manipulate as if it were a different type. So a canonical example of this is an AXI-style flat bus interface, which is fine, and that's like kind of what the standard is, but a lot of times you'd like to manipulate it as if it had more structure, right.
The
ready,
like
the
ready
and
valid
things,
are
at
the
same
level
as
your
payload
and
it's
just
kind
of
hard
to
deal
with
sometimes,
and
so
people
would
like
to
be
able
to
manipulate
it
using
you
know,
more
structured
typing
and
then
another
example.
K
sometimes you may have a 1D array of registers and you'd like to treat them as if they were two-dimensional. And so this is kind of similar to a union in SystemVerilog, and people use that for similar things, but there are a lot of use cases that aren't handled by that, where sometimes the raw-bits mapping may not correspond one-to-one, and there are other limitations as well that will be described more in the documentation for this feature once it's done. This is also similar to a facade pattern in object-oriented programming, but that's very boilerplate-heavy.
K
It's similar to conversions in functional programming, but those don't allow you to manipulate the underlying representation. And so the reason we came up with the name DataView is that the best analogy is a view in SQL; so for any database people out there, this is very similar to that. And so this is a really powerful feature that I think is going to enable a lot of design patterns that I think people will really like. It's a work in progress, but I look forward to this in Chisel 3.5.
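The view idea can be sketched outside Chisel. This Python class is an invented analogue of the DataView concept, not its API: it lets you read and write a flat AXI-style record through structured names without copying it; the field names here are assumptions for illustration.

```python
# Invented software analogue of the "view" idea: structured access that
# forwards to a flat representation without copying it.

class View:
    def __init__(self, flat, mapping):
        object.__setattr__(self, "_flat", flat)
        object.__setattr__(self, "_map", mapping)
    def __getattr__(self, name):
        # Read through the view into the flat record.
        return self._flat[self._map[name]]
    def __setattr__(self, name, value):
        # Write through the view into the flat record.
        self._flat[self._map[name]] = value

flat_bus = {"AWVALID": 0, "AWREADY": 1, "AWADDR": 0x1000}  # flat AXI-style record
aw = View(flat_bus, {"valid": "AWVALID", "ready": "AWREADY", "addr": "AWADDR"})

aw.valid = 1                # writing the view writes the flat record
print(flat_bus["AWVALID"])  # 1
print(aw.addr)              # 4096
```

Like a view in SQL, the structured object is not a second copy of the data; both names refer to the same underlying state.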
K
I want to also note that there will be a Chisel Community Conference in Shanghai on June 25th. So I made this slide, I submitted my slides, before I got the update: it will be June 25th, not the 26th, just the 25th. It'll be co-located with the RISC-V Summit, so keep an eye on our websites, which I'll include another link to later and maybe put in the chat, for the details about submissions and, you know, more details.
K
What else is going on? You know, I talked last time a bit about verification, and I'm only glossing over it this time, mainly because a lot of this work is by people who are worried about, you know, pre-publication rules. But I can tell you that there's a decent amount of work going on, and I'm very excited for what people, what students, are doing.
K
There's, you know, all these things I mentioned here: there's work on how you specify and run your formal verification, there's constrained random stimulus generation, and there's software-based VIPs that people are working on. So there's a lot going on, but I can't go into too much detail at the moment. We continue to work on documentation, for my Chisel users.
K
We are working on support for the next version of Scala, that's almost done, and then other just minor things like vector literals and better connection semantics, that I'll just leave for, you know, future PRs and documentation.
K
So that's it! Please get involved: go to our website, chisel-lang.org. You can find the community page with links to all kinds of places: previous talks, Gitter to ask questions, or Stack Overflow. Thanks a lot.
A
Thank you so much, Jack. That was, that was a great talk, and we have two questions, so I will share those now. First question: was there a specific reason for choosing Scala, as opposed to other common languages, as the one to embed the HDL into?
K
Yes. I will say that that predates my time; I joined, I started as a grad student at Berkeley in 2014, and I think Chisel was created in 2010. But I can say that, you know, Scala has a lot of support for DSLs, for embedded DSLs; a lot of attributes of Scala make it easier to embed a language, and that was one of the design goals of Scala. It's also a modern programming language.
K
It has, like, you know... functional programming is getting more and more popular, and Scala was, I don't... it's not on the forefront of functional programming, but certainly on the forefront of trying to make it popular, by making it run well on the JVM and stuff. So those are some of the reasons. I know that the original Chisel work was actually in Ruby, and there were performance problems, and so Chisel, being in a compiled language, runs a lot faster than that; or, Scala being a compiled language, runs faster.
A
K
Right, so Professor Nikolić mentioned that in Chisel 3.4 we added SystemVerilog assert, assume, and cover style constructs. So those are at least basic primitives by which you can build a lot of things. We also have ChiselTest, which is kind of a Scala binding to run your tests, kind of similar to what you might write in SystemVerilog. So those are two things where the time frame is that they exist now.
K
You know, there's always more work to be done, and so, you know, I don't have specific time frames for, like, the constrained random stimulus generation or, you know, better formal support, but I can say that they're actively being developed, and if you're interested, you know, I can put you in contact with the people who are doing the heavy lifting of that work, if you reach out to me on Gitter or in the chat.
A
L
Hi everyone. I hope that you can hear me well. Yes? Yeah, okay, so let me start. So today I'm going to talk about who we are, what we do, and then we will focus on one particular product, which is what we call the SweRV Core Support Package.
L
That was the original, like, business model or product that we had, and recently we added also the SweRV Board Support Package, which nicely complements these two items. The EDA technology itself, the tools that we are using for customization: these tools were developed at the university here in Brno, in the Czech Republic, back a long time ago. Actually, we are founding members of the RISC-V Foundation, or RISC-V International nowadays, and we actually introduced the first licensable RISC-V IP back in November 2015.
L
As far as we know, we were the first one. The company itself, the R&D centers, are located in the EU: we have an office in Brno, which is the biggest one, but then also in France and the UK. All in all, more than 90 employees, and we have a presence around the world, including China, of course, and North America, and so on.
L
If you look at the products that we have, let's split them into three parts. First, we have our own RISC-V processors that are designed in-house.
L
They are meant for embedded, low-power designs through high-performance embedded, up to Linux-capable cores, plus multi-core solutions as well. So these are the CPUs that were developed by us, and they are developed using our methodology that builds on top of Codasip Studio.
L
Codasip Studio is the second product; it's the EDA tool that I was talking about a while ago. This is an EDA tool with which you can create pretty much any CPU from scratch. We did RISC-V in this case, and then you, as a customer, can customize the processor in any way.
L
It means that, for instance, if you have some secret sauce that you would like to add inside the CPU, or if you have some kind of key differentiation point, then you can add it there, thanks to the methodology. And the methodology itself comes with the full flows, which means design as well as verification, as well as the SDK, as well as debugging. Pretty much everything is auto-generated for you using the Studio. So you get the RTL.
L
You get a verification suite that you can run and operate. You get stimuli, either fixed or randomized. You get the SDK, including a compiler that is aware of your changes, and many more. So this is the second product, which is called Codasip Studio, and the last product that I would like to focus on today is Codasip's SweRV Core Support Package. So what is this, actually?
L
Well, it's a whole environment. It's not really a package itself; it's an environment for integration or deployment of these SweRV Core CPUs, and I will talk about what they are in a minute.
L
So if we focus on the third item, which is the SweRV Core Support Package, let me start with a short introduction of SweRV itself. What is SweRV, actually?
L
It's an implementation of the RISC-V spec. Originally we had SweRV EH1; the EH1 was developed by Western Digital, and it was open sourced via CHIPS Alliance. Later on, Western Digital also released two more cores: the EH2, which is kind of a successor of EH1, and then also the EL2.
L
Now, if we look at the feature set of these cores, and start with the EH1, which is the first one that was introduced: it's a single-threaded, dual-issue implementation, a really high-performance embedded core, a very nice one. You can configure it in many ways, from simple things like the branch prediction through the number of pipeline stages. You can configure many things, and everything is available on the CHIPS Alliance GitHub for you. In the case of EH2, they added a second thread.
L
So the performance is really nice in this case, because you have real multi-threading in the CPU, and I think it was the first of its kind in the RISC-V domain.
L
Then, if you need something on the lower end, if you just need some housekeeping CPU or something like that, you can choose the EL2, which is single-threaded again and, as you can see, is just a four-stage pipeline.
L
So again, all of them are available on GitHub, which is good. But then you have the RTL, and the question is whether that's actually all you need, or whether you need more than that. Because if you take the RTL, that's fine, but you also need other tools or frameworks, such as a compilation toolchain or SDK that's compatible with the RTL.
L
You have to have a debug infrastructure as well, so you can debug properly on the CPU. You should have some kind of comprehensive documentation that helps with the integration of the CPU, and so on. Then there is a question about support: if you as a customer would like to create a real chip out of this one, but there is a bug in it, who are you going to call? Western Digital, or the community, or who?
L
Then, if you do some kind of synthesis, you may end up with some troubles there as well, because the synthesis script is not open sourced and you have to write it. So what if you make a mistake there? Again, who are you going to call? And similarly for other parts, like RTL simulation: although there is support for Verilator, what if you need support for commercial-grade RTL simulators, and so on?
L
So as you can see, the RTL is really essential, that's true, but it's not enough. You really need more than that: you need to collect more tools and more data to be able to integrate the CPU, and to program the CPU later on in the real design.
L
These questions were actually the kickoff for our support package, because we said: okay, let's create a package, or an environment, in which you can get pretty much everything you need in one place and then start with it on your side. You don't have to care about which version is compatible with which, in the case of, I don't know, compilation and simulation and the RTL.
L
One is the free version, available on GitHub through the CHIPS Alliance repositories. It's free of charge, for students or educational purposes, and it already contains a lot of stuff, including the whole compilation toolchain and the RTL, for the CPU itself but also for the SoC. There is a SweRVolf SoC that integrates the SweRV cores; originally it was done for EH1.
L
We
enhanced
the
scroll
wall
for
patch
this
robot,
so
it's
we
have
a
version
that
is
able
to
talk
with
eh2
as
well
as
el2,
and
you
can
use
these
variables
with
the
open
source
compilation
tool
chain
based
on
the
gcc,
to
run,
for
instance,
benchmarks
to
get
really
nice.
You
know
benchmark
numbers
out
of
out
of
the
swerve
and
reproduce
you
know
some
some
of
the
nice
benchmarks
using
using
correct
tools.
L
There is a bunch of documentation. It's done in a way that you basically download a Docker image that contains pretty much everything: it contains the free and open-source packages that you can look at, and each package comes with documentation that shows how to use it. So, for instance, there is documentation that relates to the benchmarking.
L
There is documentation that relates to the IDE for software development and how you can create C projects really easily in it, because the SweRV support package presets pretty much everything for you in terms of software development. There is also automation for the debugging, so you can use the proper version of OpenOCD with the CPUs, and then you can also do the debugging either on real silicon, if you have it, or on an FPGA.
L
So that's what I have mentioned: everything is prepared for you, and that's part of the free version. But we also have the pro version. The pro version comes with additional stuff in it, and basically, if you look at the content, you can find that there is a set of EDA scripts that cannot really be easily open sourced. So there is a complete flow for commercial-grade EDA tools, from guys like Cadence or Synopsys.
L
So then you can really simply integrate or install this commercial-grade package inside the environment, and you get pretty much all the scripts you need to run the synthesis flow. There are also other parts, like an advanced verification flow, including a verification environment for SweRV. So if you would like to do some kind of modifications, we may help you with that, because we have verification for it.
L
On top of that, there is some other feature set that you can check on GitHub to find out what's in it.
L
Last but not least, what's important is that with the pro version we offer professional support, which means that you can have an account where you can ask questions related to SweRV in general. It can be on the integration side, it can be on the feature set, it can even be on the customization. So this comes with the pro version; that's the commercial-grade version of the SweRV Core Support Package.
L
Speaking of the commercial-grade features or offering.
L
We actually created some add-ons to this SweRV Core EH1. The add-ons basically bring more functionality to it, namely a high-performance floating-point unit, single and double precision, so it follows the RISC-V F and D extensions.
L
So then you can have the EH1 in a configuration with it, and the benchmark numbers are quite nice with the high-performance FPU. Then we also noticed that quite a few customers or prospects that we were talking to were missing a data cache.
L
So this is something that we also fixed, or added on top of EH1. Now the data cache is there as well, and you can configure it similarly to the instruction cache, which means that you can configure the interface and certain features of the data cache, such as the size, the number of ways, the cache line, these kinds of things that you would expect to be configurable. That's part of the add-ons as well.
L
Last but not least, we integrated the Zbb extension for bit manipulation. As you may know, EH2 and EL2 come with some bit-manipulation instructions already, and I think the latest update on GitHub with regards to EL2 actually comes with quite a few bit-manipulation instructions. That's kind of missing in the original EH1.
L
So this is the reason why we decided to integrate it as well, and the selection of instructions was based, again, on the customer engagements that we had; we tried to select the set that was asked for the most for EH1.
L
So you can ask for the all-in-one package, where you get pretty much everything, but you can also pick and choose. For instance, if you know you need just the floating point, then we can really do that for you without an issue. So that was a high-level view, or description, of what we have in terms of the SweRV Core Support Package.
L
It has pretty much everything that you need to get up and running, including the software stack; you can just enjoy the feature set that it comes with, because again, it's not just a simple package: it's basically an environment with many packages in it, based on Docker. And if you need more than that, if you are really serious about the integration and you think that you need support, for instance, or you need some additional add-ons like, I don't know, the FPU, then you can move to the pro version, where we are more than happy to give you more support.
L
Well, the support that we are providing covers the core itself, plus also the tools. So, for instance, we did a lot of improvements in OpenOCD, to make it up and running for EH1 as well as other CPUs like EH2 and EL2. So we are supporting these projects as well, and actively donating back what we do.
A
Our next presenters will be giving an introduction to the OpenROAD project, so it gives me pleasure to introduce Professor Andrew Kahng from the University of California at San Diego, and also Tom Spyrou of Precision Innovations, who have both come to share the development of OpenROAD and the progress that they have made. So, Professor Kahng.
M
Great, okay. So hello everyone, my name is Andrew, from UC San Diego, and Tom Spyrou and I are extremely excited to give this update on the OpenROAD project.
M
OpenROAD delivers an open-source RTL-to-GDS EDA system as part of the DARPA IDEA program, and there are a lot of friends and supporters of OpenROAD that we can see in this meeting. So we'd like to start out by saying thank you from the entire team. And this is the OpenROAD team: Tom has been the chief architect and technical project manager of OpenROAD since July 2019, I serve as the PI of the project, and some new faces in the past year include Professor Larry Clark and Dr. Don McMillan.
M
A shared hierarchical database, engine integrations to support tight incremental optimization, a unified executable, easy-to-use Tcl or Python scripting, and a lot more. So OpenROAD's scope includes all of digital layout generation, as shown here, from the IDEA program's original definition in DARPA's broad agency announcement, as it's called. In the past year we've taken on the database task of the IDEA digital flow, including advanced-node support, you know, LEF 5.8 and more, and Arizona State's ASAP project is a new base technology task in OpenROAD.
M
At the right is a GF12 single-core BlackParrot SoC in the University of Washington's pad frame. Our database has seen a lot of improvement. The ASAP7 research PDK was open sourced in mid-December; we've added to the team, released an extraction tool, and turned our focus to PPA, power, performance and area, along with a growing set of users.
M
The architecture of the OpenROAD software has also evolved substantially, for quality, maintainability and development velocity. There's now just one repo, with only one submodule; logging, CI dashboards and metrics collection all support our bring-up of intelligence and automation, which is our ultimate goal. And this is a picture from Efabless showing their MPW-1 shuttle in SKY130.
M
And here's a gallery of some test cases; you can see designs with and without macros, in 12 and 65 nanometer. These have OpenROAD-generated fill; they're clean on all timing, DRC, LVS, ERC and antenna checks, and the validations are by our project's internal design advisors team, with commercial sign-offs, as mentioned in the slide.
M
Phase one focused on automated RTL to tape-out, clean GDS, but PPA is our focus in our current phase 2A, that's year 3, basically. The main vectors include synthesis, placement, clock tree synthesis, and optimization in our resizer, plus the learning and insight that we get from metrics collection infrastructure and large designs of experiments.
M
Improving some levers like sink clustering in CTS, and simply starting to aim for better fmax and density, already shows some significant improvements. These are the green numbers, relative to the past October's black numbers, and you can see improved wire length and skew, and a 36% improvement in effective fmax. And let me say a bit about one of our PPA vectors, clock tree synthesis; this is just an example.
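The effective-fmax figure of merit mentioned here can be sketched numerically. The talk does not give its formula, so this assumes one common convention: fold the worst timing slack at the run's clock period into the frequency the design could actually have closed timing at. All numbers are invented for illustration, not project results.

```python
# Hedged sketch: "effective fmax" computed under the common convention
# f_eff = 1 / (T_clk - worst_slack). The run numbers below are made up.

def effective_fmax_mhz(clock_period_ns: float, worst_slack_ns: float) -> float:
    """Frequency (MHz) at which timing would have closed exactly.

    Negative worst slack lengthens the achievable period; positive
    slack shortens it.
    """
    return 1000.0 / (clock_period_ns - worst_slack_ns)

# A run at a 2.0 ns clock that misses timing by 0.5 ns...
before = effective_fmax_mhz(2.0, -0.5)   # 1000 / 2.5 ns -> 400.0 MHz
# ...versus a later run that closes with 0.16 ns of positive slack.
after = effective_fmax_mhz(2.0, 0.16)    # 1000 / 1.84 ns -> ~543.5 MHz

improvement_pct = (after / before - 1.0) * 100.0
print(f"effective fmax improved by {improvement_pct:.0f}%")  # -> 36%
```

With these made-up numbers the comparison works out to a 36% effective-fmax gain, the same shape of green-versus-black comparison described on the slide.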
M
We can collect thousands of synthesis and place-and-route run data points in a day. For synthesis with Yosys and ABC, we've studied Efabless's recipes alongside our default recipe, which comes from the synthesis experts who have newly joined our team, and this is actually a rapidly moving area in the project.
M
We also use big DoEs to study trajectories of QoR and uncover flow issues. The upper-right picture being shown shows benefits of higher utilization. The lower right shows how our platform run script might stack up against many possible settings; it's the yellow circle, which has pretty good slack and area properties. And the colorings show how post-CTS slack, on the x-axis, is an early indicator of router failure; this kind of experiment allows us to ratchet up the baselines for our test cases.
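The early-indicator analysis described above can be sketched as a simple bucketing of run results: group flow runs by post-CTS worst slack and compare detailed-route failure rates between buckets. All run data below is invented for illustration; it is not OpenROAD output or the project's actual DoE data.

```python
# Hedged sketch of the DoE-style analysis described in the talk: bucket
# flow runs by post-CTS worst slack and compare detailed-route failure
# rates. Every data point here is made up.

# (post-CTS worst slack in ns, did detailed routing succeed?)
runs = [
    (0.12, True), (0.05, True), (-0.02, True),
    (-0.10, False), (-0.25, False), (0.30, True), (-0.18, False),
]

def failure_rate(bucket):
    """Fraction of runs in the bucket whose detailed route failed."""
    return sum(1 for _, ok in bucket if not ok) / len(bucket) if bucket else 0.0

negative = [r for r in runs if r[0] < 0.0]
non_negative = [r for r in runs if r[0] >= 0.0]

print(f"route failure rate, negative post-CTS slack: {failure_rate(negative):.2f}")
print(f"route failure rate, non-negative slack:      {failure_rate(non_negative):.2f}")
```

In a real DoE the buckets would come from thousands of metrics-collection runs, but the shape of the analysis is the same: a markedly higher failure rate in the negative-slack bucket is what makes post-CTS slack useful as an early indicator.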
M
Our tools, or all engines, now use the same logging API and nomenclature, and we've been evangelizing our metrics naming convention and approach to both researchers and industry, whether Si2 or the IEEE Council on Electronic Design Automation, because right now there's no standard platform out there for EDA and its users. And in OpenROAD this infrastructure serves many purposes, from dashboards to analytics.
M
So this supports metrics collection, and now I'd like to switch gears a bit; this is also the transition to Tom, who'll take this talk the rest of the way. I'd like to remind you that OpenROAD development has to achieve critical mass, critical quality, maximum velocity, maximum openness; it has to support clean foundry tape-out, automation, and the needs of early-adopter users, which is why our project, which aims for no human in the loop, has a very capable GUI. And so let me sign off with a video from Matt Liberty.
D
The OpenROAD GUI serves the dual purpose of allowing users of OpenROAD to investigate their designs, and developers of the OpenROAD application to investigate their algorithms. Here is a design which the user can zoom in on. They can pan. They can select individual objects for further exploration; the selection here is showing what's been selected.
M
And let me turn the presentation over to Tom.
E
Thanks, okay, perfect. So I'll show a little bit more about the GUI and then go on and talk about other topics. Here you can see the GUI being able to support RDL, showing 45-degree geometries.
E
Here, Andrew had a small version of this: this is the clock tree, both the placed and the routed version of the clock tree. This design is BlackParrot in GF12; this is the one that Andrew had talked about that we took all the way through to a clean tape-out.
E
And there's a lot more, and it is an easy-to-extend architecture, so as more people contribute to the project, people may come up with ideas for what they want to see visualized. Especially visualizations, like Matt showed, of the algorithms running themselves, for debugging purposes, are turning out to be really helpful.
E
A timing characterization flow using Cadence Liberate is in progress now, and register files, ROMs, CAMs and TCAMs are also in progress. We see ASAP7 as a great platform for open collaboration and system exploration, including this vibrant open-source hardware community; people could do designs with this, and it looks like a real process. It's also going to be great for us to have regressions.
E
ASAP5 will also be open sourced in the future and will use horizontal nanowire transistors, and the ground rules track advanced-node scaling boosters such as single diffusion break and contact over active gate. A 6.5-track cell library and physical verification are well underway; at the bottom here you can see a D-latch layout and an SRAM cell array.
E
The figure here shows the main inputs: Verilog, signal mapping, footprint, IO library; and the footprint file can be automatically extracted from a previous layout DEF, if there is one. Here are some screenshots in GF12LP, staggered pads as well as flip chip, and also another screenshot from SkyWater 130.
E
One thing that I would like to highlight is that OpenROAD is unique in that it's a partnership between EDA academic research and industry veterans. You saw from Andrew's introduction that we have team members from several universities performing core research, large industrial semiconductor companies providing guidance and priorities, and industry consultants with extensive EDA experience performing key development.
E
This unique project and blend of expertise is focused on breaking new ground in terms of automation of RTL to GDSII, and also on creating a robust, industrial-quality piece of software, so it can be a basis for industry-relevant research and usable for important target users like the defense industrial base.
A
Thanks, both, for an excellent update and information on the OpenROAD project. I think it's very exciting what the team has been doing and providing to the industry, and it certainly helps illustrate to many the complexities involved in the physical implementation and analysis space. I have one question here from the audience, which is: are you working on in-memory compute, or doing any collaboration for it on SRAM or flash, using OpenROAD?
M
Well, I thought Tom was taking that one, but at a high level: not specifically in-memory compute, but we do see folks trying highly tiled AI/ML architectures, which are extremely SRAM dominant; one example would be performers in the real-time machine learning program. And we do think quite a bit about front-end SoC support, or support for front-end chip planning and even design pathfinding, like architecture exploration, but not in-memory compute per se.
E
I guess I could take that one. So we do have a few users; we actually have three users. You saw Efabless and all the tape-outs; if you lump all of that into one, that's really 40 different users, but we do have some users who are modifying the code and creating their own scripts.
E
You can see that Efabless put OpenLane together, which is a recipe very targeted at SKY130, so we definitely do have people creating their own recipes, and we try to collect the best of those recipes and put them into OpenROAD. If somebody is doing something complicated with a script, we try to pull that into the application and make it a command.
E
That type of process, and the application of those resources, is usually customer driven. So I hope that that will happen in the future. We haven't seen it yet, but as we get users using OpenROAD to tape out at various foundries, and the foundries see that there is a demand there, then I think that's something that would be very helpful to push for.
M
I think it's a difficult question to answer, yes. I mean, if you look at the history of how ASAP7 was developed by ASU and ARM, even the sort of clean-room approach, where everything that went into the technology definition and the ground rules definition had to be publicly sourced, is a really high bar. And there are the FreePDK45s and 15s.
M
You
know
I
think
people
have
kind
of
mentally
moved
on,
and
so
it's
unlikely
in
my
opinion
that
you
know
people
will
invest
a
lot
of
volunteer
effort
to
to
bring
up,
let's
say,
an
equivalent
asap
22,
for
example,
but
that's
my
personal
opinion
everything's
solvable
with
resources
and
incentives.
So
that's
up
to
the
term.
I
suppose.
A
I was just curious: I know you mentioned that the software stack is running in Google Cloud; can you comment a little on the scale-out capability of the software stack?
A
Very good. Well, I don't have any further questions, and I really appreciate the excellent presentation and the work that you both, and the team, have done on OpenROAD. So thank you so much.
B
Great, all right, well, good morning or good afternoon, everybody. I'm Brian Faith, the CEO of QuickLogic, and the last time I spoke with the CHIPS Alliance it was around the fact that we had taken this big step into open-source FPGA tooling, coming from a proprietary-tools past. So this presentation is sort of the next step in the journey along those lines. And for those of you that don't know, QuickLogic is a fabless semiconductor company.
B
We started as an FPGA company, and today we have discrete FPGAs, we have MCU-plus-embedded-FPGA SoCs, and then we have an embedded FPGA IP business as well. So programmable logic is sort of the root of everything that we do, and so it's fun to talk about this in the open-source context. When I started at QuickLogic 25 years ago, everything that we did was proprietary.
B
We didn't necessarily do it all ourselves; sometimes we partnered with other companies, but every element of the toolchain was in fact proprietary. So, starting as an application engineer, one of my jobs was to build out schematic macro libraries that would then go into the schematic tool, and write Verilog simulation models that would go into simulators, and then validate all of these different design flows from different vendors, for Verilog and VHDL and schematic capture, with folks like Mentor Graphics and Cadence and Synopsys.
B
And then, of course, all of that sort of comes into the proprietary place-and-route tool in the middle there, in the brownish colors, to create a netlist for a customer, so they could actually program a part. So there are lots of moving parts there, many of which are not ours, and all of which were proprietary.
B
And it was interesting, because I actually taught a programmable logic class at Santa Clara University, probably about 20 years ago, and I can tell you that most of the time spent creating the curriculum for the class and the labs was really about how to take the same design and run it through several different vendors' tools. And if you think about that, that's really not a value-add for the student.
B
Change is definitely painful, and change is risk, and if you think about engineers that are building products and putting them into systems like flight control for airplanes or industrial equipment, you don't want to take a lot of risk on something that you may not know. And it's been this way for every FPGA vendor in the world, for their entire existence, and there have been, you know, over 60 of these companies that have come and gone in the last 30 years in this area.
B
It's really this robust software that we have, and there's no real incentive for the FPGA vendors to change, because the notion of this proprietary toolchain really is that walled garden. People get accustomed to it; again, they don't like change, and so it just sort of perpetuates that walled-garden approach.
B
So the last time that I presented, as I said earlier, it was really about this first step that we had taken in open-source tooling support of our devices. And as I've credited many times, I'll do it again here publicly: Tim Ansell and his persistence in convincing myself and QuickLogic that this was the right thing to do is why we're presenting this slide today, along with many of the folks over at Antmicro helping take our architecture and bring it into the open-source tooling domain.
B
So today, in this particular chip showing here, we have a microcontroller and an embedded FPGA block. It has SymbiFlow support, it has Renode support, and it has a variety of real-time operating systems that run on the Arm core, so Zephyr and then FreeRTOS, and we had a little dev kit up there in the upper right. So that was a year ago. What has happened in the last several months is that the toolchain is starting to get filled out.
B
Even more so now, with Migen and MicroPython and TensorFlow Lite running on it, and we have all these other dev kits. And the interesting thing for me is that, again, coming from this proprietary past, everything that we did, we did for ourselves.
B
It was closed and proprietary, and now we're starting to see this virtuous cycle of people starting with things that are on GitHub and then adding to that and improving, building new tools on top of tools. I know several people on this call are part of that, and I'd like to express my gratitude to the community for making that happen. But you can see now... Brian? Yeah.
B
Oh, there we are, okay. Okay, well, great, wrong screen, thank you. So this was that graphic on the FPGA tools being proprietary.
B
This was the slide on why vendors resist: the walled garden. These are the tools from a year ago: as I said, the SymbiFlow support for the embedded FPGA, Renode for the whole chip, the real-time operating system support on the Arm core. And then today that is really getting filled out further, with TensorFlow Lite and Migen and MicroPython. So now you can really start seeing this virtuous cycle happening, which is sort of exactly what Tim Ansell talked about, and it is happening.
B
And then, I think, the really neat thing for me personally, and this sort of goes back to the fact that I did teach that programmable logic class and still have a strong connection with my university, is that now one of these companies that works with us has actually taken SymbiFlow and started to run it on an Android smartphone and on a Raspberry Pi.
B
They're getting this into the curriculum now at IIT Hyderabad in India, and when you think about what that opens up, the capabilities now, where somebody could literally take a $10 dev kit with open-source software and start doing FPGA design with their Android phone; you know, so many people in the world have an Android phone or a Raspberry Pi. It really opens up, and truly democratizes, the capability to use this technology for cool and interesting and learning purposes. And to me, that's just so powerful, and it's so neat to watch.
B
So I think this is just going to start and perpetuate further. Bringing it back to QuickLogic: we have our own roadmap, and you know the previous chip was around Arm, and this next one is around RISC-V. In this one, there's already Renode support, and SymbiFlow support for the embedded FPGA.
B
But I think that one of the neat things about the open-source tools here is that it actually makes it very efficient for us at QuickLogic to do architecture exploration.
B
So, you know, as we look at these different use cases, like visual wake words with TensorFlow Lite for Microcontrollers, and we start mapping them to this architecture on the right; you know, in the past.
B
If we wanted to work with proprietary tools only, specifically with proprietary synthesis vendors, we'd have to take a lot of time and a lot of money just to try little different changes in the FPGA architecture, which they would then have to map to, or infer blocks for, and then see what the net result of that was. With these open-source tools, that's within our control now, or at least we have a much stronger degree of influence.
B
We can do that architectural exploration and optimize the architecture for the algorithm, versus the other way around. And so, I think, again, it just speaks to the power of open source. So, just sort of wrapping this whole thing up: I think a lot of this comes down to changing our mindset, and I know for us it took like a year of talking with Tim to get to the point of having this different mindset. But for FPGA companies in general.
B
There's a massive NIH factor here, of trying to say that open-source tools are not as good as proprietary tools, and I don't know how we overcome that, but I think there's a lot of work to be done there, just in general, to convince people otherwise, and to convince them that there's more upside than downside. Any time you have a walled garden, when you start thinking about breaking down those walls, that's viewed as risk, and so people need a way to view this as more gain than pain.
B
It just takes more persistence and creativity from the community to do that. So we're on board: we've completely switched from our proprietary tools to open-source tools, which I'm really happy about, and that's allowed us to do things like these devices that I just showed you, the roadmap devices for our embedded FPGA IP cores. We can very quickly spin up software support for a new core that a customer of ours may want.
B
We were also able to participate in the Efabless Google SkyWater 130 Open MPW project, which we would not have been able to do without the open source tools. That was really the enabler for us to quickly participate in that project, which I'm pretty excited about. So anyway, I think the future is bright.
B
We're really excited, and I have absolutely zero regrets about moving in this open source direction, because it's the right thing, and I think the industry will catch up eventually, just not quite yet. That is the conclusion of my slides, and I do apologize for being on my wrong screen as we advanced through these. Any questions?
A
B
Well, we support it in the sense that we've worked with Google and Antmicro quite heavily over the last year and a half on getting our architecture support in there.
B
I would say that, contribution-wise, there's been more from the community as far as improving the tools, and it's not because we don't want to, it's just because we're still shifting the mindset and the skill set from proprietary development to open source. But we're definitely making contributions in that area and, like I said, every new device or core that we do today has open source tool support. We're not doing any more proprietary development in that area.
B
Yeah, on our GitHub repo we have a whole section of quick-start user examples: you can take the flows, download them, install the packages, and there's reference code and designs that you can take through the flow. We also have a lot of webinar videos now that we've been recording over the last month and a half, almost one a week, to give people a way to get started.
A
So I have a question here: did the quality of results improve or degrade when switching to the open source synthesis and place-and-route tools? Are they on par with the closed source counterparts?
B
That's a great question, and I meant to actually include that in my slides. So when we started... actually, let me just take a step back: at ORConf in 2019.
B
I remember Dave Shah presenting some results on place and route and how he was tracking against the proprietary tools, and that was one of the data points that we used to make the decision to go into this whole thing with respect to our architecture. Our devices, for reference, tend to be on the smaller side of the mainstream FPGA market, so we're talking about thousands of LUTs as opposed to millions of LUTs, and so for designs of that size.
B
The open source tools that we have access to are on par with the proprietary tools. In fact, I think it was plus or minus 10% variance or less, depending on the design. So that was one of the data points that made us comfortable that we no longer had to do proprietary tools. We used to bundle Precision synthesis from Mentor Graphics, now Siemens, and we no longer do that, because the Yosys and SymbiFlow flow is very good from a quality-of-results perspective.
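The comparison described here can be sketched as a small script. The design names and LUT counts below are hypothetical stand-ins for per-design reports from the two flows, used only to show the plus-or-minus variance calculation.

```python
# Toy sketch of the QoR comparison described above: compare per-design
# results from an open flow (e.g. Yosys-based) against a proprietary
# baseline. Designs and LUT counts are hypothetical.
baseline = {"uart": 412, "spi": 188, "aes_lite": 2980}     # proprietary flow
open_flow = {"uart": 430, "spi": 179, "aes_lite": 3105}    # open source flow

def variance_pct(base: int, other: int) -> float:
    """Signed percentage difference relative to the baseline."""
    return (other - base) / base * 100.0

for design in baseline:
    delta = variance_pct(baseline[design], open_flow[design])
    print(f"{design}: {delta:+.1f}%")

worst = max(abs(variance_pct(baseline[d], open_flow[d])) for d in baseline)
print(f"worst-case variance: {worst:.1f}%")  # inside the ±10% band quoted above
```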
A
B
Well, we sell FPGAs and IP cores, so I don't support competitors' products, but these tools do. SymbiFlow, and the next presenter is Tim Ansell, who knows SymbiFlow better than anybody and could probably comment on that, has support for Xilinx, but QuickLogic itself does not support Xilinx nor Altera.
A
And our last presenter today is Tim Ansell from Google, who is going to talk about fully open silicon down to the transistor. So, Tim.
A
Yes, you're a little bit over-amplified too, by the way.
N
Sorry. The topic I'm talking about today is fully open silicon, down to the transistor.
N
That is a pretty long topic, so I did a full paper on this at ICCAD 2020, and if you want much more detailed information about some of the stuff my team is doing at Google, you can go and read that paper.
N
I also want to say that I'm a software engineer, not a hardware engineer; I self-identify as a software engineer, and I'm only a newcomer to this hardware design space. I'm not a newcomer to open source: I've been doing open source since 1995, back when Linux was this fresh new thing battling the old, traditional Unix vendors, of which very few still exist, and I've been at Google for a long time as well.
N
I think it's actually almost up to 13 years now. One of the things that I know about Google is that it runs on computers. Google kind of pioneered this idea of using lots and lots of computers, especially commodity off-the-shelf computers, to do distributed compute, and it uses a lot of them. This is probably one of the smaller of the data center type pictures.
N
The problem with that is that demand for compute power at Google continues to grow exponentially. Previously we were able to keep up with it thanks to things like Moore's law, but now we're increasingly seeing that we're going to need to do a lot more hardware design to make it possible to keep up with these demands. One area where we have been highly successful, and I wasn't involved at all with the TPU.
N
But the TPU is a really great example of being able to scale to the growing demand for ML compute, which was growing at rates of something like 10,000x, by using specialized hardware, and the team who does this has done an absolutely impressive job. But they have definitely felt the fact that, as the TPU design has gotten more complicated, verification costs and design costs have gone up.
N
There's also quite a big gap between what you could ideally get in a process technology and what you actually get, and so my team started to look at how we can enable people at Google to do ASICs like the TPU without needing a team like the highly experienced team behind the TPU. We started by looking at what you need to build an ASIC, and you kind of need three parts. Coming from an open source background, I was very interested in understanding whether these parts had open equivalents, and it turns out that for a long time there has actually been quite a thriving open source IP library ecosystem, and the recent success of RISC-V has made that explode massively. There's now a really big, thriving ecosystem of fully open source IP.
N
If you think about what a PDK is, it's all the low level details needed to actually create a device, and since you want to actually create a device, this data is really, really important. When we looked at what open source PDKs existed, there was a whole bunch of stuff, but it was non-manufacturable and it frequently had restrictive licenses.
N
When I wrote this slide, ASAP7 was under a restrictive license; now it's under a fully open source license, and I'm really glad Andrew managed to do that, but it's still a non-manufacturable PDK. As a consequence, if you look at Efabless: they created an open source design called Raven, which had fully open source RTL and used fully open source tools, but they used the X-FAB 180 PDK, which was not open source.
N
The PDK data effectively infected the rest of their design, and meant that they couldn't release a system that anybody else could just take and tape out at X-FAB straight away without having to think at all.
N
So we got thinking, and we thought that the lack of a manufacturable PDK was one of the biggest blockers towards having a fully open source IC, and this roadblock was something we thought we could do something about. After a lot of work, I managed to find a foundry that was willing to collaborate with Google, and together with SkyWater, Google released.
N
What I believe is the first fully open source, 130 nanometer, manufacturable PDK. And when I say fully open source, I mean it's licensed under the Apache 2 license: you can just go to GitHub, clone the repo, and all the data is there. There are no NDAs to sign and no lawyers to be involved.
N
For a 130 nanometer node, the SkyWater process is a fairly expansive process that includes things like inductors, high sheet rho poly resistors, MiM caps, SONOS shrunken cells, and high voltage support, including five volt IOs and high voltage extended-drain NMOS and PMOS. So that's really cool, but coming from the software world I got really frustrated with the fact that all the documentation was delivered as giant PDF files, and so we converted the documentation to be like what you find with a modern software project.
N
It's hosted on Read the Docs using Sphinx. This is kind of what the documentation looks like, and it's publicly available and searchable with Google. Here's some information about the various design rules you have to comply with; these are the low level details about exactly what is and isn't manufacturable.
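As a rough illustration of what those design rules encode (the layer name and numeric limits below are invented for illustration; the real values live in the SkyWater PDK documentation), a minimal width-and-spacing check might look like:

```python
# Toy illustration of a design-rule check: every geometry on a layer
# must meet a minimum width and minimum spacing. The rule numbers are
# made up; real SKY130 rules are in the PDK documentation.
RULES = {"met1": {"min_width": 0.14, "min_spacing": 0.14}}  # microns, hypothetical

def check_wires(layer: str, wires: list) -> list:
    """Each wire is (width, spacing_to_neighbor); return rule violations."""
    rule = RULES[layer]
    violations = []
    for i, (width, spacing) in enumerate(wires):
        if width < rule["min_width"]:
            violations.append(f"{layer} wire {i}: width {width} < {rule['min_width']}")
        if spacing < rule["min_spacing"]:
            violations.append(f"{layer} wire {i}: spacing {spacing} < {rule['min_spacing']}")
    return violations

errors = check_wires("met1", [(0.16, 0.20), (0.10, 0.14), (0.14, 0.12)])
for e in errors:
    print(e)
```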
N
There are stack-up diagrams, but that by itself isn't enough. So there are also digital standard cells, IO and periphery cells, a full set of base primitives and DRC rules, and analog RF primitives, and there should be SRAM and flash build spaces any time real soon now. That's been pending a while, and taking a lot longer than I had hoped to get released, but it's definitely coming, and we've already seen certain things be released there.
N
We have a lot of different standard cell libraries, which have quite a comprehensive list of cells, and you can kind of see how the different standard cell libraries are optimized for different use cases; the high voltage one is obviously optimized for high voltage. And I don't have to be scared about showing you this GDS. This is a cell from SkyWater; I have no idea what the cell is, but I'm sure all you ASIC designers out there can tell me what it is, and I can just share it with you. This is the SRAM bit cell, in both single port and dual port variants, and you can see that it's significantly smaller than the D flip-flop cell on the SkyWater process. The thing is, you need to build a full memory out of this, and so I'll talk a little bit about that.
N
Looking ahead, we're also doing things like trying to automatically generate symbols and schematics of all the parts in the system, because we want people to do more than just digital design. We want people to evolve the PDK, including the standard cells, and to prove this possible.
N
You can now have a fully open source ASIC, and I think that's a pretty cool achievement. That had a lot of contributions from a whole bunch of different partners, but what I really care about is the ecosystem, and how do we grow the ecosystem? Well, we started out by taping out some chips to prove that you could do this. We also did a talk series, which currently consists of seven talks, and I also launched a fully open source shuttle program.
N
This is not your average shuttle program; we're taking a very different philosophy here. We want your prototypes to be considered throwaway and not uniquely precious. Currently, when you get back your prototypes from an MPW run, you mount a couple on a board and they become the most precious things on the planet. You let none of the interns near them, because if they spill coffee on one it's the end of the world.
N
What we want to see is that your prototypes are shareable: you have plenty, so you can give them to people. We also want to see you fail, not because you're doing things poorly, but because you're trying things that have never been done before.
N
There's this idea in Silicon Valley that if you're not failing some of the time, you're not trying hard enough. We also want to make this open to anyone: whether you're an academic, maker, hobbyist, or commercial startup, it doesn't matter, you can participate. To participate, your design has to be open source. There are roughly 40 slots per run; you go to Efabless and you submit the design, and there's a common harness which allows us to have a standard packaging system and also gives you a lot of functionality, like an in-IC logic analyzer. You still get plenty of space, 10 mm squared, and our aim is to get you back 100 packaged, ready-to-go ICs.
N
This is kind of an example of the user space and the harness, and you can go and read about the harness. We want to do multiple runs: the first run November 2020, the second run mid 2021, and a couple more in 2021 and 2022.
N
If you're paying attention, you might notice that November is in the past. We had 40 designs go out on MPW1, and they came from a wide variety of different types of designs: we had processors, we had crypto miners, we had radio receivers, a whole bunch of analog, a whole bunch of FPGAs, and so that was really, really cool. But I think one of the coolest things was that 60% of the designers had never done an ASIC before; they had no experience doing an ASIC, and MPW1 was their first ASIC.
N
I believe one of the MPW designs was even done by a high school student, so that is pretty cool. I'd also like to say that the Chisel graph is definitely pretty impressive, but I think the SkyWater one is a little bit more impressive.
N
It really shows how much pent-up need there was for a fully open source PDK, and we see this from our Slack channel as well, which anybody in the world can join. It has over a thousand members, and the analog design channel alone has over 400 members.
N
We're getting roughly 200 new people every two weeks downloading the PDK who have never downloaded it before, and that's pretty amazing to me. We're seeing somewhere between about 800 and 1,700 downloads of the PDK every week, and this is definitely very different to what people have previously said was possible. So I highly recommend that you join our mailing list, join our Slack, and get involved. We've removed one of the roadblocks that I believe was preventing a lot of new innovation.
N
My ICCAD paper, though, identified four roadblocks. If you go and read my paper you can understand all four, but to quickly go through them: one of the big things was place-and-route tooling, and we've been collaborating very heavily with the OpenROAD project to improve the quality of the tooling and make it work really well on Sky130.
N
I know the SystemVerilog topic came up multiple times, and we're investing very heavily in improving SystemVerilog support in the open source tools. In fact, we're investing so much that we have two parsers.
N
It is possible to write SystemVerilog code that is invalid before preprocessing but becomes valid after preprocessing, and so this is Google's Verible project. We're also working with another parser called Surelog to create fully open source simulation and synthesis tooling, and so we're trying to separate the parsing from the back end tooling, using a thing called UHDM to do that. Surelog is bolted onto the front of both Yosys and Verilator, and Antmicro demonstrated some very interesting results of being able to use this.
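The preprocessing point can be shown with a tiny sketch. The macro, module, and one-pass textual expansion below are invented for illustration; a real SystemVerilog preprocessor, such as the one in Verible, additionally handles macro arguments, conditionals, includes, and much more.

```python
# Minimal sketch of why a (System)Verilog parser must run the
# preprocessor first: the raw text below is not a valid module by
# itself, because the `END macro hides the closing "endmodule".
# The macro name and module are invented for illustration.
source = """`define END endmodule
module blinker;
`END
"""

def preprocess(text: str) -> str:
    """Naive one-pass macro expansion (real preprocessors handle
    arguments, recursion, conditionals, includes, and more)."""
    macros = {}
    out_lines = []
    for line in text.splitlines():
        if line.startswith("`define "):
            _, name, body = line.split(maxsplit=2)
            macros[name] = body
        else:
            for name, body in macros.items():
                line = line.replace(f"`{name}", body)
            out_lines.append(line)
    return "\n".join(out_lines)

expanded = preprocess(source)
print(expanded)
# Only after expansion does the text contain a matching "endmodule",
# so a parser that skips preprocessing would reject the original file.
```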
N
We're also excited about applying this generation approach to analog, because analog is one of the big pain points, and so we're excited to see CHIPS Alliance bringing up the analog stuff. Thanks to Bora, who was our first presenter, we're now seeing a lot better support for BAG on Sky130.
N
The big problem with BAG is that it still requires a few closed source components; we're hoping to work with Bora to fix that in the future. But I'm also excited that our collaboration with FASoC has resulted in FASoC running on top of OpenROAD. So the FASoC approach to fully open analog generation is now fully open source, with no dependency on closed source tools, and I think this is a really interesting milestone that Mehdi, who has been the main driver behind this, has achieved.
N
So again, if you want to get all the details, go and look at my ICCAD 2020 paper, and to get notified, join the mailing list and join the Slack workspace. I'm excited to see all these CHIPS Alliance projects working with the SkyWater PDK, and I'm hoping that we can do even more in the future. I think I went five minutes over time, but it's hard to cover all the exciting things that are happening in this space in only 15 minutes.
A
So the first question I have is: does power modeling belong in the PDK?
N
Yes, that's the simple answer. I have no idea if it's there or not; I hope it's there. As I said, I come from a software background, so I don't know what is there at the moment. If it's not there, I would love to have it added. It's mainly driven, though, by people contributing; as I said, there are a lot of things happening and I'm responsible for a lot of them.
A
N
My group is very strongly focused on the open source aspect of stuff. We don't prevent people from using the SkyWater PDK with proprietary tools like Synopsys and Cadence, but they're not a high priority for us, and people have been slowly adding and improving support there. I'm very supportive of it happening, and I do think the closed source tools aren't going anywhere; I think they have a long future ahead of them.
N
Google is a SystemVerilog shop, which is why we're concentrating so heavily on SystemVerilog tooling. However, I do know that there are multiple groups who are very interested in VHDL, and there are multiple people who taped out VHDL designs.
N
On the first MPW, I don't know exactly how that was done, I just know it was done, and I know that Microwatt from IBM was taped out on Sky130, and Microwatt is VHDL. So there's definitely an ability to take VHDL designs and get them onto an IC through OpenLane, but I'm not the right person to ask the technical details of how to do that.
N
Go and ask on the SkyWater Slack; I believe there's even a dedicated VHDL channel on the SkyWater Slack for people who want to do VHDL stuff, so jump on the Slack and ask there. Microwatt was probably one of the most complicated CPUs that was put onto the Sky130 MPW1, so it's definitely pushing the boundary of what can be done there, and I'm hoping that we can convince IBM to, for example, join CHIPS Alliance and start contributing to the VHDL ecosystem the type of resources Google is contributing to the SystemVerilog ecosystem.
N
Because I know a lot of IBM's internal work is done in VHDL, and they have a lot of interest in VHDL.
N
So if you know anybody at IBM, go and nag them to join CHIPS Alliance and nag them to support open source VHDL tooling. I would love to see some corporate members stepping up to help fund the VHDL language tooling; at the moment it's entirely driven by, I believe, community and academic contributions, so it could be supercharged by having people invest in it.
A
And one final question: do you think that some other company will open source a PDK, or collaborate with Google to open source a PDK like SkyWater has done? What are your thoughts or forecast on that?
N
So definitely. I'm looking for foundry number two to release another PDK in the 180, 130, 110 nanometer space, and I'm also working with SkyWater to get their 90 nanometer process fully open sourced, like the 130 process has been open sourced. I want to see a growing ecosystem; I want to see multiple foundries supported. I want to see foundries from the US, I want to see foundries from Asia, I want to see foundries from Europe.
N
I also want to make it really easy for people to use the right foundry for the job. SkyWater specializes in doing a lot of custom things to your system that most other foundries aren't willing to do, so SkyWater has a lot of ability to do things, but that does drive up the price a bit, and being based in the US drives up the price a bit. Some of the foundries in Asia offer extremely cheap services with much more limited capabilities, but that could be the right thing for your chip.
N
Right: your chip might take advantage of these advanced capabilities, or cost might be what you want to optimize for, and so giving people the flexibility to move designs between foundries much more easily is one of the key aspects that I want to see happen here. I worked very heavily with SkyWater, but they know that they are just trailblazers here; I definitely want other groups to be involved.
N
This is kind of a rising-tide-lifts-all-boats situation. We want to see the industry grow, not 10x or 100x, but more like a thousand x, and so all the foundries in the world are going to be super busy if we succeed; I don't think it's going to be a problem that there are multiple foundries supported. And I would like to answer one more question, which is: yes, the fabrication is fully free for the MPW program.
N
I believe we're even covering shipping to you, and a goal is to get you back some boards, with ICs already mounted on PCBs, ready to go as well. And yes, anyone can participate.
N
The cost is the same whether you're a hobbyist, an academic, or a commercial company. The only thing you need to make sure of is that your design is fully open source. So if you're an open source startup, you're perfectly fine. If you want to be closed source in some way, you'll need to talk to Efabless about going through their paid shuttle program, where you don't have to open source everything.
N
I don't think they have actually launched that program yet, but keep bugging Mohamed until he does, because I want to see that happen; I want to see as many people doing this as possible. I just believe open source is a really great way of doing it, but I would love to see a lot more designs, full stop, whether they're proprietary or open source. And I think open source can help build out a lot of this common stuff that everybody needs, allowing people to concentrate on just the one thing they think they can innovate on, and this will dramatically increase the number of things we can try.
N
Lots of foundries have older 130 nanometer processes, and I think if you talk to Mohamed Kassem he'll tell you that over 50% of the wafers in the world are done on nodes older than 130 nanometers, so there are definitely lots of groups out there. There are also groups like SilTerra and X-FAB.
N
If you're in China, there's SMIC, and there are lots of others that I'm probably forgetting. If I'm forgetting you, it's not because I don't want to talk to you, it's just that I have a terrible memory. I would love to see everyone open source their process technology and enable a new wealth of users to start doing designs on your process.
N
If your process is a useful thing to exist, then people are going to want to use it, and open source enables people to use it much more efficiently and quickly. Any time I have to talk to lawyers, my excitement for doing the project dramatically goes down, and when there's an NDA involved, I have to talk to lawyers. Open source makes a lot of this go away: I can just go to GitHub and start doing stuff. I don't need to worry about.
N
The type of problems like finding somebody's phone number so I can call them to figure out whether we can even potentially do an NDA in the first place, and you'd be surprised at how many projects get killed just because of that.
A
N
I want to see it grow, and if anything, the graphs we're seeing are growing exponentially; we have a really bright future coming up.
A
Thank you. I want to thank all the presenters for their time and effort putting the presentations together today; I thought it was excellent material. Thanks also to the audience for their time and interest and the questions I received, and special thanks to Brian Warner for helping me put this all together and orchestrating all the slides. So thank you very much.