From YouTube: CHIPS Alliance 2022 Biannual Technology Update
Description
Watch 7 exciting technology presentations by CHIPS Alliance participants, and see how open source hardware is changing the game.
A: Welcome, everybody. We will start our spring biannual update here shortly, in just a few minutes, while we wait for more folks to join us online.
A: Hello, everybody, and welcome to the CHIPS Alliance biannual update. My name is Rob Mains; I am the general manager of CHIPS Alliance, and I hope you can all see my screen with the title of the event. We're very fortunate today to have a number of different speakers on a variety of topics. Our first one is Mao Han from Alibaba, updating us on their RISC-V CPU and on porting Android to it, followed by an update on the Chisel ecosystem by Jack Koenig of SiFive, and then the recently formed FPGA (F4PGA) work group that we've created, which is championing an open source tool chain; there are a lot of exciting developments in that arena.

Dave Kehlet from Intel will be joining us and talking about the chiplet interface protocol that Intel has been championing and is now, in an alliance with others in industry, helping to move forward. Anatole Carney from Western Digital is going to chat about the NVMe computational storage processor, for both edge and data center applications.

We'll then have a great update from NIST, from Brian Hoskins and Fabia Shrista, on open source models for cryogenic CMOS, which is definitely another interesting, advanced technology area.
A: So, you know, over the past year we've had continued growth in CHIPS Alliance. We've had a number of different universities join us, and we've had companies such as Xilinx, which is now part of AMD, join us as well. So there is continuing momentum in the open source community, and we continue to work to build on that and to provide an open source ecosystem that participants can utilize free of worry, by having it under an Apache 2.0 license.
A: So what is CHIPS Alliance really about? It's about building an open design ecosystem, and it has many participants, as you can see here on the slide. It spans all the way from specification out through RTL, process design kits, tooling, the actual physical implementation, and standards. What we're trying to create is an open innovation ecosystem that allows participants to come together and work on hard problems in a collaborative fashion, and hopefully accelerate time to market.
A: You know, the notion of an open PDK is actually very important, from the perspective that it allows different parties to participate and collaborate very easily. For those not familiar, it's often a real challenge to get an NDA signed between a foundry and a given company, and often you have to involve other partners as part of that, so it can become quite a challenge to get that completed in a reasonable time frame, as an example.
A: We also look at other design languages as well, including a lot of the work relative to SystemVerilog that we've pulled into CHIPS Alliance, starting from Google and others, and we're also now looking at bringing in SystemC work coming forth from Intel. So we're really trying to provide a multiple-language type of environment.
A: So again, I look forward to today's talks. Mao Han is going to be our first presenter today, chatting about Alibaba, and then at 8:40 we'll go on into SiFive. We'll try to leave a couple of minutes for questions at the end of each presentation; we'll see how that goes, and I look forward to hearing from folks. So with that, I will stop sharing and turn it over to you.
B: Hello, everybody. My name is Mao Han; I'm a senior engineer at Alibaba T-Head and the chair of the RISC-V Android SIG. Since Alibaba T-Head reported porting basic Android functions to RISC-V last year, more effort has been spent to rebase all the previous work on Android 12 and to enable the third-party vendor modules, to facilitate the video, camera, and Wi-Fi/Bluetooth related features on RISC-V.
The upgrade involved the new features in Android 12 and hardware with high-performance RISC-V processors. We'll also provide an insight into how to build AI modules running on RISC-V; building AI modules using high-performance cores and software stacks will help accelerate the landing of RISC-V in smart terminal devices.
The poor support of the RISC-V architecture, the incompatibility, and the lack of verification in the software stacks are the major challenges we met. We will develop the new features, and in the next milestone our targets are AOSP RISC-V support enhancement and upstream inclusion.
And, as I mentioned in the last talk, the Android system is huge, and the basic AOSP Android support is only the tip of the iceberg. With tremendous effort in system feature enablement, we have expanded the RISC-V support from the basic AOSP infrastructure to source code and third-party libraries this time, and, by integrating the vendor modules, to the HAL implementations, services, service framework, and user applications.
However, more efforts are required to fill the architectural support gap in the whole ecosystem, and we are looking forward to seeing contributions from more participants to accomplish the goal of upstream inclusion. We have set up a series of steps to get a clear picture of the workload and scope: the inclusion effort of patch refinement and the initial version. We started with the Android porting based on Android 10, and it was forked from the open source.
Currently we are working on the key feature enablement on the RISC-V based Android system, to ensure that most modules can be executed on the RISC-V platform, and we will continuously improve the compatibility and completeness of the RISC-V port by fixing the test case failures in the CTS and VTS. After the patch refinement, the patches for the external projects can go directly to the third-party open source projects.
Meanwhile, the AOSP component-related work needs to be submitted during the development window of the new Android version. By the time of Android 13 we can rebase the RISC-V support with minimized changes and define the core component patches for the RISC-V architecture, and the patches will then be ready for submission by the development window of Android 14.
Through this work we also figured out some basic hardware requirements for Android development. The hardware platform should preferably have more than two CPU cores at the gigahertz level to handle the multi-threaded, long-running service workloads, and the basic system services take about 2 gigabytes of memory.
The black blocks are the vendor modules with close relation to the CPU architecture, which include the audio, video, camera, and other peripheral modules. Next we will introduce the new Android 12 features and talk about the changes in bionic and ART. Bazel is the new build system introduced by Android; it is used to replace the old Blueprint files of the older build system.
Rust is the new programming language introduced by Android 12. It has better memory safety guarantees and will not reduce the overall performance. It is currently used in some security-related modules, where C or C++ was previously used. What we did for Rust is mainly localized in the RISC-V target support, the prebuilt libraries, third-party crates, and the C-library interfaces.
Besides the changes in the AOSP main projects, we also supported APK generation with RISC-V API support in Android Studio. We started by generating the RISC-V NDK with the RISC-V architecture support; after that we integrated this NDK into the official SDK, imported the SDK into Android Studio, and then we got the RISC-V architecture supported in Android Studio.
We have also successfully run neural-network modules on the RISC-V cores, relying on the NNAPI support. The software implementation of the neural network was first integrated into the system to verify the API functionality, and it can be paired with hardware neural network accelerators to improve the computation speed of neural networks.
After that, we added the RISC-V codes, generated the demo APK with Android Studio, installed the APK through adb, and executed it. You can see the super-resolution example running on this RISC-V based Android on the right side; it can upsample a 50×50 low-resolution picture to a 200×200 resolution picture.
The bionic support of RISC-V has had some major changes in the Android 12 upgrade. The dynamic resolution of ifunc is supported in the new version: it can obtain the hardware capability through hwcap and dynamically bind ifunc targets through specific assembly and optimization information. The other changes include the PLT call generation, linker upgrades, and support for some new relocation types.
It mainly includes changes in two parts: the interpreter and the module loading of ART. The interpreter uses the same calling convention as the compiler, to ease the static transition between the interpreter and the compiled modules. For the module loading, the ART project compiles dex and other modules into a single ELF file and dynamically loads it around the initialization of the system. All these changes resulted in about ten thousand lines of code modification related to the JNI functionalities in ART.
There are, in general, three kinds of challenges we are facing to integrate the RISC-V support: the poor build support, the incompatibility, and the lack of verification.
Most of the modules don't have functional RISC-V based build scripts, as they have never been integrated into a RISC-V based Android before. Some of the vendor codes have compatibility problems due to conflicting header definitions or API differences, and in terms of debugging we also met many problems caused by incorrect configurations or compilation errors, due to the lack of software stack verification on RISC-V. We would like to call for more contributions from IP vendors to improve the RISC-V support in their Android SDKs.
At first, binaries couldn't be generated with the 64-bit-only RISC-V port, and therefore we modified the build scripts and the header files to enable basic RISC-V binary generation with the riscv64-only build. Then many exceptions occurred during the initialization and loading of the HAL modules and services, as the compilation flags and the source code hadn't been verified on the RISC-V platform before, and we had to spend a lot of effort to debug these issues. In addition, there were several conflicting OS definitions between the AOSP main project and the vendor implementation.
Finally, we got the camera working, as shown in the video here. And now we have got an Android SIG in the RISC-V International organization, under the software horizontal committee. At the moment it is aimed to openly coordinate the efforts of developers to improve the RISC-V support of the Android software stack and to help make a unified RISC-V based Android product a reality, and the current developments can be divided into several domain-specific subfields that can be handled by task groups.
The RISC-V support for Android 12 is partially uploaded to the code repositories on GitHub. You can look for the source code and related binaries through the link here, and if you are interested in the XuanTie CPU IP, SoC, or dev board, you can visit the Alibaba T-Head open chip community website here; product info, technical support, the service email, and some other information can be found on this webpage.
A: So, you know, I'm impressed with the continued progress that Alibaba is making on building the software development ecosystem for RISC-V processors in the mobile space. I was just curious: how much involvement have you had from the open source community in helping you on this path?
A: That's great; it's wonderful to see the progress here, particularly with the open instruction set architecture from the RISC-V folks, and then also the open source work in terms of building up the entire software stack, which is very important. So we did have, well, there's been some dialogue here on that. I think maybe the question was answered, but this is from Vishal Dikkol, and excuse me if I'm not pronouncing the name correctly. The question is: can we also bring a Unix-like operating system to RISC-V, like FreeBSD?
A: What would you say, in terms of building up the overall software stack, has been the most difficult challenge that you and your team at Alibaba have faced?
B: So, as I mentioned in the sections here, we are currently enabling a lot of system features in the Android system, and a lot of these features are based on external vendor module support. We found that the support was relatively poor compared to the vendor module support on ARM and x86. Some parts of the system services are not compatible with RISC-V, because some of the architectural information is missing, some definitions from the system files conflict with the headers in the AOSP project, and a lot of the source code had not been verified on a RISC-V based Android system before.
A: Is there any further information about this processor? Is it for use for high-performance processing, or some low-power, different kind of applications?
B: Only the security-related modules are compiled with Rust at the moment, and it is some kind of improvement to use a memory-safety language to help reduce the potential for memory leaks and some other memory bugs, to improve the system's robustness and stability. The RISC-V related changes are the things we did to support RISC-V in Rust, including the test and target support and some related library and application interface support.
A: Appreciate that. So I do have some questions here from the audience: do you have a target goal of when an Alibaba product using this infrastructure could be released?
A: Okay, so we have time for two more questions. One is: you showed a copy of a tablet in the video. What hardware is that, manufacturer, SoC source, if you can comment?
A: Okay, thank you. Then the final question is about the lack of verification: if you could just clarify what was meant by that.
B: Yeah, I mean that this software hadn't been run on the RISC-V architecture, or on an Android-based RISC-V device, before.
A: Thank you. Well, thank you so much for preparing this chat today, and also to Alibaba as well. I think there are some really exciting developments and progress that you all have made relative to RISC-V and building the software stack around it. I know it's late there in the evening, so I do appreciate you staying up and talking with the audience here today too. So thank you so much.
A: With that, I'd like to introduce our next speaker, who is Jack Koenig. He is a senior staff engineer at SiFive and an open source maintainer of the Chisel 3 and FIRRTL projects. His interests include hardware design, simulation, and programming languages. So with that, Jack, if you'd like to take it away; appreciate it.
E: Yes, sorry, I also was struggling to unmute. So, share... okay, can you see my slides okay?
A: Yes, we see, and we can hear you just fine too. All right, great, thank you.
E: Thank you very much, and thank you for that introduction. My name is Jack Koenig; as mentioned, I work at SiFive, but I'm also one of the maintainers of the Chisel and FIRRTL projects.
And you can see down there "CWG TAC member," which stands for Chisel Working Group Technical Advisory Committee. So let me now introduce what the Chisel Working Group is, and to do that I always have to begin by introducing what Chisel is. Many of you may have seen these introductory slides before, and I apologize, but I find that every time I give a talk about Chisel there are always at least a decent number of people who have never heard of it. So I'll first talk about Chisel. So what is Chisel?
It's an acronym for Constructing Hardware In a Scala Embedded Language, and I'm kind of distinguishing "constructing" here, because people often think about hardware description languages, where you write code that describes your hardware and then that is directly translated. In Chisel, things work a little bit differently, so I'll touch on that in a minute.
Now, Chisel is a domain-specific language where the domain is digital design. It's a little repetitive, but you can think about Verilog as a domain-specific language where the domain is digital design. But I do want to specify that Chisel is not high-level synthesis nor behavioral synthesis. Often when people see Chisel code, they think in terms of HLS, like C-to-gates, which is a fairly popular approach where you can just write C or C++ code and that will be directly synthesized into hardware.
That's not how Chisel works, and that's why I underline that "constructing" word, and I'll get to more of that in a minute. What you're really doing in Chisel is writing a Scala program to construct and connect hardware objects, and Scala is a general-purpose programming language.
As distinguished from a domain-specific language, Scala is similar to Java, C++, Python, Swift, or whatever general-purpose languages you're familiar with, and it has a lot of the modern language features, like parameterized types, object-oriented programming, functional programming, and static typing with powerful type inference.
In the world of programming languages there are a lot of features that one might say make a modern programming language, and you can see even older languages like C++ and Java moving in this direction. But Scala is one of these more recent languages, recent-ish, I mean, it's over two decades old now, that has all these powerful features, and what Chisel is really intended for is writing reusable hardware generators.
So what we want to do with Chisel is to make hardware, or especially digital design, more productive, and the way you make people more productive is to have them write less code, and the way you write less code is to have the ability to reuse code, or to use libraries. If you think about when you write code in Python or Java, say, you don't usually have to go re-implement the functions to open files on disk, or even a command-line parsing library.
Usually you can just use a library, so it's that type of reusability that we're trying to build for hardware. So now, a brief introduction to what Chisel kind of looks like, and I think it might help if I have a pointer, okay. This is what I might call Verilog-like Chisel, where you can write things that are very similar to what you might express in Verilog. Yes, the syntax is different, but the basic principle of what you're doing is the same. So in this case I have a simple module, similar to Verilog.
I have my ports, which are just an input and an output, both of which are UInts, which you can think of as just unsigned bit vectors. It's parameterized by bit width, just like you might do in Verilog, and I have two registers and then a sum at the end. So this is, as mentioned in the class name, a moving-sum-3 FIR filter, and the question that immediately comes up when you write something like this is: what happens if I have more than three points?
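The filter described here (two delay registers plus a combinational sum) can be sketched as a small behavioral model. This is plain Python rather than the Chisel code on the slide, and the class and method names are illustrative:

```python
class MovingSum3:
    """Behavioral model of a 3-point moving-sum FIR filter:
    two registers delay the input, and the output is the sum
    of the current input and the two delayed values."""
    def __init__(self):
        self.z1 = 0  # first delay register
        self.z2 = 0  # second delay register

    def step(self, x):
        out = x + self.z1 + self.z2   # combinational sum
        self.z2 = self.z1             # registers update on the clock edge
        self.z1 = x
        return out

filt = MovingSum3()
print([filt.step(x) for x in [1, 1, 1, 1]])  # [1, 2, 3, 3]
```

The hard-coded pair of registers is exactly what makes the question above bite: supporting more than three points means rewriting the module, which motivates the generator version that follows.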
What if I want to weight my averages? What I really want is a generic FIR filter, where there are different types of parameterization based on what you would like to do with it, and that's what Chisel enables. On that previous slide I showed that you can write Chisel kind of like Verilog if you want to, but you can also write much more powerful software generators, where you can parameterize the hardware using the full power of the Scala programming language and then construct it in that way.
So in this case the FIR filter is parameterized by not only the bit width but also the values of your coefficients and how many of them you have. As you can see here, we have a serial-in parallel-out shift register, which are these z's right here, so each cycle it shifts to the next one; that's what this little chunk of code does. Then we have these multiplications with our parameterizable number and value of coefficients, so we can multiply each delayed value of the input with the coefficients, and then we sum at the end. If you think about what this really is, this is metaprogramming: you're writing a generator, you can reason about the fact that you're generating hardware, and you can make decisions.
You can make decisions on your parameters to decide what you're trying to build, and what's really cool about this is that the same filter I showed you before is just this filter with a certain parameterization. But you can also make other things, like just a one-cycle delay, or a five-point triangular impulse response filter, and so there are all kinds of things you can build just from the ability to have more powerful parameterization.
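The generator idea, one parameterized description that specializes into a moving sum, a delay, or a triangular-response filter, can be sketched in Python (again a behavioral stand-in for the Chisel generator, with illustrative names):

```python
from collections import deque

def make_fir(coeffs):
    """Return a behavioral FIR 'instance' for the given coefficients.
    coeffs[0] weights the current input, coeffs[i] the input i cycles ago."""
    taps = deque([0] * len(coeffs), maxlen=len(coeffs))
    def step(x):
        taps.appendleft(x)  # serial-in shift register of past inputs
        return sum(c * t for c, t in zip(coeffs, taps))
    return step

moving_sum3 = make_fir([1, 1, 1])        # the specific filter shown earlier
delay1      = make_fir([0, 1])           # a one-cycle delay
triangle5   = make_fir([1, 2, 3, 2, 1])  # 5-point triangular impulse response

print([moving_sum3(x) for x in [1, 1, 1, 1]])  # [1, 2, 3, 3]
print([delay1(x) for x in [5, 7, 9]])          # [0, 5, 7]
```

One parameterized generator covers all three hardware shapes, which is the reuse argument being made in the talk.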
Now, all this is really cool and really important, but it's not enough. What we've realized over the last few years is that sometimes there are very platform-specific or application-specific changes you need to make to the RTL in normal design flows. For example, if you're running on an FPGA you may want some type of scan interface, or interactive debug, or snapshotting.
Some of these features are supported directly by the FPGAs, but often the features aren't as powerful as you would like; you may want more, and you may need to modify your RTL to get that. Other examples: when I say reusability, what does that mean? It doesn't just mean that I can write something once in my design; it means I can write something once, hopefully forever, and use it on an FPGA or in multiple different technologies. For anyone who's done ASIC design...
...you know that oftentimes you have to do a lot of specialization for your technology, some examples being different SRAM macros with their own specialized layouts, and clock generators. So what we realized is that we need a software stack, but for hardware, and originally, with Chisel 2...
...we broke Chisel up into a front end and a compiler. And I do want to always note that this picture is leaving a lot of details undiscussed, and that there's a lot of work that goes on underneath: we are only doing one small piece, and projects like Verilator, F4PGA, and OpenROAD are doing a lot of heavy lifting underneath the idea of this full software stack being built on top of them. A little bit more about this software stack that we have built: it's called FIRRTL, and it's an extensible hardware compiler framework, where you have transformations that take a design that comes out of Chisel and lower it down to Verilog, applying built-in as well as custom transformations. It's written in a very modular way and has robust metadata and annotation support, and I'll get back to this more later.
But this has enabled all kinds of custom extensions that really show how flexible this hardware compiler is. Okay, so with that introduction: the projects of the Chisel Working Group are, as I've mentioned, especially Chisel and FIRRTL, but there are a few others. There's chiseltest, which is our testing framework, and there's Treadle, which is kind of a simulator for FIRRTL.
We have the bootcamp and the template. We have some older projects that are, I would say, as of now deprecated: the iotesters have been replaced by chiseltest. For dsptools I'm not fully saying it's deprecated, but we are looking for maintainers, so if you use dsptools, please reach out to me. And diagrammer, I think, will be folded into FIRRTL itself, so the separate project is kind of what will eventually be deprecated in favor of just going to FIRRTL. Okay.
So now, some highlights from what's been going on. Some of this I talked about at the October update, but I want to note that that was when we had just done the RC1, and now we've actually released many of these things; they're being used in production and have been for the last six to eight months. So: Vec literal support, just the ability to make vectors of things more easily.
This is just some simple syntax; I'm just noting that these are some of the quick things, and I'll go into some other things in more detail. Scala 2.13 support: this is really helping get up to, not the most up-to-date version of Scala, but a more up-to-date version of Scala.
We now have a decoder and minimizer API. This is really helpful for expressing decoders especially, but any type of declarative decoding-style logic, and it has Espresso integration. Source locator compacting, just to make the source locators in the emitted Verilog look nicer. Lots of performance improvements.
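The declarative decoding style mentioned here, a table of bit patterns with don't-cares mapped to outputs, plus a default, can be sketched as follows. This is an illustrative Python analogue, not the Chisel `decoder` API itself, and the patterns are made up:

```python
def make_decoder(table, default):
    """Build a decoder from {bit-pattern: output}; '?' is a don't-care bit."""
    def matches(pattern, bits):
        return all(p in ('?', b) for p, b in zip(pattern, bits))
    def decode(bits):
        for pattern, out in table.items():
            if matches(pattern, bits):
                return out
        return default
    return decode

dec = make_decoder({"00??": "ADD", "01??": "SUB"}, default="NOP")
print(dec("0010"), dec("0100"), dec("1111"))  # ADD SUB NOP
```

A logic minimizer such as Espresso takes exactly this kind of pattern table and reduces it to compact two-level logic, which is what the integration provides.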
Now, that didn't go quite how it was supposed to, but Chisel itself is about 40 to 50 percent faster, depending; it depends a lot on your particular design, because, as I mentioned, Chisel being a generator, you're writing a Scala program, and the user can write things that would run slow. But for typical designs that we have at SiFive...
...it's gotten substantially faster, and FIRRTL itself has also gotten quite a bit faster. We've added new utilities to the standard library, and an open source contribution of an RTLIL backend is exciting, just to show integration with other open source projects.
RTLIL is the intermediate form used by Yosys, the open source synthesis tool, which is very, very useful, so it's just cool to see that type of integration. And then Professor Martin Schoeberl has updated his Digital Design with Chisel book: there's now a third edition, updated to Chisel 3.5, and this book now has versions in Chinese, Japanese, and Vietnamese.
So that's all very cool and very useful, and of course see the release notes for all of this on GitHub, because these are some lightning highlights of smaller things, and now I'm going to talk about some bigger things. So chiseltest, our testing framework, has had a lot of improvements. The way you can interact with the design is simpler now, with Scala types.
It has improved Verilator simulation performance: previously it had to use interprocess communication, and now it has a more direct integration. The Verilator backend now supports FST instead of VCD; that's a simple but convenient thing, because over many cycles VCD files can get very large.
I mentioned that chisel-iotesters is deprecated, so there's a compatibility API to help migration. Simulations can now be annotated; I talked about the extensibility of the compiler, and this is a piece of that. Assert, assume, and cover are no longer experimental. There's simulator binary caching, which helps speed things up when you run lots of chiseltest tests, and then there's support for bounded model checking, which I'll talk about on the next slide.
So chiseltest now has kind of native, if you will, formal verification support. For any of you who've done formal verification, or have heard of it, you know that it's kind of always assumed to be really difficult, and it can be difficult, but as with most things, good tooling and sensible defaults can really help here. So when you have something like a chiseltest case, it looks very similar to a simulator-based flow, even though you're actually doing a formal check.
A really common thing in formal verification is having a past function, and there's a certain safety here that the defaults can make work a lot better. By that I mean a common issue when you do formal verification: you ask for the value of some wire or register from a previous cycle, but the hardest part of formal verification is coming out of reset, and oftentimes trying to deal with...
...that fact can be complicated. So sensible defaults in chiseltest help make the past function safe, and automatic reset guarding is really useful to deal with some of these reset issues. But of course, if you are trying to verify your actual reset, you can disable that default behavior, just for that test. And it has close integration with the simulation flows: the same basic API, same IDE and tooling.
And it will automatically give you your counterexample as a waveform, by re-running it through your simulator. Just some technical details: there are SMT-LIB and BTOR2 backends in FIRRTL, and this works with your standard open source tools like Z3 and CVC4, which are SMT solvers rather than model checkers. And please check out Kevin Laeufer's paper on this, where he goes into greater detail.
But just as a quick picture, you can see here, if you've written much Chisel, that in this case you are testing the behavior of an SRAM, and I believe this is looking at the read-under-write behavior: using the previous cycle, checking that if you had a write and a read and the addresses were the same, then the output data should match the written data. That's if you do a write and then a read; on the right side is the case where you do a write and a read in the same cycle: should you get the same data? And so this can be checked formally, which is very cool.
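As a toy analogue of that bounded check (this is not chiseltest's actual mechanism, which hands the property to an SMT solver or BTOR2 model checker; it is a brute-force sketch with illustrative names), one can enumerate all short input sequences against a behavioral memory model and assert the write-then-read-next-cycle property:

```python
from itertools import product

def check_read_under_write(depth=2, width=2, cycles=2):
    """Brute-force a tiny 'write at cycle t, read same address at t+1
    returns the written data' property. Returns True if no counterexample
    exists within this (very small) bound."""
    addrs = range(depth)
    datas = range(2 ** width)
    # All possible per-cycle inputs: (write_en, waddr, wdata, raddr)
    cycle_inputs = list(product([0, 1], addrs, datas, addrs))
    for trace in product(cycle_inputs, repeat=cycles):
        mem = [0] * depth
        prev = None  # (waddr, wdata) if the previous cycle wrote
        for (wen, waddr, wdata, raddr) in trace:
            rdata = mem[raddr]  # read happens before this cycle's write
            if prev is not None and prev[0] == raddr and rdata != prev[1]:
                return False  # property violated: counterexample found
            prev = (waddr, wdata) if wen else None
            if wen:
                mem[waddr] = wdata
    return True

print(check_read_under_write())  # True for this model
```

The real flow scales far beyond such tiny bounds precisely because the solver reasons symbolically instead of enumerating traces.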
Okay, so another big feature that is kind of undergoing development at the moment, but is available in Chisel 3.5, is Definition/Instance. Historically, Chisel elaborates every module and then deduplicates, and this is fine most of the time, until you start building large things. For most users of Chisel, who tend to be building smaller 32-bit in-order RISC-V cores, it doesn't matter that much. But when you start building big things, it starts to matter, and so this is a new API to distinguish between the fact that you may want to define a module, as in its actual implementation and all the details in the module, but then merely instantiate its public API, instead of having to re-elaborate it each time. This has huge performance improvements for large or highly hierarchical designs, and it composes with our annotation flow, but I do want to note that this is experimental.
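The elaborate-once, instantiate-many idea is essentially memoization of the generator. A rough Python analogue (illustrative only; this is not the Chisel `Definition`/`Instance` API, just the shape of the idea):

```python
elaboration_count = 0

def elaborate_big_module():
    """Stand-in for elaborating a large module's full implementation."""
    global elaboration_count
    elaboration_count += 1
    return {"ports": ["in", "out"], "body": "...thousands of gates..."}

class Definition:
    """Run the generator once and cache the elaborated implementation."""
    def __init__(self, gen):
        self.impl = gen()

class Instance:
    """Reference a Definition's public interface without re-elaborating."""
    def __init__(self, definition):
        self.ports = definition.impl["ports"]

d = Definition(elaborate_big_module)
instances = [Instance(d) for _ in range(100)]  # 100 instances...
print(elaboration_count)  # ...but only 1 elaboration
```

Historically each of those 100 instances would have been elaborated in full and then deduplicated after the fact, which is where the time went on large designs.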
I think it works really well for what it's supposed to do, but we want more out of it, so there's a lot of development going on here. That doesn't mean you shouldn't use it, because the current APIs are fine; we just need more, basically. The basic idea is that, instead of just instantiating a module, you can create a definition of one and then instantiate it multiple times, and there's an alternative API that we are still considering.
E
There is also an alternative API that we are still considering: it looks more similar to how you instantiate a module normally, without having to create that separate definition object.
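The define-once, instantiate-many idea can be sketched in plain Python. This is an illustration of the concept only, not the actual Chisel API; the class and port names are made up:

```python
# Sketch of "define once, instantiate many": the expensive elaboration runs
# a single time per definition, and each instance is a cheap handle onto it.

elaboration_count = 0

class Definition:
    def __init__(self, build):
        global elaboration_count
        elaboration_count += 1       # expensive elaboration happens here, once
        self.ports = build()         # the module's public interface

class Instance:
    def __init__(self, definition):
        self.ports = definition.ports    # no re-elaboration, just a reference

core_def = Definition(lambda: {"in": 32, "out": 32})
cores = [Instance(core_def) for _ in range(1000)]

assert elaboration_count == 1        # 1000 instances, one elaboration
```

The old behavior corresponds to calling `Definition` a thousand times and deduplicating afterward; the new API avoids that work up front.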
Okay, so now the feature that I'm the most excited about, though this is very much a developer thing: as a developer I like this, whereas those other ones are probably more exciting for users. DataView is something I think is very cool. It comes from the observation that often users want to manipulate a hardware value as if it were of a different type.
E
A good example of this is an AXI-style flat bus interface, which we all know from common AXI code in Verilog and from the standard. But when you write things in Chisel, you tend to write them in a more structured way, and so you would like to expose that flat bus interface, because that's what most Verilog users expect, but manipulate it as if it were more structured. And so this allows treating an object of one type as another.
E
This sounds a lot like a union or a cast, but it's a bit more powerful than that, so I usually draw the analogy to a view in SQL. I don't want to touch too much on the details of how this works, but what's so cool about this is how many things can be implemented with this one primitive. We were able to implement seamless integration with Scala types, which won't mean a lot to you if you don't ever write Chisel.
E
These parentheses, as in many languages, are tuples, and tuples are a built-in Scala collection. For all of our hardware operations, like connection, we implement them on our own hardware types, but this DataView feature allows us to take these tuples, view them as if they were hardware, and then do this connection. This looks very simple, like it obviously should work, but for those of you who have used Chisel in the past, you might know that this used to be difficult.
E
It is kind of a difficult thing to do well, and DataView enables it. Bundle upcasting is really important for when you have some inheritance relationship between bundles, which you can think of as structs, and you'd like to take the child bundle and manipulate it as if it were the type of the parent bundle.
E
It helps enable that. It also enables user-defined mappings: those were all built-ins, but you can create your own mappings, and don't worry about the code here, it's just showing that it's possible. And then there's a perennial ask in Chisel: people create an IO bundle and then say, "but I don't want the io prefix in my Verilog." The same feature that did all those other seemingly unrelated things also made it possible to implement a version of IO where there is no io prefix.
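The "view" idea has a rough analogy in plain Python. This is illustrative only: the class and signal names are invented and this is not the Chisel DataView API, but it shows how a structured handle and a flat, Verilog-style port map can alias the same underlying signals:

```python
# A "view" over a flat, Verilog-style port map that presents it as a
# structured object; both names alias the same storage, like a DataView.

flat = {"awvalid": 0, "awaddr": 0}

class AwChannelView:
    def __init__(self, flat, prefix):
        self._flat, self._prefix = flat, prefix

    def __setattr__(self, name, value):
        if name.startswith("_"):
            super().__setattr__(name, value)
        else:
            self._flat[self._prefix + name] = value     # write-through

    def __getattr__(self, name):
        return self._flat[self._prefix + name]          # read-through

aw = AwChannelView(flat, prefix="aw")
aw.valid = 1          # structured write...
aw.addr = 0x80

assert flat["awvalid"] == 1 and flat["awaddr"] == 0x80  # ...lands on flat ports
```

The point of the analogy: nothing is copied or cast; the structured object is just a different shape over the same data, which is why one primitive can cover tuples, bundle upcasting, and the io-prefix case.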
E
So
this
is
in
3.5.3
which
I'm
intending
to
release
on
friday,
but
it
allows
you
to
you
know,
create
verilog,
you
know,
have
a
bundle
create
an
I
o
from
it,
but
not
have
the
I
o
prefix
in
the
emitted
parallel.
So
that's
very
cool
all
right.
So
now
I'm
going
to
backtrack
a
bit
to
this
slide,
which
you
may
hopefully
remember
where
I
talked
about
how
we
need
a
software
stack
for
hardware,
and
I
mentioned
that
fertile
has
very
robust,
annotation
support.
E
But another thing that we've determined during all this time is that it's a lot of work to build and maintain a software stack, especially with only a handful of developers. We have this whole power of metadata and annotations; I have this picture of kind of a solar system here, because there's a whole universe of what people can build with the annotation support, and we also have a whole bunch of custom infrastructure.
E
That infrastructure is doing a lot of heavy lifting to make all this work. And there's kind of a telling picture over here of LLVM, of a future direction that we are going in: LLVM has this new kind of project called MLIR, and what if we could share infrastructure with MLIR? You can think about MLIR as a generalization of LLVM, rather than LLVM as intended mainly for C++ and for emitting assembly.
E
What if you could share some of the more common aspects of what compilers need? And what if, instead of arbitrary, unstructured annotations carrying whatever arbitrary data the user wants, we could try to constrain, better define, and understand what data people are manipulating, and capture that information in multiple dialects?
E
So,
instead
of
a
single
ir
to
rule
them
all,
which
you
know
even
lvm
found
that
doesn't
work
out,
because
if
you
look
at
how
you
know
rust,
swift
and
and
other
languages
built
on
top
of
lvm,
they
have
their
own
irs
on
top
of
it,
because
it
can't
represent
everything
they
need
so
similar
problem
in
hardware.
We
have
different
potential
irs
that
we
would
like
to
represent.
Okay.
So
now,
just
briefly
what
is
moir?
E
It
stands
for
multi-level
ir
and
it's,
as
I
said,
a
generalization
of
lvm.
So
it's
like
you,
take
lvm
all
that
infrastructure
in
the
compiler
and
you
throw
away
the
fact
that
you're
just
compiling
you
know
things
that
look
kind
of
like
c,
plus,
plus
and
you're,
just
compiling
to
assembly
and
instead
just
say,
you're,
compiling
whatever
ir
you
want
to,
whatever
you
want,
and
so
it's
infrastructure
for
building
new
compilers
and
as
such,
it
is
suitable
for
lots
of
for
multiple
irs
at
once.
E
It's
written
in
c
plus
and
extremely
performant,
and
so
that's
mlir.
That's
like
the
base
technology.
On
top
of
this
new
project
called
circuit,
which
is
as
with
we
all
love
our
acronyms,
it's
circuit
ir
for
compilers
and
tools
pronounced
circuit.
This
is
an
lvm
incubator
project.
So
unfortunately,
despite
my
suggestions,
they're
part
of
lvm
instead
of
chips
alliance,
but
it
does
make
sense
as
they
are.
You
know
part
of
the
lldm
repo,
but
this
is
compiler
infrastructure
for
generating
verilog
and
so
circuit
really
is
a
collection
of
hardware.
E
dialects. Importantly for the Chisel community, there's a FIRRTL dialect, but there are many others. There's a Verilog dialect, although I'm giving it an asterisk: it's not that you can parse arbitrary SystemVerilog; it's more that there's a SystemVerilog emission dialect, so FIRRTL can compile to the SystemVerilog dialect and then emit SystemVerilog. There are other RTL dialects, including combinational and sequential, and, much like Chisel and FIRRTL are part of this, there are other dialects going on too.
E
There
are
people
working
on
elastic
silicon,
interconnects
and
handshake
protocols
and
and
high
level
synthesis,
and
all
these
other
things
that
people
need
compilers
for
in
the
hardware
community,
and
so
really
it's
a
collection
of
tools
for
compiling
circuits
fur
tool
is
a
drop-in
replacement
for
fertile,
and
this
is
a
picture
from
their
website.
E
But
it's
just
showing
how
there's
all
these
different
projects
going
on
and
you
can
see
chisel
compiles
to
fertile,
which
goes
through
the
fertile
parser,
but
there's
a
whole
bunch
of
shared
infrastructure
here
between
these
different
dialects
that
different
people
have
built,
and
then
those
get
compiled
down
to
the
system,
verilog
dialect
exported
to
verilog,
and
then
we
can
get
system
verilog
out
at
the
end,
but
there's
all
kinds
of
other
stuff
going
on
here,
which
is
very
cool
and
hopefully,
as
this
project
continues
to
mature,
we
can
leverage,
even
more
so
on
the
mind
of
anyone
watching
this
talk
right
now.
E
What does this actually mean for Chisel? There's a next-generation FIRRTL compiler that is almost there; it's getting very close, and we are about to switch to it as our production flow at SiFive. It has much faster compilation, five to ten times faster, and it uses a lot less memory, five to ten times less.
E
It has better Verilog emission. Better Verilog is always an ask, and it's always a very complicated thing to do, but they have worked very hard, are doing a very good job, and are continuing to improve it.
E
We'll have flows to help you do that, and then, starting in Chisel 3.7, we plan to start adding features directly to CIRCT that would have been hard to do in the existing flow, with new dialects for things like verification and whatever new features we want to add.
E
Now,
I'm
trying
to
come
up
with
a
good
diagram
for
this
slide,
and
the
easiest
thing
to
show
is
compile
time.
So
I'll
show
the
compile
time
this
december
here
is
chisel
infertile.
E
You
know
at
sci-fi
with
with
a
design
from
today,
but
using
the
compilers
from
december
and
the
compilation
of
a
large
out-of-order
core,
and
you
can
see
that
this
compile
time
has
decreased
from
well
over
600
seconds
to
about
150,
and
that
includes
this
7.6
x
improvement
on
the
fertile
part
of
it
and
that's
by
using
this
next
generation
fertile
compiler,
but
also
you
know,
that's
obviously
the
biggest
by
far
the
huge
chunk
here,
but
don't
sleep
on
the
fact
that
even
chisel
got
a
lot
faster.
E
You
can
see
here
that
it's
not
quite
2x
but
close
to
2x
faster
than
it
was
only
you
know
four
and
a
half
months
ago,
and
this
is
a
combined
speed
up
of
4.3
x.
So
these
are
you
know
the
today
changes
that
are
you
know
a
few
things
like.
We
need
to
be
using
circuit
to
get
all
this
benefit,
but
a
lot
of
this
stuff
has
gotten
a
lot
faster,
even
in
the
last
few
months,
which
is
very
exciting.
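The relationship between the per-stage and combined numbers is worth spelling out: stage speedups do not simply multiply, because the combined speedup depends on how much wall-clock time each stage took. The per-stage split below is an assumption chosen for illustration, not a figure from the talk:

```python
# Illustrative arithmetic: combined speedup from per-stage times and speedups.

def combined_speedup(stage_times_before, stage_speedups):
    """Total-time speedup given per-stage times and per-stage speedups."""
    before = sum(stage_times_before)
    after = sum(t / s for t, s in zip(stage_times_before, stage_speedups))
    return before / after

# Assumed split of the >600 s December compile: ~170 s Chisel, ~480 s FIRRTL,
# sped up by ~2x and 7.6x respectively.
speedup = combined_speedup([170.0, 480.0], [2.0, 7.6])
print(f"{speedup:.1f}x")  # ~4.4x with these assumed numbers
```

With any split in this ballpark, the combined figure lands near the quoted 4.3x, dominated by the FIRRTL stage because it was the larger share of the total.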
E
Okay, so that's all the content I really have, and so I'll show my usual community slide. Please go to chisel-lang.org if you want to see what's going on, and there's a community page linked to on the website. You can chat with us on Gitter, you can ask questions on Stack Overflow, and you can watch old talks on our YouTube channel; there are probably about 100 or more talks there at this point, so there's a lot of stuff to look at.
A
Hey Jack, thanks, that was a great talk. I am always impressed with the amount of progress that you and your working group make in helping advance the state of the art of an object-oriented hardware design description language, so I really appreciate that.
A
As part of this, as you correctly noted, formal is typically a very challenging area for design engineers, and of course, as you know as well, there are some problems that there's just no way to tackle with functional simulation, such as with Verilator or comparable commercial solutions. I'm just kind of curious about the work involved to include formal-type attributes in the language, and how that went or evolved.
E
Yeah,
that's
a
great
question,
so
you
know
part
of
it
is
like.
So
there
are
many
things
that
make
formal
verification
hard
and
it's.
I
would
say
that
the
first
part,
which
is
the
simplest
one
for
chisel,
is
that
you
know
verilog
is
an
extremely
large
language.
E
It
has
something
like
four
times
as
many
keywords
as
c
plus
plus,
and
I
don't
think
anybody
considers
c,
plus
plus
a
simple
language,
let
alone
you
know
what
simpler
things
do
and
so
chisel
by
having
much
fewer
constructs,
makes
it
really
easy
to
model
formally.
So
that's
step
one.
Now,
that's
easy!
E
That
kind
of
comes
for
free
based
on
how
chisel
has
always
worked,
but
two,
which
is
really
the
hardest
part,
is
you
need
to
represent
the
things
you're
trying
to
represent
formally,
and
I
think,
there's
a
lot
of
ongoing
work
there.
Our
primitives
now
are
still
very
simple:
it's
just
assertions
like
basic
assertions
and
like
past
operator,
but
it
turns
out.
E
you can do a lot with that, at least for simple designs. And one cool thing about formal verification is that if you do it hierarchically, you can verify a small piece, and then you can leave that piece out when you verify a piece around it. But I would say this is not necessarily our heaviest area of innovation; it is something that we are trying to do more of, to help mature the open source flow.
A
E
Yeah, so I would strongly urge you to go to the website and especially come chat with us on Gitter. It's a great place for basic questions, for new user questions I should say; we have lots of people there asking such questions, and several people will help you out. The main thing to look at is the Chisel Bootcamp, which is a set of Jupyter notebooks.
A
Oh,
that
sounds
great,
so
I
have
a
question
from
john
lydell
or
lytle.
Does
the
chisel
community
have
a
working
group
targeting
process
device
specific
optimizations
in
the
tool
chain,
for
example,
fertile
to
verilog,
transforms
for
fpgas,
different
standard
cells
etc?
And
I
I'd
happen
to
note.
Actually
you
know
being
a
technologist
myself
that
you
know
you
did
mention
some
different
attributes
that
you
had
to
change
relative
to
the
design
for
different
target
technologies.
E
Yeah-
that
is
a
great
question.
I
would
say
we
have
some,
maybe
not
as
much
as
we
would
like,
like
there's
a
lot
more
work
to
do,
but
we
do
do
that
at
like
built-in.
We
do
it
for
memories,
so
we
have
a
flow
to
take
the
chisel
memories
which
are
functional
and
swap
them
out
with
you
know,
a
user
provided
platform,
specific
memory
or
target
specific
memory.
Now
we
need
more
of
that.
E
We
have
done
similar
work
for
clot
gates
internally
at
sci-5,
and
I
think
it's
now
part
of
the
open
source.
I
believe
that
type
of
work
is
part
of
the
open
source
circuit.
You
know
fertile
compiler,
so
my
answer
is
some,
maybe
not
as
much.
We
don't
really
do
standard
cell
stuff
we're
not
a
synthesis
tool,
but
we
do
at
least
for
things
that
are
that
are
obvious
and
need
to
be
specialized
for
certain
platforms
like
memories
and
clock
gates,
and
we
can
continue
to
expand
that
definition
a
bit.
A
Oh,
that's
great,
thank
you
again
so
much
for
an
excellent
talk.
I
really
appreciate
it
and
all
the
exciting
work
and
just
a
final
comment
is
that
martin
shrubble's
book
on
chisel
is
available.
It's
in
the
chat
there.
You
can
see
it.
You
can
actually
download
it
if
you're
interested
in
it.
So
that
sounds
like
a
great
resource.
So
thanks
again,
jack
really
appreciate
it.
Thank
you.
A
It's okay, I am almost with you here.
A
It's all right, yeah.
G
We can only see "click to exit full screen." Oh yeah, now you can see it, right? The first slide, I mean. Yes, thank you. All right, so I'll get started, I guess. So today we're going to announce the creation of a new group within CHIPS Alliance that deals with a lot of different topics, all of them FPGA related.
G
The
key
topic
is
the
interchange
format
that
we've
been
developing
as
part
of
the
group,
but
that's
definitely
not
all
the
stuff
that
we're
working
on
and
I'm
going
to
co-present
with
carl
who's
very
involved
in
this
effort
from
the
technical
side,
I'm
working
more
on
the
of
course,
marketing
and
dissemination
and
collaboration
side,
so
yeah.
If we can flip to the next slide: just a few words about us. We're coming from Antmicro, and we're basically building things with open source, helping people use new design methodologies and new workflows, and build awesome things using the great stuff that open source is giving them. If you go to the next slide: you'll hear us saying "open source" a lot, and the reason is that we try to find ways in which people can scale their efforts with open source, and CHIPS Alliance is basically about the same thing. We believe that hardware developers can get the same kind of collaboration, the same kind of benefits, from open source that software people can, and of course we're already seeing this with RISC-V.
G
If
we
go
to
the
next
slide,
as
I
mentioned,
this
corresponds
to
the
mission
that
the
chips
alliance
has
and
and
the
breadth
of
the
the
the
entire
group
covers
a
lot
of
the
topics
we're
working
with,
including,
of
course,
cores
and
different
kinds
of
ip
blocks.
That's
together
across
the
risk
based
socs.
Well,
not
only
risk
five
necessarily,
but
predominantly
these
days.
G
Of course we work with tooling, we work with lots of different topics, and within those areas we lead a bunch of efforts. One of the efforts that we are leading is, in fact, the F4PGA group, which is concerned with a number of topics in the FPGA space. So in the next slide you'll see the layout of what CHIPS Alliance does, and that's already been shown, but the new bubble in here is the F4PGA group, the one in bold, that we're going to talk about today on the next slide.
G
You'll
see
you
know
our
logo.
We
have
kind
of
designed
a
whole
kind
of
branding
system
for
the
f4
pga
group,
and
you
might
kind
of
recognize,
of
course,
this
from
a
previous
effort
called
symbiflow
and
symmetra
was,
of
course,
a
pretty
kind
of
big
efforts
involving
multiple
parties
and
so
on.
But
one
of
the
things
that
it
didn't
have
was
this
kind
of
supervision.
You
could
say
this
kind
of
structure
that
the
chips
alliance
offers.
G
Most
importantly,
perhaps
the
group
involves
not
just
you
know,
different
kinds
of
people
and
companies
interested
in
this
topic,
but
three
kind
of
distinct
pillars
that
we're
leaning
on
one
of
them
being
academia.
Of
course.
That
leads
a
lot
of
the
research
efforts.
Industry
is
a
very
important
part
where
we
consider
us
part
of
this.
G
This
particular
pillar-
and
this
is
based
on
practical
use
and
fpga
vendors,
which
we
believe
are
of
course,
a
necessary
part
of
such
a
group
and
we're
very
happy
to
actually
kind
of
be
representing
all
of
those
three
groups,
and,
as
I
mentioned,
we
have
a
lot
of
different
focus
areas,
but
so
kind
of
some
important
ones
need
to
be
mentioned.
One
of
them,
of
course,
is
the
open
source
tool
chain
previously
known
as
symbiflow,
so
now
it's
the
f4pg2
chain.
G
That's obviously the kind of thing that researchers like to work on, so that's already yielding some fruit, which we'll talk about later; but generally speaking, the research part is very important for us. Actually, it's probably one of the key drivers for having this effort set up in the first place. And of course academia also makes sure that people are taught how to work with these kinds of flows, you know, through universities and internships.
G
They have different kinds of programs where the toolchain can not only be developed in terms of research, but simply used and taught, and so on. So we're looking at academia to help extend the toolchain in various new directions and get it adopted amongst new engineers.
G
The
second
pillar
would
be
industry,
and
industry,
of
course,
provides
the
real
world
use
cases,
that's
kind
of
extremely
important,
because
without
the
real
word
part
of
it,
it's
it's
very
hard
to
do
something.
That's
significant
that's
meaningful
and
that
really
changes
the
state
of
the
art
and
also
industrial
use,
kind
of
improves
the
robustness
and
the
general
reliability
of
the
tool
chain
itself.
G
It's
not
just
a
research
tool,
it's
a
practical
tool,
that's
being
used
in
real
products
and
in
fact
the
tool
chain
is
really
used
in
real
world
products
being
developed
and
rolled
out
as
soon
as
this
year,
so
keep
your
fingers
crossed.
But,
generally
speaking,
this
practical
aspect
of
f4
pga
is
very,
very
important.
G
Where
we
can
jointly
collaborate
in
building
out
the
entire
ecosystem,
we
can
actually
reach
the
end
users.
You
know
we
can
kind
of
put
the
tool
chains
in
front
of
the
users
not
as
an
kind
of
a
renegade
alternative
to
the
official
tools,
but
also
as
a
kind
of
officially
supported,
and
even
if
alternative
kind
of
unfrowned
upon
methodology
of
working
and
also,
quite
simply,
since
the
fpga
vendors
developing
the
fpgas
themselves,
naturally,
their
assistance
in
kind
of
targeting
new
fpga
targets,
planning
providing
information
about
the
devices
themselves.
G
So, going to the next slide: as you can see, we have a bunch of current members, most notably, of course, AMD, who joined us recently, on the part of Xilinx, in supporting the open source toolchain and the interchange format specifically.
G
Specifically,
we
also
have
quick
logic
who
have
been
kind
of
working
with
the
open
source
tuition
for
a
long
time
now,
we've
helped
them
adopt
a
completely
open
source,
end-to-end
perspective
on
on
the
tool
chain
and
in
fact,
the
primary
type
of
tool
change
their
users
consume
today,
as
open
source.
G
We of course have Google, and Antmicro; we represent the industrial part, and a lot of the development work is being done by our companies, as well as by universities such as the University of Toronto, who are developing the Verilog-to-Routing tool, which is an important part of the toolchain, but also others who either use it, develop it in various directions, or teach classes with it, such as BYU.
G
Others are willing to join our effort and are looking at how they can contribute and how they fit into the picture. But overall there's really great interest, and I think that over 2022 we will make really good progress there. So I think that's the end of my part, and I'll leave it to Karol to explain a bunch of the technical items within the group on the next slide.
F
Yep, thank you. So, as Michael mentioned, one of the biggest efforts of F4PGA is the effort on building a fully open source toolchain for FPGA development. Our goal here is to provide an end-to-end flow that is vendor neutral. We of course have backends that target various FPGA vendors, various chip vendors, but they are wrapped within the toolchain in a way that's not really visible to end users, so we can think of it
F
As
you
use
like
compilers
for
software,
you
don't
really
use
gcc
in
a
different
way.
If
you
compile
something
for
risk
five
or
arm,
maybe
some
like
switches
or
flags
pass
to
the
compiler,
but
nothing,
no,
not
really
much,
not
very
more
yeah,
so
you
just
use
the
touching
or
as
in
the
safe
same
way.
F
So with this effort, we intend to switch FPGA development from being hardware-centric, where people were actually connecting blocks, some black boxes or pre-built cores, to create a bigger design, and move everything into code-driven, software-centric development, where you script everything, where you scale everything with the power of cloud machines, where you, you know, spawn multiple instances of the toolchain building bits and pieces and then combine them together.
F
As
michael
mentioned,
this
effort
was
previously
done
under
umbrella
of
symbiotic
organization,
but
we
rebranded
that
to
fopga
and
move
everything
into
all
species
of
chips
alliance.
So
everything
right
now
is
moved
there.
If
you
go
to
git
chips,
alliance,
github,
you
will
find
the
code
and
if
you
want
to
learn
more
about
the
tutor
itself,
you
can
visit
our
webpage
and
in
the
next
slide
you
can
see
the
list
of
currently
supported
targets.
So
we
do
support
signings
right
now,
amd
seven
series,
fpgas
and
those
are
pretty
well
supported.
F
You
can
build
quite
complex
design
with
with
the
two
chain
quick
logic:
eos
is
free
and
actually
that's
the
two
chain
that
is
used
for
this
platform
for
this
fpga.
So
if
you
are
actually
using
this
chip,
you're,
probably
using
our
our
two
chain,
we
also
support
a
few,
a
few
fpgas
from
lattice
that
those
are
closing
the
next
size.
40
and
ecp.
5.
F
and
morris.com
will
probably
hear
about
that
in
near
future
and
in
the
next
slide
we
can
see
the
structure
of
the
tooth
and
we
actually
divided
that
into
two
parts
into
front
end.
Like
synthesis
for
synthesis
tools
and
back
end,
those
are
place
and
route
and
big
stream
generation.
So
for
synthesis
we
use
yours,
that's
a
pretty
popular
and
well-known
synthesis
tool
and
for
back-ends
for
place
and
routing
and
generating
a
programming
file
for
certain
certain
targets.
F
We
use
either
next
pnr
or
verilog
routing
and
then
are
selected
projects
with
a
set
of
tools
that
allow
you
to
to
create
the
programming
file,
create
the
bitstream
that
you
can
actually
upload
to
the
device,
and
even
even
here,
in
this
slide
in
this
block
diagram
of
the
tool
chain,
you
can
see
that
there
are
at
least
two
place
and
route
tools
that
can
be
used.
There
is
only
one
synthesis
too,
but
in
reality
there
is.
F
There
are
more
synthesis
tools
and
more
place
and
row
twos,
maybe
not
open
source
but
proprietary,
but
still
there
is
something.
So
there
is
a
you
know.
This
ecosystem
is
pretty
fragmented
and
and
every
tool
implements
some
features.
That
can
be
reasonable
and
very
nice
really
nice
to
actually
combine
them
together
and
use
them
in
a
way
that
you
can
switch
between
them
seamlessly
so
to
actually
address
this
problem.
We've
created
something
called
integer
format.
F
If
we
switch
to
the
next
slide,
you
will
see
a
very,
very
nice
logo
of
this.
Of
this
project
and
interchange
format
is
actually
an
effort
to
provide
common
description
between
various
tools,
either
proprietary
and
open
source
tools
for
describing
fpga
devices
and
providing
a
way
and
mechanisms
for
actually
change.
Switching
between
the
tools
within
the
flow,
so
constructing
constructing
two
chains
by
combining
various
tools
from
various
vendors
or
various
projects,
and
it
doesn't
really
matter
in
the
end
if
it's
proprietary
or
open
source.
F
Of
course,
we
prefer
open
source,
but
still
you
can
use
that.
So
the
idea
here
is
to
make
the
tools,
in
the
end
better,
by
lowering
the
entry
cost
by
lowering
the
barriers,
because
imagine
a
situation
where
you
you're
academia,
researcher
and
you
are
working
on
a
let's
say,
placement
algorithm
and
you
have
an
idea
of
the
best
placer
algorithm
ever
ever
invented
in
the
world,
but
to
actually
test
it.
You
need
still
synthesis
too
and
you
need
still
routing
tool
and
some
kind
of
a
bitstream
generation
between
generation
flow.
F
With the interchange format, you can use your algorithm right on a design in the interchange format and use it later with various tools that are already on the market, and this is, you know, something that changes the way we can work with the tools. The initial development of this format was done in collaboration between ourselves, Google, and Xilinx, which is right now AMD, and we'll continue working on the format itself.
F
So
if
we
switch
to
the
next
slide,
we'll
see
some
basic
information
about
the
format
itself,
so
we
described
the
format
using
a
schema
defined
defined
with
captain
proto
library.
F
We use Cap'n Proto because it's a pretty mature library: it has a stable API and implementations for various languages, so you can adopt it in your flow quite easily and just start playing with it. And if you switch to the next slide, you will see a code snippet actually taken from our physical netlist implementation; a physical netlist is something that describes a fully placed-and-routed, or not yet placed, design. We also provide two other schemas: one is for describing device resources.
F
This
is
used
for
basically
defining
what
fpga
chip
has
inside.
So
what
the
target
device
actually
provides,
what
we
can
use,
what
kind
of
a
resources
we
can
use
and
the
tools
need
to
know
that
to
be
able
to
perform
their
work
and
then
run
the
algorithms
and
the
third
one.
The
first
schema
that
we
define
is
logical
networks
and
this
one
is
actually
used
for
for
describing
an
abstract
digital
circuit
that
we
get
after
the
synthesis.
F
So,
basically,
after
you,
you
go
from
a
rto
code
or
some
kind
of
a
abstract
code
through
optimization
through,
like
through
elaboration
and
and
you
get
quite
quite
abstract
description
of
the
of
the
circuit.
The
idea
is
that
each
step
is
interchangeable.
So
if
a
two
implements,
if
a
two
implements
a
certain
feature,
you
can
just
switch
the
tools
in
the
middle
of
the
flow
and
that's
the
greatness
of
this
of
this
approach.
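The tool-swapping idea can be sketched like this in plain Python (the function names are hypothetical; the real flow exchanges Cap'n Proto netlists between tools such as Yosys, nextpnr, and VTR):

```python
# Sketch of an interchangeable flow: every stage consumes and produces the
# same common netlist representation, so the placer can be swapped freely.

def synthesize(rtl):
    """Toy 'synthesis': produce a logical netlist with no placement yet."""
    return {"cells": rtl.split(), "placement": None}

def greedy_placer(netlist):
    netlist = dict(netlist)
    netlist["placement"] = {c: i for i, c in enumerate(netlist["cells"])}
    return netlist                       # toy physical netlist

def reversed_placer(netlist):
    netlist = dict(netlist)
    cells = netlist["cells"]
    netlist["placement"] = {c: len(cells) - 1 - i for i, c in enumerate(cells)}
    return netlist

def flow(rtl, placer):
    # Any placer with the same netlist-in, netlist-out interface plugs in here.
    return placer(synthesize(rtl))

a = flow("lut0 ff0 lut1", greedy_placer)
b = flow("lut0 ff0 lut1", reversed_placer)
assert a["cells"] == b["cells"]          # same logical netlist either way
```

A researcher's new placer only has to read and write the common format; everything upstream and downstream of it stays untouched.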
F
So
if
you
switch
to
the
next
slide,
we'll
see
what's
exactly
there
right
now,
so
we
have
one
end-to-end
flow
that
is
fully
operational
and
when
we
can,
we
perform
tests
with
that.
So
it's
just
synthesis.
Then
we
write
interchangeably
logical
net
list.
Then
we
use
next
p
and
r
for
place
and
route.
F
Then
we
write
a
physical
net
list
and
we
can
use
rapid
write
with
vivado
to
write
a
bitstream
or
open
source
bits
in
writer
using
fpga
assembly
language,
the
fpga
definition
that
the
device
definition
is
actually
taken
from
rapid
right,
which
is
open,
source
front
end
for
proprietary,
designing
stools,
and
this
two
is
actually
developed
by
by
sidings,
and
they
contributed
to
the
effort
by
with
this
two.
F
So now it's just a matter of making it more robust, better, and faster, fixing all the bugs, and, you know, just making it work for any random design. And if you're interested in more on that, you can refer to the Xilinx blog or the Google blog, which we helped to write. If you go to the next slide, I would like to show some other tools that are part of the F4PGA effort. One of them is something called fpga-tool-perf. So, we are working on the toolchains, we are developing new algorithms,
F
We
are
developing
new
formats
for
actually
playing
with
fopg
with
fpgas
fpga
flows,
so
we
needed
a
tool
or
a
mechanism
or
a
way
of
actually
comparing
the
results
so
actually
tracking.
If
we
are,
you
know
doing
better
or
maybe
doing
walls.
How
do
we
compare.
F
Against
other
tools,
I
guess
against
vendor
tools
and
so
on,
so
we
developed
a
two
code
called
fpga
perf2,
and
this
is,
I
would
say,
more,
like
a
framework
for
running
a
set
of
tests
against
a
set
of
tests.
F
Stateless
designs
against
various
two
chains
gather
as
much
info
as
we
can
and
info
in
terms
of
like
resources,
usage
like
host
computer
resource
usage
like
cpu
time
memory,
disk
and
so
on,
but
also
quality
of
results
like
how
fast
the
design
was
in
the
end,
how
many
target
fpga
resources
we
used
and
so
on
and
so
on.
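A tiny sketch of the kind of measurement such a framework performs (plain Python; fpga-tool-perf itself is far more elaborate, and the record fields here are invented for illustration): run one toolchain step, record wall-clock time and peak memory, and emit a result record.

```python
import time
import tracemalloc

def benchmark(name, step, *args):
    """Run one toolchain step and collect host-resource metrics."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = step(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()   # peak bytes during the step
    tracemalloc.stop()
    return {"test": name, "seconds": elapsed,
            "peak_bytes": peak, "result": result}

# A stand-in for a synthesis step that allocates some memory:
record = benchmark("counter_synth", lambda n: [0] * n, 100_000)

assert record["seconds"] >= 0 and record["peak_bytes"] > 0
```

Collecting records like this per design and per toolchain, day after day, is what makes the historical dashboard and regression tracking possible.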
F
So we run it daily in a public CI on GitHub; it's also part of the CHIPS Alliance GitHub, and we generate a dashboard where you can simply go and see historical results and some graphs showing, you know, whether we are going in the right direction or maybe the wrong direction, whether we are slower or faster, and so on and so on. So that's pretty nice for understanding what's happening: what is, you know, the best choice?
F
What's
the
best
tool
chain
right
now
and
then,
if
there
is
any
regression
and
if
you
go
to
the
next
slide,
the
other
two
that
that
we've
been
working
on
is
something
called
fpga
database
visualizer.
So
this
one
is
more
like
a
tool
for
documenting
how
the
fpga
designs,
how
the
fpga
chips
look
like
inside
so
most
of
the
vendors.
F
They
do
provide
some
kind
of
viewers
in
this
software,
but
those
are
not
really.
You
cannot
really.
You
know,
have
a
one
single
place
to
check.
What's
inside
how
the
how
the
devices
look
like
fpga
database
visualizer
is
a
vendor
agnostic
tool
that
can
actually
grab
a
database.
It's
right
now
we
process
open
source
data
database
from
various
projects
that
are
there,
for
example,
from
vpr
place
in
route
2,
I
mean
format
used
by
vpr,
place,
android
tool
and
we
can
draw
an
interactive
like
rendering
interactive
web-based.
F
Visualizer,
where
you
can
simply
check
what
is
where
how
the
blocks
look
like
how
big
the
fpga
is.
Where
are
the
memory
blocks
or
certain
types
of
cells
that
you
are
interested
in?
As
I
said,
this
too
is
more
for
us
to
understand,
what's
inside
and,
of
course,
for
documenting
the
chips
for
for
wider
audience
and
for
people
to
who
actually
are
going
to
to
start
their
work
and
start
doing
some
fpga
designs.
If
we
go
to
the
next
slide,
there
are,
of
course,
more
than
only
those
projects.
F
We work on many smaller tools that we need within our toolchain, and one of those is, for example, the FPGA assembly (FASM) parser. FPGA assembly is a formal textual format that is one-to-one compliant with what is in the bitstream itself — in the programming file for the FPGA. So this one allows us to basically analyze bitstreams, see if there is something wrong, and so on, and it's an intermediate step in many flows just before bitstream generation.
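To give a flavor of what such a parser handles: a FASM-style line names a dotted feature, an optional bit range, and an optional value. The following is a minimal illustrative sketch — the real F4PGA FASM parser supports a much richer grammar (annotations, hex/binary verilog-style literals, etc.), and the feature names below are made up.

```python
import re

# Minimal sketch of parsing FASM-style lines (illustrative only; the real
# F4PGA fasm parser supports a richer grammar).
# A FASM feature line looks like:  TILE.SUBTILE.FEATURE[hi:lo] = value
FEATURE_RE = re.compile(
    r"^(?P<name>[A-Za-z0-9_.]+)"              # dotted feature name
    r"(?:\[(?P<hi>\d+)(?::(?P<lo>\d+))?\])?"  # optional bit range
    r"(?:\s*=\s*(?P<value>\S+))?\s*$"         # optional explicit value
)

def parse_fasm_line(line):
    """Return (feature, hi, lo, value), or None for blank/comment lines."""
    line = line.split("#", 1)[0].strip()      # strip comments
    if not line:
        return None
    m = FEATURE_RE.match(line)
    if not m:
        raise ValueError(f"malformed FASM line: {line!r}")
    hi = int(m["hi"]) if m["hi"] else 0
    lo = int(m["lo"]) if m["lo"] else hi
    value = m["value"] if m["value"] else "1"  # a bare feature means "set"
    return (m["name"], hi, lo, value)

print(parse_fasm_line("CLB_X2Y3.SLICE.AFF.ZINI = 1"))
```

Because each line maps one-to-one onto bitstream configuration bits, analyzing a bitstream amounts to iterating over lines like these and cross-checking them against the device database.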
F
We also maintain Yosys plugins, providing various extensions to Yosys itself. One of them, for example, is a SystemVerilog extension, which provides a lot of SystemVerilog features that are not supported by default; this is provided with the use of an external parser called Surelog and an intermediate representation called UHDM.
F
I actually talked about that some time ago in one of the CHIPS Alliance workshops. And there are many, many other tools, smaller and bigger — so if you are working on any project around FPGAs, please take a look at what we have there. Maybe you will find something that is very interesting and very useful for your project. And I will give it back to Michael.
G
For the next slide, please. If we summarize: as you can see, we're building an ecosystem — you could say, you know, CHIPS Alliance working both on FPGAs and ASICs — and the work group that we talked about today is specifically focused on FPGAs, which we believe is a fundamental entry point into the ASIC space: the prototyping platform, a kind of younger brother of the ASIC domain, let's say. And the interchange format, I think, is a particularly interesting topic, because it shows that collaboration between many entities — which includes the FPGA vendors themselves —
G
is not only a possibility but a necessity, and we believe that's really the case for any kind of ambitious effort like ours. We think it could be worthwhile to try to transplant that success into the ASIC space in the future. So hopefully, as we've shown, having an interoperability format is very, very useful — and in fact, as we could also see in the previous talk, I assume it's going to be a recurring theme: joint standards, joint formats.
G
All of this is very useful because it gets more people working together. We showed that practical use cases of the stuff that F4PGA is doing are already possible — so, as I mentioned before, we have products being built with the toolchain, we have real, serious research going on, and a lot of promising new ideas. What we want to do with F4PGA is to enable, you know, new software-driven approaches to FPGA development; we want to enable machine-learning-based approaches to different kinds of topics.
G
We want to help people do rapid prototyping. All of these things, of course, can be done with closed source tools as well, but we believe open source lends itself especially well to these kinds of activities, because the software people that we really need to lure into the FPGA and ASIC space — to help us out and scale those topics — typically prefer open source. When they're building things, they want open source building blocks that they can reuse, develop, and improve, right? And that's what F4PGA
G
is really trying to do: get hardware to be more like software. And on the last slide that we have for today, we have a call to action for you to join us in the effort. Of course we have a bunch of media, such as the mailing list — f4pga-wg, like for all the CHIPS Alliance work groups — and we have a channel on the CHIPS Alliance Slack, so go ahead and join it.
G
If you have Slack, join it through Slack; if you prefer IRC, there's an IRC bridge. We have an invitation — and we'll share those slides later — we have an invitation link that you can use to join the CHIPS Alliance Slack. You can follow us on Twitter — or rather, please do follow us on Twitter; the handle is very simple, F4PGA — and, of course, check out the code on GitHub. There are a lot of repositories; the main repository is simply f4pga, and from there you should be able to find all the links to the relevant places.
A
Hey, thanks Karol and Michael for a great talk on the F4PGA work group and all the activity there. It sounds like a lot of exciting stuff is going on, and I totally agree that FPGAs are the way to help make hardware design more malleable or readily adoptable, similar to what's occurred in the software space. So I think that's wonderful. Let's take a question or two here.
A
So there's one from Adrian Vocal, and this is: is there a clear statement from AMD Xilinx about their official position towards third-party tools supporting their FPGAs? Obviously, someday someone on their legal team may wake up and complain about this somehow impacting the company revenue; the current status is very uncomfortable.
G
Yeah — I mean, obviously I'm not a lawyer, as the disclaimer goes, but you might imagine that Xilinx AMD participating directly in the work group is definitely a good sign. So if, you know, anyone had a reason to worry, I assume that would have been in the past and not currently, right? The whole point of the effort is to show that we can collaborate with vendors, and we don't really deal much with the what-ifs — it's all a matter of industry adoption.
G
It's all a matter of momentum. A lot of the great stuff that we're building is also, of course, benefiting the vendors themselves, right? And in fact the interchange format, as an effort, is supported and kind of welcomed by AMD Xilinx, because it's helping their own research effort — as Karol said, it's supported in their own framework, called RapidWright. So I think the development of this is to everyone's benefit, and we're not altogether very worried.
G
I don't believe anyone will ever, you know, issue a formal statement about this, because I don't believe there is such a thing you could do to make sure everyone feels super comfortable. But in terms of making people feel comfortable joining the CHIPS Alliance F4PGA work group — it makes me feel comfortable, anyway.
A
I appreciate that — thank you for the answer on that. I'll just add one comment here about the material from the presentations today: we will make those available, as well as a recording of today. So, to keep us on schedule, I will end the questions on this particular topic, but thank you again, Michael and Karol. I think it's great progress, and I look forward to folks collaborating on this effort moving forward.
A
Thank you. So our next speaker is Dave Kellet, who is a research scientist from Intel; he's a researcher working on pathfinding for programmable logic technology. Dave is currently developing chiplets and chiplet technologies to enable a new model of electronic system development. Earlier at Intel, Dave was vice president of IP engineering. So — Dave.
D
Sometimes, when you're building two chips, it ends up being more work than one chip, and so the real effort is, you know: how can you make this simply easier — a better development experience?
D
So I'm going to talk about three things today: die-to-die interfaces, including AIB 2.0 and the newly announced UCIe; a new chiplet SPI open source hardware IP that we have; and a new chiplet AXI open source hardware IP that we have. You folks may have seen that about a year and a half ago Intel was awarded the SHIP program by the United States government. The purpose of the SHIP program was primarily to give the government access to Intel's state-of-the-art packaging capabilities, but at the same time, the reason for using packaging —
D
these advanced packaging technologies — is so you can build systems out of chiplets. So the SHIP project also includes developing chiplet interface standards and protocols, and we are using these IPs that I'm going to talk about — the chiplet AXI and SPI IP — in a SHIP proof-of-concept chiplet and multi-chip package that is part of the SHIP program.
D
The idea is: if we build this factory — this capability to assemble multi-chip packages — we want to make sure that the customers of this factory have the tools, the IP, and the technology so they can build high-performance products.
D
In the summer of 2020 we announced AIB 2.0; it's our second-generation die-to-die interface. You can see some of the key metrics here — you know, the bandwidth per wire: the architecture supports going up to 6.4 gigabits per second.
D
Now, this is times as many wires as you want to put in an interface, so this could typically go up to eight terabits per second on an interface. It supports a variety of bump densities for micro-bumps, all the way from 55 down to 36 micron, and you get the other metrics as you go, such as reduced energy per bit — because if it's trying to send more bandwidth across, you know it's going to use more power, so at the same time we want to reduce the energy per bit to keep that normalized. And since we have the existing 1.0, the 2.0 interface provides backwards
D
compatibility. The 2.0 spec is part of the CHIPS Alliance — it's a CHIPS Alliance released specification — so you can see it at the GitHub repository that I'm showing here, and we also have RTL for the AIB 2.0 PHY in this repository, so you can check that out as well. This is actively being developed as we go through our proof of concept.
D
Now, you may have heard that early last month there was an announcement of the Universal Chiplet Interconnect Express (UCIe), and this is an idea to create an open ecosystem of chiplets and technologies.
D
It's a specification organization, a lot like CXL, so they don't build products — it's not open source anything; it's a spec group. Now, the UCIe key performance indicators include per-wire rates up to 32 gig, so there's a little bit of an arms race on how fast you can take the individual wires — this is way up there.
D
The UCIe specification will include details on how to interoperate a UCIe interface with the AIB-based chiplet portfolio. Now, since this is a spec organization, each UCIe participant determines their own product roadmap and their own product availability — that's not part of the UCIe consortium — and you can download the spec; I did it myself.
D
You can go to this address here, take a look at the spec, and see what's going on, and you can see the industry representation that has joined up with UCIe. At this point I kind of want to take a pause here to talk about —
D
you know, for people who've been following die-to-die interfaces in the industry — I just want to go through and identify a few things here. So, things that AIB does well: AIB showed that a high-performance, wide parallel die-to-die interface is practical for volume production, and five years ago it was not obvious that wide parallel was the way to go, because there was still a lot of thought that SerDes might be the way to go.
D
AIB built a multi-vendor, multi-process portfolio of chiplets, and AIB led with open source digital and analog design. So we've got the AIB PHY — the digital design there. At DAC back in 2020 I talked with the folks from Blue Cheetah about an AIB analog generator, and we've got open source protocols, which is the subject of the rest of my presentation today.
D
Now, as for the other folks in the industry: you might have heard of the ODSA Bunch of Wires. One thing I really think the Bunch of Wires folks did well was showing that a wide parallel die-to-die interface on standard packaging is interesting for cost-sensitive applications — and we know that there are, you know, IoT and wireless infrastructure manufacturers that are extremely cost conscious.
D
Standard packaging is generally less expensive than advanced packaging, and the ODSA Bunch of Wires folks really showed that you could do the same approach on standard packaging. Then, finally, with UCIe: the thing that UCIe has done really well is to use Intel's CPU industry influence to align everyone on UCIe. You can take a look at the folks on here — there's AMD on here, Marvell has joined, Broadcom is part of this — and so really it's become
D
you know, just a broad, wide industry adoption that I think a lot of people were waiting to see before they really went, you know, full force into building chiplets.
D
So I just wanted to say a little bit more about, you know, Intel and UCIe and AIB. One of the questions is: will Intel continue to develop products using the current AIB specification? The answer is absolutely yes. We have products out today, we have products that have not released yet that use the AIB interface, and we have customer products that are part of the SHIP program that use the AIB interface that are in planning and in various stages of development and release.
D
The second question is: will UCIe support AIB? I kind of touched on this earlier — the UCIe specification has an appendix, and it will include details on how a UCIe implementation can interoperate with the AIB-based chiplets. And what are Intel's plans for AIB going forward? We, along with the other folks out there on the previous page, are committed to UCIe, and we will eventually migrate AIB applications. The way I look at this is: for design starts this year and design starts next year, AIB is the way to go; for design starts that are out there in, you know, the 2024 time frame, UCIe is definitely something to consider.
D
Now, into the open source chiplet protocol IP — I want to talk about SPI. Now, folks are probably familiar with SPI. What we've built is a simple protocol on top of the SPI signaling. SPI is widely used in the industry for ADCs and DACs, for peripherals — for just general low-bandwidth die-to-die communication.
D
Our experience is that everybody who builds a chip needs some way to either configure the chiplet or boot the chiplet — to get things going before you have your high-bandwidth, wide parallel interface running — and this has been almost universal, that people have used this. Now, the problem is that, as I'll mention coming up, SPI is just a set of wires; it just moves streams of bits, so you still need to define how reads and writes work on top of it.
D
Folks — you know, all you've got to do is check the Wikipedia page on SPI and you'll find different terms for the two sides: master and slave. At Intel we don't use those terms, so we use leader and follower. Our chiplet SPI follower plugs into the AIB hard macro for AIB configuration, and it plugs into what pretty much everyone needs: a control and status register application, or a firmware download, or something like that.
D
Now, just as a refresher here on the chiplet SPI die-to-die signals: you've got a leader, and in our case we support up to four followers, so we have four select lines. We chose to have separate MISO lines for ease of debugging — you could put those all on tri-state and have a single wire there.
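The read/write protocol layered on top of this signaling can be sketched in a few lines. This is a behavioral toy model, not the actual chiplet SPI IP: the frame layout (1 command byte, 4 address bytes, 4 data bytes) is an assumption invented for the sketch.

```python
# Illustrative model of a read/write protocol layered on SPI signaling.
# The frame layout here (1 cmd byte, 4 address bytes, 4 data bytes) is an
# assumption for the sketch, not the actual chiplet SPI IP frame format.
CMD_WRITE, CMD_READ = 0x01, 0x02

class Follower:
    """Follower with a small CSR space, addressed by 32-bit offsets."""
    def __init__(self):
        self.csrs = {}

    def transfer(self, frame: bytes) -> bytes:
        cmd, addr = frame[0], int.from_bytes(frame[1:5], "big")
        if cmd == CMD_WRITE:
            self.csrs[addr] = int.from_bytes(frame[5:9], "big")
            return b"\x00" * 4
        if cmd == CMD_READ:
            return self.csrs.get(addr, 0).to_bytes(4, "big")
        raise ValueError("unknown command")

class Leader:
    """Leader with four select lines, one follower per select."""
    def __init__(self, followers):
        self.followers = followers          # up to 4, per the talk

    def write(self, sel, addr, value):
        frame = bytes([CMD_WRITE]) + addr.to_bytes(4, "big") + value.to_bytes(4, "big")
        self.followers[sel].transfer(frame)

    def read(self, sel, addr):
        frame = bytes([CMD_READ]) + addr.to_bytes(4, "big") + b"\x00" * 4
        return int.from_bytes(self.followers[sel].transfer(frame), "big")

leader = Leader([Follower() for _ in range(4)])
leader.write(0, 0x100, 0xDEADBEEF)   # configure a CSR on follower 0
print(hex(leader.read(0, 0x100)))    # → 0xdeadbeef
```

The point of standardizing something like this, as the talk goes on to say, is that every chiplet can then be configured and booted the same way.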
D
Now, when you have these chiplets, you've got a variety of roles here, and we have what I call an initiator. This is somebody who's really kind of the system controller, deciding: OK, now I need to configure these chiplets, I need to bring them up, I need to get their high-bandwidth interfaces running. So we have the initiator, then we have our SPI IP — the leader IP and the follower IP — and then you have targets.
D
So now, you recall we're doing all this because really our objective is that we want to do writes and reads to follower targets. We've got our system controller — that initiator — over on the right side; what it's trying to do is talk to these targets over here, and the typical targets are AIB PHY control and status registers.
D
This is how you define exactly how the AIB is configured — folks who are familiar with AIB know that it has one-to-one modes, two-to-one modes, four-to-one modes: different multiplexing modes.
D
So that's one obvious thing we're using it for: configuring AIB interfaces. The second one — which we have a need for, with some of the protocol configuration and with other application-specific functions — is an application register block, so we've provided a very simple, general-purpose chiplet control and status register block.
D
You folks can take a look at that, and you'll see it's obvious that you might want to put something a little more sophisticated there that does sparse address decoding and that kind of thing — but that's very easy to add. This is really why we created this: so that you could have a common way of talking to these chiplets. So our chiplet SPI IP is out there on GitHub.
D
You can see it's under the CHIPS Alliance organization, in a repository called aib-protocols, and you can see there it is — the SPI AIB — so you can go take a look there, check out the user guide, and find the source code.
D
I want to say that the code is function complete. We are doing extensive verification using code coverage metrics, which has about another month and a half to go to complete, and we don't know of any open bugs with it.
D
We do use the GitHub issue tracking mechanism, so if anybody has questions or issues with this, please go ahead and record them there. OK — now I want to talk about the AXI4 protocol IP. One of the key pieces of feedback we received on our various chip projects over the years was: great, you've got this PHY here — we need protocols. This is the next area of interoperability after the PHY: you know, the protocols are the data link layer at the least, and it can go higher.
D
We've built protocol adapters on each side that convert standard AXI4-Stream or standard AXI4 onto the AIB PHY data wires and then convert it back, so application A and application B think they are talking AXI4, or AXI4-Stream, depending on which variant you're using, and they don't see what's going on under the hood with the AIB PHY. We provide the adapter IP for both sides, so that the mapping between the standard AXI interface and the AIB PHY wires is understood.
D
So if you take a look at, you know, why we picked AXI: my experience in doing protocols and IPs is that the applications drive the protocols, not the other way around. I think there's always been a desire — could you come up with a universal protocol? — but you just can't, because sometimes, for, let's say, a real-time system where you want to turn around a response in nanoseconds — because there's something hostile flying at you
D
that you need to do something about — you may not want to use a distributed, shared-memory coherent protocol; you might want to use something very lightweight instead. So these are typical streaming applications here: if you've got an ADC/DAC, you might get just streams of samples — every clock there's another bundle of samples.
D
Similarly with an optical transceiver: suppose you have an Ethernet MAC there. Now you have the MAC-to-application interface, which is generally aligned packets. At this point, this kind of interface looks a lot like AXI4-Stream — and that's a conclusion we made years ago now. Then there's a different mode of operation: this is where you want to do reads and writes, so memory applications are the perfect example of it. You still need high utilization, you need high efficiency, you need low latency — this looks like AXI4.
D
Almost all of the commercial memory controllers, for example, use AXI interfaces. And then there's another, stranger one: after you've brought up your chip using SPI, you may want to have a faster path to controlling your chiplet, so using AXI4-Lite for control and status registers over your high-bandwidth bus will give you that faster response — you don't have to go through the SPI serialization.
D
Now, the picture on the previous page assumed that the leader can respond to the follower in the same cycle: if the follower says "I'm not ready," the leader cannot move on to the next word of data, so the leader has to be able to react within the same cycle. Now, if you take a look at this path here — I'm going through my adapter, over the PHY, across the package, then back through the PHY and the follower adapter — straight AXI,
D
the same-cycle response is not going to work, because there's a clock phase difference between the leader and follower, and there are pipeline delays crossing between the leader and follower. That's the reason we have these adapters for AXI4: they do the operation and take care of these phase delays and pipeline delays. What we've implemented is a credit scheme.
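The essence of a credit scheme is that the sender stalls locally, without ever putting a word on the link that the receiver might have to refuse, because the ready signal can no longer arrive in the same cycle. A minimal behavioral sketch (credit counts and the queue are invented for the example; this is not the adapter RTL):

```python
# Minimal sketch of credit-based flow control, the scheme the adapters use so
# that same-cycle AXI ready/valid handshaking survives the pipelined crossing.
# Credit counts and queue depths here are arbitrary example values.
from collections import deque

class CreditedSender:
    def __init__(self, credits):
        self.credits = credits     # advertised receive-buffer slots

    def try_send(self, link, word):
        if self.credits == 0:      # no credit: stall locally, don't drop data
            return False
        self.credits -= 1
        link.append(word)
        return True

    def credit_return(self, n=1):  # receiver frees a slot -> credit comes back
        self.credits += n

link = deque()
tx = CreditedSender(credits=2)
assert tx.try_send(link, "w0") and tx.try_send(link, "w1")
assert not tx.try_send(link, "w2")   # stalled: both credits consumed
link.popleft()                       # receiver drains one word...
tx.credit_return()                   # ...and returns a credit
assert tx.try_send(link, "w2")       # sender can proceed again
```

Because credits are returned with the same pipeline latency as the data path, the receiver's buffer can never overflow no matter how many cycles the crossing takes.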
D
Now, here's where it gets really interesting. AXI4 has these five channels: the read address, the read data, the write address, the write data, and the write response — they call them five channels in AXI terms. We have to map these to a TX set of wires and an RX set of wires, so we've got to take these five channels down to two channels and then expand them back out on the other side, back to their five channels.
D
You know, you have your various AXI parameters here; then the next word might be a piece of write data that carries 64 bits across. You've got your, you know, WID indicators, and you've got the things that are flow control — the push, which is a valid indicator — and those are the things that use up the credits.
D
Now, this is configurable — you know, AXI has a lot of flexibility about how wide your data bus is and what your address size is. This is the instance that we're using on our chiplet, but we've built a generator as part of our open source code that allows you to select exactly something that matches your AXI configuration. Now, I just want to say there was an alternative way of thinking about this: you could just take these five channels — the AR, R, AW, W, and the write response B —
D
and you could just fold them, saying: hey, look, there are really about 214 wires; I'm just going to time-multiplex those 214 wires over my PHY. And you can see, at 214 wires it's going to take you three 80-bit cycles to get 214 signals across. So we did an efficiency study of this. To do a single write, the way we've done it through packetization,
D
it takes two full-rate cycles; if you were to fold it, it takes three full-rate cycles. Now, once you start doing bursts, the situation gets much, much worse, because in AXI4 a lot of these wires are not used all the time — definitely not in a burst — and so you're wasting your bandwidth if you multiplex all the wires across, you know, once every three cycles. So this is the benefit of packetization.
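The arithmetic behind that comparison can be reproduced back-of-the-envelope. The 214-wire channel total, the 80-bit PHY width, and the 64-bit write data come from the talk; the header and address field widths below are assumed values chosen to illustrate the trend, not the actual adapter packet format.

```python
# Back-of-the-envelope comparison from the talk: moving AXI4 writes over an
# 80-bit-per-cycle PHY. The 214-wire channel total and 64-bit data beat come
# from the slide; header/address widths below are assumptions for the sketch.
import math

PHY_BITS = 80

def folded_cycles(beats, channel_wires=214):
    # folding time-multiplexes every channel wire each beat, used or not
    return beats * math.ceil(channel_wires / PHY_BITS)

def packetized_cycles(beats, addr_bits=32, data_bits=64, hdr_bits=16):
    # packetization sends only the fields a beat actually needs
    return math.ceil((hdr_bits + addr_bits + beats * data_bits) / PHY_BITS)

print(folded_cycles(1), packetized_cycles(1))     # → 3 2   (single write)
print(folded_cycles(16), packetized_cycles(16))   # → 48 14 (the burst gap widens)
```

The single-write numbers (three cycles folded versus two packetized) match the figures quoted in the talk, and the burst case shows why the gap grows: folding pays for every idle wire on every beat.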
D
Now, when you're using AXI4 on-die, monolithically, the low utilization doesn't really matter, because your wires are small — a lot smaller on die than they are even on advanced packaging — and so packetization definitely provides this utilization advantage.
D
We've also done some other interesting things. AXI4-Stream — folks are familiar with this — is really a one-direction data transfer, but some of those applications I showed, like the optical networking and the RF ADCs/DACs, are bidirectional.
D
We will pack two leader-follower pairs on a single channel, because the channel has TX and RX, so you can pack one going in each direction to maximize the AIB channel utilization. This is something we absolutely use, because all of the applications we're working with — for ADCs/DACs and for networking — are bidirectional, duplex.
D
So our AXI4 IP is out here on GitHub — it's that same aib-protocols base repository — and it has the AXI4 memory-mapped variant, the pure AXI4, and the AXI4-Stream. These are both already out there, and they're in the same state: they're function complete. You can go check the issues here — when I took the snapshot, there were a few items still to do on it — but we believe it's function complete and it's just finishing up verification.
D
So, in conclusion: you know, there's this famous Gordon Moore quote from his 1965 paper, where he articulated what we know as Moore's law. He said something else in the same paper — that it may prove to be more economical to build larger systems out of smaller functions which are separately packaged and interconnected. So he predicted that chiplets would make sense, and chiplets are being used to build larger systems, just as Gordon Moore predicted — and we believe that we're advancing chiplet interoperability with the open source hardware protocol RTL.
D
So at this point I think I'll pause and see if there are any questions. Rob?
A
Hey, thanks for a great talk. I'm a big fan of the chiplet technology that Intel has provided to the community, and I think it's a definite enabler for innovation. While we wait for questions coming from the audience, I'm just kind of curious:
A
you know, has there been any uptake in the usage of chiplet technology by hardware startups, and what might that look like?
D
Yeah — you know, to start, let's see, I'll focus on Ayar Labs. They're a startup based in Santa Clara that publicly had a paper at Hot Chips a couple of years ago, with Intel, about using a wide parallel AIB interface; they implemented their own version of SPI. So this is kind of how we got inspired to say: hey, look — if we could do a common read/write protocol on top of SPI,
D
then we wouldn't have to implement the same thing differently on all these chiplets. So that's one public case. I think the other folks that I'm really working with now are large companies that are building chiplets to connect up to other devices — most often FPGAs — and so that's, you know, big-name defense companies.
A
That's great. I know last time I'd asked about support relative to commercial or proprietary tools for chiplet design, but I'm just curious: has any work been done on open source tools for chiplet design?
D
That's an interesting area. I think there are a lot of things in the CHIPS Alliance that could be applied to this — you know, Chisel is kind of chiplet- or SoC-independent; I think a lot of those efforts that are not specifically chiplet-based could be applied to it. Another area that we're working on — and I'm sorry, this one is not open source, it's proprietary — is security IP for chiplets, because once you've disaggregated your device, you've created a new attack surface, which is this die-to-die interface.
A
Oh, that's great — thank you. So that wraps up about the time we have, Dave. I appreciate your talk on this; it was very good. Thank you.
H
Hi, good morning. So I wanted to talk a little about an accelerator platform that we at Western Digital would like to present as, essentially, a platform on which we can implement NVMe computational storage ideas. The platform itself was originally built as a compute accelerator for video transcoding and machine learning applications.
H
So the platform itself is essentially based on a Xilinx Zynq UltraScale+ MPSoC FPGA, and it's built as a U.2 form factor device with a power envelope that matches a standard NVMe SSD device — about 25 watts — and, as you can see, the key component is essentially the Zynq FPGA.
H
And we've tried to build it in such a way that it should be possible to use it both as a vehicle for development and as a completely manufactured, production-ready unit once we develop the application fully on it. This is basically available today as a video transcode accelerator via the Xilinx website.
H
So the primary use case that we started with for this board was that of a video transcode accelerator.
H
The second use case we came up with was as an AI inference accelerator — this inference accelerator we developed in partnership with Mipsology, and we can go into a little more detail about that later — and the third key use case we would like to target for this device is as an NVMe-based computational storage device.
H
The first one, the video transcode accelerator application, essentially uses the video codec unit that is present as a hard IP within the Zynq FPGA, and it allows us to take in a high-resolution, high-frame-rate video stream, scale it down, and then re-encode it at a variety of different resolutions and frame rates.
H
The API to this device is essentially a plug-in to FFmpeg, so essentially you as a customer only need to use FFmpeg with slightly customized input parameters, and FFmpeg in turn offloads the
H
video stream decode/encode logic off to the device itself. As I mentioned before, it is available now as a Xilinx part number, and the key benefit you can see in this chart here is that, performance-wise, the device performs better than the 8- or 32-core x86 CPUs that you might have wanted to use prior to this accelerator being available to you.
H
So it translates to fewer instruction cycles on the CPU, fewer stall cycles on the CPU, a reduced requirement for DRAM traffic on the host CPU — and, at the same time, the performance that you get out of this is scalable, because you could
H
have multiple of these accelerators available in your server system. The reason I wanted to highlight this is that it shows the potential for computational storage in a system where — if, in this case, for instance, the data is coming largely from a storage device — it makes sense to try to push this compute acceleration closer to the storage. This shows us the potential value of computational storage itself.
H
The device is also significantly better even when you consider the power-versus-performance ratio, and you can see that the video transcode accelerator performs significantly better than any standard x86 CPU that you might want to use for the purpose.
H
The second use case here is as an ML inference accelerator; this is in partnership with Mipsology from France. Here, all of the inference is done on the FPGA-based system: you essentially take a trained model that has been trained either on a CPU or a GPU subsystem, the software stack converts the model into an int8 or int16 model and offloads all of the inference to the FPGA device, and this again gives us significant performance.
H
It's comparable, in some sense, to the standard GPUs that you may get — lower, because the power envelope of the FPGA board is limited to 25 watts — but in terms of power to performance, you will see that the numbers are quite comparable to equivalent-generation GPU systems.
H
Once again, this is a system that works on a direct-attached PCIe bus. But if you were to imagine that the same application could be offloaded to an NVMe-based system, you could again foresee a system where the device would be located closer to the storage subsystem and get the additional benefits of reduced power and traffic on the network or PCIe fabric.
H
So this brings us to the real idea that I wanted to present here, which is to use this exact device as an acceleration platform for NVMe-based computational storage.
H
And
what
we
are
chasing
here
is
to
use
use
this
board
as
a
prototype
platform
for
nvme
4191,
which
is
a
proposed
standard
for
computational
storage.
H
H
What
we
are
building
in
effect
to
support
that
is
an
nvme
controller
that
that
can
actually
do
both.
That
is,
execute
hardware
kernels,
as
well
as
execute
software
binaries
that
again
invoke
software
libraries
that
are
made
available
in
the
device.
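A minimal sketch of that dual execution model might look like the following. This is illustrative Python, not the actual TP4091 reference design: the kernel names, the library table, and the command format are all assumptions made for the example.

```python
# Toy model of a computational-storage controller that can run either a
# fixed hardware kernel or a downloaded software program (for example an
# eBPF program) that chains calls into device-resident libraries.

def checksum_kernel(data: bytes) -> bytes:
    """Stand-in for a fixed-function hardware kernel."""
    return sum(data).to_bytes(4, "little")

# Library functions the device exposes to downloaded programs (hypothetical).
DEVICE_LIBRARIES = {
    "upper": lambda data: data.upper(),
}

HARDWARE_KERNELS = {"checksum": checksum_kernel}

def execute_program(data: bytes, calls) -> bytes:
    """Stand-in for a software binary that invokes device libraries in order."""
    for name in calls:
        data = DEVICE_LIBRARIES[name](data)
    return data

def dispatch(command: dict, data: bytes) -> bytes:
    """Route an offloaded compute command to hardware or software execution."""
    if command["kind"] == "hardware":
        return HARDWARE_KERNELS[command["kernel"]](data)
    return execute_program(data, command["calls"])

print(dispatch({"kind": "hardware", "kernel": "checksum"}, b"abc"))
print(dispatch({"kind": "software", "calls": ["upper"]}, b"abc"))  # b'ABC'
```

The point of the split is that the software path can pick up new command-set proposals without touching the FPGA fabric, which matches the prototyping goal described in the talk.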
H
So what we do is split the NVMe target controller functionality between software and hardware, and the software portion is malleable enough that we can very quickly adapt to new command sets and new proposals in the NVMe standard, reflect them very quickly, and prototype ideas both for the standards proposals themselves and for the applications that might want to use them, as well as ideas for the hardware accelerators, or for the offload of host application functions.
H
All you're doing is transferring data from the SSDs to the device for compute, and then you only have to manage the results of the compute by transferring them to the host for further processing.
H
As I said before, because this device has both significant general-purpose CPU compute capability and an FPGA fabric in which you can instantiate any accelerator features, we can keep a very elastic boundary between hardware and software. As you continue to develop applications, or benchmark them for new ideas on what we can offload, and as the implementations mature, the boundary between what is done in hardware and what is done in software can move, allowing you to further optimize the best way to run the application.
H
In this case, take for instance the shell that provides the NVMe target controller functionality itself. Because the bulk of the value of such a system comes from being able to implement custom hardware accelerators in the FPGA fabric, one approach is to minimize the amount of fabric that is used to implement the NVMe functions.
H
So what we have in our example reference design is essentially an NVMe target controller implementation that uses just bare-bones hardware resources to implement NVMe, while the bulk of the NVMe functions is actually implemented in software that runs on either the R5 processors or the A53 processor cores in the Zynq device. That enables us to keep almost 70 to 80 percent of the FPGA fabric available for any accelerator that you might want to implement.
H
Very briefly, the target applications that we are developing here are video analytics and database acceleration, plus ideas for what we could do to accelerate genomics-type workloads.
H
I wish I had a little more detail to show here, but the first reference design that we are working on is an implementation for the acceleration of TF Lite. What we do in that implementation is that the software library the eBPF VM exposes to the kernels is actually TF Lite compiled as a library.
H
So you can imagine that the host application can invoke TF Lite on any data that it sends to the device.
H
The second step is that we implement a small hardware accelerator called VTA, which stands for Versatile Tensor Accelerator. That accelerator IP is implemented as RTL in the FPGA fabric, and the TF Lite library in turn invokes VTA to offload some of the more compute-intensive operators in TF Lite to the FPGA-based accelerator.
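The delegation pattern described here, where compute-intensive operators go to the accelerator and everything else stays in software, can be sketched in a few lines. This is an illustrative toy, not the actual TF Lite/VTA integration; the supported-operator set and the placement strings are assumptions.

```python
# Hypothetical set of operators the hardware accelerator supports.
VTA_SUPPORTED = {"conv2d", "matmul"}

def run_on_vta(op: str) -> str:
    return f"{op}@vta"   # stand-in for an FPGA accelerator invocation

def run_in_software(op: str) -> str:
    return f"{op}@cpu"   # stand-in for the software fallback path

def run_model(ops):
    """Execute a linear list of operators, delegating where possible."""
    placements = []
    for op in ops:
        if op in VTA_SUPPORTED:
            placements.append(run_on_vta(op))
        else:
            placements.append(run_in_software(op))
    return placements

print(run_model(["conv2d", "relu", "matmul", "softmax"]))
# → ['conv2d@vta', 'relu@cpu', 'matmul@vta', 'softmax@cpu']
```

As the implementation matures, moving an operator across the boundary is just a matter of adding it to the supported set, which is the "elastic boundary" idea from earlier in the talk.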
H
The result is that we can quickly take a given function that the host application would like to offload, partition it between hardware and software, and arrive at an optimal implementation that is also very flexible and quick to prototype in the sandbox development environment.
H
The second application that we would like to show is database acceleration. In the first application, the idea is that the whole TF Lite application itself is offloaded to the computational storage device. For database acceleration the idea is slightly different: the bulk of the host application is really running on the host, and it is only offloading specific portions of its workload to an optimal computational storage platform. In this case, what we'll do is this:
H
The hardware portion of the accelerator will be doing some minimal functions, like string search, regular expression search, or potentially some specific sequential search operators, and the software library will involve implementing some subset of the SQL queries that a database might need. The host application will actually understand which portions of its own query workload are compatible with the computational storage device features and offload only those portions to the device. What we hope to show with that is, again, a significantly useful way to offload
some of the database computation activity to the device. So these are the two key designs that we are working on. We are hoping that we will be able to make these designs public, and that this will enable more participants to work on their ideas and see what other applications could use computational storage effectively, and in turn enable the industry to make these ideas
H
more available to storage devices as a whole. So that is basically what we wanted to cover. Just to mention again, we are doing the bulk of the development work on this in partnership with Antmicro; you might know some of them, probably on this meeting as well.
H
So, in conclusion, the compute accelerator from Western Digital is a very versatile device that offers compelling solutions for video transcoding and ML inference engines, and we would like to make the NVMe-plus-eBPF TP4091 implementation available to the industry as a whole, to further speed up the work on these applications. This device is now available if you want to play with it; you can contact me at onand.comwbc.com and we can discuss opportunities further.
A
I was interested, I guess, probably because of my background at Oracle and some work that was done there relative to database acceleration.
A
So I am intrigued by the idea of moving the acceleration closer to the storage. As you may or may not know, one of the things that Oracle had the SPARC design team do was put what was called a database acceleration unit directly into a SPARC chip, and there was also consideration over time of adding an embedded FPGA into that, to allow for more customization over time, but that part, at least, never came to full fruition.
H
Yeah, so that is essentially what we believe. Of course, the scope of what SQL can do is huge, and there are, I think,
H
ideas out there that involve implementing full SQL query handling in an FPGA, but that is not what we are trying here. We are only looking for the portions of the SQL that can be effectively handled in an FPGA of the size that we have available to us. There are mechanisms on the database application side to parse the SQL into pieces, and when we detect pieces that can be effectively handled, only those are handed off; there are optimization mechanisms in the database.
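The predicate pushdown being described can be sketched as follows. This is an illustrative Python toy, not the actual reference design: the predicate tuple format, the device-supported operator set, and the in-memory "device" filter are all assumptions made for the example.

```python
import re

# Hypothetical set of predicate operators the device can evaluate in
# hardware (substring search and regular expression search, per the talk).
DEVICE_OPS = {"contains", "regex"}

def partition(predicates):
    """Split query predicates into device-offloadable and host-only sets."""
    device = [p for p in predicates if p[0] in DEVICE_OPS]
    host = [p for p in predicates if p[0] not in DEVICE_OPS]
    return device, host

def device_filter(rows, predicates):
    """Stand-in for the on-device filtering stage."""
    for op, column, arg in predicates:
        if op == "contains":
            rows = [r for r in rows if arg in r[column]]
        elif op == "regex":
            rows = [r for r in rows if re.search(arg, r[column])]
    return rows

rows = [{"name": "alice"}, {"name": "bob"}, {"name": "carol"}]
preds = [("contains", "name", "o"), ("sum", "name", None)]
device_preds, host_preds = partition(preds)
print(device_filter(rows, device_preds))
# → [{'name': 'bob'}, {'name': 'carol'}]
```

Only the rows surviving the device-side filter travel back to the host, where the unsupported predicates (here the hypothetical `sum`) run as usual, which is the bandwidth saving the talk is after.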
H
I think they call it a scheduler in the database, which you are probably more aware of than I am, but there are ways by which the scheduler can actually rework the schedule such that we can pick out the portions that can be offloaded and send them early to the device, so that the bulk of the filtering is done in the device before the more complex SQL portions are handled in the host itself. That is the approach that we are looking at. I don't know if that
H
okay, okay, that clearly answers it. So we are not actually trying to implement full SQL queries here, because that would probably be a task for a much bigger FPGA, and even then it would still be limited in what it can do. We're cutting and slicing the problem down to portions that can be handled easily by this device while still giving an effective speed-up to the overall application.
A
Oh, thank you, I appreciate that. Just one other question from my side: I was intrigued that, in terms of splitting the machine learning model, I'm surmising that's done by your software partner, whose name I can't recall; I apologize.
H
Yes, that is handled by the software. Mipsology has a software product called Zebra; that is the one that is available. It is partly run on the host machine and partly run on the FPGA device, and the pre-processing portion that runs on the host machine has the ability to parse the ML graph and partition it into portions that can be offloaded to the FPGA. But for the bulk of the standard machine learning models that we would use for
H
video classification and image classification, the models run completely on the FPGA. It is only for some outlier models, where you might come across a portion that cannot be run on the FPGA, that the splitting of the graph is carried out and some portions are run on the host.
A
That's great, thank you for that. I don't have any further questions, so with that, thank you so much for your informative talk; I really appreciate your time. Thank you.
A
Thank you, bye-bye. So our next talk is entitled "Towards Open Source Models of Cryogenic CMOS", and the presenters are Brian Hoskins and Pragya Shrestha from the National Institute of Standards and Technology. They are both research scientists at NIST, and they are located on the East Coast. So, Brian, do you want to share your screen?
I
Great. So I'm here; I'm Brian Hoskins, and I have Pragya with me.
J
So thank you for letting us talk a little bit about what we're doing in the field of cryogenic electronics here at NIST. We're very excited to share the work we're doing with you all. I'll start off the presentation with a little bit of background about why cryogenic electronics, and what problems we're trying to address here, and then Brian is going to talk more about what we've been doing towards open source models. Before going into the details, a little bit of background for people who may not know who we are: NIST is the National Institute of Standards and Technology, a national lab, with the mission to promote U.S. innovation and industrial competitiveness by advancing measurement science, standards, and technology.
J
So as I go forward, you'll see how much we emphasize reliability and accuracy of measurements. Moving on to cryogenic electronics.
J
The reason we went into this cryogenic electronics project is its broad application space, starting off with high-performance computing: basically, you can reduce the temperature of your electronics and boost their performance.
J
Usually that is in the temperature range of about 77 kelvin or so. The next area is electronics that would support low-temperature sources and sensors, as in the case of satellites and space missions. But the biggest application space is quantum computing, which we are working towards, basically because it has been the most important application space pushing cryogenic electronics research all over the world; a lot of big governments are putting money into it.
J
A lot of companies are also working towards it. In this particular case, quantum computers have basic components known as qubits, which sit at really low temperatures, in the range of millikelvin or so, and it is still classical electronics that controls these qubits. So, going into a little more detail on each of these applications, the first one being high-performance computing:
J
This idea came about because, in the past, researchers were looking at material systems and devices, transistors and such, at really low temperatures to understand them better, and what they found was that these materials had better performance at lower temperatures, like higher mobility, which meant that you could run the devices at higher speed. Of course, the noise would be lower at lower temperatures, and the reliability would also be better, because there would be less thermally assisted degradation.
J
You would also reduce susceptibility to latch-up, and the power consumption would also be low, due to lower supply voltages. But these are mostly measurements that make us think that high-performance computing is something that can be done with cryogenic electronics; a lot of research is being done to see whether it is actually feasible or not.
J
In the case of electronics that support low-temperature sensors, the fields are things like space missions and, especially, a lot of physics, such as particle physics, that uses low-temperature sensors. These sensors are subjected to low temperatures for various reasons, the first being that the environment the sensor sits in is itself at a really low temperature,
J
for example in space, or, in this particular case, you can see that the sensors might be inside a huge tank used for a particle physics experiment, inside an argon chamber at a temperature of 80 kelvin or so. Or the sensor may be subjected to low temperature to obtain better sensitivity.
J
For example, in this case they are counting photons, and the sensor has a temperature-dependent background: as the temperature goes down, the background reduces and the sensitivity improves. The other case is where the low temperature is required for the sensor to work at all, for example the SQUID, the superconducting quantum interference device, which is a very, very sensitive magnetic field sensor that can detect things like brain activity. But superconduction is required, so you have to place this structure at a really low temperature for it to work.
J
At present, the electronics that support these sensors are either placed away from the sensors or, if the environment is low temperature, placed in a box that you heat to about 20 degrees C or so, where you know the electronics work just fine. So finding electronics that would work at low temperatures would really help these fields.
J
Moving on to the big thing, which is quantum computers: this is a picture from Google, of a system they were using to work with 53 qubits. At present, the way it is done is that the quantum processor, the qubits that I explained earlier, has to be at low temperature to retain its information, but everything that controls it is still classical electronics, and in this case it is all at room temperature.
J
You can see all of this. Now, if you want to scale this system up to, say, tens of thousands, or even millions, of qubits, a system like this would not be feasible, as you can see. So what you would want to do is shrink the size of this electronics and put it as close to the processor as possible.
J
Now, before I move on to what is required, I'd like to say that these qubits come in different flavors; it's not the same type of qubit that everybody uses. They can be ion traps, they can be superconducting, they can be any flavor of quantum system, but in any of these cases you would still need a mix of AC and DC signals to control and read these qubits, and they would also require some simple digital circuits, like inverters and ring oscillators, and analog circuits, like voltage references or low-noise amplifiers.
J
So what you would want to do to improve the scaling is shrink the size of all the electronics that you have here and put it as close to the quantum processor as possible. For example, in this case, the multiplexers and demultiplexers are at the millikelvin stage and the rest are placed at the one-kelvin or four-kelvin stage. That seems very doable, given the very simple electronics that may be required to do this; but the problem is not the size. The size is fine.
J
We can reduce it, and you can make these circuits work, too; the problem is how well these circuits work. When you are using quantum computers, everything sits in this dilution refrigerator here, and each of these stages has a different temperature; down here is the lowest temperature. The problem is that the cooling power of each of these stages is different, and as the temperature goes down, the cooling power is reduced.
J
For example, at the four-kelvin stage you only have tens of milliwatts of cooling power, but at the millikelvin stage you only have tens of microwatts, which means that if you have an electronic circuit that works, you can place it in there, but if it starts to dissipate heat, it is just going to be a big heat load: it would increase the temperature of these stages, and then you would not get the right information that you are trying to get from the quantum processor.
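The budget arithmetic here is simple but unforgiving. A quick sketch using the rough orders of magnitude from the talk (tens of milliwatts at 4 K, tens of microwatts at millikelvin); the circuit power numbers are made-up examples:

```python
# Rough per-stage cooling budgets from the talk (orders of magnitude only).
STAGE_BUDGET_W = {
    "4K": 10e-3,   # tens of milliwatts of cooling power
    "mK": 20e-6,   # tens of microwatts of cooling power
}

def fits(stage: str, circuit_power_w: float) -> bool:
    """Does a circuit's dissipation fit within a stage's cooling power?"""
    return circuit_power_w <= STAGE_BUDGET_W[stage]

# A hypothetical 1 mW control circuit is fine at 4 K but hopeless at
# the millikelvin stage, where it would swamp the cooling power.
print(fits("4K", 1e-3))  # True
print(fits("mK", 1e-3))  # False
```

This is why the placement in the previous slide puts only the multiplexers at the millikelvin stage and keeps the rest at one to four kelvin.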
J
So that is something you want to keep in mind when building these electronics, which means the specifications are very stringent: they have to operate at really low power, and they have to be really, really efficient. To get electronics of that sort, you have to start right from the basics, right down at the devices and how they work.
J
Now, in my previous slides I was saying, okay, these cryogenic electronics work at low temperatures, 77 kelvin, four kelvin; but what I did not mention was that it is not a simple thing for these devices to work at four kelvin, because if you ask a device person, they would always say:
J
okay, are you sure these devices work fine at that low a temperature, where you would actually see the carriers freeze out? So freeze-out does occur, which means there are only a few transistor types that can work. For example, BJTs would not work, or would have a big problem working at these low temperatures, but MOSFETs are fine, because they can accommodate this freeze-out. For example, when you think about freeze-out:
J
In this particular case, your threshold voltage would shift, or increase, at lower temperatures. Your on-current increases, because your mobility is better, because there is less scattering at those low temperatures, and your subthreshold swing also improves. So now, if you look at these images, you would say, oh wow, at four kelvin it works really well, right? It has a good subthreshold swing, it has good mobility. So what is the problem?
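The subthreshold-swing improvement mentioned here has a simple thermal origin: the ideal swing is SS = n(kT/q)ln 10, which shrinks linearly with temperature. A quick Python sketch of that scaling; the ideality factor n = 1 is an idealized assumption, and real cryogenic MOSFETs are known to saturate well above this thermal limit, which is part of the modeling gap discussed in this talk:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def subthreshold_swing_mv_per_dec(temp_k: float, n: float = 1.0) -> float:
    """Ideal subthreshold swing SS = n * (kT/q) * ln(10), in mV/decade."""
    return n * (K_B * temp_k / Q_E) * math.log(10) * 1e3

for t in (300, 77, 4):
    print(f"{t:>3} K: {subthreshold_swing_mv_per_dec(t):6.2f} mV/dec")
```

The ideal numbers (about 60 mV/dec at 300 K down to under 1 mV/dec at 4 K) are why the raw plots look so impressive, and the measured deviation from them is exactly what the characterization work aims to capture.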
J
And, of course, these are being considered for use in high-performance computing. But the problem is, when we put together a circuit with the information that we had, most of the time the circuits don't work well. The logic circuits kind of work, but not as effectively or efficiently as you would want them to.
J
The analog circuits, though, have big issues, and the reason room-temperature circuit design works really well while low-temperature design does not is that, for room-temperature circuit design, there is a lot of information that is already known. Even if there are issues at those temperatures, designers already know what the issues are, so they know how to get around them. But for low temperatures there isn't much information; there is a lot of data, but not a lot of information.
J
So, let's just say, we don't even know what the problem is, let alone how to solve it. That is the big gap, and this project is trying to fill that gap: to understand the devices, and the reliability of the devices, better, so that we can move on to making accurate models and accurate circuit designs for low temperatures. We start off with device characterization for accurate PDKs. In the case of our regular commercial device models,
J
they go down to about minus 40 degrees C, and if you check what they do at low temperatures, the experimental values don't really match, because the models do not account for a lot of things that happen only at low temperatures and do not show up at high temperatures: you have kink effects; you have subthreshold slope saturation happening below 50 kelvin or so; you have a lot of mismatch and variability; and there is the incomplete ionization mechanism, which is not accounted for.
J
So this is something we are working towards: doing device characterization to get accurate models, so that these parameters can be put in, by adding parameters or even adding sub-circuits, just to make sure that your model is accurate enough for better circuit design. Now I want to add a further point here:
J
we don't just want to make the models accurate by measuring, which gives data; we also want to understand the devices a little better, understand the physics behind them a little better, because we want to make more informed decisions. We want to make sure that extrapolating the parameters is fine, and we also want to make sure
J
that selecting relevant technologies would be easier, and we want to know what kinds of reliability concerns are going to occur at low temperatures that we haven't even thought of, because they aren't even a problem at higher temperatures. And again, because NIST is a public entity, whatever information we get is public information, which means anybody can access it at any time, so it is going to be a very valuable resource. So, moving towards understanding the devices better: from history,
J
we know that if there is a time knob in the characterization, or in the measurement, it gives us a lot more information about the device, so this is the first step towards that. We built a time- and temperature-dependent characterization setup here. You can see that a 20-nanosecond sweep can be obtained all the way down to eight kelvin. Just a reminder: this 20-nanosecond sweep at these low temperatures is a time-dependent measurement.
J
It is not a frequency-dependent measurement, where you measure and put the data into circuit models to get your ID-VG characteristics; this is actually the current that is being measured. So these measurements are a lot more accurate and reliable, because you are measuring the current directly.
J
These time-dependent measurements are also important for understanding the feasibility of devices, if you are trying to look into new and novel devices. For example, this is a simple transistor, but when operated at extreme conditions, like low temperature and higher voltages, it can be used as a memory. Now, to check whether these make good memories or not, you need time-dependent measurements, because you want to know how fast they can be switched.
J
You also want to know how many times a device can switch back and forth before it fails, and you want to know its retention time: how long it stays on, or how long it stays off. So these are some of the things that can be done using time-dependent measurements.
J
This is a setup we are building, because it is not as simple as just putting a die in the cryostat; a whole circuit board needs to go in when you are trying to look at a circuit. So it is not that easy, and we also want to measure at all the different temperatures; it is not as if we just want liquid helium and liquid nitrogen temperatures,
J
we want temperatures in between, too. So we want to know that the temperature is right in these cases, and we also want to make sure that the time measurements are done right, because, for example, in this case we were looking at the collective pulse oscillators, which had frequencies in the range of six gigahertz or so, with pulses with rise times of picoseconds.
J
So we want to be able to measure these things accurately and reliably; these are some of the things we are doing right now. This is the last slide for me, and now I hand over to Brian, who is going to talk about more interesting stuff, let's say: the open source models that are being built with the partners here.
I
Yes, thank you. Thank you very much, Pragya, for all of those important technical details. So basically, NIST has historically operated as kind of a very basic science institution, and a lot of that is driven both by our mission and by the nature of the semiconductor industry as it exists today; in particular, there is a lot of proprietary information that we at NIST are actually charged to protect.
I
But one of the things we have been excited about recently is the emergence of open source manufacturable technologies. What is very exciting for us is that it makes it a little easier for NIST to move up the technology stack, and not only study the individual transistors and devices, but actually bring these basic scientific resources to bear, to potentially promote and expand existing, commercially available process design kits to support operation at 4 kelvin. And so, because of what is now called the open SKY130 process, and this great collaboration between Google, Efabless, and SkyWater, whom we started engaging with, we came up with a plan with them to develop models of all of the transistors and various devices in the open SKY130 PDK, and to develop data at four kelvin, so that people could potentially begin to design in this space. So what I am about to describe is some recent work in that effort, where we have been partnering with university partners, especially at the University of Michigan, as well as some important
I
contractors, such as ADSR, an Israeli design firm, and, perhaps most importantly, CoolCAD Electronics, which is a long-time NIST partner organization. I would also like to take a moment to recognize all the various contributors, like Tuang Zheng, David Fleischer, Akin Akturk, Duet, Martha, Tim Edwards, Jeff, Tim Ansell, and the other folks at Google and SkyWater.
I
What we are targeting in this project is gathering lots of raw data and providing fitted models based on modified BSIM, but hopefully, by providing the data to the public and to the NIST scientists, maybe developing a whole new generation of models; studying and understanding basic devices, especially primitives; and hopefully growing and developing, with the community, a library of silicon-proven IP that will work at 4 kelvin and can be used to grow and expand the emerging ecosystem in quantum information science.
I
By having this kind of regular, tiled interface, which is particularly designed for low-frequency, low-current measurements, as can be extracted from a conventional probe card, we can set ourselves up not only to measure individual devices, but potentially to produce large quantities of data from large distributions of devices, which can then be released to the public and be used not only for the most basic circuit simulations, but potentially for more sophisticated things, taking into account device-to-device variability.
I
We had to develop a system, a structure of dies, that we can use to extract the most fundamental and basic parametric parameters. So, among the things that we have developed and specified, based on the SKY130 PDK, are dense, ganged arrays of transistors, including all the various transistor types: thin- and thick-oxide, p- and n-type devices, as well as native devices. Among the different types, we have built MOSCAP arrays, designed to extract both the basic capacitance and the overlap capacitances of the various device structures, so that we can build accurate models of the transistor not only in its I-V characteristics, but also in its capacitance characteristics.
I
From there, we have expanded and looked at the various back-end-of-line elements, as well as other devices that you would find in the SKY130 PDK. We have built various Kelvin measurement structures, for things like the precision resistors as well as line and via resistances, so that we can accurately model both these additional analog elements and the parasitic resistances between these lines.
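The Kelvin structures mentioned here are four-terminal measurements: current is forced through one pair of leads and voltage is sensed through a separate pair, so lead and contact resistance drop out of the result. A toy Python sketch of the arithmetic, with made-up values:

```python
def two_wire_resistance(v_total: float, i_force: float) -> float:
    """Naive two-wire measurement: v_total includes the drops across
    the force leads, so lead resistance corrupts the result."""
    return v_total / i_force

def kelvin_resistance(v_sense: float, i_force: float) -> float:
    """Four-wire (Kelvin) measurement: voltage is sensed across the
    device under test only, through leads that carry no current."""
    return v_sense / i_force

r_dut, r_lead, i = 10.0, 2.5, 1e-3   # ohms, ohms, amps (made-up)
v_total = i * (r_dut + 2 * r_lead)   # what a two-wire setup would see
v_dut = i * r_dut                    # what the sense pair sees

print(two_wire_resistance(v_total, i))  # ~15 ohms, corrupted by the leads
print(kelvin_resistance(v_dut, i))      # ~10 ohms, the true DUT value
```

For on-die precision resistors and via chains, where the target resistance can be comparable to the probe and routing resistance, this separation is what makes the extracted values usable for model fitting.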
I
Now, once you have extracted these types of very basic parametric parameters, it is important to also have structures on a good test die which include validation modules: modules where we use this parametric data to make predictions about how actual circuits will behave. For our starting type of validation module, we have one of the simplest validation circuits, which is a frequency-divided ring oscillator; once we have done our basic parametric extraction, we are able to update the PDK.
I
Now, because we have a parametric interface which is designed for this high density of interconnects, we didn't specify something that would use, say, high-frequency GSG probes; rather, we are using the CMOS itself to assist us by doing these enhanced parametric measurements. In particular, in partnership with the University of Michigan, we built our 144-stage ring oscillator design, with a NAND-gate control, shunted into a 1024x frequency divider.
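Some rough arithmetic shows why the divider matters: an N-stage ring oscillator runs at roughly f = 1/(2N·t_d), where t_d is the per-stage delay, and dividing by 1024 brings that down into probe-card territory. The per-stage delay below is an assumed illustrative value, not a measured SKY130 number:

```python
# Back-of-the-envelope for a frequency-divided ring oscillator.
N_STAGES = 144
DIVIDER = 1024
T_STAGE = 100e-12   # assumed 100 ps per stage (illustrative only)

# Ring oscillator frequency: one full oscillation traverses the ring
# twice (rising and falling edges), hence the factor of 2.
f_ring = 1.0 / (2 * N_STAGES * T_STAGE)
f_out = f_ring / DIVIDER

print(f"ring: {f_ring/1e6:.1f} MHz, divided output: {f_out/1e3:.1f} kHz")
# → ring: 34.7 MHz, divided output: 33.9 kHz
```

Tens of kilohertz is easily measured through a conventional parametric probe card, while tens of megahertz (and much more at cryogenic speeds) would have demanded the high-frequency GSG probing the design deliberately avoids.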
I
That way we can bring the frequency down out of the high-frequency range and into a low-frequency range compatible with a traditional parametric probe card interface. Now, how well is this going? I would say it is going pretty well. Together with our partners we have been able to put together manufacturable, DRC-clean results in a test die, which, besides the specifications done by the NIST scientists and CoolCAD,
I
includes layout we were able to get from our partners at ADSR and in Michigan. The resulting test die fits nicely inside a Caravel frame on one of the available SKY130 MPW tape-outs run by Google. We were able to fit in something like 1400 pads, with hundreds of transistor test structures, 30 different capacitor test structures, 24 ring oscillator structures, 18 line and via chain modules, as well as seven diode test structures, and a few other goodies in there.
I
That will allow us to create a fundamental, basic set of primitives, which can ultimately be used to build realistic models. And of course everything is open source: you can go and take a look at the GDS file, and down the road we will be providing an arXiv publication and some other specifications, so that you can go into the details if you want to replicate this, or experiment with the die yourself.
I
So this die is very real; it is a series of DRC-clean designs, and you can see that all those concepts I was describing have been laid out in reality. In particular, ADSR was helpful with the transistor, MOSCAP, and precision resistor arrays, and the team at Michigan, using tools like gdsfactory, was able to produce some of the other line and via chain resistance
I
Primitives
and,
like
I
said,
those
diode
modules
and
memcap
modules
come
in
flavors
with
both
full
body
designs,
as
well
as
discontinuous
designs
to
help
us
extract
anti-effects.
So
all
of
these
structures,
you're
more
than
welcome
to
go
and
take
a
look.
A
lot
of
the
designs
have
been
borrowed
from
kind
of
typically
used
structures
that
you
might
in
any
handbook
on
cmos
fabrication,
characterization
the
validation
modules,
especially
I'm
pretty
excited
about.
I
They were mostly constructed from standard cells, using a whole bunch of different inverter classes of standard cells in the SKY130 library, as well as some OSU (Oklahoma State University) standard cells which have been developed on SKY130. And so we're going to have a really nice time being able to check and validate the performance of our various different models.
I
Now, that being said, this die took a few months to reach us, and time is of the essence. And so, if you were wondering if SKY130 works at 4 kelvin, I can tell you the answer is that it does, because we partnered with Google and SkyWater to gain access, ahead of receiving our test die, to other test structures that they have available, and we've already measured more than 100 transistors.
I
At 4 kelvin, and begun the process of analyzing and fitting the data. Hopefully these early sneak-peek models will be available soon, so that people developing in the open on SKY130 can potentially get a head start on designing circuits that would have a really good chance of working, as predicted, at 4 kelvin. So we're really very excited to see how we can grow this ecosystem of computing, both classical digital computing and analog electronics, down at the 4 kelvin stage.
J
So this slide is just basically a reminder that there are a lot of different pieces that need to come together for cryogenic electronics to be successful. I've spoken about most of them, and there will definitely be stuff that comes along in the future that will have to be worked out. I did not talk about packaging, but it's something important that we are thinking of working on, on the application and testing sides as well. Thank you all for listening, and I would be happy to answer any questions.
A
So I did have a couple of questions, and I'll start with this one, since you just touched upon it: in terms of packaging, I'm just curious what you're looking at for providing packaging for semiconductors working at near-zero-kelvin temperatures.
J
This is something that we've just started to work on, or, you know, just started thinking about. There's the interconnect we'll be thinking about, especially dealing with the material system that would be the best to give good thermal and electrical contact at these temperatures. Those are some of the things that we think of, like heterogeneous integration as well.
J
You know, those are a few of the things that we're thinking of, but, just to tell you, we only just started this: while putting the testing structures together, we realized that this is going to be a big problem moving forward.
A
Oh, that makes total sense; I appreciate that. So, the other interesting thing: I've been in the semiconductor industry for quite some time, and I saw the evolution from aluminum to copper interconnect years back, then going to SOI-based technology as opposed to bulk, and of course from planar, not bipolar, excuse me, to FinFET-based transistors. Each of these different changes has required updates to the parameters that go into the underlying SPICE models, or silicon models.
A
If you will, right? And they often also required updates or changes to the simulators, and then to the associated higher-level EDA tools, such as static timing analysis. I was just curious whether you're seeing a need for updates to the underlying model parameters that you capture, and also whether there would be an impact on the associated EDA tool chain.
J
That's a great question, because, yeah, I didn't want to go into the detailed technical side of it earlier, but these are some of the things that we would definitely want to look into, right? Depending on the technology, like you mentioned, FinFETs or SOI, you do have a different response to temperature as well, and those are things that you would know only if you start measuring, especially with the time-dependent measurements that I was talking about.
J
That would give us a lot more information. Now, in terms of self-heating, that might be a problem, right? For FinFETs it might be a bigger issue for low-temperature electronics, so those considerations will have to be put in place. That is definitely right. Thanks for the question; that's a very good question.
A
Just related: do you think there will be a difference... You know, as we've migrated through some of these different process technology attributes over the years, there have also been changes in circuit design techniques. In fact, I've seen processor design teams migrate from, say, full-custom domino-type logic, to complementary pass-transistor logic, to just full static, depending upon the underlying process performance parameters. Do you expect to see changes, possibly, in circuit design as a result of this?
A
Okay, I appreciate it. Any other questions from the audience?
A
Oh no, it's great. And our last speaker today is Tim Ansell from Google. He's a long-time Googler, an open source contributor, and a very strong champion of many different things from Google, including Python, as many are familiar with. Of course, he is now also a strong advocate for building an open source hardware design ecosystem, and he's the co-chair of the RISC-V CPU interest group. So with that, I'd like to introduce Tim to the audience. So, Tim.
K
Great, give me a second.
K
So this talk covers the results of what's happened in the last year for our no-cost shuttle program, and it's probably going to be a little bit different to maybe what people expected, but hopefully it will still be quite informative. The slides can be found at that link, and I want to reiterate that I'm a software engineer; I've been at Google for 14 years.
K
I've
been
doing
open
source
for
19
since
1994,
but
I'm
not
an
asic
designer
I've
not
got
a
huge
amount
of
asic
experience,
and
these
days
I
spend
most
of
my
time
actually
doing
budgets
and
reviewing
you
know,
proposals
and
these
type
of
things
pretty
much
everything
I'm
presenting
here
is
not
work.
I've
done
but
work
that
has
been
done
by
people
in
the
community
by
various
contractors.
We've
worked
with,
and
research
groups
we've
partnered
with
to
first
take
a
step
back.
K
Why
is
google
interested
in
this
whole
open
source,
asic
world,
and
the
reason
is
because
google
runs
on
computers.
It
runs
on
a
huge
amount
of
computers
and
pretty
much
everything
we
build.
These
days
has
a
huge
demand
for
compute
resources
and
with
moore's
law
kind
of
slowing
down.
We
are
seeing
that
you
know
we're
going
to
need
to
find
new
ways
to
meet
that
growing
demand
and
that
growing
demand
is
growing
extremely
quickly.
We
don't
just
need
10x,
we
need
1000x
increases
and
there,
as
you
can
see,
from
moore's
law.
K
The
number
transistor
is
still
going
up,
but
a
lot
of
the
other
things
are
not
going
up
and
we
still
see
plenty
of
room
for
improvement
like
a
63
000
x,
speed
up
just
from
moving
some
instructions
from
python
to
cmd
optimizations
before
we
even
go
into
hardware,
and
so
we're
looking
for
another
increase
in
performance.
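That 63,000x figure refers to optimizing all the way down the stack. Even the first rung of that ladder, moving per-element work out of the interpreter into optimized native code, is easy to demonstrate. A minimal stdlib-only sketch (the exact ratio varies by machine and is far smaller than the full hardware-level gains being described):

```python
import time

def count_python(text, ch):
    # Interpreted loop: one bytecode dispatch per character.
    n = 0
    for c in text:
        if c == ch:
            n += 1
    return n

text = "abcab" * 200_000  # ~1 MB of text

t0 = time.perf_counter()
n_py = count_python(text, "a")
t_py = time.perf_counter() - t0

t0 = time.perf_counter()
n_c = text.count("a")  # same task, C-implemented inner loop
t_c = time.perf_counter() - t0

assert n_py == n_c
print(f"interpreted: {t_py*1e3:.1f} ms, native: {t_c*1e3:.1f} ms (~{t_py/t_c:.0f}x)")
```

The same answer comes out of both paths; only the cost per element changes, which is the point Tim is making about how much headroom remains before custom silicon even enters the picture.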
K
That
way
and
to
do
that,
we
need
faster,
cheaper
and
more
custom.
Silicon
and
an
example
of
where
we've
been
successful
with
this
is
with
the
tpu,
but
in
designing
the
tpu,
we
found
that
there's
an
increasing
gap
between
what
processnode
is
able
to
achieve
and
there's
skyrocketing
costs
in
doing
silicon,
which
is
actually
the
total
opposite
of
what
we
want.
K
In
the
us,
there
are
about
70
000
computer
hardware,
engineers,
which
is
growing
six
percent
year
on
year,
which
is
actually
a
pretty
good
growth
rate.
But
if
you
look,
there
are
800
000
software
engineers
and
that
market
is
growing
at
30
percent
year
on
year.
So
really
we
need
a
lot
more
of
these
software
engineers
to
do
hard
work
and
we
want
to
convert
those
software
engineers
into
hardware
engineers
and
we
want
to
speed
up
the
iteration
loop
between
developing
applications
and
developing
hardware
description,
that's
suitable
for
those
applications
and
then
implementing
it.
K
Google
published
a
paper
called
the
hardware
lottery
which
talks
a
bit
about
how
there's
this
feedback
loop,
which
is
preventing
us
from
exploring
a
lot
of
different
options,
and
we
want
it
to
make
it
much
easier
for
us
to
break
out
of
those
pre-determined
solutions
and
what
was
blocking
open
collaboration
in
our
eyes
was
a
couple
of
things,
but
one
of
the
biggest
was
a
manufacturable
pdk
that
was
fully
open
source,
and
so
I
put
together
this
idea
that
if
you
created
a
and
released
an
open
source
pdk,
you
funded
eda
tool
integration,
and
you
run
a
silicon
realization
program
that
you
would
get
a
massive
increase
in
activity
in
this
space
and
the
first
part
was
an
open
source
pdk.
K
The
second
part
was
eda
tool,
integration
with
a
large
number
of
open
source
tools
which
can
be
viewed
in
this
way
or
it
can
be
viewed
in
a
separate
way.
Looking
at
how
you
connect
to
various
things
and
a
no-cast
shuttle
program.
So
that
was
three
things
that
we
needed
to
do
to
succeed.
Here
was
the
hypothesis,
these
three
things,
and
we
really
thought
that
the
open
pdk
would
be
a
strong
driver
in
improving
both
hdl
designs
and
eda
tooling.
K
It has a lot of different features, and what we discovered is that in the week after we had released the PDK, it had almost as many stars as hardware projects like LiteX and Chisel that have been around for five to ten years.
K
We saw many unique users visiting the SkyWater PDK repository, and we now have 2,800 members on the Slack discussion channel where you can go and discuss the PDK. So that was a pretty strong signal that there was something really good going on here. And so we had the silicon realization program, and since then we've done a lot of work in that area.
K
But
one
thing
I
wanted
to
point
out
is:
I
mentioned
this
in
the
original
talk
is
that
this
is
all
new
and
things
are
going
to
be
bumpy.
You
know
we
have
a
new
pdk.
Although
the
process
technology
itself
has
been
used
and
available
for
a
good
15
20
years,
this
open
source
pdk
was
new
a
lot
of
the
eda
tools,
even
though
they
have
been
around
for
a
long
time
have
only
just
been
integrated
with
the
sky
130,
and
this
was
the
first
time
we
had
run
a
silicon
realization
program.
K
Every
part
of
this
hypothesis
was
new,
and
so
the
expectation
that
things
were
going
to
be
bumpy
should
be
well
understood
here,
but
that's
kind
of
why
we're
doing
this
we're
trying
to
push
the
state
of
the
art
forward
we're
trying
to
enable
these
things
to
actually
you
know
attempt
to
do
this
and
it's
okay,
if
things
don't
work
the
first
time,
if
things
you
know,
take
a
couple
attempts
to
work.
That
is
fine.
K
We
want
to
see
people
iterating
and
pushing
things
forward,
and
so
the
plan
kind
of
looked
like
this,
that
we
had
the
first
run
at
the
end
of
2020,
and
then
we
had
a
couple
of
runs
in
2021
and
there'll
be
more
in
2022,
and
so,
if
you
put
them
on
a
timeline,
this
is
kind
of
where
the
whole
silicon
realization
map
looks
like,
and
so
in
2020
we
ended
up
doing
45,
unique
tape
outs
as
part
of
this
program,
then
in
2021.
K
We
did
over
200
designs
once
you
put
together
the
open,
mpw
program
run
by
google
and
the
chip
ignite
commercial
program
that
you
know
effortless
launched
last
year
and
we've
already
had
one
shuttle
run
in
2022
and
who
knows
how
many
submissions
we're
going
to
get
this
year
because
for
the
first
shadow
run
for
the
first
time
ever,
we
had
more
submissions
than
we
had
slots,
and
so
that's
pretty
awesome.
K
But
when
I'm
talking
here,
I'm
talking
about
people
submitting
drc
clean
ready
to
manufacture
chips,
but
there's
always
a
question
of
do
the
chips
work
well.
Sadly,
in
silicon
manufacturing
you
have
to
wait
until
they're
actually
built
and
sent
back
to
you,
and
so
where
we
were
at
for
a
long
time
was
we
got
silicon
back
from
the
initial
test
designs
and
they
were
working.
K
This
is
what
gave
us
confidence
to
launch
the
mpw
program.
We
had
five
test
designs
and
a
majority
of
these
worked.
These
were
the
strive
chips
that
muhammad
kassam
talked
about
in
his
dialogue,
talk
series,
and
so
with
that
five
working
chips,
we
moved
forward
with
the
mpw
program
and
had
mpw
one
sent
off
for
manufacturing
and
then
you
know,
launched
mpw2.
K
Sadly,
mpw1
took
a
little
bit
longer
than
we'd
like
to
get
back
as
well.
Chip
ignite
one
was
sent
off
to
be
manufactured
and
we
were
getting
close
to
finishing
mpw2
and
sending
that
off
to
be
manufactured
when
the
silicon
for
mpw1
came
back,
and
so
this
was
pretty
exciting.
We
had
the
first
silicon
back
from
our
mprw
program,
but
when
we
got
it
back,
we
started
to
notice
that
things
weren't
working.
As
expected,
we
discovered
some
issues
in
how
the
design
was
put
together.
K
That
was
preventing
the
solution
from
working
as
intended,
and
in
many
cases
many
silicon
experts
told
us
this
design
was
fatal
and
that
there
would
be
no
possibility
of
getting
working
chips
out
of
mpw1,
and
this
is
an
important
milestone
for
the
program,
because
it
really
shows
the
importance
of
why
we
need
to
build
real
silicon.
There
are
lots
of
people
who
had
built
fake
silicon
with
these
tools,
but
until
you
actually
build
real
silicon,
do
you
know
if
this
solution
will
actually
work?
K
And
so,
with
these
chips?
Coming
back
in
a
less
than
perfect
stage,
we
quickly
put
a
hole
on
mpw2
which
hadn't
started
manufacturing.
K
Yet
so
mpw2
was
in
the
process
of
getting
prepared
to
send
off
to
get
manufacturing,
but
we
had
caught
it
just
in
time
before
it
had
been
off
to
manufacturing
and
then
shortly
after
mpw3
came
in
and
we
held
on
to
mbw3
until
we
were
able
to
better
understand
the
fix
for
the
problem.
K
But
unfortunately,
chip
ignite
had
already
started
being
manufactured.
So
this
silicon
that
evapolis
was
making
was
already
in
the
process
of
being
manufactured
and
had
a
known
issue
with
it.
So
that
required
some
very
ingenious
engineering
from
the
team
at
efabulos
to
do
a
fix
to
the
design
by
only
changing
the
metal
at
the
top
layers
which
hadn't
been
manufactured
yet
and
muhammad
qasam
gave
a
very
interesting
talk
about
how
they
were
able
to
solve
this
problem.
K
Just
using
a
metal
fix
and
their
talk
is
deceptively
called,
making
open
source
chips
work,
but
it
really
should
be
called
how
to
fix
a
chip
that
is
already
half
manufactured,
which
is
a
really
exciting
thing
to
see
happen
in
the
open
that
you
can
go
and
look
at.
You
can
go
and
look
at
how
their
fixes
were
made
and
replicate
that
result
and
for
some
reason
the
slide
isn't
showing
up,
but
they
were
able
to
create
a
metal
layer
fix
that
they
believed
would
work
for
cheap
ignite.
K
At
the
same
time,
the
team
also
was
looking
at.
How
do
we
proceed
with
mpw1,
mbw2
and
mpw3,
and
they
created
a
different
fix
that
was
much
more
a
real
fix
to
the
problem
and
sent
those
to
off
for
manufacturing.
K
At
the
same
time,
these
chips
that
were
in
mpw1
that
were
supposedly
never
going
to
work.
It
turns
out
that
some
people
in
the
community
didn't
get
the
message
and
actually
started
making
those
chips
work.
It
definitely
required
a
lot
more
work
than
we
would
have
liked,
but
it
actually
showed
that,
even
with
the
somewhat
serious
issues
that
mpw
one
had,
the
community
was
able
to
get
value
out
of
this
tape
out
and
in
fact
they
got
some
of
the
chips
working
well
enough.
K
That
matt,
then,
was
able
to
verify
that
all
his
designs
on
mpw1
worked
and
now
has
a
clock
on
his
desk
powered
by
the
first
asic
he
ever
made.
I
want
the
really
cool
things
about
matt
venn
is
he'd,
never
done
icy
design
before
this
program
started,
and
so
this
is
a
really
awesome
example
of
what
beginners
and
people
with
different
backgrounds
can
bring
to
this
field.
Who
don't
have
the
preconceived
notions
that
maybe
industry
experts
have,
and
so
what
we
ended
up
with
is
that
the
mpw
one
had
issues,
but
solutions
were
found.
K
K
Weston Braun was able to demonstrate that his solution on chipIgnite worked correctly, without the fixes needed with MPW1, and in fact Efabless was able to work with Stanford, who had a large number of chips on chipIgnite, to get pretty much all of their chips that didn't have errors from the design side working. So where we are today is that we have those five initial working test ICs.
K
We
have
the
chips
back
from
mbw,
one
which
work
with
a
bit
of
effort,
and
we
have
the
chips
that
came
back
from
chip
ignite
that
are
working
pretty
well,
thanks
to
that
metalfix
and
very
soon
we'll
be
getting
back
the
chips
from
mpw2
and
mpw3
and
we're
excited
to
see
the
results
of
that,
and
hopefully,
mpw2
and
mpw3
actually
work,
and
we
see
some
very
cool
results
from
that.
K
The
other
thing,
though,
is
all
that
silicon
we
got
back
has
been
tested
and
working
like
characterized.
So
one
of
the
test
designs
that
we
did
in
those
five
initial
test
designs
was
a
open
ram
test.
Chip
and
andrew
zotenberg
has
a
really
good
youtube
series
interviewed
from
ben
about
his
work
to
characterize
openram
and
there's
all
this
data
is
publicly
available.
K
As
well,
this
seems
to
be
a
duplicate
site,
but
we
have
silicon
back
for
these
things
and
with
silicon
back,
we
can
start
doing
imaging.
Even
if
chips
don't
work,
and
so
john
mcmaster
has
done
a
bunch
of
imaging
of
the
silicon.
I
gave
a
talk
at
the
open
tape
out
conference
about
his
imaging.
K
You
can
go
and
look
at
some
images
of
various
parts
that
he's
done,
we're
now
getting
same
images
of
cross
sections,
and
so
you
can
see
here
how
a
sem
image
of
a
cross
section
of
the
sky
water.
K
You
know,
process
technology
compares
to
the
actual
diagram
that
is
found
in
the
pdk,
and
these
are
publicly
available
that
anyone
can
go
and
look
at
and
compare
and
we're
hoping
other
people
will
produce
these
type
of
images
and
share
them
with
people.
A
really
cool
thing
here
is
this
is
matt
bens.
K
K
K
K
We
have
added
re-ram
since
mpw4
as
a
default
option
that
is
available
on
these
runs,
so
you
can
now
play
with
re-ram
on
the
sky
130
process
technology.
I'm
really
interested
to
understand
how
re-ramp
interacts
with
things
like
cryo-cool
temperatures.
Does
it
work?
Does
it
fail?
I
don't
know
I
love
to
see
us
get
to
that.
K
I
also
want
to
come
back
to
one
of
the
key
things
we've
discovered
is
that
this
open
source
solution
has
really
enabled
a
huge
number
of
first-time
designers.
K
We're
also
seeing
a
wide
variety
of
different
types
of
designs,
and
this
is
barely
covering
the
wide
variety
of
designs
we're
seeing
now.
We've
also
seen
efablus
launch
their
chip
ignite
commercial
program.
So
if
you
don't
want
your
designs
to
be
open
source,
you
can
use
the
same
open
source,
pdk
and
open
source
tools,
but
keep
your
designs
closed,
and
this
is
a
massive
drop
in
the
cost
required
to
do.
Asics
then,
has
also
enabled
you
to
take
multiple
projects
and
fit
them
in
a
single
tile.
K
This
again
dramatically
drops
the
cost
of
doing
asics
and
enables
this
huge
surge
of
new
people
who
have
never
done
this
stuff
before
so.
What's
next
well,
this
year,
we'll
be
launching
a
second
boundary
and
we're
hoping
to
have
new
programs,
one
on
an
advanced
90,
nanometer
process,
node
and
one
on
a
budget
180
process
node
and
we're
also
looking
at
cloud-based
access
to
advanced
pdks.
K
These
are
all
process
nodes,
but
what
about
more
advanced
process
nodes?
Well
we're
trying
to
figure
out
how
we
can
provide
cloud
access
to
proprietary
pdks
like
gf12op
and
intel
22
ffl,
and
when
doing
this.
The
reason
these
are
selected
is
because
gf120p
is
the
best
supported,
advanced
node
process
technology
by
open
road.
K
In
fact,
the
university
of
michigan
has
actually
taken
out
a
chip
on
gf12lp,
which
I
believe
they've
got
back,
there's
also
the
intel
22
ffl,
which
is
also
somewhat
supported
in
open
road,
and
that
is
again
why
that
technology
is
of
interest
to
us.
It's
the
fact.
The
open
source
tools
support
it
that
make
it
interesting
and,
ultimately,
though,
we're
looking
to
a
future
where
we
need
a
lot
more
portability
between
things
like
process
nodes.
K
If
you
look
at
the
variety
of
stuff
that
open
road
plans
to
support
with
this,
it
gives
us
a
lot
more
ability
to
have
cross
process
portability,
cross,
foundry
portability
and
even
things
like
cross
technology
portability,
where
we
end
up
with
a
truly
portable
solution,
where
you
don't
have
to
reinvent
the
wheel
for
every
single
design.
You
do
and
an
example
of
this
portability
is,
if
you
look
at
the
eda,
tooling
integration,
a
group
that
has
really
pushed
this
further
than
anybody
else.
K
Is
the
analog
generation
done
by
university
of
michigan
and
medi
at
the
university
of
michigan
with
the
open
fa
sock,
and
we
know
that
analog
is
one
of
the
biggest
sticking
points
in
developing
new
asics,
and
this
was
a
part
of
the
missing
ip
point
that
I
pointed
out
in
my
original
paper
and
why
I've
been
investing
in
these
analog
generators.
K
The
tool
will
still
support
the
proprietary
tools,
but
by
having
a
fully
open
source
flow,
enabled
us
to
do
the
collaborations
like
what
we
did
with
nist,
and
they
have
used
this
very
successfully
to
do
tape
outs
at
both
sky
130
and
gf
12
lp.
And
if
you
look
at
those
two
process
technologies,
I
don't
think
you
can
get
a
bigger
delta
in
difference
here.
K
One's
a
130,
nanometer,
planar
bulk
process
and
the
other
is
a
12
nanometer
finfet
process
and
by
using
the
open
source
tools
and
this
scripting
generation
approach
they've
been
able
to
target
both
these
processes
and
they
have
silicon
proving
a
lot
of
these
examples.
K
You can kind of see this test IC here: it has some DC-to-DC converters and a fully mixed-signal SoC, and then they replicated a whole bunch of that design on SKY130. They were able to do this significantly faster, even though they had originally developed it on GF12LP, and they have done measurements of these results and provided quite detailed characterization of this result on SKY130, and this is super interesting.
K
And this is just another thing, and so this is really showing that this portable analog is possible. The great thing about this is that GF12LP is significantly more expensive and significantly harder to get access to than SKY130, and you can see all these connections between these two very different process technologies.
K
So
there's
also
the
berkeley
analog
generator
that
we
invested
a
bit
in
and
as
part
of
that,
there
is
the
bag
three
primitives
available,
but
sadly,
at
the
moment
that
still
requires
closed
source
software
and
the
proprietary
s8
pdk,
and
so
that's
been
a
lot
less
successful
than
the
open
fa
sock
work,
which
has
had
multiple
tape
outs
and
has
a
fully
open
source
flow.
A
Hey Tim, thanks so much; that was a very informative talk and update. I really appreciate it. So, definitely some exciting work that you are championing here in the open source community.
A
You know, the one question, as you know well, I've been in industry for a long time, and I often think about the attitude that hardware engineers have, which, I have to say, is unfortunately ingrained in them from day one: basically, that failure is not an option. In other words, once you tape a chip out, it darn well better work. So, you know, one of the things that you know...
K
So I totally agree; I think the hardware people have been put in a really difficult position. If you're doing a tapeout on seven nanometers, that chip better work, because you just paid millions of dollars just for your MPW run.
K
What
has
been
lacking
in
this
industry
is
easy
and
cheap
access
to
these
process
technologies
that
aren't
so
expensive,
it's
much
better
to
fail
on
130
nanometers
than
it
is
to
fail
on
12
nanometers,
which
is
also
what
the
sky
130
process
technology.
You
know,
example
showing
like
the
problem
that
was
found
with
mpw1.
K
I
would
much
rather
define
that
problem
on
sky
130,
where
a
tape
out
is,
you
know,
10x,
cheaper
than
find
that
out
when
we
finally
do
gf12lp
tape
out
right
right,
and
so
it's
about
reducing
the
cost
of
failure
and
utilizing
what
resources
we
have
access
to
right.
Like
these
old
process,
technologies
are
old
right,
they've
been
around
for
20
years.
K
Everybody
should
be
using
them
to
test
theories
all
the
time,
but
there's
such
a
huge
focus
on.
Oh,
I
only
care
about
seven
nanometers
that
you've
ended
up
being
a
very
risk
averse
right
like
because
you
can't
fail,
and
you
have
no
history
to
build
on
to
say,
look
well.
It
shouldn't
fail
because
I've
done
it
10
times
on
sky
130.
K
First
right,
I
have
a
lot
more
confidence
in
the
results
that
open
fa
sock
gets
for
gf-12
because
I'm
able
to
see
their
results
for
sky
130,
and
you
know
they
are
always
going
to
be
failure
right.
If
you're
not
getting
failure,
then
you're
not
pushing
hard
enough
the
boundaries
of
what
you
can
do,
and
you
know
I'm
only
new
to
silicon
valley.
K
I've
only
been
in
silicon
valley
for
about
four
years,
but
one
thing
I
have
really
noticed
is
that
they
really
understand
this
idea
that
the
next
big
thing
looks
like
a
crazy
idea,
because
if
you,
if
it
was
obvious
what
the
next
thing
is
you'd
just
be
doing
it,
and
so
you
need
to
enable
a
lot
of
crazy
ideas,
and
most
of
them
are
going
to
fail
right,
because
you
need
to
enable
that
one
crazy
idea,
which
totally
looks
like
every
other
crazy
idea,
because
it's
the
big
thing,
even
though
all
these
other
crazy
ideas
are
really
crazy
and
you
know,
are
not
going
to
work,
and
so
you
know
this
very
much
is
a
mentality
type
of
thing
as
well,
by
increasing
the
number
of
people.
K
Here,
though,
like
a
lot
of
the
people,
you
know
that
I
talked
to
in
industry
when
they
looked
at
the
stuff
that,
like
matt
ben
and
tnt,
did
they're
like
oh
yeah.
You
could
totally
have
done
that
I've
done
that
for
previous
chips.
K
When
I've
been
in
a
bind,
it
was
kind
of
a
oh
yeah,
okay,
if
you're
willing
to
put
in
that
effort,
you
could
solve
these
problems,
but
I
assumed
you
weren't
willing
to
put
in
the
effort,
and
so
I
think
that's
also
a
really
interesting
thing
here
is
that
people
underestimate.
A
No, I appreciate that. You know, one other thing I'd like your thoughts on: getting more people involved in open source EDA. A number of the universities, and you mentioned Michigan, do a lot of great work in EDA and in training young engineers for that. But in terms of trying to get more people interested in EDA development, what do you think we might need to do in that regard?
K
I
think
the
big
thing
is:
we
need
to
stop
isolating
eda
from
the
rest
of
the
software
development
world
and
start
treating
it
just
like
any
other
piece
of
software
development.
There's
you
know,
eda
problems
are
easy
problems.
Don't
get
me
wrong,
but
you
know
it's
not
an
easy
problem
to
index.
You
know
the
world's
internet,
it's
also
not
a
easy
problem
to
sequence,
the
human
genome
and
these
type
of
things
either
right
like
hard
problems
everywhere,
the
more
we
can
get
and
utilize.
K
You
know
shared
knowledge,
the
more
likely
we
are
to
move
this
forward
faster
and
take
advantage
of
a
lot
of
work
that
other
groups
are
doing.
It
turns
out
for,
like
software
development,
in
many
cases,
90
of
the
work
in
software
development
has
nothing
to
do
with
the
domain
you're
working
on
it
has
to
do
with.
How
do
you
package
the
stuff?
How
do
you
get
it
onto
the
user's
workstation
in
a
way
that
it
works?
K
And
you
know
there
are
many
different
competing
technologies
to
do
this
and
it's
a
billion
dollar
industry.
You
know,
containers
and
cloud-based
stuff
is
huge
and
we
need
eda
tools
to
work
and
integrate
with
these
solutions,
because,
if
you're
reinventing
just
for
the
eda
space,
it's
not
a
big
enough
space
by
itself
at
the
moment
to
have
custom
solutions
for
everything
they
should
be
reusing.
K
You
know
things
like
python.
They
should
be
reusing
things
like
notebooks.
They
should
be
reusing
things
like
airflow,
and
you
know
these
other
things
that
you
know
the
machine
learning
people
are
using
the
data
science
people
are
using
the
geo.
People
are
using.
One
of
the
cool
things
I
was
looking
at
the
other
day
is
that
the
geo
people
have
a
really
nice
setup
for
viewing.
K
You
know
large
data
sets
of
polygons,
and
you
know
what
gds
is
a
large
data
set
of
polygons
so
rather
than
reinvent
a
new
way
to
view
the
you
know,
gds
data.
What
we
did
was
just
convert
the
gds
data
into
the
geoscience
format
and
use
their
very
fancy.
You
know
map
viewer,
to
view
the
gds,
and
you
know:
there's
parallels
there
and
the
people
doing
map
stuff,
there's
a
huge,
that's
a
huge
field
and
they've
spent
lots
of
time.
You
know
figuring
out.
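The "polygons are polygons" idea above can be sketched directly. Tim doesn't name the exact converter, so the function and field names here are hypothetical, and a real pipeline would read the GDS with an actual parser; but once the polygons are extracted as point lists, wrapping them for a GIS-style map viewer is roughly this:

```python
import json

def polygons_to_geojson(polygons):
    """Wrap layout polygons, given as (layer, [(x, y), ...]) pairs,
    in a GeoJSON FeatureCollection so generic map viewers can render them."""
    features = []
    for layer, points in polygons:
        ring = [list(p) for p in points]
        if ring[0] != ring[-1]:
            ring.append(ring[0])  # GeoJSON linear rings must be closed
        features.append({
            "type": "Feature",
            "properties": {"layer": layer},  # layer name rides along as metadata
            "geometry": {"type": "Polygon", "coordinates": [ring]},
        })
    return {"type": "FeatureCollection", "features": features}

# A hypothetical metal-1 rectangle, coordinates in microns.
gj = polygons_to_geojson([("met1", [(0, 0), (10, 0), (10, 2), (0, 2)])])
print(json.dumps(gj)[:80])
```

The payoff of reusing the GIS format is exactly what Tim describes next: tiling, polygon bisection and petabyte-scale viewing are already solved problems in that world.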
K
How
do
you
cut
up
tiles
in
an
efficient
way?
How
do
you
deal
with
the
fact
that
you
know
you
bisect
polygons
in
a
way
and
their
data
sets,
you
know,
are
petabytes
in
size
in
many
cases
right
like
they're,
not
small
data
sets.
So
this
idea
that
eda
is
the
only
group
that
has
big
data
sets
and
the
only
group
that
has
any
of
these
problems.
You
know,
I
don't
think,
is
the
right
attitude.
You
should
you
know
work
with
as
many
of
other
groups
as
you
can.
K
I
was
talking
to
someone
recently
about
optical
simulation
and,
if
you
look
at
what
things
people
like
nvidia,
doing
with
modern
ray
tracing
for
games,
it's
starting
to
look
very
similar
to
the
type
of
optical
simulations,
you
need
to
do
for
very
advanced
optical
stuff,
and
you
know
there's
some
lots
and
lots
of
overlap
with
all
these
other
fields.
A
A
K
I definitely think there needs to be a revolution, because pretty much every problem in EDA is an NP-hard optimization problem, and so you've got NP-hard optimization problems stacked on NP-hard optimization problems.
K
And, you know, people like Google are going to try and use every spare compute cycle we can to improve the performance of the next chip we're doing, because it makes sense to do that. A really important aspect of this industry is that we need to stop doing small numbers of experiments.
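A toy instance of the kind of NP-hard optimization Tim means: one-dimensional cell placement minimizing total wirelength, solved by brute force. The cells and nets below are invented for illustration; the factorial search is exactly what stops this approach from scaling past a handful of cells, which is why placers use heuristics:

```python
from itertools import permutations

def wirelength(order, nets):
    """Total wirelength for cells placed at slots 0..n-1 in `order`.
    Each net is a pair of cell names; its length is the slot distance."""
    slot = {cell: i for i, cell in enumerate(order)}
    return sum(abs(slot[a] - slot[b]) for a, b in nets)

def best_placement(cells, nets):
    # Exhaustive search over n! orderings: fine for 4 cells, hopeless for 400.
    return min(permutations(cells), key=lambda order: wirelength(order, nets))

cells = ["a", "b", "c", "d"]
nets = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
order = best_placement(cells, nets)
print(order, wirelength(order, nets))
```

Every extra cell multiplies the search space, so sharing many measured data points across the community, rather than each team probing its own tiny corner of the space, is the argument being made here.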
K
Everybody's
individual
data
sets
is
like
a
little
point
on
this
massive
optimization
space
and
we
need
as
much
of
it
shared
with
as
many
people
as
possible,
so
that
we
can
better
understand
this
whole
space,
and
so
really
we
need
a
lot
more
open
collaboration,
which
cloud
can
make
a
lot
easier.
You
know
it's
much
harder
to
collaborate
when
you
can
only
run
it
if
you
have
the
correct
license
or
all
these
other
type
of
things,
so
I
do
think
there's
this
revolution's
coming.
K
I
don't
think
the
existing
proprietary
eda
tools
are
going
anywhere.
I
think
they
serve
a
existing
set
of
users
really
well
and
in
the
same
way
you
know
windows
continues
to
exist,
even
though
linux
and
chromos
came
about
you
know.
It's
definitely
a
case
that
the
proprietary
eda
tools
are
going
to
continue
to
exist
and
I
think
they're
going
to
explode
in
users
as
well,
just
as
the
open
source.
You
know,
user
ecosystem
exploded
with
things
like
linux
and
gcc
and
llvm
these
all.
K
You
know
really
enable
more
people
to
do
cool
stuff
and
people
are
going
to
need
to
use
proprietary
tools
for
some
things.
But
we
really
do
need
a
base
of
open
source
to
get
people
started
and
into
the
field.
K
A
That
makes
total
sense.
Thank
you
again
for
your
time
today
tim
and
an
outstanding
presentation
and
updating
the
community
on
where
things
are,
so
I
think
it's
quite
exciting.
Thank
you.
A
Thank
you
for
having
me,
oh
our
pleasure,
so
with
that
folks.
That
concludes
our
seminar
today.
We
will
make
the
recording
available
for
folks
to
view
later
on,
and
also
the
presentations
too.
So
my
thanks
to
all
the
presenters
and
all
the
hard
work
to
put
together
the
talks
and
get
them
out
to
us.
So
thank
you
again
so
hope
everyone
has
an
enjoyable
remainder
of
the
day.
Thank
you.
Bye-Bye.