From YouTube: Open Source SV UVM Support
A: So today I'm going to talk about UVM support in Verilator, and basically about scaling to large commercial designs that you can run with Verilator today, or perhaps might be able to run soon. Generally speaking, Verilator is something of an outlier in that it's an open-source tool that is being used very widely and very effectively in ASIC design.
A: I believe we have some really good news to share today. Of course there's still work in progress, and this is kind of a status report talk, but it's full of good things that are already happening, and hopefully next time we meet we'll have even more good news to share. Starting with a little bit of background on open-source verification: as a company, Antmicro is doing work in different areas of FPGA and ASIC design, and there are of course different ways people approach verification.
A: We as a company are quite involved with newer methodologies like cocotb, using Python for verification, trying to innovate in the workflow itself. But as it happens, the majority of the world is using more traditional methodologies like UVM, especially for ASIC design. There's such a pool of talent, and there are so many designs and testbenches that already exist and just use UVM, that you can't just say 'oh, we're going to neglect them and do our own thing'. Supporting them is, I believe, the right way to approach this.
A: But of course this talk is more about UVM itself and how we're trying to support it for commercial endeavors. This has been talked about for a long time; it's not a new topic, but it was always considered some kind of moonshot endeavor. Everyone says 'yeah, we'd love to have UVM', but then everyone also says 'come on, how can we do that when there are years and years and years of development in the proprietary space, and open source just didn't do it?'
A: We believe that open source has this magical ability of iteratively getting towards an outcome, and as you go it snowballs into something you didn't even expect. That's kind of what I think we're achieving here. So we set out to add UVM to Verilator, and we knew it was a big endeavor; it wasn't supposed to just happen overnight. But since we have CHIPS Alliance, we have a lot of companies and people collaborating together, and we decided that pooling those resources might actually make it happen.
A: Sometime down the road, that is. We didn't really plan 'okay, it's going to be 2025' or any specific date; it was more like, if we start doing it, we'll get there eventually. Western Digital has definitely been a very kind sponsor of much of this work, but the whole beauty of CHIPS Alliance is that we don't just work with them; there are actually lots of other people we're collaborating with, both within and outside of CHIPS Alliance.
A: Verilator itself is a great community-driven, collaborative project, where a lot of developments are happening independently of this work and benefiting it, and some of our work is benefiting other people, which is really nice. One major milestone that we achieved some time ago was adding event-driven simulation to Verilator. Verilator originally didn't do event-driven simulation, so people didn't consider it a good candidate for running UVM, because you need that capability; people thought, well, Verilator is just not for this.
A: It didn't do it before, but since it's already a good and widely recognized tool, it's a very good thing to build upon. You could think of building something new, but then you'd also have to earn the brand recognition and put all the plumbing in place to actually make it a functional simulator, and so on. So we decided that sticking with an established framework like Verilator was a good idea, even though it required a major overhaul of how it was doing things to incorporate this.
A: Of course, this is an enormous task, and we're not saying that we're there. But we believe that picking a more iterative approach, and not looking at it as a binary endeavor where you have to do everything or nothing, but rather getting to somewhere functional and productive and iterating from there, makes it feasible.
A: I talked about this already in Paris: in May we had this event called RISC-V Week, and I gave a talk there which was pretty much this same kind of state-of-the-union talk, noting that the event-driven simulation capability was there. So in May the capability existed, but it wasn't yet mainlined into Verilator. It seemed like we were on a good track to mainlining it, so we weren't really afraid that it wasn't going to happen.
A: We were collaborating with Wilson Snyder and others to make sure that this was work that could be mainlined, but it wasn't yet; it was still a fork of Verilator that could do event-driven simulation. And of course at that point we also knew there were heaps and heaps of other things, issues and features, that needed to be added to Verilator to support UVM-style verification.
A: But we also knew that we would probably be working on them in the future. So in May in Paris we knew that some project work was coming our way to make it happen, so we could be optimistic about it. Let's see if our optimism was warranted. I called this part of the presentation 'From Paris to Sunnyvale', because essentially we've traveled both in time and space to get here: it's been half a year and many kilometers, but it's the same work.
A: First up: between Paris and now, we ended up mainlining this event-driven capability, and that was of course really important, because it's a big change. It's a scheduler that we have to optionally enable, and we've worked very hard on this with Wilson Snyder and Geza Lore and other people too; the different pieces of work complemented each other and helped achieve this.
A: Verilator 5.002, I think, is the version that includes this feature. Generally speaking, it's there and you can just turn it on. I didn't put the flag you have to use on the slide, but generally speaking it's a flag and it's optional. So to everyone who's afraid 'oh no, this is going to break the way I use Verilator':
A: It's not going to do that, because it's entirely optional. If you're not doing these kinds of things, if you don't want event-driven simulation, just keep using Verilator as you did; but if you do want to run event-driven simulations, there's a flag for you that enables that kind of scheduling. So this was merged, and of course this is just the beginning. We knew that while UVM is one word, it's actually like a thousand different small things that you need to do to enable it.
A: So we decided that we needed to get on with it, and do it in order to see where we are. We started tracking progress. We have this dashboarding bent in CHIPS Alliance: we have dashboards for tracking the progress of the FPGA tooling, the SystemVerilog tooling, and so on. So for this specific endeavor we also built a dashboard that's online.
A: You can look it up at any time, and it has a bunch of tests. Of course, this is not complete coverage of all the things that we need to do, nor is it very thorough in covering every little aspect, and if you want to contribute, bringing your own use cases into the test framework would be great. If you have designs for which you see some features are missing in Verilator, but you'd like them to come to Verilator:
A: One way to make that happen would be to add a test case that lets us know that this is not working. We can roughly guess what's working and what's not, since we're quite familiar with the Verilator code base by now, but having a test case is really, really useful. This dashboard has, of course, been coming from our direction:
A: That is, from the designs we were working with, the designs that we want to enable for our customers. So we built it out, and we've been watching it go green over time, which is a very fun thing to see. Let's go through some of the things that we managed to implement between May and today.
A: The first thing that we needed to add was concurrent assertions. Of course, many of the features that I'll be showing here on the slides were already implemented to some extent, but we were missing various corner cases or syntactic capabilities, and so on. Note that when we link to pull requests here on the slides, they are actual pull requests to mainline Verilator, and most of them, practically all of them, are already merged. So this is happening in mainline.
A: Of course we have our own forks, but we try to converge them with mainline as soon as we can. As an example, we had to add value sampling; some keywords just weren't there, and thanks to this we're now passing that part of our test suite without a problem. So concurrent assertions had many different aspects, and we also added some keywords like 'not' (that one's kind of simple), but also named properties.
A: On this screen you'll see that we have a named property called 'check', and then we're using it inside another property. These little things of course mean a lot of work in Verilator's internals and in the syntactic parsing of things; generally speaking, there are a lot of little things like this that we have been adding throughout this half year. We've also been adding new types.
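As a rough sketch of the named-property pattern just described (the property name `check` appears on the slide; the clock and handshake signals here are invented for illustration):

```systemverilog
module named_property_example (
  input logic clk,
  input logic req,
  input logic ack
);
  // A named, parameterized property...
  property check(a, b);
    @(posedge clk) a |-> ##[1:3] b;
  endproperty

  // ...instantiated from inside another named property.
  property handshake;
    check(req, ack);
  endproperty

  // Concurrent assertion using the composed property.
  assert property (handshake)
    else $error("req not acknowledged within 3 cycles");
endmodule
```

Composing named properties like this is routine in UVM-era testbenches, which is why these syntactic gaps mattered.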
A: So basically there were missing types, signal types, and one big missing piece was wildcard associative arrays; that was a bit tricky to implement, but it works now. There's stuff like arbitrary string concatenation that just didn't work; again, we just added support. There are some examples on the right; I don't know if you can see them well in this light, but the slides will be shared and you'll be able to look at the examples, and all the code is public.
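A minimal, hypothetical snippet (not the slide's actual example) showing both features mentioned above, a wildcard associative array and string concatenation:

```systemverilog
module assoc_example;
  // Wildcard associative array: the [*] index accepts any
  // integral expression as a key, regardless of its width.
  int scoreboard [*];

  initial begin
    string name;
    scoreboard[8'hA5]    = 1;
    scoreboard[16'hBEEF] = 2;

    // Arbitrary string concatenation.
    name = {"pkt_", "id_", "42"};
    $display("%s: %0d entries", name, scoreboard.num());
  end
endmodule
```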
A: So if you want details on what exactly was implemented and how, you can just look at the PRs; I'll just go through these quickly. We've been adding new types like unpacked structs. I mean, unpacked structs kind of worked, but they actually just behaved like packed structs underneath, without telling you; so to properly support them we had to add real support, not just a fallback.
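To illustrate why the distinction matters, here is a small invented example; a struct holding a `string` member can only ever be an unpacked struct, so a packed-struct fallback cannot represent it:

```systemverilog
module unpacked_struct_example;
  // Unpacked struct: members keep their own representation instead
  // of being flattened into a single packed bit vector.
  typedef struct {
    int       id;
    string    name;        // not representable in a packed struct
    bit [7:0] payload [4]; // unpacked array member
  } packet_t;

  initial begin
    packet_t p;
    p.id         = 7;
    p.name       = "hello";
    p.payload[0] = 8'hFF;
    $display("id=%0d name=%s payload0=%0h", p.id, p.name, p.payload[0]);
  end
endmodule
```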
A: We improved support for classes: some rudimentary support was there, but generally speaking there were quite a lot of issues we had to polish, and now it's much better. Perhaps not rock solid, but definitely good enough to get us closer to where we need to be. Then there are virtual interfaces, which weren't there, so they're in there now; again, there are pull requests to add this, and an example to show you a little code snippet. Those code snippets are, of course, part of the test suite.
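The virtual-interface pattern being referred to looks roughly like this (the interface and class names are invented for the sketch); it is the standard bridge between class-based UVM code and the static module/interface world:

```systemverilog
interface bus_if (input logic clk);
  logic       valid;
  logic [7:0] data;
endinterface

class bus_driver;
  // A virtual interface handle lets dynamically created class
  // objects drive a statically instantiated interface.
  virtual bus_if vif;

  function new(virtual bus_if vif);
    this.vif = vif;
  endfunction

  task send(byte b);
    @(posedge vif.clk);
    vif.valid <= 1'b1;
    vif.data  <= b;
  endtask
endclass

module tb;
  logic clk = 0;
  always #5 clk = ~clk;

  bus_if bif (clk);

  initial begin
    bus_driver drv = new(bif);
    drv.send(8'h42);
    $finish;
  end
endmodule
```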
A: So you can also go to the test suite and see whether those snippets are passing. There's also class type resolution. There are some built-in classes that the std package provides, but Verilator didn't really allow you to use them properly: you couldn't actually instantiate a class that wasn't defined in the package you're in, so if you had classes coming from other packages, it just didn't work. Now it does, and this little snippet shows how it works.
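A sketch of the cross-package class resolution being described (the package and class names are invented):

```systemverilog
package pkg_a;
  class base_item;
    int id;
    function new(int id);
      this.id = id;
    endfunction
  endclass
endpackage

package pkg_b;
  import pkg_a::*;
  // Instantiating a class that is defined in a *different* package:
  // this is what previously did not resolve.
  class consumer;
    base_item item = new(1);
  endclass
endpackage

module tb;
  initial begin
    pkg_b::consumer c = new();
    $display("item id = %0d", c.item.id);
  end
endmodule
```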
A: Then there's one big one: UVM is heavily based on parameterized classes, they're just everywhere, so we had to add them, and these features are implemented in a big PR here. So here's a list of the things we had to do, and again, there's quite a lot of stuff that's working; I'm sure there's more that we need to add.
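A minimal standalone sketch of a parameterized class; UVM's base library leans on classes parameterized by type and value throughout, though the FIFO here is an invented illustration:

```systemverilog
// A class parameterized by both a type and a value.
class fifo_model #(type T = int, int unsigned DEPTH = 8);
  T q[$];

  function bit push(T item);
    if (q.size() >= DEPTH) return 1'b0;  // full, reject
    q.push_back(item);
    return 1'b1;
  endfunction

  function T pop();
    return q.pop_front();
  endfunction
endclass

module tb;
  initial begin
    fifo_model #(byte, 4) f = new();
    void'(f.push(8'h11));
    $display("popped %0h", f.pop());
  end
endmodule
```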
A: Signal strength specifiers: this is one where quite a bit of work will probably be needed in the future as well. We have some support, which is why I say very basic signal strength support was added; it certainly doesn't cover all the use cases, but it's satisfactory for the ones that we're working with today. We've added a bunch of features, but we'll definitely have to rework the way Verilator evaluates this; currently we're just kind of overfitting.
A: UVM itself uses some built-in classes that Verilator did not support, so again, we just added them in this PR. We also needed to add some capabilities in basic control flow: in the foreach loop, for example, we had to add the ability to iterate over a string.
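A small invented example of the foreach-over-string capability mentioned above:

```systemverilog
module foreach_string_example;
  initial begin
    string s = "uvm";
    // foreach iterates over the character indices of the string.
    foreach (s[i])
      $display("s[%0d] = %c", i, s[i]);
  end
endmodule
```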
A: We added support for the 'with' keyword, so that we can do more Python-like programming with lambda-style constructs, which is kind of nifty; you can see an example there using 'find' with a 'with' clause. This code starts looking kind of Pythonic, which is good, and it didn't work in Verilator before; it's also necessary for UVM. And one big item that I think deserves a mention is the work we did on constraints, which is definitely ongoing.
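A sketch of the 'find ... with' style referred to above (the data is invented; the slide's actual expression differs):

```systemverilog
module locator_example;
  initial begin
    int values[] = '{3, 7, 11, 14};
    int evens[$];

    // Array locator method with an inline 'with' predicate,
    // similar in spirit to a Python lambda plus filter().
    evens = values.find with (item % 2 == 0);
    $display("found %0d even value(s)", evens.size());
  end
endmodule
```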
A: It's work that will need much more focus. Of course, the whole UVM approach is based on the idea of constrained random testing, and we had to greatly extend that capability. We decided to go with an existing solver, or group of solvers, to enable this, and we adopted an open-source library called CRAVE, because we believed it actually had a lot of the functionality that we needed; we didn't really want to write this from scratch, so we decided to adopt this project instead.
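Constrained randomization, the capability the solver library is being used for, looks like this at the SystemVerilog level (a minimal invented example):

```systemverilog
class packet;
  rand bit [7:0] addr;
  rand bit [7:0] len;

  // Constraints like these are what the backend solver
  // has to satisfy on every randomize() call.
  constraint legal_addr { addr inside {[8'h10 : 8'hEF]}; }
  constraint short_pkts { len > 0; len <= 64; }
endclass

module tb;
  initial begin
    packet p = new();
    repeat (3) begin
      if (!p.randomize())
        $error("randomization failed");
      $display("addr=%0h len=%0d", p.addr, p.len);
    end
  end
endmodule
```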
A: We had to get rid of a lot of dependencies it had, because it depended on Boost and other things, and the Verilator maintainers didn't want to include those in the project as dependencies. Fortunately, it was easy to get rid of the stuff that we didn't need, and we just forked the project. It's not very active anymore, so I think our fork will just live under CHIPS Alliance and we'll keep developing it there.
A: But generally speaking, the existence of the framework itself was very, very useful for us, because we skipped a lot of development we'd otherwise have had to do, and it's always good to reuse things. So we have this CRAVE library, and another library that it uses as a sub-library, also forked and cleaned up, with its dependencies cleaned up, and we made sure that we can use them in CHIPS Alliance.
A: So this was already quite a bunch of work, but of course there's much more work in this area. There's a lot that remains to be done, and there's a list here; it's not a complete list, it's more the list of what stands between us and a functional UVM testbench that we can actually run. Fortunately, we don't think the things we list here amount to an astronomical amount of work.
A: We think that we should be able to do them within several months, let's say, and so hopefully the next time we meet in this kind of setting I can tell you: okay, we can run UVM testbenches. And that will just be the beginning, of course, because when we get there, there's going to be a next milestone with lots of other features that we need to add. But hopefully, collaboratively, we can get there, and we're very excited, because we can see the light at the end of the tunnel.
A: We've covered a lot of ground; I've shown you a lot of features that we've added, and we have this capability to do event-driven simulation, and I'll show in a second why that's exciting as well. So, as a little add-on on top of the previous part (I believe this is important to mention, because those two aspects reinforce each other), we're also working for other clients on scaling Verilator up to bigger designs.
A: Of course, Verilator scales pretty well, it's already a good framework, but some of our customers are running really, really huge designs, and they'd like to spawn them across numbers of cloud-based machines, and that's where you start seeing limitations. Our work for other clients involves getting rid of those limitations: fixing bugs, eliminating memory issues, just optimizing performance, especially as you scale massively, as you go towards verification, which requires you to run lots and lots of little tests.
A: As an example, we've been working on reducing memory footprint, and on some designs that we've gotten from our customers we've gone from 72 gigabytes down to 59 gigabytes, which is a pretty good reduction. As you can see, this goes below the 64 gigabyte mark, so you can use a smaller cloud machine to run your workload; with 72 gigabytes you'd probably have to take a machine with 128 gigabytes of RAM, but now you can go to a smaller one.
A: So this is the kind of work that we're doing: finding optimizations that have practical value for customers, getting beyond a certain mark that's important for them from a cost perspective, a functionality perspective, or a time perspective. Another type of optimization we're doing is improving the runtime of verilation itself. As an example, we've managed to go down from 478 seconds to 249 seconds for verilating those big designs, and these are production-grade designs; these are not invented examples that are easy for us.
A: These are the biggest RISC-V designs; you might guess which ones they are. We're getting really good results, so we're pretty happy, and there are a number of features in Verilator that we're adding that let you enable or disable things and tweak those parameters to fit your needs.
A: Okay, summarizing. No, this one is nice: I woke up on Wednesday, and there was this tweet that we found on the internet. Someone is actually using our event-driven Verilator capability to simulate a DDR3 model, and it turns out that they got a 14x speedup (1,400 percent, I guess I should say, that sounds better) on their model, just going from Icarus Verilog to Verilator.
A: Verilator just couldn't do this before, but now it can, and I think that we'll start seeing these kinds of massive improvements in various areas due to this work, which will generate excitement that will enable us to go further and get to where we need to be with UVM.
So
yeah
I
really
love
this
shout
outs,
it's
very
nicely
and
kindly
written.
Of
course
it's
not
just
artwork.
It's
you
know
the
further
Community,
but
certainly
we've
had
a
important
sharing
it.
So
we're
happy
to
be
kind
of
here.
A: That would enable people to use this in practice as well. And of course, if any of you think that this is useful for you, or you'd like us to speed up this work by providing more resources to do it, just reach out, and we'll be very, very happy to talk to you. So that's all I had.
D: The performance looks pretty amazing. One thing: I don't know the details of Verilator in depth, but I know that one of its strengths is cycle-based simulation, versus event-driven, so how did you coalesce those two? In terms of event scheduling, is it still cycle-based? Maybe it's a technical detail we could discuss offline, but I'm just curious if you could mention something about that.
A: I think that's a question best asked of my team, but in general, what I can say here is that we've been looking at how implementing event-driven capabilities affects how Verilator operates, for example its speed. Originally we implemented everything in a kind of naive way that undercut the original strength of Verilator, which is that it's very fast, and so the initial implementation was just way slower than the original. It worked:
A: It did the job, you could run event-driven simulation, but the performance was horrible. Then (there are a few blog notes describing how we did this) we ended up using C++ coroutines to implement the event-driven scheduling, and did a lot of tweaks and tricks to make it fast, and it turns out that it's actually almost as fast as the static scheduler, which almost surprised us. Whether we're keeping some of the properties of the simulation, like cycle accuracy, I'm not sure; I assume
A: That is a design goal, right, so I can forward the question to the team. But yeah, they're looking at how to keep Verilator good at what it is, without compromising on those things, while at the same time adding the new functionality, and this requires constantly rethinking and rewriting the strategy. The parts that get the most focus were initially implemented in a simple way
A: Just something that works, and then you find a good way to, for example, use new C++ language features. Part of the discussion was that with the new version of Verilator, people need a newer C++ compiler to allow that, and that was a big discussion in the developer community, but we decided that yeah, it's worth it: the coroutines are there to make these things possible, so we should use them and just accept that we have to move forward.
D: [inaudible]

B: [inaudible]

A: We're mostly focusing on Verilator in that particular work, because, of course, long-term, UHDM is a front end that allows us to do some fancy stuff, not just in Verilator, but at the same time the focus of our customers is 'get it working soon'. So we assume that this is also going to converge.
A: I didn't put it in the slides, because I had some other topics in mind as well, but people ask, 'hey, can you also do VHDL?', and the answer is: well, probably, through UHDM. Yes, it's work, and it has to converge, so it's all a matter of priorities; but yes, we do want to bring this capability together with UHDM, to ultimately converge with VHDL, because why not? It's just that, as a company, we look for commercial need.
A: Because without that, you're kind of blind: you're just doing something and thinking you're making progress, but perhaps you're just after the wrong thing. Whereas here we have real customers saying 'I need to run this', and that's a very good goal to have, because suddenly you cut down your scope and you look at your burn-down list of what needs to be done.
E: Yeah, I had one question, thanks Michael. Among the things that you support, was multiple inheritance one of them? It was always something that was not necessary, or not critical, unfortunately.
C: [inaudible]

F: [inaudible]

A: Basically, we're comparing Verilator from right before making the changes to Verilator from right after making them. So 'version 4 versus version 5' is not really the right way to think about it; it's more that we took master, wherever it was at the time we started the project, forked it, made some changes, and looked at how much faster we got. We might have gotten some accidental boost from somewhere.
A: As for what kinds of things we did to get that: we're comparing basically Verilator from right before we started to Verilator now, and I assume we're one of the major groups focusing on this particular aspect right now. It's an excellent question though, and again, we're running those CI-based dashboards to be able to drill down into why we are better now. So yeah, definitely happy to answer.