From YouTube: TVM Community Meeting, April 13, 2022
B: Yeah, I feel like I see a lot of familiar names here, so yeah, maybe not too many new people.
A: Yeah, so we mostly know each other, that's good. Okay, then we can get started with the actual announcements. So in announcement news, we have a new reviewer: Gavin has been announced as a reviewer now. And we have a new committer, which is Yuri Wang Lai. So congratulations on your new assignments, and good luck.
B: So there are a number of times where we have code in TVM that we would like to check in, either to improve correctness checking of TVM, or to improve our confidence in the output of TVM. Some examples of this: this was the triggering PR that caused me to post this, where Dmitry wanted to add...
B: ...some concurrent-access correctness checking to our internal Map implementation. And this is something that is going to have perhaps a runtime impact. We use maps a lot, just as a core data structure, so adding this is not free. It shouldn't be particularly expensive; it's just checking a single word, basically, against an internal state. But it still might not be desired.
B: It might have some memory cost as well, and so the question came up of how we can disable this. Here you've seen we've gone with TVM_LOG_DEBUG. That was kind of an attempt to... I guess we first tried to go with NDEBUG, which is the more standard symbol. It turns out there's some limitation with LLVM where somehow the build doesn't compile properly.
B
If
we
use
ndbug
on
on
certain
older
versions
of
lvm,
unfortunately
versions
that
we
test
against,
so
we
were
kind
of
forced
to
do
this
other
symbol
and
basically
the
the
question
came
up
as
to
like
it's
totally
fine.
If
we
approve
this
message
this
pr,
I
should
say
same
thing,
but
there's
no
regression
test
for
this,
and
so
we
could
add
a
test
in
you
know.
We
have
this
container
test
here,
but.
B: ...this test doesn't actually compile unless we pass TVM_LOG_DEBUG, and I believe we weren't passing TVM_LOG_DEBUG in any CI context, due to not wanting to slow down the CI. So here's one instance where it would be helpful to have a set of checks that run with more correctness checking enabled in the CI. And then, looking elsewhere in the code...
B: ...do we see any other instances of this? I know of at least one, and I was curious if folks on the call had ideas for others. The other area that I know about is in the LLVM codegen. If we go over here and take a look at how this works: essentially, the LLVM codegen interacts directly with LLVM's mid-level API.
B: So, when a function is being added in codegen, it assembles an LLVM type to represent the function type, by looking at all the different parameters and translating them into LLVM types. And then it calls each of these individual functions to declare the function to LLVM. And then, looking further down into how the body works...
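The flow just described (walk the parameters, translate each one to a target type, assemble the function type, then declare it) can be sketched in a self-contained way. This is purely illustrative: it uses strings where the real CodeGenLLVM uses llvm::Type and llvm::FunctionType values, all names here are hypothetical, and the return type is fixed for brevity where the real codegen derives it.

```cpp
#include <string>
#include <vector>

// Illustrative stand-in for the codegen flow described above: translate each
// parameter to a target type, then assemble the function's signature and
// "declare" it. Names and types here are hypothetical, not TVM's API.
enum class ParamKind { kInt32, kFloat32, kBufferPtr };

std::string TranslateType(ParamKind k) {
  switch (k) {
    case ParamKind::kInt32:     return "i32";
    case ParamKind::kFloat32:   return "float";
    case ParamKind::kBufferPtr: return "ptr";
  }
  return "void";
}

std::string DeclareFunction(const std::string& name,
                            const std::vector<ParamKind>& params) {
  // Assemble the function type by translating each parameter in turn,
  // mirroring the loop over parameters in the real codegen. The i32
  // return type is hard-coded here for brevity.
  std::string sig = "declare i32 @" + name + "(";
  for (size_t i = 0; i < params.size(); ++i) {
    if (i) sig += ", ";
    sig += TranslateType(params[i]);
  }
  sig += ")";
  return sig;
}
```

The same shape (a type-translation helper called in a loop, feeding a function-type constructor) is what the real emitter does against LLVM's C++ API.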
B
Some
of
this
stuff
might
be
an
lvm
base
in
a
base
generator,
but
yeah
you
can
see
like,
for
example,
creating
you
know,
statement
blocks
in
lvm
and
then
translating
you
know
for
loop
components
into
those
statement,
blocks
and
creating
branches,
and
things
like
that,
so
this
is
kind
of
how
our
llvm
code
emitter
works
at
this
level
and
the
the
challenge
we
have
here
is
that
there
is
a
flag
you
can
compile
lvm
with
and
it's
a
compile
time
flag.
B
Unfortunately,
where
you
can
enable
assertions
and
that's
kind
of
like
dash
w
error
in
is
kind
of
how
I
think
of
it
in
in
it's
the
lvm
analog
of
dash
w
error
in
in
a
c
code
gen,
it's
perfectly
possible
that
code
that
you
write
in
here
could
compile
with
these
assertions
with
some
of
these
assertions,
failing
and
in
fact,
the
first
time
that
I
ever
tried
to
modify
cogen
lvm,
I
submitted
code
and
it
you
know,
passed
regression
and
it
worked
okay.
B
That
was
this
link
parameters
implementation,
but
it
turned
out
that
there
were
some
inconsistencies
and
improper
usages
of
lvm
intrinsics
in
here
and
kristoff
submitted
a
pr.
B
You
know
back
to
me
saying:
hey
fixing
some
broken
and
some
bad
use
of
lvm,
and
I
said
how
did
you
get
this
and
it's
by
having
this
compile
time
flag
enabled
in
in
lvm,
and
so
you
know
all
two
times
I
just
been
working
on
this
other
pr
to
emit
aot
code
via
lvm
code
gen,
and
you
know
all
two
times
that
I've
I've
worked
on
this
I
mean,
I
guess
the
first
one
didn't
count.
B
I
didn't
know
about
it,
but
you
know
it
would
have
been
helped
basically
by
having
this
these
assertion
checks
in
in
some
kind
of
ci
so
anyway.
So
the
proposal,
then,
is,
I
guess,
sorry,
the
last
case
that
kind
of
came
up
in
discussion.
I
can't
remember
who
brought
this
up,
but
it
wasn't
me,
but
you
know,
enabling.
B: ...something like AddressSanitizer in the runtime, and trying to look for memory leaks as we're running these tests; that was one proposed additional thing we could do here. So anyway, the proposal here is basically that we would add another CI build stage. We wouldn't run all the tests in this stage; we would probably run just the unit tests, but that's just my strawman idea. And we would basically run the tests...
B: ...under this better-checked environment, to see if we're violating any sort of LLVM assertions, or what have you. So I just wanted to see if folks had appetite for adding another container to the CI.
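As a rough sketch of the AddressSanitizer idea, a local experiment might look like the following. The flags and paths here are illustrative assumptions, not the proposed CI configuration.

```shell
# Hypothetical local ASan experiment (not the proposed CI config):
# instrument the C++ build, then run the unit tests against it so that
# leaks and out-of-bounds accesses surface as test failures.
cmake .. \
  -DCMAKE_BUILD_TYPE=RelWithDebInfo \
  -DCMAKE_CXX_FLAGS="-fsanitize=address -fno-omit-frame-pointer" \
  -DCMAKE_SHARED_LINKER_FLAGS="-fsanitize=address"
make -j"$(nproc)"
ASAN_OPTIONS=detect_leaks=1 python -m pytest tests/python/unittest
```

One wrinkle elided above: running a sanitized shared library from a Python process typically also requires preloading the ASan runtime, which is part of why a dedicated, pre-configured container is attractive.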
B: I don't expect it will really impact CI time that much, so I don't know if it's really that big of a deal. But I also wanted to allow folks some time to think about this, and to see if anything else comes to mind that we should do in this container: other checks that might be helpful, or ways of running the tests. So that's what I had to say here; maybe we'll just open up the floor, if people have ideas.
C: You know, having the machines find these crazy bugs for us. Do we have a policy on ICHECK versus DCHECK?
C
I
I
found
myself
that
I
am
tending
to
self-censor
in
that
you
know,
there's
a
lot
of
eye
checks
I
could
put
in,
but
then
I
think
this
is
starting
to
get
a
little
expensive,
so
I
just
won't
put
them
in
yeah
and
I
don't
put
d-checks
in
because
it's
never
going
to
be
seen
kind
of-
and
this
you
know,
you're
probably
familiar
with
this
issue,
andrew
from
your
previous
employment,
that
code
bases
can
end
up
with
stale
d
checks
and
because
we're
not
running
them
in
debug
mode
regularly.
C
They
every
time
you
try
and
turn
them
on.
In
order
to
find
your
particular
issue,
you
uncover
the
latent.
You
know
300
just
to
get
yours
to
fire
or
whatever
that's
right,
and
then
you
and
then
you
start
to
have
these
discussions
and
people
get
drawn
into
wait.
A
minute
shouldn't
this,
be
an
actual
error
check
and
we
fail
at
runtime
and
propagate
that
failure
properly.
Yeah
all
those
sorts
of
issues
dude
do
we
want
to
you
know,
do
we
want
to
draw
a.
B
Line
in
the
sand
here-
good
question
I
mean
it
seems
like
we
should
run
those,
and
actually,
I
guess,
as
part
of
the
proposal,
we
would
be
compiling
with
user
really
debug.
So
I
think
that
turns
those
on.
If
I
remember
correctly,
so
I
guess
yeah
as
part
of
the
proposal.
That
means
we
would
be
running
with
d-checks
on
great
point.
I
didn't
actually
think
about
that
before
and
I
think
that
would
be
helpful.
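The distinction being discussed can be sketched with simplified stand-ins for the two macros. This is not TVM's actual logging.h (there, a failed check produces a detailed fatal log rather than an exception); it just shows the cost model that drives the self-censoring problem above.

```cpp
#include <stdexcept>

// Simplified stand-ins for the two macros discussed above (not TVM's
// actual logging.h). ICHECK is always compiled in and fails loudly at
// runtime; DCHECK exists only in debug builds, so it costs nothing in
// release binaries -- and, as noted, can go stale if no CI job ever
// compiles it in.
#define ICHECK(cond)                                                      \
  do {                                                                    \
    if (!(cond)) throw std::runtime_error("ICHECK failed: " #cond);       \
  } while (0)

#ifndef NDEBUG
#define DCHECK(cond) ICHECK(cond)  // active in debug builds
#else
#define DCHECK(cond) (void)0       // compiled out entirely in release builds
#endif

int Divide(int a, int b) {
  ICHECK(b != 0);  // always checked, even in release builds
  DCHECK(a >= 0);  // checked only when NDEBUG is not defined
  return a / b;
}
```

A debug-enabled CI job is what keeps the DCHECK branch from rotting: it is the only build in which that code path is even compiled.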
D: Just to say, then: these would change the build of LLVM that we use, and we would build it from source as part of the container setup?
B: Yeah, so that's one of the things... that's a great question. I hadn't written that up here on the doc. I've been discussing this with Krzysztof in another thread, and yeah, that's the big question.
B: This came up a month or so ago, and the question in my mind was: what build of LLVM do we use? Because I think we want to keep using the LLVM that builds on Ubuntu, the one that you would get when you install LLVM on Ubuntu, at least in most of our CI containers. But it came up as a question of how we should build it. Like, right now...
B
I
don't
know,
but
I
I
looked
a
little
bit
and
basically
wound
up
scraping
together,
like
a
bunch
of
flags
from
the
lvm
build
script
like
from
the
debian
package,
build
scripts
to
get
this
to
work
for
me
locally,
but
I
don't
know
of
a
good
automated
way
to
do
that
and
so
kristoff
proposed,
basically
that
we
just
build
from,
I
guess,
sort
of
a
latest
known,
good
version.
I
think
he
even
proposed
building
from
maine
actually
like
is
part
of
our
container
build
process,
so
this
cicp
asserts
container.
B
We
would
every
day
try
to
rebuild
against
llvm
head
and
see
if
there's
a
an
error
and
then,
whenever
we
update
this
container,
it
would
be
when
we
rev
the
version
of
llvm
used
with
the
certs,
and
I
guess
the
idea
being
that
if
we
run
things
sort
of
primarily
with
you
know
in
sierra
cpu,
we're
not
you
know
we're,
hopefully
not
broadening
out
into
new
architecture,
territory
and
kind
of
you
know,
staying
within
the
sort
of
the
x86
code
gen.
B
I
guess
that's
another
question
that
we
might
need
to
ask
too
is
like
do
we
need
to
consider
cross-compiling
for
arm
and
things
like
that
here,
but
anyway,
how
yeah
the
sort
of
the
the
thought
was
that
we
would
basically
build
something
from
head
and
just
use
the
default
flat
or
use
a
reasonable
set
of
flags
that
we,
we
think,
will
help
find
errors.
D: I wouldn't particularly recommend using the top of main every day, because LLVM's CI cycles take quite a long time to test all the platforms, and it might be broken sometimes. So I think it would inject a lot of noise into our CI if we were to just blindly trust that. So I feel we should at least try to align with the version we use, but then with more developer options, like assertions and a debug build.
B: Yeah, I kind of agree with you on that one. I don't know if we'd settled on that yet; I think that was one suggestion, but we don't have to go with that. Yeah, especially since you've got experience with that too.
B
I
think
both
you
and
kristal
have
quite
a
bit
of
experience
with
all
of
them,
but
I
think
that
you
know
it
would
be
worth
making
sure
that
we
don't
rock
the
boat
too
much
on
ci
and
and
cause
more
pain.
Basically
as
well
as
you
know,
yeah
we
have
sort
of
our
declared
supported
version
of
lvm
and
there's
no
need
to
necessarily
go
bleeding
edge
in
the
ci.
I
don't
think,
but.
B
But
yeah
we're,
I
think,
we're
having
a
discussion
about
kind
of
the
merits
of
going
one
way
versus
the
other
or
maybe
there's
like
a
last.
You
know
like
we
have
our
last
successful
tag
and
maybe
they
have
that.
D: Agreed, yeah. But yeah, I'm in favor of building LLVM with assertions and the whole tooling around it that we can build, with sanitizers and all this cool stuff.
C: Now, your question: are there any runtime assertions we would want to enable, you mean?
B: I mean, I guess what you're saying is: LLVM has this flag for enabling asserts, I want to say, somewhere in here. Actually, sorry, the C codegen has this flag, but I'm not sure if it's here. I think one of these codegens actually has code that basically... well, you can find it if you want, but one of the codegens basically has: if you visit an assert node, it will emit...
B: On the other hand, I think that, for example, with the AOT thing we got away with a lot of bad code, because assertions were not enabled. I'm not sure where it is in here, but anyway, these assertions were disabled, and so we committed the first version, which didn't pass correct tensor metadata, and kind of got away with it for that reason. So, yeah, anyway, I'll keep poking around here for a while, but one day I'll find it.
B: Yeah. I think, ideally, I would try to choose a split point that we've already chosen, like the unit tests. One thing that does come to mind, now that we're talking about this: this is sort of a codegen-level thing, and we've got all of these different LLVM codegens. I'm back too far now, here... so we've got one for x86, we've got one for Hexagon.
B
We've
got
one
for
arm,
one
for
amd
gpus
and
things
like
that
and
so
yeah.
There
is
sort
of
a
little
bit
of
a
question
of
like
you
know
right
now.
We
test
the
arm
one
in
ci
arm,
for
example,
and
you
know
what
do
we
want
to
do
then?
Do
we
want
to
rerun
those
tests
and
enable
those
arm
target
tests,
but
somehow
not
actually
load
the
generated
code,
so
they
could
continue
to
pass,
and
you
know
anyway,
they're
sort
of
yeah
there
you
go
questions
there,
where
we
could.
B
You
know
at
least
at
least
exercise
the
the
asserts
that
I
don't
have
a
good
answer
to
and
that's
sort
of,
probably
that's,
probably
a
good
unresolved
question
for
the
rfc,
because
I
think
that
the
you
know
the
the
natural
way
to
resolve
this
would
be
to
create
ci
arm
asserts
in
ci.
You
know
yeah
yeah.
We
could
do
that.
I
mean
we.
Now
we
have
enough
machinery
around
the
cia
stuff
that
we
could
do
it.
It's
just
going
to
blow
up
the
number
of
containers,
but.
C
It
feels
like
we'd
get
the
most
bang
for
our
buck,
with
just
some
generic
cpu.
Only.
C: Yeah. Plus, I'm thinking ahead; I'm thinking of all the wonderful places where DCHECKs can be inserted with gusto through all the pass machinery. So, for example, this could also enable... we talked about this a long time ago, and of course haven't done anything about it, but adding self-invariant checks to... yeah, yeah.
B: Any other thoughts or input here?
C: Does anyone know how to turn on better standard-library debug checks? I guess I got spoiled where the standard library would give me a nice, you know, assertion failure if I accessed a std::vector out of range and things like that, and I've never quite figured out how to get that going in TVM.
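For reference, libstdc++ does have a knob for exactly this: defining `_GLIBCXX_ASSERTIONS` enables lightweight bounds and precondition checks in the standard containers, and the heavier `_GLIBCXX_DEBUG` turns on full debug-mode containers (libc++ has analogous hardening options). A minimal sketch:

```cpp
// Define the macro before any standard header is included (or pass
// -D_GLIBCXX_ASSERTIONS when compiling). With libstdc++, an out-of-range
// vector operator[] then aborts with a diagnostic instead of silently
// reading out of bounds; on other standard libraries the macro is ignored.
#define _GLIBCXX_ASSERTIONS 1
#include <cstddef>
#include <vector>

int SumAll(const std::vector<int>& v) {
  int total = 0;
  for (std::size_t i = 0; i < v.size(); ++i) {
    total += v[i];  // range-checked when _GLIBCXX_ASSERTIONS is enabled
  }
  return total;
}
```

Since the macro only adds checks (it doesn't change container layout, unlike full `_GLIBCXX_DEBUG`), it is a comparatively cheap thing to turn on in a checks-enabled CI build.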
C: That's right. And instead of tracking down some obscure core dump, you actually get, yeah, "this index is out of range." Great.
C
Yeah
and
then
what
I
end
up
doing
is
just
defensively
eye
checking
everywhere.
So
now
we're
paying
for
the
same
modular,
the
compiler,
inlining
and
realizing
it's
unnecessary.
A
Otherwise
we
do
the
countdown
again
going
once
going
twice.
Okay,
then
we
can
conclude
this
week's
meeting.
So
thanks
everybody
for
attending
thanks,
andrew
for
the
for
the
presentation
for
showing
your
approach
and
then
see
you
guys
next
week
again,
yeah.
B: Thanks! Thanks, Michael, for hosting, and join us next week for a discussion on Relax.