From YouTube: Apache TVM Community Meeting, 17 September 2020
Description
The Apache TVM Community Meeting for September, 2020.
Apache TVM: https://tvm.apache.org
Discuss Forum: https://discuss.tvm.apache.org
Meeting information: https://discuss.tvm.apache.org/t/tvm-community-meeting-september-17-2020/7897
A
Okay, so I have started the recording for the meeting. Thank you, everybody, and welcome to the September 17th Apache TVM community meeting.
A
This portion of the meeting is being recorded right now, so everything that you say will be captured and published to YouTube. At the end of the meeting we're going to have an open discussion forum, so that people who want to ask questions, or who want to talk off the record, will be able to do that, and we'll turn the camera off at that point.
B
All right, so I think that brings us to subproject updates. Kicking things off, Andrew is going to tell us about all the amazing stuff that's been going on in microTVM, so please, Andrew, take it away.
C
Great, yeah. So I just wanted to give a quick update; I have a few slides to go through, just to show a few things. I'm working on microTVM. Let's see... I'll just introduce myself: I'm Andrew Reusch, an engineer at OctoML, and I've been working on the microTVM project since around April. I just want to give you an update on what we've been up to over the last couple of months.
C
I guess I'd say we've been working on a thing called the microTVM RPC server. We made a blog post earlier in the year that demonstrated some of the neat things you can do with microTVM; this RPC server basically expands the support of microTVM to a wide variety of platforms.
C
It reduces the requirements you need to run microTVM, in particular autotuning, and enables you to run autotuning on your platform or SoC of choice. Toward that end, we've tested our new RPC server approach with Zephyr and Mbed on an STM chip, and I think on a Nordic chip as well. The goal of this is, like I said, to make it easy to bring microTVM to your own platform or SoC.
C
At master, we've checked in the first building block, and it's a good chunk of the work that enables this new approach. Soon we're going to check in the functionality to run AutoTVM on top of that.
C
One thing to speak to quickly: it is kind of tricky to test things that rely on external devices in CI. Today we've merged a test that verifies our approach against a host-simulated device, and in the next few weeks or so we expect we'll merge something that tests against a QEMU-simulated device, either an x86 device or a Cortex-M3.
C
Just to give you a quick overview of how this might work: the goal of our current arrangement with microTVM is to let you experiment and play around with models on bare-metal devices. So we're assuming that you've got a dev kit, like the one shown right here, plugged into a laptop, either with a USB cable or a network connection, and that you have some way to program it.
C
You give your model to our C backend to generate C code, which you can then hand off to any compiler that you want; so if you've got a proprietary compiler that does a better job on your platform, feel free to use that. You build that code against a set of reusable libraries, which basically contain the RPC server I mentioned, and then you've got a binary you can flash onto your microcontroller just like any other firmware binary.
C
Once you've flashed that, you can then drive end-to-end model execution, using a graph runtime on the laptop or desktop that you're running from, and that's all done over a link like a UART link. It's worth mentioning that you don't have to use UART: you can use any other kind of link, USB or Ethernet, as long as it provides the same reliability properties. Okay, so that was just a quick whirlwind dive; there are links at the end of the slides.
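Andrew's point that any transport works as long as it provides the right properties comes down to needing a reliable, ordered byte stream. As a hedged illustration (this is not microTVM's actual session protocol, just a generic sketch), a minimal length-prefixed framing layer works identically over a UART file handle, a TCP socket, or an in-memory buffer:

```python
import io
import struct

def send_frame(stream, payload: bytes) -> None:
    """Write one length-prefixed frame to any writable byte stream."""
    stream.write(struct.pack("<I", len(payload)) + payload)

def recv_frame(stream) -> bytes:
    """Read one length-prefixed frame back from the stream."""
    (length,) = struct.unpack("<I", stream.read(4))
    return stream.read(length)

# Any reliable, ordered byte stream works: a UART file descriptor,
# a TCP socket's makefile(), or (here) an in-memory buffer.
link = io.BytesIO()
send_frame(link, b"run_time_evaluator")
link.seek(0)
print(recv_frame(link))  # -> b'run_time_evaluator'
```

Because the framing only assumes byte-level read and write, swapping the in-memory buffer for a serial-port handle would not change the functions at all, which is the property an RPC server of this kind relies on.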
C
If you want to read more about the approach and how it all works, see those links. To talk a little about upcoming projects: the next thing we'll do is publish a PR that demonstrates how to integrate this approach with the Zephyr RTOS project.
C
We're going to release a test that uses QEMU to verify changes to this and make sure that it keeps working, so keep an eye out for this PR.
C
To that end, we've actually been working on some Vagrant boxes. These Vagrant boxes contain all of the dependencies you might need: the Zephyr project, the Zephyr SDK, the compiler, as well as any flash utilities, all set up so that it works out of the box. We're going to release some of those boxes so that you can just download one, run it against your local TVM, and try out microTVM against some attached hardware.
C
There are a couple of links at the end of the slides, and feel free to ask any questions.
D
C
Oh, you're muted. It was just muted and I couldn't figure it out, yeah. Sorry, I should explain. Vagrant is an automation tool: it lets you build virtual machines against a wide variety of hypervisors. You can build VirtualBox machines, you can build Parallels machines, you can build VMware machines, to name a few; you should be able to try this with whatever virtual machine hypervisor you would like to run. And we think that will really make it easier to get started with microTVM as well.
B
Excellent, well, thank you very much, Andrew. Super excited about all the progress on microTVM; bringing machine learning to these low-power bare-metal devices is, you know, one of the places TVM is really shining, thanks to your work, and I'm super excited for the future too. So, in the interest of time: we originally planned to do an update on the docs.
B
Chris has been putting a ton of time and effort into a vision of improved docs for TVM, but we got a little bit of a late start, and we had this recording hiccup, so we're going to plan to bump that to next month. You've all seen Chris already today, so you'll get more of him in October, don't worry. But I think Haichen was going to tell us a little bit about TLCPack next. Let's see... I think I saw him earlier; is he still here?
B
Okay, well, maybe he got disconnected; he'll come back. Let's go ahead and move on to Ansor with Lianmin.
E
Can you...? Okay, so I will give a brief update on the Ansor project. Ansor is an auto-scheduler for TVM. First I will give an introduction for people who are not familiar with Ansor, then I will discuss the upstreaming progress, and finally we can check out a tutorial and see the API. So, currently, TVM heavily relies on the autotuning mechanism provided by AutoTVM, but AutoTVM is still not fully automated, because it requires users to provide some manual templates to define the search space.
E
These templates are not easy to develop, because they require sophisticated domain knowledge of the TVM scheduling language. So for a new user, it's not easy for her or him to develop the template for a new operator.
E
So Ansor is a research project started at UC Berkeley, and now we are upstreaming this project to the TVM mainline. During this summer, most of the Ansor code was upstreamed into the mainline, and this week we sent the first tutorial on how to use Ansor. In the next two months we will work on the Relay integration and release end-to-end network benchmarks.
E
And if you are eager to try Ansor for end-to-end networks, you can send me a request to get access to our private repo. So, finally, we can check out the tutorial. The API of Ansor is very simple: first, you need to register the computational graph.
E
It can be any graph described by the TE language; here it is a matrix multiplication with a bias add. Then we can create the search task and begin the search. To create a search task, we need to fill in the static shapes and get a computational graph, and then we can launch the search. The search process will be similar to AutoTVM's: it also requires some measurements on the target machine.
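The workflow described here (register a computation expressed in TE, create a search task with static shapes, then launch a search that measures candidates on the target machine) can be caricatured in a few lines of plain Python. This is only a toy sketch of the search-and-measure loop, not the tvm.auto_scheduler API; every name and number in it is made up for illustration:

```python
# Toy stand-in for an auto-scheduler's search step: enumerate a schedule
# space and pick the best candidate by measured cost. Ansor's real space
# is generated automatically from the compute definition and sampled
# rather than enumerated, and "measure" runs the compiled kernel on the
# actual target hardware.
def search_space():
    """Hypothetical space of tiling factors for one loop."""
    return [1, 2, 4, 8, 16, 32]

def measure(tile):
    """Stand-in for compile-and-time on the target machine; this
    synthetic cost curve happens to favour a tile size of 8."""
    return abs(tile - 8) + 1.0

best = min(search_space(), key=measure)
print("best tile size:", best)  # -> best tile size: 8
```

The key contrast with AutoTVM is who writes `search_space()`: in a template-based tuner the user hand-writes it per operator, while an auto-scheduler derives it from the computation itself.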
F
I've got a quick one. You said you have a private branch, but I didn't see anything in the to-do about performance. Should we expect what's on master to be performance-ready, or what is the delta between your private repo and what's on master now?
E
For any single subgraph, the performance will be the same. But for end-to-end neural networks, we need some custom Relay passes; this is in the Relay integration item. We need to do some layout transforms in Relay, and that's critical to the performance. But for a single subgraph, like matrix multiplication, it's...
F
...performance-ready? Great. And are the GPU sketch heuristics also on master?
E
B
Thanks again, Lianmin; that's super great work, and we're really grateful for the contributions. So, it looks like we've got Haichen back. I'm not sure exactly what the connection issue was, but it's been sorted now. So without further ado, I think we have some exciting updates on TLCPack as well.
G
Yeah, so I can maybe quickly share my screen. Let me see.
B
Yeah, yeah, that's fine, Haichen, why don't you do that real quick. And while you're there, we'll go ahead and hear from the one and only Jared Roesch about error reporting updates.
H
B
Yeah, Haichen's needing to do some permission stuff for his screen share, yeah.
H
I will go... for some reason my... oh, I guess my slides are in the dock, so I can click on them there; hold on one sec. I was like, of course I'm not ready. Zoom, share screen, Chrome, error reporting... there we go, all right, cool. Can everyone see that? Are we good to go? Awesome, cool. So hey, everybody, good morning or afternoon, wherever you are; it's morning here for me. Let's talk a little about error reporting. This is something I've been working on for the last couple of months.
H
I'm just going to do kind of a quick deep dive. I've been improving a lot of the surface interface of interacting with Relay and the front end of the TVM compiler, in pursuit of doing better error reporting. You can actually see, on the right-hand side, that I've been working in the Relay text format; this is a really simple test program that I was working on yesterday.
H
I recently wrote a syntax highlighter, and we're starting to get some more features like that. This might seem tangential, but it's kind of related: one thing I've been trying to do is make it possible for us to represent any Relay program in a human-readable form. We've obviously had some of this functionality for a long time, but what we're doing now is trying to use a lot of this to aid error reporting, and I'm going to talk a little bit about that. So, I know many of you...
H
We probably all have our error-message hall of fame in TVM: the error messages we hate the most, or find the most frustrating. I'm not going to rehash them here. I think one of the big challenges is that we haven't traditionally differentiated between the different types of errors in TVM. So really, today, I want to talk about...
H
...this split between internal invariant violations, the internal asserts; the compiler diagnostics that we want to fire; and then runtime errors. I'm going to talk a little about the progress that we're making on all of them, tie it back to this little image here on the right, and then I'll talk about next steps and how people can contribute if they're interested. So it's sort of a three-pronged approach to error handling that I'm suggesting; I sent an RFC about this a few months ago.
H
I guess it's been a while now. So I have three things that I want to do. First, I want to replace some of these checks, which are checking internal invariants; for example, that an object in TVM is defined.
H
These are things that users can't act on. For example, if we expect an API to be called with a valid pointer all of the time, that's an internal programmer error, and we should differentiate that from the other things that we're doing. The other thing that I want is more traditional compiler diagnostics.
H
From a compiler you often get diagnostics out: you want to see something like a warning or some kind of semantic information. Traditionally, what we've done in Relay, because we didn't have any source representation in the program, is just print the entire program and annotate all the errors. Now, this is better than nothing, but it often creates a lot of noise, and models are getting bigger; BERT, for example, is something like a thousand or more node assignments.
H
It becomes really, really hard to understand when an error occurs in the middle of the program. And then finally, the thing I wanted to address is runtime errors. You might know this classic error, something like "rx0 and 0 is not equal to 0", which often happens when you run into a runtime error; for example, the NDArray that you pass in is not the right size. So, for internal errors, this is the first line of things that I've been working on.
H
One of my goals has been to replace the family of CHECK macros used when an internal error occurs with this family of ICHECK macros. Here in the red box you can see what you're going to get now. The idea is to provide a standard, uniform error message to users when an internal error occurs, so we encourage people to go report errors, or check the discussion forum and interact with the errors.
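As a rough sketch of the idea (in Python rather than C++, and emphatically not TVM's actual ICHECK macro), an internal-invariant check that always produces one uniform, "this is a bug, please report it" message might look like:

```python
class InternalError(RuntimeError):
    """An invariant violation inside the compiler, not a user mistake."""

def icheck(condition, message):
    """Toy analogue of an ICHECK-style macro: every internal-invariant
    failure gets the same uniform wording, directing users to file a
    bug report rather than debug their own program."""
    if not condition:
        raise InternalError(
            "An internal invariant was violated during compilation: "
            + message
            + "\nThis is a bug in the compiler, not in your program; "
            "please report it on the discussion forum or issue tracker."
        )

mod = None
try:
    icheck(mod is not None, "module must be defined")
except InternalError as err:
    print(err)
```

The uniform footer is the point: users can tell at a glance whether an error is theirs to fix or the compiler's.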
H
Oftentimes there's no difference today in TVM between a user error and an internal error, so people either, like me, become used to it and learn to ignore the errors, or they think TVM has crashed when it's sometimes actually a user problem. You can see there's a little code example here at the bottom: for example, I want to ICHECK that this module is defined, and that's this message right here on the right.
H
So, next, I'm going to click... okay, the other thing that we're working on is diagnostics. The goal here is to report actionable or explainable errors to users. Previously we just kind of said what happened, but not why: it's like, "x should not be zero", but why should x not be zero? In order to do this, I've been building an interface called the diagnostic context.
H
The diagnostic context takes a mod, an IRModule in TVM today, and allows you to report errors against it, and it deals with all the generation of the program errors, the pretty printing, and all that stuff for you. You can see an example here from the conv2d type relation, which checks the types for conv2d; this actually produces an error message. You can see the API is pretty simple.
H
When I have a diagnostic context, I call emit: I can attach an error to a span, and then I can use sort of C++ ostream-style error reporting to generate an error message. What we'll get out is an error message that looks like this: you'll get a pointer underneath the program, and it will tell you, in an explanatory way, what happened. So this message is telling us that conv2d requires that...
H
...the number one is equal to the dimension five, and it explains to us how it got there: it divided the input channels by the number of groups, and that needs to match the second dimension of the weight, which it doesn't today. This is not super perfect yet, but it's just an example of me spending 20 or 30 minutes rewriting some of these error messages.
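The emit-against-a-span pattern described above can be sketched as a toy diagnostic context; the class, the Span fields, and the rendered output below are illustrative assumptions for this transcript, not TVM's actual API:

```python
from dataclasses import dataclass

@dataclass
class Span:
    line: int      # 1-based line in the printed program
    column: int    # 1-based start column of the offending token
    length: int    # number of characters to underline

class DiagnosticContext:
    """Collect errors against spans of a program, then render each one
    with a caret line, so users see *where* and *why*, not just *what*."""
    def __init__(self, source: str):
        self.lines = source.splitlines()
        self.diagnostics = []

    def emit(self, span: Span, message: str) -> None:
        self.diagnostics.append((span, message))

    def render(self) -> str:
        out = []
        for span, message in self.diagnostics:
            src = self.lines[span.line - 1]
            caret = " " * (span.column - 1) + "^" * span.length
            out.append(f"error: {message}\n{src}\n{caret}")
        return "\n".join(out)

program = "%0 = nn.conv2d(%data, %weight);"
ctx = DiagnosticContext(program)
ctx.emit(Span(line=1, column=23, length=7),
         "conv2d: input channels / groups must match weight dimension 1")
print(ctx.render())
```

Decoupling collection (`emit`) from presentation (`render`) is what lets one context handle pretty printing uniformly for every pass that reports through it.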
H
You can also see another one here: I've actually used the diagnostic context to help debug the diagnostic context. As I'm trying to get the span information to flow through the program in order to do error reporting, I can generate diagnostics telling me that there's a region where spans are not defined, and you're actually seeing those warnings from me debugging last night while working on this branch. You can see another thing here: it can give you useful, traditional compiler error messages. Here, on line 97, I made a typo.
H
I forgot a percent sign in front of the variable, and it's actually telling me that I forgot a percent right there. You can see the errors have really come a long way, at least in my opinion, from what we've had before, and we're working on shipping this. And then, finally, there are runtime errors, which we haven't really made much progress on yet.
H
So one suggestion I had was to define symbols that the kernels will call into at runtime, so that we can more flexibly control how errors are generated; today we sort of inline the error message into the generated code, and there's not a lot of flexibility there. Another thing we could do is potentially use the diagnostic information in either the graph runtime or the VM to help report runtime errors against the static program. Obviously we might want to be able to turn this off, and we can discuss some of those features.
H
But I think me and a couple of other people in the community are going to start to work on this part next, once I ship the diagnostics. So finally, next steps: there's a post on the discussion board that's linked in these slides. That's the anchoring point for talking about error reporting, and that's where the new internal error is going to send you.
B
Okay! Well, I think that we will have a few minutes, hopefully at the end, to discuss more, but I'm really excited about error messages. Anytime somebody's getting up to speed with TVM, having these nice pointers to exactly what's going wrong is going to make a huge difference, so I'm very, very stoked for that. Okay, friends, we're going to try again now.
B
I know there have been several talks in a row, so I want to just pause and have everyone hold their breath collectively with me: we're going to take a crack at getting Haichen's screen up. Okay, Haichen, are you ready? Oh, victory, perfect, Haichen, this is awesome. Awesome, cool. Okay, everybody, after a couple of tries we've got the super exciting news on TLCPack. Please, Haichen, take it away.
G
Right, thank you, Zach. So can you all see my screen? Good. Okay, so this is just a quick announcement. In the past, the TVM project could only release the source code, and we will continue to do that, but we also want to make it more user-friendly for new users to be able to install TVM just using pip install.
G
So we prepared a pip wheel called tlcpack. We didn't use the TVM name directly because of the Apache licensing policy: it disallows bundling binaries that are not license-compatible, and since a lot of common use cases for TVM want CUDA enabled, we also include CUDA, which bundles some of the CUDA libraries into the wheels.
G
So that's why it isn't allowed to use the TVM name according to the Apache policies, and we chose a different name, tlcpack, which stands for Tensor Learning Compiler pack. Now you will be able to install TVM using pip install tlcpack -f, pointing at the tlcpack.ai wheels.html page. We didn't do the regular hosting on PyPI because there's a file size limitation.
G
If we include a CUDA version, then the wheel size becomes around 400 megabytes, which exceeds the file size limitation. We already filed a request to increase the limit, but it takes quite a while to be approved by PyPI. So that's why we first host it on a custom website, and you'll be able to install it from there.
G
After you install tlcpack, you just directly import tvm; you don't import tlcpack, and TVM will be available. There are currently four different variants of tlcpack. Plain tlcpack is CPU-only; it bundles LLVM, so you don't need to install LLVM on your machine, since it already includes all the LLVM shared libraries that TVM requires to do compilation. tlcpack-cu100 can do both CPU and GPU, but it comes with CUDA 10.0, and so on and so forth; there are also CUDA 10.1 and 10.2 variants. We currently support Python versions from 3.6 to 3.8, and so far we only have Linux platform support; wheels for macOS and Windows will be released in the future. We also have a plan to produce these binaries every month from the TVM master branch, just to keep up with the latest features being merged into TVM, so you can always try upgrading every month if you see some feature is not inside. I can give a quick demo to show how we can use it. So here, let me just create a new environment called test-tlcpack using Python 3.7.
G
Let's copy-paste this. I have CUDA 10.1 installed on this machine, so let's just do pip install tlcpack. Cool.
A
Do we think macOS and Windows packages will be available?
G
I'll try to include that, because you can do cross-compilation, but you probably won't be able to run it; I'll think about that. The Windows version will probably take a little bit longer, because I don't have a Windows machine. If someone else would be interested in contributing, please contact me, either in the Slack channel or by replying directly in this thread in the TVM discussion forum.
A
G
That's a good question. I can, yeah.
G
Right, okay, now you can do it like this. Okay, you need this... we won't continue with that. Also, just to be clear, I'm using a slightly newer version of CentOS, because we want to use the latest version of gcc/g++ on CentOS.
G
I think Tianqi moved the CI Docker images into the tlcpack organization on Docker Hub, I think I remember.
K
B
Thank
you
so
much
ajin.
This
is
super
exciting.
It's
gonna
make
people's
lives
a
lot
easier
for
for
incorporating
these
things
and
and
still
keeping
all
of
our
apache
obligations.
You
know
up
to
snuff,
so
one
thing
chris
and
I
are
just
realizing
it
looks
like
this
agenda
ended
up
being
a
bit
aggressive
folks
had
too
many
good
ideas
for
one
meeting,
so
we
might
not
get
to
everything
we
hoped.
But
but
one
thing
that's
really
exciting.
B
If that ends up being the case, please do just stay tuned and watch for posts from Chris, where you'll be able to raise those ideas, and we'll figure out the right pacing in the following months as we keep exploring this space. So, with that, Mick, do you want to take it away?
J
Sure. So... share screen. Do you see my screen? Okay. So my name is Smith; I'm an intern at OctoML, and I'm demoing the Relay visualizer today. So what is Relay? Relay is a representation of deep learning models and their execution in TVM. So why do we need a visualizer; what is the problem? The first problem is that Relay code is in text, and it is hard to see how the data flows. So here is an example of Relay code.
J
It doesn't look very easy to analyze, right? Luckily, we have the AST of the code, so let's see how that looks. Here is the AST of the code, but it's still very messy, and you don't really know how the data flows here. So here's the project that I'm doing: extract only the important nodes, which I think helps TVM developers see how the data flows.
J
So here you can see that you have the weight, and it's passed into a conv2d, and the result is passed into another conv2d, and both results are added together and become the final result. So that's the first problem. The second problem is that performance data is separated from the Relay program, and it's hard to make the connection between the two.
J
So let's look at the same code. We have the code here, and then we have the performance profile on the right, and we don't really know how to connect them; and this is a small example of just three lines of code. So I created a visualization that connects the Relay graph and the performance data.
J
So here is the visualization of a Relay graph. This is a small example: we have constant two, constant one, and zero and one, and then we add them together to get the Relay function.
J
It used to look a lot messier than this, and we trimmed it down a lot to make it look cleaner. Let's try expanding: we can expand one node here. I'm not going to expand them all, because that's going to take like 30 seconds.
J
You can see that the graph is different, because this is just a dummy graph; it's a work in progress right now, and I haven't connected the graph from the previous view to this one yet. So the plan is: we have a graph, and we have the performance data, and we make the connection; you see the line. If we hover over a node, we can see which entry in the overall performance it corresponds to. For example, for this one, you see that this is the performance of this node.
J
You can also sort the runtimes, by runtime or by GFLOPS if you'd like, and the connection lines change order along with it. So, yeah, the purpose of this is to make TVM development a little bit easier and debugging easier.
B
That's beautiful, I love it! Oh, can you leave the slides up for us for a minute? This is just so cool. So I was wondering, when you were showing the visualization of, say, ResNet: this lets you see the structure much, much better, but man, it's very tempting, right? You already have all this interaction; I'd love to just click an operator and change it, or drag it. Have you thought about editing capabilities in this?
J
We have talked about it, but it's not the current focus of the work right now. I think the priority is integrating the performance data first, and then interaction, like making changes on the nodes, because changing a node implies figuring out how to propagate it back to the Relay code.
L
H
...of, like, a set of open TypeScript components or whatever. We could, say, embed it in Visual Studio and have sort of inline visualization; I've done stuff like this before for Lean, but I think it'd be kind of cool here, because you can have the language server protocol deal with the compiler to generate...
B
That's... that's a joke, friends! You can make your code fast by sliding. Okay, oh man, when am I going to learn not to even bother on Zoom? Okay, Chanwa, thanks so much for the great talk. Any other questions about the viz work that we just saw?
B
Okay, so jumping back to the agenda: we did have a couple of slots for shout-outs and some time for open discussion. We recognize that we didn't leave enough time for that discussion this month, and we'll certainly plan accordingly in October, here in about a month. But I think we're going to go ahead and see if there are any shout-outs. Chris, I don't know if you want to describe the shout-outs process.
A
Now is the time for you to just, you know, give somebody a shout-out, say thanks, and acknowledge the work that they've done. It's just a way to recognize everybody who contributes and makes TVM such a great...
B
...project. So, in addition to error reporting and all the other contributions we've seen this week, I've been really excited about the Bring Your Own Datatypes PR that finally went up, so I'd like to give a shout-out to Gus Smith and Andrew Liu, who have been working on that really hard for several months.
B
Everyone has that mute button on, right, and it's just so much effort to mouse over there and click that little mute button. Nobody wants to be the first; nobody wants to be the first. I know you all have stuff that happened this month that made your life easier, I know it: some PR landed and you felt a sense of relief as soon as it did.
M
H
Double-muted, yeah. I actually was able to write a small command-line utility which used the Rust bindings to parse a program from disk in the Relay text format and print parts of it out. So the Relay bindings on my open PR are almost done, and then Max has a bunch of improvements in the pipeline as well, I think.
N
On that note, one of the posts on the Discourse that I'm excited about is the object... well, I guess, what was it... the schema, yeah, object schema, thank you, that's what I was looking for. Because a lot of the pain in making a new first-class member of the ecosystem is, you know...
N
...reading the data layout and copy-pasting it exactly right, and even in its most basic form this would alleviate that level of pain in bringing up a new language into the ecosystem. So that's very cool.
B
All right, friends! Well, I think that we are just about a minute out from time being up, so if there are any more shout-outs, please do holler. But I did just want to take a minute to thank everybody for showing up, whatever time it is for you, wherever you are; I know for a bunch of folks on the west coast it's a bit early, so I super appreciate it. It's great to see so many shining faces and to hear about so much great work happening in the community.
B
Please do stay tuned for announcements for next month: we're going to want to collect topics, and we're going to tweak the agenda a little bit, hopefully for more discussion time. And make sure you get your proposals in for the TVM Conf CFP; it's going to be a really spectacular event.
A
And one last note: if you have any feedback, if you'd like to see anything done differently, if you have suggestions, I would love to hear it. This meeting is for the entire community; it's for everybody, and I want to make sure that you feel like it's great, that you feel like it's valuable. I'm open to any and all feedback, and thank you, everyone, so much for...