From YouTube: Apache TVM Community Meeting, 15 October 2020
Description
The Apache TVM Community Meeting for October, 2020.
Apache TVM: https://tvm.apache.org
Discuss Forum: https://discuss.tvm.apache.org
Meeting information: https://discuss.tvm.apache.org/t/tvm-community-meeting-october-15-2020/8112/2
B
Oh boy, exciting. Hello out there in TV land. We're just about at the 9:05 mark, so in about a minute we'll start things up. But following our ritual, I think while we're waiting for our friends we should probably, you know, start with a good pun. So, you know, the other day I was talking to my buddy Jared, and he was saying that I really need to spice up my speech a little bit, be more articulate. And so he got me a thesaurus.
B
All right, folks: well, it is 9:05 Pacific, so welcome everybody to another TVM community meeting. This is our second in this new format, so we're still learning; it's an experiment. Please make sure to send us feedback. Today we're going to be talking about the TVM conference, getting some subproject updates, seeing a couple of quick demos, and then having shoutouts for great things happening across the community. So I think, Chris, if you wanted to start with the TVM conference, or do we want to do a round of intros first?
A
I helped to organize these meetings and put them together, so I want to say a big welcome to everybody.
B
My name's Zach. I'm a researcher at the University of Washington, where we work on a lot of TVM-related projects, and I also help out with some compiler work at OctoML.
C
Hi everybody, I'm Gus. I'm Zach's student at UW, and I also work with Luis at UW. I was working on a TVM-based project for my quals starting in 2018, I've worked a lot with TVM in the past, and my new project also involves TVM, so happy to be here.
B
All right, well, we do have a few more people showing up, you know, in between sessions. Did you want to say hi?
F
Just to say hi, I'm listening in. I just joined Dartmouth less than two weeks ago, more on the business side, but I'd love to understand how the community works. So thank you for letting me in.
B
Welcome! Awesome, awesome. And as more people show up, if anybody wants to pipe up and say hi, please do. You know, especially during lockdown and the pandemic, it's nice to hear people's voices, see faces, et cetera. But I think we'll go ahead; last time we ran out of time, so let's keep moving through the agenda. Chris was going to tell us a little bit about all the exciting stuff coming up for TVM Conf here in December. So please, Chris.
A
Yeah, so TVM Conf is coming up this December, the first week of December, and we're making it a three-day event this year. Day zero is going to have a full day of tutorials, starting all the way from beginner introductory tutorials and going all the way up through more advanced tutorials about how to write your own kernels and how to do optimizations. We're really excited about that.
A
We had an amazing CFP with over 40 talk submissions, so the review committee is in the process of reviewing those right now. We're going to be building out a schedule through this week and next week and notifying submitters, so if you've submitted a talk and you're waiting to hear back on the results, you'll hear back before the end of the month. Our plan is to have the schedule ready to go by October 30th, so that we can open registration by November 2nd. TVM Conf is free.
A
It's going to be entirely online this year. When registration opens up, we're going to be sending swag to everyone; we're going to try to get it to you beforehand, so that it'll be ready for the conference itself. We're super excited about it, so we're hoping that we'll see everyone there and that we'll have a really big attendance.
B
And just to get everybody stoked: I saw some of the early designs for some of the swag. It's going to be great stuff; I think you're going to be really pleased with some of the logos that are coming out. Super, super excited. And if you've never been to a TVM conference, it's a really fun event. I'm particularly excited about the addition of the tutorial day; I think that'll help continue growing the community. Okay, so our next phase is subproject updates, and kicking things off we have Lily.
G
All right, so I'm just going to be giving a quick update. I'm an engineer at OctoML, and for the last month or so I've been working on quantization in TVM; we're partway through implementing it.
G
This is just a quick sneak peek at what we've been doing. This is our proposal: we're going to be building on top of the QNN ops, which is what already exists in TVM to do quantization. Right now QNN only supports importing previously quantized graphs, like a quantized graph from PyTorch, for example. We're going to be taking a more traditional compiler's approach, with multiple transformation passes, to take existing Relay models and turn them into quantized models that you can run. So, sort of on the right here.
G
Let me see, do I have a pointer here? On the right here we have what already exists: you take your pre-quantized model, you import it, it goes into the QNN dialect, and then we realize it.
G
Eventually, hopefully, we will be able to do training on the QNN dialect, but Relay doesn't support that yet, so that's sort of in the works. What I'm working on right now is taking an existing Relay model, quantizing it, calibrating it, and also constructing a mapping.
G
With that mapping, you can figure out what the scales and zero points should be after you've imported the graph. I'm done with the quantization pass and with constructing the mapping between pre- and post-quantization nodes, and I'm in the progress of doing the calibration.
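Lily doesn't spell out the calibration math here, but the scale and zero-point relationship she describes is standard affine quantization. The sketch below is a minimal, illustrative version (the function names are invented for this example, not TVM's actual API):

```python
def choose_qparams(fmin, fmax, qmin=0, qmax=255):
    """Pick an affine scale and zero point mapping [fmin, fmax] onto [qmin, qmax]."""
    fmin = min(fmin, 0.0)  # the representable range must include 0.0
    fmax = max(fmax, 0.0)
    scale = (fmax - fmin) / (qmax - qmin)
    zero_point = round(qmin - fmin / scale)
    return scale, zero_point

def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Map a real value to an integer, clamping to the quantized range."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))

def dequantize(q, scale, zero_point):
    """Recover an approximation of the original real value."""
    return (q - zero_point) * scale

scale, zp = choose_qparams(-1.0, 1.0)
q = quantize(0.5, scale, zp)
# dequantize(q, scale, zp) is close to 0.5, within one quantization step
```

Choosing different `fmin`/`fmax` per layer is exactly what a calibration pass decides, which is why those values are left as placeholders until calibration runs.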
G
This is some example code for quantizing a ResNet-18 that's imported from PyTorch. This chunk of code here is just importing an unquantized PyTorch model into Relay, and then here we quantize the model. The only thing you have to do is call the quantize pass, passing in the module and the parameters. We also have an option that will allow you to skip layers.
G
So it won't quantize a subset of the layers in the graph. A lot of the time when you're quantizing a model, you want to skip the first few layers and leave those in full floating point, or maybe for some reason you want to skip some arbitrary number of layers, so we have that functionality.
G
This is an example of the imported ResNet-18; it's just a screen clipping of the output of that code. Here we can see the original: there's a conv2d, a batch norm, and then a ReLU. When we quantize it, for each op, like the conv2d, we'll insert a quantize, and then we insert Relay variables.
G
I insert Relay variables into the quantize ops as placeholders for the scale and zero points, and I do this throughout the entire graph. The purpose of doing this is so that in calibration we can set them really easily, to whatever we want. And if people aren't familiar with QNN, this graph itself is not actually runnable; there are some other steps that we need to do.
G
We need to run a canonicalize pass and a realize pass so that it will actually run. Okay, so the next question you might be wondering is: why are there so many quantize and dequantize ops? We want to provide the flexibility for the user to quantize anything they want, and QNN, which is the existing quantization framework, is just heavily based on PyTorch.
G
In some places it doesn't really make sense to insert a requantize, so we're going to be writing an optimization pass later to get rid of the redundant quantize and dequantize ops. Then, in the calibration pass, we essentially replace the Relay variables with values. This is a global calibration pass, so we just replace all the scales with one number and all the zero points with another number.
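The global calibration step described above can be pictured as a simple substitution over the collected placeholders. This toy version uses a plain dict, whereas TVM's pass substitutes Relay variables; the names are invented for illustration:

```python
# Placeholders collected during the quantization pass, keyed by name.
# In the real pass these are Relay variables inserted into the graph.
params = {
    "conv0_scale": None, "conv0_zp": None,
    "dense0_scale": None, "dense0_zp": None,
}

def global_calibrate(params, scale_value, zp_value):
    """Assign one global scale and one global zero point to every placeholder."""
    out = {}
    for name in params:
        out[name] = scale_value if name.endswith("_scale") else zp_value
    return out

calibrated = global_calibrate(params, 0.05, 128)
# every *_scale placeholder is now 0.05, every *_zp placeholder is 128
```

A smarter calibration would assign per-layer values by observing activation ranges, which is exactly the flexibility the framework is designed to allow.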
C
What is canonicalization, actually? Is it the process of putting in the constants, or...?
G
No,
I'm
not
actually
sure
where
the
name
comes
from,
but
it's
just
it's
essentially
lowering
taking
the
quantize
it
it's
taking
these
quantized
functions
and
turning
them
into.
Like
you
know,
oh.
H
I just want to chime in: the actual motivation for this is flexibility, because TVM today does have quantization support, but it's difficult to modify, especially if you wanted to implement your own calibration pass. There are a lot of different ways you can choose the scales and zero points for a model, and we kind of only support one of them.
H
So the goal here is to provide a good framework for being able to implement any calibration pass you want, so that for your specific architecture you can have as high accuracy as possible. That's how Lily has architected it: it's really easy to substitute in whatever calibration algorithm you want. So I think it'll be pretty cool.
H
In practice, functionally, it enables the same sort of workflow that you have with the existing quantization. The downside of what we have in TVM today is that it's not designed for flexibility: you couldn't easily go in and change how it's assigning the zero points, for example; you just kind of have to accept it as is.
H
The new work heavily utilizes QNN, which we're not using in our existing quantization today. So now we have a unification between Relay-quantized models and pre-quantized models as well, which is kind of nice. So there are a couple of benefits; the main ones are flexibility and unifying around QNN.
G
Also,
isn't
right
now,
most
most
of
the
workflow
that
people
do
is
they'll,
take
a
pre
like
a
pre-quantized
graph
and
import
it
into
qnn
and
then
use
it.
B
Makes sense, makes sense. Any other quick questions for Lily?
B
Okay. So a minute ago Gus was asking, well, why is it called canonicalization, what's going on there? And this is a symptom of a problem that we have pretty often in TVM, which is that it's hard to go to the docs and get answers to these kinds of questions. The issue, of course, is that we need different kinds of docs for different members of the community.
A
Here we go, all right. So everyone can see that? Okay. My name is Chris Hoge. I'm a developer advocate for Apache TVM, and I wanted to talk about some of the work that I've started doing on helping to refactor the Apache TVM docs.
A
Okay, so I wanted to give an overview of what we have right now. TVM has an extensive amount of documentation, like this doc site map right here, which I spent some time collecting just to get an idea of what was there and how much we had. This is just about half of what we have out there.
A
There's all sorts of information: how to install TVM, how to contribute, and all sorts of tutorials on how to compile models from different frameworks. It's really fantastic.
A
But one piece of feedback that we've gotten is that it can sometimes be hard to find the piece of documentation that you're looking for, and there are aspects of the system that haven't been recorded. Even when researchers are interested in learning about TVM and applying it to their own research, they have a hard time finding explanations of what exactly is happening from the documentation.
A
And so one of the things that I wanted to do was take a larger, holistic view of what we have with the documents and what we could do, and come up with a new vision of how to build a documentation framework which serves a large number of TVM users.
A
It's really amazing, and it's so great to see everything that is there. One of the really neat things about the documentation is that we run gate checks against it: if there are any documents that have code, that work through examples or through tutorials, we actually run that code against a production system to make sure that it compiles and that the user can be successful with it.
A
Okay, so how are we going to take advantage of the things that we have, make those more available and easier to find and use, but also build out a plan for writing new documents and building the system out? The model that I've proposed for the community, and that we've already started implementing within the site map, talks about four principal types of documents, and these cover a number of different intentions that you want to address.
A
They
want
to
try
to
try
to
address,
and
so
on.
The
first
of
these
you
have
there
are:
there
are
different
stages
in
people's
life
cycles
of
when
they're,
when
they're,
when
they're
learning
about
our
project.
There's
when
they're
strictly
learning
about
it
like
they
want
to
figure
out
how
it
works,
they
want
to
see
how
it
works,
they
kind
of
want
to
see
the
results
they
get
out
of
it.
A
You
have
the
people
who
are
who
are
working,
who
actually
they're
they're,
trying
to
accomplish
some
sort
of
task
either
it's
contributing
to
the
project
itself,
and
so
they
are
working
on
adding
new
code
to
the
project
or
they're,
trying
to
compile
models
and
put
them
into
production
and
and
they
need
to
be
able
to
to
to
compile
them
and
and
have
successful
results
with
that
and
so
along
the
x-axis.
We
have
this.
We
have
this
learning
and
you
know
to
working,
and
then
we
have
the
from
the
practic
on
the
on
the
y-axis.
A
We
have
from
practical
to
theoretical
practical
means
that
you
just
want
to
learn
something
you
just
want
to
figure
it
out.
You
just
want
to
be
able
to
run
the
code.
You
want
to
be
able
to
contribute
to
it
versus
on
the
theoretical
side.
If
you
want
to
understand
why
it
is
that
tvm
does
something
how
the
how
tir
works,
how
quantization
works,
and
with
this
model
in
mind,
we
can
break
up
documents
into
four
different
major
types.
The
first
is
the
tutorials.
A
Tutorials
are
primarily
made
for
onboarding
for
introducing
a
user
to
the
system
and
getting
them
successful
right
away,
and
so
the
very
first
thing
they
do
is
they
arrive.
They
get
the
software
installed,
they
run
through
compile
some
models
and
they
see
the
the
positive
results
that
come
out
of
that
similar
to
tutorials
are
how
to's.
A
Now,
where
a
tutorial
is
meant
to
introduce
someone,
just
get
someone
to
be
successful,
they
just
they're
successful.
They
see
what's
happening
with
the
with
the
with
the
system
and
they're
able
to
use
it
right
away.
How
to's
are
more
about
problem
solving
it's
for
the
person
who
has
installed
tbm
they've
started
working
with
it,
and
then
they
have
particular
problems
that
they
want
to
solve.
So,
for
example,
importing
an
onyx
model.
There
could
be
a
how-to
on
on
how
to
do
that.
There
could
be
a
how-to
on
how
to
write
your
own.
A
How
to
write
your
own
data
type
for
the
system,
and
so,
and
so
these
are
focused
on
solving
problems
that
a
user
has
right
now
now,
moving
down
to
the
more
theoretical
side,
we
have
the
explainers
which
describe
the
why
the
system
was
designed.
The
way
it
is,
and
so
I
think
a
perfect
example
of
this
is
the
research
papers
that
are
associated
with
tbm.
A
These
explain
exactly
what
tvm
is
doing
and
what
all
its
components
are
doing
at
a
theater
at
a
theoretical
level
that
helps
the
user
understand
how
the
optimization
is
working,
how
the
templating
is
working
and
then,
finally
there
you
know
these
are
primarily
meant
for
understanding,
and
then
the
final
type
of
documents
are
references.
These
are.
A
These
are
just
like
the
descriptions
of
how
the
software
works,
that
developers
can
use
to
create
new
things,
and
so
it's
the
api
references
which
functions
do
you
need
to
call
to
be
able
to
to
to
tune
an
object
or
to
launch
a
launch
a
service
so
that
you
can
so
that
you
can
so
that
you
can
tune
a
model
remotely?
A
Well,
the
first
is
writing
a
new
introductory
tutorial.
The
goal
of
this
tutorial
is
going
to
be
to
get
the
user
all
the
way
from
installing
the
software
to
using
it
successfully
on
some
basic
models.
This
is
a
work
in
progress
right
now.
It's
right
now
we're
waiting
on
a
few
bits
of
code
to
land
to
be
able
to
expand
this
further
and
so
we're
looking
at
right.
A
Now
we
have
tvm
is
pip
installable,
and
so
that
takes
away
the
burden
of
of
needing
to
install
from
source,
although
we
have
directions
from
installing
from
source
we're
also
looking
at
integrating
tvmc
into
this
introductory
tutorial,
so
that
it
it
lowers
the
barrier
to
entry
as
much
as
possible
and
we're
going
to
be
talking
about
tvmc
in
just
a
few
minutes
here.
A
Under
this
framework,
the
tutorials
are
actually
they
fall
more
under
the
how-to
section,
because
they're
focused
on
how
do
you
accomplish
particular
tasks,
and
so
we're
going
to
be
looking
at
all
the
tutorials
figure
out,
which
ones
are
truly
tutorials,
which
ones
are
how
to's
and
refactor
those
appropriately
we're
also
going
to
be
writing
new
how
to's
and
expanding
the
explainers,
and
so
that
was
just
a
quick,
really
fast
overview
of
kind
of
what's
been
going
on
with
the
docs
project,
and
I
would
be
happy
to
take
any
questions
that
people
have
about.
C
I think your explanation was sufficiently clear, but I do have a question. I've used TVM for the past two years now, and I do feel like a big part of the documentation that I'm missing is often just simple code documentation. Large driving documents would have been super helpful for me as well, but I almost do think that just enforcing some code documentation is certainly necessary too.
A
Yeah, I mean, I think that there are a couple of answers to that. The first is at the developer and steering level: part of building documentation into an open source project is making it a requirement for code to be merged. One of the things that I've wanted to do to help improve the documentation is to start adding a checklist to pull requests.
A
The checklist says: these are the things that you have to do before you merge a request, and one of those might be "have you provided documentation?" I think that's a good way to address it. This is largely left up to the developer community to decide, but have a checklist: is the code documented? Have you documented your functions?
A
Have
you
there
are
linters
that
you
can
that
you
can
use
for
various
projects
that
enforce
that
also,
but
sometimes
it's
just
at
the
code
review
level,
somebody's
saying
hey:
we
need
to
have
a
documentation
in
it.
You
know
in
the
code
and
it
and
it
meets
the
standard,
and
if
you
don't
have
it
explaining
why
you
don't
have
it
before
the
code
merge
merges.
G
One
thing
that
I
think
so
I
started
working
on
tvm
about
four
months
ago
now.
One
of
the
things
that
was
really
difficult
for
me
was
that
a
lot
of
the
code
is
just
not
commented
and
function.
Descriptions
will
just
essentially
restate
the
function
name
sometimes,
which
I
think
you
know
like.
G
I've
discussed
this
with
chris
a
little
bit,
there's
kind
of
a
gap
between
like
like
there's
like
documents
for
people
who
are
you
know,
experts
in
tbm
like
apis,
etc,
and
then
there's
docs
for
people
who
are
just
trying
to
like
press
run
and
there's
not
a
ton
in
between
of
getting
someone
from
like
user
to
person
who
can
develop
it
or
developed
in
tbm,
and
so
I
think
you
know
maybe
just
requiring
people
to
comment
their
code
before
they
commit.
It
would
be.
G
You
know,
really
helpful
or
you
know
maybe
having,
like
you
said,
gus
like
documents
where
maybe,
if
there's
a
common
pattern,
people
take
chunk
of
code
and
intensively
comment
it
so
that
people
can
sort
of
see.
What's
going
on.
B
One hundred percent. And actually, this summer there was a big effort to push a new PR through, and we sort of tested out some of these ideas about how to fit it into this new documentation scheme. It turns out we actually have the authors of that PR here with us today, Andrew and Gus, and they're going to tell us about that effort: BYODT, Bring Your Own Data Types. And maybe during the description they could take a moment...
B
Just
to
you
know
highlight
how
the
documentation
part
of
that
experience,
went
and
maybe
their
thoughts
on
incorporating
it
into
pr
requirements
in
the
future.
So
with
that
andrew
and
gus,
please
take
it
away.
C
Thank you, Zach. If we forget to talk about that, please remind me at the end, because we can definitely talk about it. I'm just going to present really quick; I have one slide, just kind of an overview, and then I'm going to hand it over to Andrew to do the demo and talk in more detail. So, just to introduce myself again: my name is Gus Smith. I work with Zach and also with Luis.
C
When I started my PhD at UW, I started working on this project in TVM that I called Bring Your Own Data Types. I worked on that from 2018 to the beginning of this year, and then Andrew came on and helped me clean it up, put the bow on it, and ship it out. Really, Andrew took over the project from there, cleaned it up, and helped get it merged into TVM, so he's going to be talking about a lot of the technical details.
C
He's going to be showing off some of the documentation that we wrote, which also serves as the tutorial, but I just wanted to give the one-slide, high-level overview and get into your mind what Bring Your Own Data Types is all about.
C
The whole point of the Bring Your Own Data Types framework is basically to allow users to easily define their own custom data types. Let me elaborate on this, just to make clear exactly what the scope is. When I say custom data types, I'm talking about scalar data types: how a sequence of 32 or 64 or however many bits translates to some real number.
C
TVM natively supports float32 and float64 and ints, and now it actually supports bfloat too, which is a different floating-point type. But there are actually, and you might be surprised to find this, a bunch of other number formats out there that aren't just the IEEE 754 floating-point type. Bfloat is a good example of that, but there are many beyond it. We talk a lot about posits; they're a really interesting example that people are really excited about.
C
But
there
are
a
bunch
of
you
know:
kind
of
data
types,
researchers
who
are
building
these
new
type
systems
or
these
new
data
type
formats.
C
So the Bring Your Own Data Types framework basically allows users to register and define those custom data types, and we're hoping that it will let users experiment with these unique data types in TVM. It can be really hard to actually test your new data type, to see if it's numerically correct, or if it's going to have the numerical properties that you want it to have on the workloads that you care about.
C
Often, if I make a new data type and build a prototype of it in software, I'll need to hack that software prototype into the workloads that I care about. But if we can build a system on top of TVM where TVM essentially just compiles those data types right into your program, then it's really easy to test real workloads with your data type.
C
Similarly, if I'm a machine learning researcher and there's this new fancy data type with numerical properties that might be really good for deep learning specifically, which is something the posit claims to have, then I would like the chance to really easily play around with those new data types, because maybe it's going to make my network a lot better. Our hope is that this framework is going to allow people to do that.
C
Just to be clear about the scope of what we're doing here: this is currently limited to software-emulated versions of data types. We're not talking about compiling custom instructions and custom number formats for custom hardware yet; that's pretty far down the road. This is really just for people who want to test out software-emulated versions of custom data types.
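As a concrete taste of what "software emulation" means here: bfloat16, which Gus mentions TVM now supports, is essentially float32 with the bottom 16 mantissa bits dropped, so it can be emulated on an ordinary CPU with integer bit operations. This is a sketch of that idea, not TVM's implementation:

```python
import struct

def float_to_bits(f):
    """Reinterpret a float32 value as its raw 32-bit pattern."""
    return struct.unpack("<I", struct.pack("<f", f))[0]

def bits_to_float(b):
    """Reinterpret a 32-bit pattern as a float32 value."""
    return struct.unpack("<f", struct.pack("<I", b))[0]

def to_bfloat16(f):
    """Emulate bfloat16 by keeping only the top 16 bits of float32 (truncation)."""
    return float_to_bits(f) >> 16

def bfloat16_to_float(b):
    """Widen a 16-bit bfloat16 pattern back to float32 by zero-filling the mantissa."""
    return bits_to_float(b << 16)

# 1.0 survives the round trip exactly; pi loses low mantissa bits but stays close.
roundtrip = bfloat16_to_float(to_bfloat16(3.14159265))
```

A real emulation library would also implement arithmetic (add, multiply, and so on) on the 16-bit patterns; that library is exactly what BYODT lets you plug into TVM.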
C
So somebody builds a library that emulates the behavior of some number format that they care about. Under the hood it's running on the CPU using floats or ints or whatever, but it's implementing their number format. BYODT will allow you to plug that software-emulated version of the data type into TVM and have TVM compile your workload to use that library. And we've already actually used the framework.
C
For my qualifying exam at the beginning of this year, I used the framework as a test case to show that it's useful. I did an examination of how changing the data type of a pre-trained model affects its accuracy: how does the model retain its accuracy with a range of different data types that I essentially just pulled off the shelf from across the internet?
D
I'm currently an undergraduate here at the University of Washington, working with Gus and Zach, and I'll be walking through a quick demo of the Bring Your Own Data Types framework. At a high level, we're first going to create a simple TVM program that doesn't use custom data types, then introduce custom data types to that program and highlight some of the key constructs we have developed to support them.
D
To start off with, we have a very simple TVM program that just takes in two inputs, x and y, and outputs their sum.
D
I think everyone should pretty much understand what this is doing. Then we're going to generate random inputs and run the program, and we get output that looks as expected. So now let's try adding custom data types to this program. We're going to take our x and cast it to our custom data type, which we have called myfloat, and the way we tell TVM that this is a custom data type...
D
...is using this type syntax, where we use the word "custom" and wrap our custom data type name in square brackets, like this. Our custom data type is going to be called myfloat. Under the hood it's going to work exactly like floats, but we're going to introduce it through the custom data types channel to show how users would use their own custom data types.
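The `custom[myfloat]32` spelling Andrew describes is just a dtype string; a small parser makes explicit the pieces the compiler has to recover from it (this is illustrative only; TVM's actual parsing lives in its C++ DataType code):

```python
import re

def parse_custom_dtype(dtype):
    """Split a dtype string like 'custom[myfloat]32' into (type_name, bits)."""
    m = re.fullmatch(r"custom\[(\w+)\](\d+)", dtype)
    if m is None:
        raise ValueError(f"not a custom dtype string: {dtype!r}")
    return m.group(1), int(m.group(2))

name, bits = parse_custom_dtype("custom[myfloat]32")
# name == "myfloat", bits == 32
```

The type name is then looked up in the registry of user-registered data types, which is why an unregistered name produces the error Andrew shows next.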
D
So we're going to cast x and y to myfloat, add the two, and then cast the result back into a float. Functionally it should work exactly the same as above, but when we run this, we get an error that the type name myfloat has not been registered yet; TVM doesn't know what myfloat is. The reason for this is that users need to register their custom data type first, and the way we do that...
D
...is a function we created where you pass it a name, the name of your custom data type, as well as a type code. In TVM there's a one-to-one mapping between type names and type codes, and we're just going to arbitrarily assign 150 to our new custom data type. So we can run that, then try writing the same program as above, and now that we have the program we can print it out and see that we do indeed have custom data types in it.
D
We start off with float32 x and y, we cast them into our custom data type myfloat, we add the two, and then we cast the result back into a float. Okay, so now let's try running the program, and we get another error: the lowering function for target llvm, destination type 150, source type 2 is not found.
D
Although we have created a program using custom data types, we haven't yet defined how TVM should compile the custom data type. TVM doesn't really know what myfloat is, or how to do operations on myfloats, and intuitively we can see from this error that TVM doesn't know how to cast from source type 2, which is float, into destination type 150, which is our custom data type myfloat.
D
The way we allow users to introduce lowering functions, which tell TVM how to handle compiling a custom data type, is a function called register op. The first argument is a function, and then we have the name of the operator and the target, and because we're doing a cast, we have a source type name as well as a destination type name.
D
The first argument, as I mentioned, is a function, and this function should take in a TIR operator and return another TIR operator that tells TVM how to compile this code. We created a helper function, called create lower func, that handles a common use case where users may want to call external C functions. So we're going to call our helper function and pass it a dictionary.
D
FloatToCustom32 is actually defined in C++ code inside TVM, right here, and basically what it does is it takes in a float and returns a uint32 which is the raw bit representation of the float.
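The lowering machinery being described can be pictured as a table keyed by (operator, target, source type, destination type), with the "not found" error above being a failed table lookup. Here is a toy Python version of that idea, with a pure-Python stand-in for the external C function (all names here are invented for illustration; they are not TVM's API):

```python
import struct

# A toy stand-in for the lowering registry: a table keyed by
# (op, target, src_type, dst_type) mapping to a lowering function.
LOWERING_TABLE = {}

def register_op(func, op, target, src_type, dst_type=None):
    """Record which function lowers a given op for a given target and type pair."""
    LOWERING_TABLE[(op, target, src_type, dst_type)] = func

def lower(op, target, src_type, dst_type, value):
    """Look up and apply the lowering function, mirroring the 'not found' error."""
    key = (op, target, src_type, dst_type)
    if key not in LOWERING_TABLE:
        raise KeyError(f"lowering function for {key} not found")
    return LOWERING_TABLE[key](value)

def float_to_custom32(f):
    """Stand-in for the external C function: the raw bit pattern of a float32."""
    return struct.unpack("<I", struct.pack("<f", f))[0]

register_op(float_to_custom32, "Cast", "llvm", "float", "myfloat")
bits = lower("Cast", "llvm", "float", "myfloat", 1.0)
# 1.0 as raw float32 bits is 0x3F800000
```

Casting in the other direction, or adding two myfloats, fails with the same kind of lookup error until a lowering function is registered for that key, which is exactly what the demo runs into next.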
C
Sorry to interject here, Andrew, but for what it's worth, that file he was just showing is built into TVM just for testing purposes, because we wanted some kind of dummy custom data type to use. But that is the kind of C++ file you'd probably be writing if you were writing a custom data type: you'd be building a library and exposing these types of functions.
A
How is the runtime made aware of new data types? If I bring my own data type, do I have to recompile TVM, or is there a way to link the new data type to the runtime without recompiling?
C
Yeah, that's a great question. That's actually all happening at runtime; Andrew's going to show that in just a second. The register function and the register op function are actually manipulating TVM's runtime data structures, essentially putting a new entry into some TVM data structure. At runtime the user can say: we are now registering this function.
C
This type code is for this data type, and also, when you see this data type, you can use these functions to lower the code to something that TVM can actually understand. That's what all of this that he's showing does, the tvm.target.datatype.register_op calls.
D
Okay, I guess we can continue. So we have just registered a lowering function for casting between floats and myfloats. If we try running the same program as above, we get a slightly different error: the add function isn't defined yet for our myfloats. So we have to do the same thing, defining a lowering function for add for myfloats, like we did for casting between myfloats and floats.
D
These strings are basically the names of the functions that we exposed in the myfloat.cc file, which we can look at if you're interested. So we can try running it now, and you'll see that the float32 version, as well as the myfloat version, printed out exactly the same thing, which is what we expected. However, myfloat is a custom data type that we introduced to TVM, so yeah.
D
So now we're going to speed-run a quick example of how you would use custom data types over a whole model. I'm first going to define some helper functions to get MobileNet and some data, and then run MobileNet using plain float32s at first, so we have something to compare against.
D
So these are the first 10 values of the float32 run of MobileNet. Then we're just going to define a few more helper functions, and here we're going to use a pass called ChangeDatatype. That is a Relay pass that we developed, and basically what it does is take a Relay model or a Relay function, a source data type, and a destination data type, and convert everything from the source data type to the destination data type. So this is just a Relay pass to help users convert their own Relay functions into something using custom data types. We're going to do that here.
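Usage of that pass looks roughly like the sketch below. This follows the BYODT tutorial of the time; the import path, the pass interface, and the convert_ndarray helper for parameters are assumptions that may vary between versions.

```python
from tvm import relay
from tvm.relay.frontend.change_datatype import ChangeDatatype

src_dtype = "float32"
dst_dtype = "custom[myfloat]32"

# The rewrite needs type information, so infer types first.
module = relay.transform.InferType()(module)

# Rewrite every tensor annotated with the source dtype to the
# destination dtype.
module = ChangeDatatype(src_dtype, dst_dtype)(module)

# The model's parameters have to be converted to the new dtype too
# (convert_ndarray here is a helper from the tutorial).
params = {name: convert_ndarray(dst_dtype, arr) for name, arr in params.items()}
```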
D
And we should get an error, because, of course, when we're using entire models, there are usually a lot more operations the model uses that we have to register. So we have all those registrations here; we can just run that and finally run the complete model using custom data types. This is going to take a little while. You may notice that with Bring Your Own Datatypes we pass this disable_vectorize flag.
D
So vectorization is currently not implemented for Bring Your Own Datatypes, and there are a few other optimization things we're looking into. But if we compare this output using our custom data type with the output above from floats, we can see that they're exactly the same, which is what we wanted, because myfloat is basically an opaque version of float.
C
Thank you. Okay, one second. Yeah, thank you, Andrew. That was actually a notebook version of the developer tutorial that we have, so that same notebook is available in the developer tutorials section. I'm really just concluding with a slide of links, so there's not much here, but really...
C
I just want to reiterate that we're hoping both people interested in using data types in their models and people interested in developing new data types, those two groups of researchers, can find this useful.
C
So if you know anybody who's a data types researcher, or anybody who wants to try out data types in their models, please give them these links to our tutorials. We're always happy to talk and collaborate with those people and help them use the framework. Thanks, everybody.
B
Thank you, Andrew and Gus. In the interest of time, I think we'll hold off on more questions, but congrats again on getting the big PR merged at the end of the summer. It was quite a lot of work, so major kudos. We also didn't get to talk about the docs aspect, but maybe next month. I did want to make sure we jumped ahead, though: we had a lot of demand for demos this week, so we're going to hear an update on tvmc by Leandro.
B
I think I saw him; he's here. Perfect, yeah, okay.
I
Good, so I'm going to share my screen, and you let me know whether you can see it.
I
Okay, cool. So yeah, as they said, my name is Leandro, I work at Arm, and I'm here to present a quick demo of tvmc.
I
So tvmc is something that we've been talking about for a while, and I even demonstrated it a few meetups ago. tvmc is a tool that intends to be a command-line driver for TVM, and by command-line driver what we mean is that it provides access to some features of TVM from the command line. So, as you can see on my screen, that's...
I
Basically a quick run-through of tvmc. You can also have a look at our tutorial, as tvmc was recently released as an experimental feature in TVM 0.7.
I
So this tutorial will show you most of the information that I'm going to present, and of course you can find this information live on the website, so you can run it yourself if you like. There is also a proposal for tvmc, which is a much extended version of what I'm going to present now. Talking about today: I'm going to present this short walk through the tutorial, and I'll also answer any questions you have. To start...
I
Let me just get my command line. What I have here is a TVM from today, so if we grep for TVM you can see this is a checkout from today. I installed it in a virtualenv and I have it available, so once I installed it, I have tvmc available as a command line installed by the package. We can see that tvmc is part of TVM and is the same version as TVM.
I
You can also have a look at tvmc help to find out what information is there, and the one interesting feature that you might notice is that tvmc is extensible via subcommands. At the moment, in this version, we have three subcommands: tune, to run tuning or access some of TVM's tuning features; compile; and also run.
I
So in order to access the compilation process, we run tvmc compile, and then there are specific options that you can use. You can see them using help, and there are many options you can use that are a subset of the things TVM offers in terms of compilation.
I
In practical terms, I guess the simplest thing I can show you is how to compile a model, and I have the command here just so I can explain it before running it. So that's tvmc compile, the feature we want to access. We support inline targets just as TVM does, so if there is some target you use, or you see in the tutorials, and your TVM supports compiling to that target, you can just take that target string and use it in tvmc.
I
There was also this recent discussion about the JSON specification for targets; you can use that here as well. If you have a JSON target file, you can point to that file here and we will use it. So for targets, everything that is covered as a TVM target, you can use with tvmc today.
I
And then, of course, you need a model, serialized as a file. At the moment we support five different frontends: Keras, ONNX, protobuf (TensorFlow), TFLite, and PyTorch.
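The compile invocation being walked through looks roughly like this. It is a sketch: the model filename is a placeholder, and the flags follow the tvmc tutorial that shipped with TVM 0.7, so they may differ in other versions.

```shell
# Compile a TFLite model for a plain LLVM (CPU) target. The output is
# a tarball holding the serialized graph, the compiled operator
# library, and the parameters.
tvmc compile \
    --target "llvm" \
    --output compiled_module.tar \
    mobilenet_v1_1.0_224_quant.tflite
```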
I
And we wait a little bit, and it works: you get access to a feature of TVM, which is based on the graph runtime, of course, and you will have everything inside the tarball. These are all messages from TVM.
I
If you open that tarball, you'll see your graph as a file, your module, and your parameters, all serialized as files, so that you can use them for other things that you want; you get them as files. Just two seconds more and we will end up with our module.
I
Okay, cool. So now we have this compiled module here, and we can use it to run. You can read an extended version of this in the tutorial, but in the interest of time I'll just comment on two things about models and this sort of tool that intends to be generic.
I
You can create your own module, and of course you will define inputs and outputs; these are decisions made by the person who creates the network or model. In order to be as generic as possible, in tvmc inputs and outputs are numpy arrays, or serialized numpy arrays, and that of course requires some work from you as the creator of a model: you'll have to prepare the inputs and decide how to process the outputs.
I
In doing this, this is something that we are interested in improving, and in discussing how to improve it to make it simpler for people to use. But for the moment we support this npz format, which is generic enough to cover any model where you can input data as numpy arrays.
I
For today, what we have are some pre-processing and post-processing scripts that you can use. The pre-processing is very small; I'll just show it very quickly. What it does is some image processing. As input I got a picture of my own dog, because I don't trust these models with cats: you see that you always get cat and not food. So I got my own image, one I took and that I trust. What else do we do?
I
We resize that image and do all the ImageNet normalization that it needs, and in the end I'm saving it as my input .npz file.
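That pre-processing step can be sketched with numpy alone. The input tensor name ("input"), the 224x224 shape, and the mean values below are assumptions for illustration; a real script would load and resize an actual photo (e.g. with PIL) rather than use the random stand-in here.

```python
import numpy as np

def preprocess(image: np.ndarray, mean=(123.68, 116.78, 103.94)) -> np.ndarray:
    """Apply a typical ImageNet-style normalization and add a batch axis."""
    data = image.astype("float32")
    data -= np.array(mean, dtype="float32")  # per-channel mean subtraction
    return data[np.newaxis, :]               # NHWC with batch dimension 1

# Stand-in for a resized 224x224 RGB photo.
image = np.random.randint(0, 256, size=(224, 224, 3)).astype("uint8")

# tvmc reads inputs from an .npz file keyed by the model's input tensor name.
np.savez("my_input.npz", input=preprocess(image))
```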
I
So what I want to do is run that model. I'll pass my input as the inputs, and I'll set the output to the same name as they called it before, the same name you can see in the tutorial; then you just give it your compiled module. Generating predictions is quick compared to compilation, and what you get as an output is serialized arrays as well: whatever your module outputs as arrays, you will get those outputs.
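The run step then looks roughly like this (again a sketch; flag names follow the tvmc tutorial from TVM 0.7 and the filenames are placeholders):

```shell
# Feed the serialized numpy inputs to the compiled module and write
# the predictions back out as an .npz file of arrays.
tvmc run \
    --inputs my_input.npz \
    --output predictions.npz \
    compiled_module.tar
```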
I
We need some way to post-process this, to generate something that we can understand and read. To do that, I have this post-processing Python script, which knows what my model outputs and will generate meaningful, human-readable results for me. We can have a look at it: what it does is download a list of labels for the categories that the model outputs.
I
It will read those labels and match them with the outputs that I got in my predictions file, so it reads that predictions file and outputs which class, with which probability. The ResNet model that I used doesn't do the softmax part to take the outputs and turn them into a series of classes and probabilities, so we do that in the post-processing as well.
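The softmax-plus-ranking work that post-processing script does can be sketched with plain numpy (the labels and logits below are made-up stand-ins for the downloaded ImageNet label list and the model's raw output):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

def top_k(logits: np.ndarray, labels, k=5):
    """Return the k (label, probability) pairs with the highest probability."""
    probs = softmax(logits)
    order = np.argsort(probs)[::-1][:k]
    return [(labels[i], float(probs[i])) for i in order]

# Toy stand-ins for the model's raw output and the label list.
logits = np.array([2.0, 0.5, 1.0, -1.0])
labels = ["shih-tzu", "lhasa", "pekinese", "tabby"]
print(top_k(logits, labels, k=2))
```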
I
If the model did that by itself, it would be easier for the post-processing script. For now, what do we do? We just say python post_processing, and with this I'm reading the predictions file and matching it with the labels that I downloaded. So it gives us predictions: shih-tzu dog, plus some other breeds that I don't know much about. But to conclude this part, I'll just show what that was. That's the picture, so it matches.
I
As of today, as I said, if you have other options that are relevant, we can of course discuss them, and it's extensible. But for now, what I'm going to do is just run a quick tuning session, to explain more or less how it works in tvmc. To do that, we run tvmc tune, and we say which target, again complying with the TVM targets.
I
So with that output log you can do mainly two things. One is to use it as a base to run more tuning sessions and get even better tuning, spending even more tuning time trying to get better results. Or you can use it for the end goal of tuning, which is to run compilation assisted by the tuning results.
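Those two uses of the tuning log can be sketched as follows (flags follow the tvmc tutorial of this era; the records flag and the filenames are best-effort assumptions):

```shell
# Tune the model, writing the best schedule found for each task to a
# records file.
tvmc tune \
    --target "llvm" \
    --output tuning_records.log \
    mobilenet_v1_1.0_224_quant.tflite

# Feed those records back into compilation so the tuned schedules are
# used when building the module.
tvmc compile \
    --target "llvm" \
    --tuning-records tuning_records.log \
    --output tuned_module.tar \
    mobilenet_v1_1.0_224_quant.tflite
```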
I
So this, as you can see, is being populated by the auto-tuner. There is one issue being discussed about macOS; this machine, of course, is on Linux, but you can see the discussions on GitHub and also in the forum, just in case.
A
Yeah, and Tristan... I know that we're a little bit over on time, and I was wondering, Leandro, maybe next month we could go deeper into running tvmc with tuning. And I want to give a shout-out to Tristan for doing the hard work in solving some of these threading issues that were causing problems on macOS.
I
Yeah, so I saw his patch, but I didn't try it on any Mac machine yet. So just to conclude: this is more or less a very quick walkthrough of tvmc.
I
If you experiment with it and find anything that is not working as expected, just reach out; we can chat about it and discuss it.
B
This is awesome, thank you so much. I think it'll make it a lot easier to script a lot of workflows, especially at scale, doing things like nightlies and CI; it's really slick. Okay, friends, we've gone a little bit over time again. Again, these meetups are still an experiment, so we're learning as we go. It was really exciting to see the demos today; thanks, everybody, for preparing.
B
I think we're going to go ahead and end the call, but the Discuss forum is always up. Please make sure you're getting stoked for the TVM conference, and keep your eyes peeled for amazing swag coming out. As always, if you have something you'd like to talk about, or something you think your friend or neighbor ought to bring to this meeting, please do pipe up for next month. But until then, I think we'll go ahead and sign off. Have a great day, folks; thanks, everybody.