From YouTube: Apache TVM Community Meeting, July 22 2021
A: Okay, so welcome everybody to the July Apache TVM community meeting. Typically, we like to start with introductions. If there is anybody who is new to the community and you'd like to introduce yourself, please feel free. I think we have a few people at OctoML who might want to introduce themselves.
B: I could go first, if you like. I'm Mark Shields; I'm in week two, so I've still got a little bit of deer-in-the-headlights. I'll be working with Jared, who is right next to me, on the core compiler.
A: All right, so, if there's not anybody else...
A: All righty, so moving on to the next item on the agenda, we have a few announcements. The first is: we want you to save the date for TVM Conf. This is going to be our fourth annual conference, and it's going to be taking place from December 1st this year.
A: Once again, it's going to be a primarily virtual event, but we're hoping to have some sort of in-person component with it. So if you are in the Seattle area, or you think you can be in the Seattle area during those times, please head over to the discussion forum; we have a little poll there. We just want to get a sense of how many people are going to be attending virtually, and how many people, if there's an in-person component, would like to attend for possibly one day for a little watch party.
A: Around the virtual event we're going to be announcing the CFP and other dates that are related to the event. So we'll have a CFP that'll be open for some amount of time, then we'll have a review process, and then we'll have the schedule put together. So keep your eyes on the Discuss forum to see when these dates are going to be coming and how to participate in the conference.
A: Over the last few months, we've also had a number of promotions within the TVM community. We want to welcome Cody and Junru as new PMC members within the Apache TVM project. PMC members are people who have not only been given contributor rights to a project, but also have a deeper responsibility as part of the Project Management Committee. So we want to welcome them and thank them for their hard work and for everything they've done for the project.
A: We are also welcoming a few new reviewers, Wang Yu-chan and Igor Choirev, our newer reviewers to the project. Reviewer is a bit of a special status within the Apache TVM community. It means that you've been engaged in the community, you've been making active contributions, and your voice is being a little bit more elevated and recognized, as somebody who understands the project, understands the aims of the project, and is able to provide additional reviews to the project.
A: We also want to welcome Trevor Morris as a new committer. This is in between the reviewer and the PMC member stages within the Apache TVM ladder. It means that you actually have the authority to merge code into the project, and it's a major recognition that somebody has been involved with the project.
A: They understand how it works, and they're now trusted to merge code into the main project, and so we want to welcome Trevor as a new committer to the project. And with that, are there any other announcements that anyone else from the community might have?
A: Okay, so moving on to the next topic. Going forward in these meetings, we're going to be looking at drawing more topics from the community, in particular from work that is actively happening inside the community. With this, we're going to start looking at some of the RFCs that are active, especially through the new RFC process, and have some of the community members come to the community meetings and talk about these RFCs.
A: What their aims are, what the design is, and to encourage some more community discussion on those. And so with that, we'd like to invite Leandro Nunes from Arm to discuss automatically building the CI Docker test images nightly, so that we can make sure that all the requirements for TVM are up to date and are building correctly. And so with that, I am going to hand this over to Leandro.
E: Yeah, I will just share my screen. So I suppose you can see my browser? Yes, okay. So this all starts with the infrastructure we use in the project to run our tests and builds, for every single PR that we send to the code.
E: You see some GitHub actions that are triggered, and some continuous integration jobs that will validate your patches, and sometimes we want to update what is installed on that infrastructure. If you're not familiar with Docker or any of this, I'll try to just give a little bit of a summary on that.
E: So every time you submit your PRs, it will trigger a job using Jenkins, which is a way to automate those tasks, and it will run the tests to check your code. This infrastructure that runs the jobs is composed of Docker containers, and these Docker containers contain a lot of dependencies. Very often we want to update those dependencies, and to do that, we want to update those Docker images, which is basically the proper term.
E
But
we
don't
do
that
if,
if
we
submit
a
change
on
a
docker
image
that
we
use
on
this,
it
won't
be
automatically
updated
in
the
infrastructure.
So
if
you
install
something
new
on
on
these
infrastructure
pieces,
it
won't
be
done
automatically
for
you,
and
this
is
a
project.
This
is
kind
of
a
process
in
the
project
that
calls
causes
massive
pain.
E: Every time we decide to update the Docker images, it's been like months since it was last done, and then we face all sorts of issues with dependencies that release new versions, for example. We don't lock the versions in the Docker container, so it will install the newest ones, and that, in conjunction with changes that we've done in the code, might break; some dependencies might be outdated.
E: Links could be invalid, and this causes all sorts of pain. The last time it was done in the community, the work was organized by Matthew Brookhart and Andrew from OctoML, and there is this issue here, 8177, in the project where you can see basically all the saga and all the pain in updating those images, because of all sorts of problems that happened at that point. This is the problem.
E: Now, a while ago, we'd been discussing having some automation in that area, which aims to give the community visibility of those potential problems that might be accumulating in the Docker images. The simplest way we could implement this is: if that build causes a lot of pain, we build and test that thing very often, so that we keep paying attention to it.
E: So this is something we posted a while ago, which contains a proposal of how to automate that Docker image rebuild using Jenkins, and how to structure it so that we get some visibility of the problems that might happen.
E
So
the
first
step
described
here
is
basically
to
daily
or
once
a
day,
rebuild
those
images
and
notify
us
whether
the
images
are
now
broken,
or
there
is
something
wrong
with
the
images
just
by
the
simple
process
of
rebuilding
them.
One
of
the
reasons
we
don't
do
that
in
the
projects-
it's
not
it's
not
because
we
don't
want
it
it's
just
because
it
takes
some
time.
E
So
if
we
put
that
time
to
review
the
images
on
top
of
the
the
time,
we
need
to
actually
run
the
tests,
so
that
will
delay
the
new
pr's
being
submitted
to
the
project
too.
E
I
would
estimate
in
about
five
hours,
at
least
from
you,
submitting
a
change
on
you
getting
feedback
on
those
changes.
So
as
a
first
step
to
improve
the
to
point,
we
are
right
now
and
give
some
visibility.
This
is
something
that
that
is
being
implemented
at
the
moment.
So
if
you
want
to
read
a
little
bit
about
about
that
word,
you
can
check
this
this
post
on
the
disqus
forum
and
you
can
as
well
check
an
ongoing
pr.
E
We
have
on
tlc
pack
with
that
infrastructure
to
run
the
image
rebuilds
daily
or
every
day
or
every
night,
depending
on
on
what
your
time
zone
is.
So
this
is
mostly
a
jenkins
pipeline,
so
it
has
some
tasks
to
be
run.
I
won't
go
into
details
on
this
I'll,
just
show
kind
of
a
briefly
on
on
how
it's
structured
so
for
every
image.
What
we
do
basically
is
to
rebuild
the
image
which
is
done
on
that
line.
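For reference, the nightly job described here is essentially a loop over the CI images, one rebuild per image. A dry-run sketch of that idea in Python (it only prints the commands; the image names mirror the `ci_*` images in TVM's `docker/` directory and `docker/build.sh` is the repository's build helper, but treat the exact list as an assumption):

```python
# Dry-run sketch of the nightly job: emit one rebuild command per CI image
# instead of invoking Docker, so it runs anywhere.
CI_IMAGES = ["ci_arm", "ci_cpu", "ci_gpu", "ci_i386"]

def nightly_build_commands(images):
    """Return the shell commands the nightly job would run, without running them."""
    return [f"docker/build.sh {name}" for name in images]

for cmd in nightly_build_commands(CI_IMAGES):
    print(cmd)  # e.g. "docker/build.sh ci_arm"
```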
E: Here is an example of this running. This ran last night (I'm in the UK time zone), and it rebuilt all the images: the Arm image, CPU, GPU, i386, and everything. So if that's successful, we get those images and we upload them to Docker Hub.
E: So if we look at this one here, for example, we can go to Tags, and these were some builds that were made for this image.
E: So I want just to explain how this is organized and what we plan to do next. Each image is tagged with something which is a timestamp.
E: So: when was this generated, and a hash. This hash corresponds to the point in the TVM repository, in git, up to which this image was generated. So if we go to TVM, just go there and list the commits, and if we copy that hash, we should see up to which point this was generated.
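For illustration, assuming the tag layout is `<YYYYMMDD>-<HHMMSS>-<short git hash>` (an assumption based on the tags shown in the meeting; check Docker Hub for the authoritative format), a tag can be split back into the build time and the TVM commit it was built from:

```python
from datetime import datetime

def parse_image_tag(tag: str):
    """Split a nightly CI image tag into its build timestamp and git short hash.

    The "<date>-<time>-<hash>" layout is assumed, not confirmed.
    """
    date_part, time_part, git_hash = tag.split("-")
    built_at = datetime.strptime(date_part + time_part, "%Y%m%d%H%M%S")
    # Running `git log <git_hash>` in a TVM checkout then shows exactly which
    # commits the image was built against.
    return built_at, git_hash

built_at, git_hash = parse_image_tag("20210722-060000-1fdc2f0")
print(built_at.isoformat(), git_hash)  # 2021-07-22T06:00:00 1fdc2f0
```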
E: Oh yeah, so this one includes fewer characters here. So the last image we generated contains up to this PR, which is merged in there.
E: So basically we are doing this at the moment. We are running this on a temporary Jenkins instance for now, but we are getting some visibility: if this goes wrong, we will get a notification.
E: If this goes right, we will also get a notification. The notifications are being sent on Discord; Discord, if you are not aware, is the newest communication channel we are using to chat within the TVM community.
E: If you're not there, I recommend you join. If you go to this channel, ci-image-build, when the job finishes it will tell you what happened: new images are published, we're using this tag, and you can check the images' tags, how to download them, how to use them, and which images are published.
A: For whoever wants to join the Discord: we have a link to it at the tvm.apache.org community page, so you can follow that link. It's an invitation to the Discord, and you should be able to join and follow along with this and other discussions that are happening there.
E: Yeah, so I recommend, if you're not there, join the Discord. One of the new benefits and things we are implementing are these automated notifications, so that we can see and sort of health-check our Docker images on a daily basis. So, just to finish off: where are we going from here? The next step is to make this production ready.
E: We are going to make this job run on our official Jenkins server, and it will just publish comments and everything when something happens there. Then the next big step, which we are planning to do soon, probably in the next weeks, is to connect to this job that publishes the images and generates new, valid images, and actually run all the TVM tests on the image, to validate that with these dependencies and everything our tests run.
E: So, to sum up: with all these initiatives, what we want to do is reduce the pain of updating the images manually on somebody's machine and uploading them manually to the infrastructure, make this process a little bit more transparent to the community, and share the ownership of the images.
E: So if something's broken, we as a community will be able to have a look and deal with the issues, rather than somebody at some point volunteering to update the images and then suffering all the pain that Matthew and Andrew and lots of other people who got involved in that process went through during the last months. So this is really sort of a...
E: This is not only me, of course; I'm presenting it here, but it is a collaboration of many people in the community. Andrew from OctoML is helping a lot; he also helped give some permissions we needed on the servers. So lots of people contributed to this one.
E: So they use the exact same, from the tip of the TVM repository. Okay, yes, so the ones we have on tlcpack. Just to give some context to everybody else: tlcpack is, compared to TVM, a small project that aims to generate prebuilt Python packages for TVM.
E
So
it's
named
differently
because
we
link
lots
of
dependencies
and-
and
in
that
way
we
probably
need
to
to
name
it
something
else,
because
tvm
is
sort
of
distributed
as
code
officially.
F: I have a question. Thanks for doing this, Leandro; this is really great. I was wondering if there's any plan for releasing the Docker images to the CI, like weekly; I don't know if you guys decided on that. And the other thing, I guess the next question, is: if there is a need for updating a CI Docker image based on some commit, how do we request that?
E: Right. So I guess on the first question: something that we definitely want to do is to update the images more often; that's a given in this sort of work. We are going towards being ready to update the images more often.
E: I know that when people submit changes to the Docker containers, obviously they want to see their changes reflected as early as possible. In this case, by the end of what we are planning to do, we would still need to consciously go there and update the images; it's not something where we just automatically deploy the images.
E
I
think
there
is
an
understanding
within
the
project
that
we
want
to
track
that
and
consciously
go
update.
The
images
with
that
require
a
pr,
I
think,
that's
the
that's
the
only
thing
that
we
have
as
a
kind
of
a
manual
step
once
this
work
is
completed.
E: Oh yeah, so as a process: there is a process today that people follow. I don't know whether this is written somewhere, but usually you tag somebody who's a committer and say, this requires a Docker image update. And when you go to pull requests, for example, I just got one at random, you can set a label on it (you need to be a committer to do that) which says "needs CI container updates".
E: So currently it requires manual steps: somebody needs to go and build on their machine and upload the container. That's something we want to change, so that we get the containers from Docker Hub instead.
A: So, Leandro, is it possible to use these images, like, do we have good hooks to use these sort of images for development? I know that building and installing TVM can be a bit of a challenge. Is there space to use this as a starting point, to be able to mount TVM into these images so that you can do development and testing locally?
E: I think it does, because once we start using them for the CI, they will reflect exactly what we have on CI. So there is no reason why we wouldn't be able to use them for local development. Actually, once we have this in place, I think this should be encouraged: people can just download the images from upstream and use them, because they are supposed to give you the exact same test results as the upstream CI.
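A dry-run sketch of that local workflow (it only prints the commands; the image name, tag, and mount path are hypothetical placeholders, not details confirmed in the meeting):

```python
# Hypothetical image coordinates; substitute a real nightly tag from Docker Hub.
IMAGE = "tlcpack/ci-cpu:20210722-060000-1fdc2f0"

def local_dev_commands(image, workdir="/workspace"):
    """Commands to pull a CI image and open a shell in it with the current tree mounted."""
    return [
        f"docker pull {image}",
        f"docker run --rm -it -v $(pwd):{workdir} -w {workdir} {image} bash",
    ]

for cmd in local_dev_commands(IMAGE):
    print(cmd)
```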
A: Yeah, I'm also wondering whether we might be able to leverage some of these things with tlcpack too, because I have my own Docker image, my own Docker files, that I use to do builds. I've taken some of the ideas from upstream and some of the ideas from tlcpack, so that I can do things like build minimized releases, or Python-installable packages, or...
A: ...like individual binaries. But I don't know if the needs of CI are somewhat orthogonal to the needs of distribution.
E: Yeah, so there is a reason why we have different images on tlcpack compared to TVM. The tlcpack images are based on something called manylinux. Manylinux is a documented standard, a Linux directory structure, let's say, plus infrastructure, for generating Python packages that will be accepted in any Linux distribution, whereas the TVM infrastructure, at least as agreed at the moment, runs on Ubuntu 18.04.
E
So
that
is.
This
is
kind
of
the
the
disjointed
part
in
this
puzzle
so
to
generate
compatible
packages.
You
want
to
run
on
many
linux,
whereas
our
ci,
we
run
it
on
ubuntu
180..
If
we
were
to
kind
of
standardize
on
many
linux
or
something,
then
we
could
be
able
to
reuse
the
images
across
dlc
pack
and
dbm.
E: I guess it's just a matter of somebody in the community passionately wanting to migrate to Ubuntu 20.04. It is possible, but obviously there will be some problems.
G: Yeah, there are also considerations for CI. We need to think about coverage, for example, and we want to allow most developers to be able to use TVM; that's why our previous thought was that we wanted to support the Ubuntu LTS releases. And there's also a reason why, when we are building wheels for tlcpack, we are using CentOS: in order to build manylinux wheels, ones that are broadly Linux-compatible...
G
We
will
need
to
use
the
very
old
version
of
well
it's
not
not
too
old,
but
somewhat
old
version
of
centos,
and
that
I
got
what
python
communities
use
so
yeah.
E: I mean, at some point we will need to move to 20.04 or something newer, but I'd say it's not very urgent at the moment, considering that 18.04 is still supported and still getting security updates.
A: Any more questions about this project and the work that the team has been doing on this?
H: I wonder if anyone's tried to run the test suite entirely under a manylinux2014 package, like tlcpack, and whether all the dependencies sort of map under there. I suppose the mismatch of not testing the packages we're building versus CI is an interesting one to think about.
E: ...for this call, but it's sparking ideas, yeah. So, I know some of the dependency installation scripts were kind of translated to manylinux, and it would require, I mean, the obvious name tweaks for packages that are the same package but named differently across different distributions.
G: So, the approach that we have taken... luckily, TVM doesn't have a lot of external dependencies; we are very careful about them. The biggest dependency so far is LLVM, and the approach we take there is that we statically link LLVM, so that solves the LLVM problem. Then the other biggest pain is the CUDA runtime and CUDA API, and that part is really hard.
G: So we can only build, say, for a specific version of CUDA, and basically you need to build a tlcpack-cu101 that only works for 10.1, and another package for another CUDA version, and so on. So I would imagine, for example, if you are going to introduce additional libraries, like, say, a certain Arm library: if it allows static linking of the library, likely that's the way it's going to work.
A: Yeah, my experience with dependencies is that TVM packages up fairly well, but when you start thinking about other Python packages that you need to have installed, it makes it harder to distribute TVM as a more minimized system, because oftentimes, to get some of those other packages, you need to have a whole development toolchain.
A: You know, TVM, but also a lot of common dependencies, and being able to ship those under tlcpack, I think, makes it a bit harder, especially when we have C++ and dynamic loading of libraries too. Oftentimes I get a warning that if I try to statically link something, it's probably going to break anyway.
H: Yeah, I mean, building on manylinux2014 will probably be reasonable, because that's a reasonably old glibc as well as a reasonably old libstdc++, and because of that, the compatibility of the binaries with manylinux2014 will be reasonable and you'd be able to cover a lot of the newer distributions.
A: I'm not sure that there's been a huge amount of uptake on people installing from the tlcpack pip images. I try to track those statistics, and we maybe get a few downloads a day, and I don't know who's doing those downloads or if they're using them. So it might make sense to have, as part of the tlcpack repository, like, running...
E: I think, as a community, if we want to give incentives to that, we should advertise it more. Also, one thing which is a bit of a caveat, I think, for somebody who's new, is that that package is not published on PyPI, so it requires a bit more flags or some configuration locally to get that package.
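For context, installing a package hosted off PyPI usually means pointing pip at an extra wheel listing with `-f`/`--find-links`. A dry-run sketch (the URL and the CUDA-specific package name are assumptions based on this discussion, not verified commands):

```python
def install_command(package="tlcpack", wheel_index="https://tlcpack.ai/wheels"):
    """pip invocation for a package served from a --find-links wheel listing."""
    return f"pip install {package} -f {wheel_index}"

print(install_command())                 # CPU build
print(install_command("tlcpack-cu101"))  # CUDA 10.1 build, per the naming discussed
```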
A: I think it's just that things weren't ready, where "ready" is up to be argued about. Unless Tianqi has another reason that we're not publishing them; I think we squatted on the name and everything, so...
G: So the main problem is there's a file size limit: PyPI has a limit of, I think, 100 megabytes or so, and our CUDA wheel, and most of the binaries, will exceed that size. So that's why it's not yet on PyPI, but hopefully the installation command is simple enough.
I: That was my point about "ready", yeah; there wasn't a stable release to cut. Though, why doesn't MXNet release their CUDA builds? Because you need to request a size increase, and usually the response is very slow. So I see; we can do it, someone just needs to get it onto the list, right? So...
G: Do we have a request in place for that? I think so; for tlcpack-cuda we used to have a request, but that's not yet been answered. So that's why we just went with the GitHub package upload and the installation command. Yeah, I mean, sorry.
A
It
seems
like
when
we
stabilize
our
release
process
too,
like
as
like
you
know.
I
know
that
that
we
kind
of
you
know
you
know.
Maybe
when
we
update
the
the
product
roadmap,
you
know
and
and
have
more
milestones
in
a
1.0
release.
It
might
make
sense,
then,
to
go
to
to
say
we
have
a
stable
release
and
it's
not
going
to
be
changing
nightly,
and
you
know
you
know
that
people
can
count
on
the
api.
E: Yeah, so one thing that we had a look at: you can mark a package as a pre-release package, and pip won't give you that by default; you need to ask for pre-release packages. So I think there is an opportunity for us, if we want to put those packages there. I think it would simplify the communication a bit, especially with new people joining the community.
A: Yeah, and looking at the tlcpack releases: the regular CPU releases are coming in at like 35 megabytes, which is not that big, but the CUDA releases are like 380 megabytes, so they...
G: So that has license implications, right? The assumption is that you need to agree to the CUDA EULA, which, yeah, so...
E: Yeah, I don't know about the license implications, but I was going to point out, regarding the size of the packages: if you have a look on my screen, the sizes of the Docker images we produce every night and upload to Docker Hub are kind of...
A
You
can
actually
get
these
to
be
pretty
small,
and
you
know-
and
if
we're
like
you
know,
thinking
about
what
you're
doing
like
like
as
a
development
image
like
a
a
seven
to
nine
gigabyte
image,
I
think,
is,
I
guess
reasonable.
I
mean
it's
what
I
build
and
what
it's
it's,
what
I
develop
on,
but
for,
like
you
know,
using
some
of
the
tricks
to
like
pull
these
things
out
and
be
able
to
like,
like,
I
think,
telling
people
like
oh
yeah,
download
this
nine
gigabyte
image.
A: ...there's a lot of room for growth in what we do with our Docker images and how we package and distribute, to make TVM more widely usable by people who don't want to download an entire source package and spend time building it and configuring it and all the things that we do on top of that.
A: Right, I guess one other thing, and this is going kind of far afield of what this project is talking about, but also in terms of how we link and how we distribute images: I don't know if anybody's tried to build TVM inside something like an Alpine container, which skips using glibc and instead uses something that's a little more...
A
You
know
because
what
I'd
like
to
do
is
like
be
able
to,
like
you
know,
tune
a
whole
bunch
of
models
inside
of
a
kubernetes
cluster
and
and
again
I
don't
want
to
be
sending
off,
like
multi
gigabyte
images
off
to
every
node
to
you
know
to
start
up
a
tuning
job,
but
what
I've
ended
up
doing
to
like
make
that
image.
A: All right, well, thanks, Leandro, and thanks Andrew and Matthew and everyone else who participated in this (hopefully we didn't miss you). This has been a fairly important issue, and I think it's been kind of a thorn in our side for a while now, so this is really important work, and I think it speaks a lot to the iterative improvements that we make to the CI and to the community.
A: So I want to thank everyone for this work. Cool. These meetings we've typically scheduled for 45 minutes, and we typically go for about an hour. We had also put Christopher Sidebottom onto the agenda to talk about his RFC for additional target hooks. I don't know if we have enough time to do a full discussion about it.
K: Yeah, that sounds good. Let me share my screen.
K: Cool, so you should have the RFC. I'll just quickly go through the journey of how this came about. I've been looking at CMSIS-NN, which is a library that gives you pre-built kernels for running machine learning models, to plug in as part of a Bring Your Own Code Generator type flow.
K: So this is like an extension of that idea, using the same Relay graph partitioning, but instead of going the whole way around. I use this diagram to illustrate how the current BYOC external code generator flow sort of goes around the outside of TVM, and what is being proposed here is instead to break that down a little bit, so that we can have multiple hooks attached onto the current target mechanism. The reason for that is this bit here in the middle, and this is where a lot of RFCs we've had in the embedded space sort of connect together.
K: What I'm referring to here is the memory planning exercise from another RFC, which I won't cover for time reasons, but it uses the TIR output from the existing lowering passes to decide how much memory we need to allocate. And for a library such as CMSIS-NN, if you go around the outside...
K: ...you don't get the benefits of this, and that's what prompted it. You can see a parallel in the RFC that one of my colleagues, Manupa, put up recently, for the Ethos-U, which has its own sort of TIR-to-TIR process as well. So it's already doing that internally; it just doesn't get exposed to this middle bit.
K: So I wanted to start breaking down how you do this custom lowering, and the first part of that is this RFC, which adds "relay_to_tir". This is, as it sounds, a custom way of bypassing the normal lowering process inside the compiler, which means you can do cool things such as writing a simple library call in TIR, in Python, which then gets replaced into the host codegen if it's compatible with your target's code generation. The extension from that is: if you had something like one of CMSIS-NN's more interesting functions, it uses these structures, and at the minute TIR doesn't have support for structures and passing those back in.
K: So instead, it makes sense to have a "tir_to_runtime" hook, which kind of overrides the current target.build. So those are the two hooks being proposed here, to bypass, or make more modular, certain parts of TVM.
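Conceptually, the proposal amounts to a per-target registry of optional pipeline overrides. A minimal sketch of that idea (the names and shapes here are illustrative only, not TVM's actual API):

```python
# Conceptual sketch: map a target name to optional hooks that replace stages of
# a compiler pipeline. "relay_to_tir" replaces lowering; "tir_to_runtime"
# would replace final code generation.
HOOKS = {}

def register_target_hooks(target, relay_to_tir=None, tir_to_runtime=None):
    """Attach per-target overrides for lowering and final code generation."""
    HOOKS[target] = {"relay_to_tir": relay_to_tir, "tir_to_runtime": tir_to_runtime}

def lower(target, func):
    """Use the target's custom lowering if registered, else the default path."""
    hook = HOOKS.get(target, {}).get("relay_to_tir")
    return hook(func) if hook else ("default-lowered", func)

# A library-backed target swaps in its own lowering; others keep the default.
register_target_hooks("cmsis-nn", relay_to_tir=lambda f: ("library-call", f))
print(lower("cmsis-nn", "conv2d"))  # ('library-call', 'conv2d')
print(lower("llvm", "conv2d"))      # ('default-lowered', 'conv2d')
```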
K: The reason to start registering them on the targets is just to make it a bit simpler to discover these things, and to give more of the advantages of the target system back to these external flows. Just for reference, this isn't a completely new idea; I know there have been some earlier documents from the folks at OctoML involving this as well, so I'm not selling it as a brand new idea at all. And one of the cool things about this...
K: Yeah, I think, if anything, the main thing to look at is probably the fancy pretty diagram. And then, I guess, do we have time for a few questions here, or is it just best to go onto the Discuss forums at this point?
K: So the way that I envision introducing this is very piecemeal, so that it goes on top of the existing infrastructure.
K: So the current two hooks that we're proposing are "relay_to_tir" and "tir_to_runtime", and then extra sets could be added for "relay_to_runtime" and the constant updater, to reattach those onto targets. The main interesting thing here is the existing TE compiler refactor, because I know that the initial implementation of this sort of adds to all of the places where we're already seeing a bit of duplication...
K: ...which I think that refactor is going to solve, as sort of illustrated by my draft pull request. So, to answer your question, Andrew: hopefully tests within TVM should be incrementally updatable, so we don't break anything by doing this, but it does impact areas which are currently being worked on.
A: All right, well, if there aren't any more questions, I want to thank everybody for coming to the meeting, and thanks, Chris, for presenting on this.
A: We're going to be drawing more topics from RFCs, and as we use this new RFC process, having community members come in and talk about the work they're doing and the changes they're hoping to bring to TVM is going to bring a lot more topics to the community meeting and give us a chance to talk about them one-on-one.