From YouTube: Apache TVM Community Meeting, May 11, 2022
A
Okay, so good morning, everyone, and welcome to the May 11th edition of the TVM community meeting. I'm Andrew, one of the TVM community members. Today on our agenda, let's see, we've got our usual introductions, additions and changes, announcements, and then just some demos to walk through. I think most people on the call are familiar, but I just wanted to open the floor in case anyone new wants to introduce themselves, or tell us who you are, what part of TVM you're interested in or what interests you about TVM, and what you might like to work on.
B
I guess this is the first time that I've joined this meeting; most of the time I just forget about it. It's on my calendar, I mean, it's on the website, but then I forget to join. Anyway, my name is Christoph. I work at Qualcomm Innovation Center, and recently we've really ramped up the effort to port, or to add, Hexagon as a full-fledged target to TVM, and there's a lot of work going on.
B
That's work that Andrew is also participating in, along with some folks from OctoML, and so yeah, me being here is sort of a result of that: I've been getting more active in the community, and I hope to be attending these meetings.

A
Absolutely, great to have you here. And yeah, Christoph is pretty active on the forum, so you guys will probably recognize his username and all that as well. So yeah, great to have you joining. Anyone else want to introduce themselves or say hi?
A
Okay, hearing no others, we'll go on to our next little bit here. I just want to open up for any additions or changes to the agenda; if anyone wants to bring anything up and talk about anything, you can let me know here or just add it to the agenda list. And hearing none of those, we'll move on to announcements. So, excuse me, yeah, this week we just have a couple of new reviewers. I guess it's been a couple of weeks since we've done this, so these maybe accumulated over a couple of weeks, but Ashutosh and Nicola from Arm are now reviewers, welcome, as well as Altan from OctoML, and we have one new committer, Xiyou, also from OctoML, so welcome and congratulations. And so, moving on to discussion topics: this morning we have a couple of things we want to talk about today. I was going to give an update.
A
I posted an RFC probably a year ago at this point, so it's been sitting around for a while, and I wanted to give an update, since I've started to actually work on implementing it again, and to get folks' feedback and see which direction we're going in here. This is just around how we capture the Python dependencies in the TVM containers. And then, after that, we wanted to talk a little bit about doing quarterly releases, which is something that has been coming up in the community: the idea being that getting to a faster release cadence, so that we can get changes out the door in a pip-installable form, you know, a stable, consumable form, much more quickly, will help drive adoption of TVM, we think. Anyway, we'll talk about that in a minute. So, updating just on the Python dependencies side of things.
A
Just to give you some background: I've been working on TVM for a couple of years now, and when I started, what we would do is build the CI containers kind of manually, locally, and then upload them to Docker Hub, and then run the CI using that container, basically to test it and see if anything broke against the new container. Since then we've actually made substantial improvements there: we've created a bot that automatically builds all of the different CI containers, and that bot can then run the CI kind of in a test mode.
A
After finishing the build. And so now, one of the problems that we have with our CI containers is that in the past we've been building all of them at separate times, and it's still the case today that we may build new containers with the bot but not rev every single container at the same time. That means there's a chance that, if you're trying to debug something in one container (let's say you're in ci_gpu and you see a failure in ci_hexagon, for example), it may be that, by the looks of the failure, there isn't really anything in the ci_hexagon container that should cause that failure to be specific to it. So, if we have the same Python packages in both containers, maybe this is just a unit test failure that should be reproducible in both containers.
A
Basically
inside
of
this
docker
install
directory,
and
it's
just
a
series
of
pip
install
commands
now
prior
to,
I
think
it
was
like
october
2020,
these
pip
install
commands
because
they
were
kind
of
scattered
around
resulted
in
a
virtual
environment
installed
in
the
container,
or,
I
guess,
a
python
environment
in
the
container
that
could
contain
inconsistent
dependencies.
In
other
words,
if
tensorflow
depended
on-
let's
say
asher's,
you
know
version
18.,
but
sphynx
then
depended
on
azure's
version.
20.
and
tensorflow
said
it
wasn't
compatible
with
20..
A
So this is sort of part two of three in an effort to address problems in the Python dependencies in TVM. The first step, which I made progress on last year, was to check in a script that allowed us to, in theory, state all of the requirements, by piece, in this one particular file here, and then, even if we had the same requirement, like tensorflow here, listed in two different pieces, we could state the constrained version in another place, just once, so that all of the packages have their version constraints placed in one spot. And so this is, in my mind, the first step to consolidating these Python dependencies into a single location, rather than having them spread out across a bunch of different docker install scripts; now they're all just right here in this file. Now, the problem with this is a couple of things. One: there's no teeth to this.
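As a rough sketch, the kind of by-piece structure being described might look like the following; this is abbreviated and illustrative, not the exact contents of TVM's gen_requirements file.

```python
# Abbreviated, illustrative sketch of a by-piece requirements declaration.
REQUIREMENTS_BY_PIECE = [
    ("core", ("Base requirements needed to install tvm", ["numpy", "scipy"])),
    ("importer-tensorflow", ("Requirements for the TensorFlow importer", ["tensorflow"])),
    ("dev", ("Requirements to develop TVM", ["black", "pylint", "sphinx"])),
]

# Version constraints stated once, in one spot, for all pieces.
CONSTRAINTS = [
    ("tensorflow", "==2.6.2"),
    ("docutils", "<0.17"),
]
```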
A
This file here informed the setup.py, which is kind of what we used to generate the install_requires, but it didn't have any bearing on the CI. We were, and today still are, using these docker install scripts to actually set up the CI packages, and so these constraints are kept in sync just by convention; there's nothing requiring us to keep, say, ethos-u-vela pinned to 3.2.0 in the setup.py when it's been revved to, you know, 4.2.0 in the CI. So the next piece that I wanted to give you guys an overview of today is how we're going to actually enforce this file, or a proposal, I should say, for how we could enforce this file; we don't have to do this.
A
This
is
something
that
you
know:
we've
been
kind
of
cooking
up
and-
and
you
know
it's
it's
a
possible
way
to
solve
this,
but
it
could
be
complex
it.
It
could
be
you
know
or
or
it
could
just
be
reflective
of
kind
of
the
the
nature
of
dealing
with
python
packages.
So
that's
what
I
want
to
kind
of
give
you
a
tour
of
today.
A
So,
let's
see
it's
actually
been
a
while,
since
I've
read
this
rfc,
so
I'm
just
thinking
about
it:
okay,
so
kind
of:
what's
what's
the
the
next?
How
do
we
then
make
this
step
from
having
this
gen
requirements,
file
and
and
moving
into
a
world
where
this
reflects
itself
inside
the
docker
containers?
So
let
me
flip
over
to
terminal
here
and
the
right
terminal.
A
Cool
okay,
so
so
this
branch
is
is
freeze
dependencies
and
you
can
see
this
on
my
github.
I
just
pushed
kind
of
my
latest
work
here
a
minute
ago
or
two.
So
I
will,
you
know,
show
this
up
in
the
browser
a
little
bit
later,
but
looking
at
the
let's
see
so
the
gen
requirement
script
largely
stays
the
same,
and-
and
you
can
run
this,
although
I
I'll
be
at
I'll,
say
I
haven't
actually
done
this.
A
So
this
might
break
here's
some
debugging
information
and
it
will
generate
it-
will
generate
basically
a
a
list
of
all
the
python
dependencies
by
by
part,
and
so
here
we've
broken
up
tvm's
kind
of
dependencies
into
core
into
one
piece
for
each
different
front-end
importer,
one
piece
for
tvmc
and
one
piece
for
things
like
ethos.
U
and
and
lastly,
development
dependencies-
and
you
can
kind
of
see
well,
okay.
A
This
is
now
I've
broken
this,
but
you
can
see
that
this
is
a
representation
of
the
the
different
dependencies
that
you
need
to
have
to
run
like
the
docs
and
the
lint
and
and
and
the
testing
stuff
in
tvm.
A
This
is
broken
now,
but
but
typically
looks
like
a
requirements.txt
file
that
you
can
install
with
pip
install
okay.
A
So,
given
this,
then
what
do
we
want
to
do
we're
taking
these
this
gen
requirement
script
and
generating
what
we
want
to
do
is
generate
sort
of
a
locked
list
of
or
a
constraints
list
of
packages
for
each
architecture,
and
so
what
I
mean
by
that
is
that
if
you
pip
install
a
package
on,
say
x86,
it
will
go
and
look
for
a
binary
package
compiled
for
x86
64-bit
and
that
binary
package
might
be
different
if
you
were
kind
of
pip
installing
on
say,
arm,
64
or
84..
A
Build
our
docker
images
to
start
with
a
base,
docker
file,
and
so
the
idea
here
is
this
is
sort
of
taking
a
common
piece
of
most
of
the
64-bit
x86,
docker
images
and
kind
of
building
a
base
image
on
which
we're
going
to
install
all
the
python
packages.
A
So
this
takes
from
our
base
ubuntu
distribution,
which
is
solid,
1804
installs,
some
of
the
core
compiler
tools
rust,
because
that's
a
python
package
dependency
and
then
python
itself,
as
well
as
like
pip,
and
this
other
tool
called
poetry
that
I'll
get
to
in
a
minute.
A
And
then,
on
top
of
this
there's
a
script
called
freeze,
depth
and
freeze
depth
takes
the
information
from
this
gen
requirement
script
and
it
synthesizes
a
pi
project,
dot
toml,
so
you
guys
might
have
seen
this
pi
project
automal
in
our
root
directory
and
up
until
now,
it
kind
of
has
looked
like
this.
You
know
has
this
tool.black
here
and
that's
just
about
it.
So
what
I?
A
What
I'm
the
tool
I'm
using
to
to
kind
of
enforce
this
python
to
consolidate
this
path
on
dependencies
is
called
poetry,
and
we
don't
have
to
use
this
as
just
one
option,
but
it's
it's
one
that
I've
used
in
the
past.
So
it's
something
that
I
was
just
playing
around
with
here
and
so
the
idea
here
is.
You
know
you
give
some
metadata
about
the
package
itself.
A
There's a different repository, outside the typical Python repository where you pull packages, and you list this all in here, and then you run this freeze-deps tool, and, let's see, it then adds to this file all of the dependencies that are given inside the gen_requirements file. So we're then at a point where we have sort of a consolidated list of all the different Python dependencies and their versions, and then, once you've run this freeze-deps tool, you run this command, poetry lock.
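To make the sequence concrete, here is a minimal sketch of the flow just described; the script path is hypothetical, and it assumes poetry is already installed.

```python
import subprocess

# Hypothetical sketch of the flow: synthesize pyproject.toml from the
# by-piece requirements, then let poetry solve and write the lock file.
subprocess.run(["python3", "docker/python/freeze_deps.py"], check=True)  # path is illustrative
subprocess.run(["poetry", "lock"], check=True)  # solves constraints, writes poetry.lock
```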
A
So
what
this
does
is,
I
guess
this
is
okay,
so
it
it
basically
reads
this
pi
project.tomol
and
it
it
looks
up
all
of
the
possibilities
of
packages
that
could
be
installed
to
satisfy
those
requirements
and
it
kind
of
runs
a
constraint
solver
across
all
those
python
packages.
A
The
constraint
solver,
then
you
know,
determines
a
selection
of
python
packages
that
satisfy
the
constraints
and
satisfy
all
of
their
declared
required
version
dependencies
and
then
produces
what
they
call
a
lock
file.
And
so
you
can
see
that
I've
done
this
now
for
say
the
x86,
64
or
sorry
I'll
switch
over
to
the
internals.
I
don't
ruin
the
output,
but
for
the
x864
architecture
in
this
docker
build
and
I've
checked
in
these
files.
A
So
you
can
take
a
look
at
these
on
on
github
as
well,
so
the
lock
file
looks
kind
of
like
a
a
longer
version.
I
guess
of
the
pi
project
automo
for
each
sort
of
transitive
closure
of
all
the
dependencies.
A
It's
selected
a
version
and
it's
indicated
sort
of
whether
or
not
it's
it's
optional
and
it's
kind
of
decided.
It's
it's
pulled
that
package
and
let
me
see
if
I
can
find
that
at
the
bottom
here
it
will
for
each
package
list
kind
of
the
sha-256
so
that
when
you
go
and
install
this
package
from
this
point
forward,
you've
captured
you
know
the
file
signature.
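Conceptually, capturing hashes at lock time enables a check like this sketch; both the expected hash and the wheel filename below are made-up placeholders.

```python
import hashlib
import pathlib

# Sketch of the check a lock file's recorded hashes enable.
expected = "0123abcd"  # placeholder for the sha256 recorded at lock time
wheel = pathlib.Path("tensorflow-2.6.2-cp37-none-any.whl")  # placeholder name
actual = hashlib.sha256(wheel.read_bytes()).hexdigest()
assert actual == expected, "package contents changed since the lock was taken"
```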
A
And
so
then
what
we
can
do
is
is
copy
this
poetry,
log
file
into
into
a
docker
container
and
run
poetry
install,
and
then
only
those
packages
that
have
been
installed
or
or
selected
during
the
lock
process
will
be
installed
and
so
for
each
image.
You
know
for
each
image,
let's
say:
ci
qmu,
ci,
hexagon,
ci
cpu
that
sort
of
guarantees
them
to
use
the
same
python
dependencies
when
built
at
the
same
time.
A
So
I
guess
I
wanted
to
pause
here,
because
I've
been
talking
for
a
little
bit
and
and
just
like
ask
if
there
are
any
questions
about
this
to
start
with
here,
there's
a
couple
other
aspects:
I
wanted
to
briefly
discuss
around
how
we
specify
the
constraints,
but
I
just
wanted
to
pause
here
and
see
if
that
made
sense
to
everyone.
I've
kind
of
been
going
through
terminal
for
a
little
bit
here.
D
A
That's okay, and that's great to hear. There are a couple of areas I want to talk through, like possibilities, but I think I want to talk through the constraint specification first and see if everyone kind of agrees with that.
B
A
We do use that on Windows, I believe, and it might be worth looking at that as well. One of the things that I think we talked about when we first started this effort was: what if someone doesn't want to use Poetry? And I think you could apply that to: what if someone doesn't want to use conda? TQ was very much in the camp of, we need to have this. In fact, the main reason we have this requirements directory here was so that we could reduce this all to something you could install with pip install -r. Maybe that's the main reason why I kind of liked Poetry: it doesn't force you into that specific ecosystem. You can still go through and take this...
A
See
if
I
had
exported
that,
I
don't
know
if
I
exported
that
you
can
take
this
log
file
and
you
can
export
it
into
a
requirements.txt.
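For reference, a sketch of that export step, run from the repository root, assuming the export command built into the Poetry releases of that era:

```python
import subprocess

# Sketch: turn the lock file back into a plain requirements.txt, so people
# who don't want Poetry can still pip install -r the result.
subprocess.run(
    ["poetry", "export", "--format", "requirements.txt", "--output", "requirements.txt"],
    check=True,
)
```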
E
Yeah, from my experience working with conda, it does a lot more as well: you can install not just Python dependencies but loads of other things into the environment, and it becomes quite cumbersome to manage the conda environments. I've actually had teams migrate off conda onto Poetry for Python dependencies, because Poetry wraps all the Python-specific problems without doing much else. Yeah, and it's hard.
F
Go ahead... sorry, I stepped on someone. Oh no, yeah. No, I mean, I've tried to use conda before, especially working with TVM, because I think we might have some conda installs that we do, and what I found is that when conda goes wrong, it really goes wrong: the dependency solver will take hours if you absolutely haven't kept it up to date, and I just...
A
On
the
other
hand,
we
do
use
it
in
the
windows
build
as
well,
and
that
seems
to
work
pretty
well,
and
so
having
that
I
mean
it
seems
like
we're
going
to
keep
that
around
to
some
degree
and
so
yeah
it
could
be
just
worth
watching
one.
You
know
one,
you
know
question
about
kind
of
moving
outside
the
pi
pi
world
or
moving
into
a
different.
I
don't
know
if
it
actually.
A
I
need
to
to
double
check
on
this,
because
I'm
not
sure
if
I'm
completely
up
to
speed
on
exactly
how
conda
imports
python
packages,
but
you
know
when
you
use
like
the
debian,
installed
python
packages.
I
think
that's
that's
now
widely
considered
to
be
harmful
because,
like
it
doesn't
really,
you
know
then
you're
dependent
on
your
operating
system's
package,
maintainer
to
just
you
know,
to
rev
the
package.
So
it's
like
one
extra
person
in
the
the
loop
there,
but
that's
a
I'm.
A
I
mean
I
think
that
people
may
solve
those
problems
in
other
ways
too.
So
I'm
not
probably
need
to
learn
a
little
bit
more
about
condos
to
know
whether
or
not
that
would
be
acceptable.
But
anyway,
those
are.
Those
are
some
concerns.
Yes,
folks,.
C
A
Lightweight, yeah, and it does use pip to actually do the package install; I believe the dependency resolution is done outside of pip. I was going to show that here, but I'll get into that in a minute. So, I don't know, does anyone else have any thoughts here? I'm happy to keep talking, and then I can also cover some things about the constraints as well.
A
Yeah, so let me talk a little bit about the constraints, and I want to leave some time to talk about the quarterly release RFC as well, in case folks have thought about that. So, one of the challenges I think we have is: suppose we hold back a package. The question is, why do we hold it back? And there can be a couple of reasons.
A
So
if
you
take
a
look
in
the
gen
requirements
we
put
in
you
know
kind
of
these
constraints
here,
and
we
have
mostly
placed
comments
in
this
file
indicating
why
this
this
constraint
that
we're
imposing
on
tbm's
dependencies
is
here
and
one
of
the
the
kind
of
asks
in
this
gen
requirements
file
is
that
we
only
impose
functional
constraints.
A
So
the
idea
here
is
that
let's
say
that
someone
adds
a
package
and
they
depend
on
a
particular
version,
or
you
know
it
must
be
above
or
below
a
certain
version
of
a
python
dependency
and-
and
it's
known
at
the
time
that,
like
you
know,
let's
say
it's
you're
saying
I
can't
build
anything
past
0.7
on
on
something
like
here's.
This
doc
utils
package,
it's
known
at
the
time
that
you're
you
know
introducing
this
dependency,
that
the
later
version
of
the
python
package
makes
an
api
change.
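In gen_requirements terms, a functional constraint of that kind might look like this sketch, where each pin carries a comment saying why it exists; the entries are illustrative, not the exact list.

```python
# Illustrative functional constraints: each one records a known breakage.
CONSTRAINTS = [
    # docutils 0.17 and later make an incompatible API change that breaks the docs build.
    ("docutils", "<0.17"),
    # ethos-u-vela makes breaking changes between minor versions, so pin exactly.
    ("ethos-u-vela", "==3.2.0"),
]
```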
A
It's
completely
incompatible,
and
so,
like
you
know,
there's
no
question
that
this
is
going
to
break.
If
you,
if
you
use
the
latest
version
of,
say
docutils-
and
you
know
here-
we've
got
some.
You
know
workaround
that
we're
we're
using
here
for
now,
so
this
gen
requirements
is
kind
of
in
in
this
particular
proposal.
The
place
to
put
that
that
kind
of
a
thing.
Now,
that's
not
the
only
reason
you
may
want
to
constrain
a
package,
and
so
the
question
is:
where
do
the
others
go?
A
So
what
I
have
right
now
is
another
file
called
ci
constraints
and
currently
it's
in
a
different
format,
but
I
think
there
may
be
some.
You
know
massaging
needed
here
on
this
idea
to
to
get
things
in
the
same
format,
and
there
are
other
reasons
you
might
want
to
hold
packages
back
like
let's
say
that
you
know
I
come
along
and
I
want
to
add
just
some
new
utility
dependency
to
to
tvm.
A
Well,
if
we
didn't
hold
back,
let's
say
tensorflow
like
see
we're
a
couple
of
versions
behind
tensorflow
right
now
and
we
just
always
pulled
the
latest
one.
A
Maybe
we
just
were
in
a
state
where
we
hadn't
updated
the
ci
images
in
a
while,
and
you
know
what
would
happen
is
if
someone
came
along
and
wanted
to
add
just
one
kind
of
utility
python
dependency
to
tvm
you'd
then
be
stuck
with
trying
to
route
tensorflow
to
2.9
or
whatever
their
latest
version
is,
and-
and
this
means
that
kind
of
it
makes
it
difficult
to
make
these
smaller
changes
in
tvm.
A
Without
kind
of
confronting
these
larger,
you
know,
version
drifts
that
that
can
happen
in
the
repository,
so
the
freeze,
depth
tool
actually
goes
and
reads
through
this
constraints
list
from
the
functional
constraints
and
then
augments
it
with
the
constraints
in
this
ci
constraints,
basically
allowing
us
to
impose
two
different
sort
of
types
of
constraints
on
the
on
the
sort
of
built
package,
and
this
also
lets
us.
Then,
by
separating
these
lets,
you
run
general
gen
requirements
and
produce
sort
of
like
a
pip
insoluble
list
of
of
sort
of
functional.
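A minimal sketch of that two-layer merge, with hypothetical entries, might look like this:

```python
# Sketch of the two constraint layers described above; entries illustrative.
functional = {"docutils": "<0.17"}   # known-breaking API change; always applies
ci_only = {"tensorflow": "==2.6.2"}  # held back only to keep the CI images stable
merged = {**functional, **ci_only}   # freeze-deps feeds the union to the solver
```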
A
That,
then,
that
you
can
then
sort
of
trial,
locally
and
sort
of
float
those
ci
constraints
up
to
the
latest
version,
if
you
want
to
without
necessarily
sort
of
dragging
those
in
so
the
idea
here
is
that
when
we
build
these,
you
know
when
we
freeze
the
dependencies
on
these.
On
top
of
these
architectural
base
revisions.
A
We
then
pull
these
kind
of
constraints
in
here,
and
this
list
allows
you
to.
You
know,
make
changes
to
the
ci.
Without
necessarily,
you
know,
modernizing
different
importers,
and
things
like
that,
so
you
know
that's
another
piece
here.
Is
that
basically
having
separated
constraints?
Now,
where
is
this
going?
A
So
the
third
piece
of
this
kind
of
consolidated
python
dependencies
thing
that
I'd
like
to
to
work
on
once
we
you
know
address
this
sort
of
ci
centric
issue
is
okay
as
part
of
the
quarterly
releases
process
that
we're
kind
of
coming
up
to,
and
just
also
in
general,
we
want
to
release.
You
know:
pip
installable
packages
of
tvm.
Well,
those
pivot
soluble
packages
should
probably
have
some
kind
of
install
requires,
and
right
now
we
don't
necessarily
say
tensorflow
has
to
be
even
in
the
2.6
family.
A
It's
you
know,
it's
really
like
you
know,
tensorflow
is
any
version,
and
what
we
actually
do
is
maintain
a
separate
list
of
of
install
requires
again
in
the
tlc
factory.
I
believe
that
generates
the
you
know
the
declared
dependencies
or
declared
requirements
for
for
tensorflow,
so
we'd
like
to
clean
that
up
in
a
way
that
reflects
what
we're
actually
testing
against
into
the
the
generated
python
package,
and
so
you
know
there's
a
couple
of
ways
about
this.
A
One
way
is
that
we
take
the
log
files
from
from
poetry
and
you
know,
scan
those,
and
perhaps
we
like
relax
some
of
the
constraints
given
here.
So
here
we've
pinned
tensorflow
to
2.6.2,
and
you
kind
of
want
to
do
that
for
the
ci,
because
you
want
to
make
sure
that
you
didn't
rev
tensorflow
and
maybe
it
was
sure.
Maybe
it
was
a.
You
know,
point
release
a
revision
release,
but
you
know
revving
anything
or
changing
anything
can
easily
drag
in.
A
You
know
functional
test
changes
that
then
we
have
to
go
update
so
kind
of
in
the
spirit
of
maintaining
the
separate
ci
based
constraints.
We
want
to
say
you
know
exact
dependencies
here,
but
if
we
were
to
translate
this
into
something
we
wanted
to
put
into
a
wheel
that
people
installed,
we
might
want
to
allow
some
flexibility
in
case.
You
know
they
want
to.
You
know,
float
forward
to
2.6.4,
you
know,
there's
a
security
release
or
something
like
that
that
you
know
in
theory,
should
not
affect
you
know:
performance
or
functionality.
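As a sketch of that translation, assuming the package follows a semver-like scheme, a relaxation step could turn an exact CI pin into a range; the helper below is hypothetical.

```python
# Hypothetical helper: relax an exact CI pin into a range suitable for a
# released wheel's install_requires. Assumes semver-like point releases.
def relax(pin: str) -> str:
    name, version = pin.split("==")
    major, minor, _patch = version.split(".")
    return f"{name}>={version},<{major}.{int(minor) + 1}"

assert relax("tensorflow==2.6.2") == "tensorflow>=2.6.2,<2.7"
```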
A
This
kind
of
thing
varies
per
package,
so
it
depends
on
the
versioning
scheme
packages
that
use
semver.
You
know,
then,
have
this
sort
of
revision
schema
that
says
that
things
should
be
backwards
compatible,
but
other
packages
release.
You
know
based
on
like
a
date
based
release,
versioning
number
or-
or
you
know
some
some
different
versioning
number-
that
we
need
to
kind
of
think
about.
So
where
is
this
all
going?
A
I
think
that
kind
of
the
the
part
three
would
be
then.
Now
that
we've
got
this
all
codified
into
ci.
A
Let's
then
extract
some
some
representation
of
the
the
versions
that
we
expect
to
work,
perhaps
relaxing
the
ci
constraints
and
functional
constraints
a
little
bit
and
then
fill
that
into
the
install
requires
in
in
the
wheel.
So
that's
that's
kind
of
the
piece
I
wanted
to
talk
about
constraints,
wise.
B
A
There are tools like tox that basically test under several different combinations of things, but I think what this is really getting at is how we test the released wheel, because right now I think we actually run the unit tests against the wheel in tlcpack.
A
If
I,
my
memory
might
not
be
serving
me
right
there,
but
certainly
right
now,
the
way
that
we
run
the
unit
test
in
the
ci
is
we
run
them
sort
of
in
tree
or
in
repo,
and
you
know
that
means
that
we
don't
necessarily
have
any
expectation
of
success
if
we
were
to
then
go
and
run
the
the
same
unit
tests
against
an
installed
apache
tvm
wheel,
but
without
the
tvm
repository
kind
of
in
the
current
working
directory.
A
So,
as
a
starter,
I
think
I'd
like
to
solve
that
problem
and
then
yeah
you're
right.
We
could
consider
doing
something
like
a
pre-release
test
when
we
released
the
wheel,
so
at
least
at
the
time
that
we
released
the
wheel,
we
kind
of
were
confident
that
if
we
allow
these
dependencies
to
float
a
little
bit
more
like
at
least
the
unit
tests
pass,
or
we
could
do
more
testing,
I
think
that's
a
reasonable
thing
to
to
consider
it
here.
A
I guess that depends on the needs of the user, and one of the things that can be bad about overspecifying install_requires is that if you have two packages that specify install_requires and they don't agree, pip can refuse to install them. So I think we want to be a little bit judicious about what we actually state as an explicit version requirement in install_requires. It might actually even be interesting to generate two such Python packages: one which does in fact constrain these versions, you know, to a relaxed version of the CI constraints, and maybe one that only has the functional constraints listed. So we might also think about that, but that's another thought. I wanted to let folks also comment more on this.
G
One thing I would point out is, when someone hits a weird version issue and they want to fix it, they're going to go look at the setup.py and try to, you know, change it there. So the further we depart from the normal, quote-unquote, way to specify Python packages, the harder it makes it to contribute. So we have to be very careful to document the process and everything, and so it's just, you know, extra work for you, Andrew. I had one other point, about the architectural split for the packages. One thing I've been thinking about: our CI docker images are huge, like 20-plus gigabytes, which is big even for docker images, and one way I was thinking we could fix that is if we split them up by what they're actually being used for. For instance, the GPU unit tests probably don't need a lot of the dependencies that are in the GPU docker image, and if we're installing everything per architecture, rather than per what it's used for, we end up including everything every time. So, I mean, this is a very loosely-formed idea, but we might want to split docker images out based on their intended tests rather than the architecture. I don't think that changes the design of this, though; it's just more base images.
A
Yeah, and I guess that's kind of interesting. We'd have to think a little bit about how many different images we have, and whether or not we'd share layers there, because one of the things you can do with docker images, which we really don't do right now, is sharing: if one image is a superset of the other, you can run all of the scripts that form the superset after you've run the base set of scripts that build the image, and so you can wind up with one image that provides, say, two-thirds of the layers, and then, on top of those two-thirds, supply sort of a delta addition. So it'd be interesting to think about what we could actually reuse and not reuse, and do some more analysis there, thinking about bang for buck and things like that. But yeah, that definitely seems useful, so we could maybe explore that. Any further thoughts about consolidating all this stuff? Is everyone generally supportive, or does this seem super complicated, or...?
E
I think this is generally a huge improvement over what we currently have. Installing non-deterministic packages introduces all kinds of problems, like if you get some bad actors or something; in the npm community they had some packages that contained bitcoin miners and stuff. So yeah, I think this is just good practice, to lock this down.
A
Some
of
the
things
that
can
be
tricky
here
is
you
know
if,
if
tensorflow
depends
on
one
version,
you
know
we're
we're
subject
to
the
same
things
that
we're
concerned
about
when
we're
filling
say,
install
requires
when
we're
doing
you
know
this
locking
here
so
tensorflow
depends
on
some
version
of
I
think
like
there
was
a
package
called
gas,
for
example,
that
paddle
paddle
used
one
version
and
tensorflow
use
a
different
version
so
like.
If
you
look
in
my
gen
requirements,
I've.
A
Actually
you
know
one
of
the
things
that
this
highlighted
is
that
we
actually
can't
install
technically
these
two
things
at
the
same
time,
because
they
have
an
incompatibility
in
a
sort
of
a
diamond
dependency,
and
so
I've
been
able
to
work
around
that
with
a
thing
called
environment
markers
here,
but
it
remains
to
be
seen.
A
You
know
to
what
degree
we
start
highlighting
these
things
and
and
what
degree
they
actually
impact
functionality.
You
know,
as
we
kind
of
go
down
this
road,
so
I
think
there's
going
to
be
some
operational
questions
as
we
do
this,
and
I
have
some
thoughts
about
how
we
can
kind
of
address
this
specific
issue.
But
you
know
those
are
they're
kind
of
like
let's
solve
that
problem
when
we
get
there
kind
of
a
thing.
A
Another
thing
I've
done
here
is:
if
you'll
notice
tensorflow
has
different
packages
for
gpu.
When
you
want
to
use,
I
guess
cuda
or
gpu
acceleration,
as
well
as
for
art
64,
and
so
what
is
kind
of
nice
is.
There
is
a
way
to
select
a
version
dependency
based
on
the
platform
which,
like
the
the
declared
machine,
type
and
poetry,
actually
does
take
this
into
account
when
it
is
doing
the
dependency
resolution.
So
it's
able
to
actually
use
these
criteria
to
exclude
those
incompatible
dependencies
from
being
installed.
At
the
same
time,.
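For reference, here is a sketch of the PEP 508 environment-marker style being described; the exact package and marker pairs TVM selects on are not shown here.

```python
# Illustrative PEP 508 environment markers: each requirement applies only
# when its marker matches the installing platform. Pins are made up.
requirements = [
    'tensorflow==2.6.2; platform_machine == "x86_64"',
    'tensorflow-aarch64==2.6.2; platform_machine == "aarch64"',
]
```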
A
That's something we can work around. Okay, so if there are no more comments on this, I'd like to switch gears and talk about the quarterly releases, and maybe I'll turn that over to David to discuss, and I can go back... oh, go ahead, yeah.
B
A
B
And the same for PaddlePaddle. So, just a thought that came to my mind right now: what if we could, let's say, have a core TVM that doesn't have any importers in it, and then have additional packages with the importers, so that someone who is not interested in PaddlePaddle wouldn't have to install everything all at the same time? They could just install, let's say, TensorFlow, or whatever else they need, so that they are less likely to run into some kind of conflict.
A
And
so
like
the
way
we've
handled
this
right
now
is
that
we
declare
so
these
pieces
here
are
things
that
can
be
installed
separately.
So,
like
I,
have
an
importer,
paddle
paddle
here.
This
means
that,
if
you,
you
know
what
you
would
do
is,
you
would
say,
pip
3
install
apache,
tvm
importer
paddle,
and
this
would
only
install
paddle
paddle
and
then
I
think
the
syntax
is
like
this.
A
If
you
wanted
to
install
two
of
them,
so
right
now,
kind
of
what
you're
saying
is
is
true,
like
and
and
poetry
will
will
do
that.
What
this
doesn't
solve
is
if
you
wanted
to
do
this,
this
simply
wouldn't
work
right
now
and
that's
because
you
get
this
sort
of
inconsistency
in
the
dependency
tree.
A
The
way
to
solve
that
is
probably
to
move
the
importers
outside
of
the
virtual
environment
and
go
to
sort
of
a
plug-in
style
importer
system
where
the
process
of
importing
a
model
is
basically
tbm.
Invoking
a
sub
process
that
then
spits
out
relay
like
text
and
that's
a
more
invasive
change
side.
You
know
talk
about
that.
If
we
get
there,
I
think,
but
I
think
that's
that
anyway,
that's
that's
my
idea
of
how
we
could
solve
it
and
there's
certainly
open
entertaining
other
ideas
there
too.
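To illustrate the plug-in idea, here is a minimal sketch; everything in it, including the console-script name, is hypothetical.

```python
import subprocess

# Hypothetical sketch of a plug-in importer: the importer lives in its own
# environment and prints Relay text, which TVM reads over a pipe.
result = subprocess.run(
    ["tvm-import-paddle", "model.pdmodel"],  # made-up script and model names
    capture_output=True, text=True, check=True,
)
relay_text = result.stdout  # Relay module text, to be parsed by TVM
```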
A
Okay, yeah, no, I wasn't aware that it's already kind of done. Yeah, no worries. I mean, this will all be surfaced more, I think, as it lands and all that, and you'll be able to actually run these commands. So yeah, okay, let's talk a little bit about the quarterly releases. Let's see, I can share this screen if you want, David, or, if you want to, I can do it.
G
All right, you guys can see the RFC? Yes? Okay, cool. So I'll just give a brief summary, and I think a lot of people here have already left comments on it, so we can just sort of jump right into the last few discussion points. So, basically, TVM hasn't done a real release since last November, which is some number of months ago now.
G
So
this
rfc
proposes
that
we
will
do
a
release
sort
of
automatically.
It's
not
really
a
right
word,
but
just
every
three
months
we
would
try
to
get
a
release
out
the
door
just
so
people
can
get
new
changes
from
tbm,
so
this
rfc
sort
of
lays
that
out
in
the
the
various
details
of
like
the
schedule
and
how
we
actually
do
the
release.
G
So
the
main
point
is
that
we
would
have
a
single
person
a
contributor
or
committer,
that
is
the
release
manager
that
would
sort
of
run
through
all
these
steps
do
everything
required
for
an
apache
release
as
well
as
some
of
the
new
stuff
we're
working
on
to
get.
You
know,
pi
wheels,
published
and
stuff
like
that.
G
So
if
you
want
the
details,
you
can
read
the
rfc,
it's
rfc
67
and
the
rcs
repo,
but
there's
been
a
lot
of
good
discussion
so
far
on
the
comments
thread.
Some
of
the
main
questions.
I've
seen
that
are
not
really
resolved
is
number
one.
What
versions
do
we
choose
chris
hodgepod
at
this
point,
but
right
now
I
think
we've
just
been
bumping.
The
minor
version
every
time
we
update,
which
is
sort
of
a
nonsense,
versioning
scheme.
G
So
we
might
want
to
change
that
as
well
as
other
things
like.
How
do
we
get
people
to
use
feature
flags,
so
we
can
have
better
support
for
different
versions.
You
know
as
we
move
across
time,
and
how
do
we
support
like?
What's
our
support
plan
for
old
versions?
Do
we
just
completely
drop
them
after
we
release
a
new
one
things
like
that
so
opening
the
floor?
If
anyone
has
any
thoughts
or
anything
like
that,.
E
I would just say that 0.x releases are typically unstable; I think in semantic versioning they're considered to be, like, anything can change. So I think it's totally acceptable at that point to not give any guarantees, and to just start rolling them out, rather than worrying too much about the versioning.
A
All
right
yeah,
it's
like
because
we're
on
zero
dot,
something
or
other
so
at
some
point
we'll
switch
to
1.,
you
know
1.0
and
then
at
that
point
we
might
need
to
start
thinking
about
stabilizing
apis
and
well.
I
guess
before
that
point
when
we
start
thinking
about
lighting
apis
and
you
know
making
sure
that
we
don't
break
anything
too
much.
G
Yeah,
like
I
don't
know
too
much
about
the
historic
releases,
but
it
seems
like
at
least
for
the
most
common
apis
they're
already
kind
of
stable,
and
I
think
it's
good
to
get
in
the
practice
of
doing
that.
Even
if
we're
not
1.0
but
yeah.
I
agree
like
we
shouldn't,
spend
a
ton
of
time
on
it
right
now,
right,
I'm
just
getting
releases
out
the
door.
A
Anyone
have
any
other
thoughts,
I
think.
Releasing
you
know
on
a
cadence
makes
a
lot
of
sense
and
we
should.
We
should
move
to
to
do
that,
and
I
know
that
we
have
at
least
one
blocking
or
I
don't
know
blocking,
but
we
have
one
issue
that
I
I
need
to
resolve,
and
so
we
can
start
doing
that,
and
maybe
the
resolution
is
that
we
don't
resolve
it
just
around
the
tv
and
relay
build
api.
A
But
well,
if
we
say
our
apis
aren't
stable,
then
maybe
we're
good
yeah
but
yeah.
We
do
I
mean,
even
today
we
do
think
about.
We
do
a
sort
of
attempt
to
not
cause
a
lot
of
turn
in
terms
of
tvm
versioning
and
all
that.
So
it's
still
good
to
think
about
this,
even
if
we,
even
if
we
are
sort
of
in
this
policy
of
like
yeah,
we're
zero
dot.
So
we
may
change
things.
G
Well,
there's
already
been
a
lot
of
discussion
on
the
rfc,
so
once
it
practices
like
50,
cones
or
something,
but
anyone
has
thinks
of
anything
else.
After
the
fact
we
can
just
comment
there,
yeah.
A
Definitely
all
right
thanks
so
much
for
giving
that
talk
there,
david
and
yeah
anything
else
from
anyone
before
we
call
it
a
week.
A
Yeah
hearing
nothing
else,
yeah
join
us
next
week.
We'll
have
some
more
topics
for
discussion
and
see
you
guys,
then.