From YouTube: Apache TVM Community Meeting, February 18, 2021
A: All right, everybody, welcome to the TVM community meeting. This meeting is being recorded so that we can post it to YouTube later. I'm Chris Hoge, and I'll be running the meeting this month. A reminder that we have meetings every third Thursday of the month at 9 a.m. Pacific time. Let's get started.
A: On to the agenda. We're going to start off with some announcements. We don't have that many this month, but we want to say congratulations to new reviewer Trevor Morris. I want to thank him for his work and for being added as a new contributor to the project.
A: We've posted a link to an issue on the TVM GitHub to track this, looking at an April-to-May time frame for the release. I think we wanted to go through it and see whether it covers what we were expecting, whether we think there might be any delays, and whether there are any additional features we would want to see in the 0.8 release.
A: Our expectation is that this is going to be the last stable development release before we get to the 1.0 release, which is going to happen sometime later this year; that's what the community is planning on.
B: Quantization was not on there, and I'm making the PR pretty soon.
A: Okay, yeah. I was actually surprised that quantization wasn't on there, so I'm glad that's going to make it into the next release. Is there anything else that's missing, or that we don't think is going to make it? Do most of these things have implementations behind them?
E: One thing I talked to Terry about is potentially pushing more on coverage: ONNX test-suite coverage and report generation for backend op support. I guess it'd be nice to put that under coverage; I can add a note on it.
A: Yeah. And do we know whether we're thinking late April or early May? Do we have a sense of when the release is going to happen? Are we looking at a timed release, where we just say that whatever features are ready at that time get released, or are we waiting for particular features to land before we cut the release?
A: Okay, and has anything on this list been checked off? Should we be checking these off right now? Do we have someone assigned, some sort of release manager, to go through the list so that as these features land we can mark them down?
D: Usually in the past, and maybe we should iterate on this a little this year, because we've been doing feature-based releases, we would assemble this list and then, as things got merged, come back incrementally and update it, and eventually someone from the PMC would become the release manager. That's usually how it works. It might be good to have some other people involved this year, to spread some of the load across the community, and then we'll cut a release.
E: Also, I know Tianqi mentioned that we could tag people, or people could volunteer. But when there are cases like the MLIR-HLO importer, where a community member has been working on it, should we tag them there, just so they know, and to let people say if they can't do it? I guess that's also part of the release-management process you're talking about, Chris.
D: Yeah, usually Tianqi just wants to talk to PMC members and committers directly. Okay.
D: Yeah, that's right; there might need to be a bridge release too, depending on how we want to cut it. I could see the argument for that. This is something we ran into when I was working on the Rust 1.0 release.
A: Yeah, I think the options are feature-based releases or time-based releases, and I would hesitate to have 1.0 be anything but a feature-based release: these are the minimum features we would need in order to say that TVM is a complete and stable project with a stable set of APIs.
A: For the sake of momentum, I could see it going either way. Between 0.8 and 1.0 you could just have time-based releases, and you can try to make pushes toward a base level of stability before each release.
A: One place where I think it impacts us is that, as we work on documentation and try to improve it, the main development branch has moved far enough past 0.7 that it's really hard to recommend that people use 0.7.
A: Okay, so the other item on the agenda is actually related to this: what does the community want to see in a 1.0 release? If we're talking about a 1.0 release, what are the things we want in it, and what, in some sense, is the primary purpose of that release?
A: Also, I think it's important to think about what the support policy and the testing policy are, because with a stable release you begin talking about promises you're making to the community. Not only do we need to decide what goes into the software, but also what sort of promises the community is making to the software's end users.
A: For example, will bug fixes be cherry-picked back to stable releases? For how long? Essentially, how long will support for past releases last?
A: One of the assumptions is that once you've reached a 1.0 release, you're more likely to start appearing as a stable package in vendor packaging. To me it would be a really positive outcome for TVM if we started appearing in distribution-level packaging.
D: Yeah, I think the meta-question for me is figuring out, and getting people to agree on, all of the stabilization work: what we agree stabilization means. For example, are we going to audit all the APIs? When we worked on Rust, the standard library, or parts of it, was deleted and rewritten something like four times in six months, because people kept running into issues with the APIs. They used to have a green-threading runtime, for example; it didn't work, it wasn't the right design, and they deleted it.
D
I
think
we're
gonna
have
to
think
about
things
like
that.
The
other
option
is
sort
of
like
the
kubernetes
approach.
I
think
where
you
use
a
lot
of
beta
and
versioning,
like
you
know,
a
lot
of
those
like
kubernetes,
specs
or
whatever
are
not
stable,
so
they
like
get
a
strong,
hard
cleave
on
the
api
surface
and
then
stabilize
like
a
much
smaller
core
and
then
re-stabilize.
D
You
know
for
update
them
like
every
couple
releases,
so
I
think
we
need
to
think
about
like
what
that
entails.
For
us.
The
other
thing
is
like:
how
do
we
communicate
stability?
Like
you
know,
some
tools
and
languages
have
like
used
that
may
have
make
every
use
of
the
like
deprecation,
instability,
annotations
or
whatever
they
put
it
in
the
docs.
Maybe
they
categorized
by
it.
I
think
it's
another
big
thing
to
think
about.
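One lightweight way to implement the kind of deprecation and instability annotation discussed here is a warning decorator. The sketch below is hypothetical Python, not an existing TVM API; the `experimental` decorator name and the `__stability__` attribute are illustrative inventions.

```python
import functools
import warnings

def experimental(api_version="unstable"):
    """Mark a function as experimental; warns the caller on first use.

    Hypothetical sketch of an instability annotation, not a TVM decorator.
    """
    def decorator(func):
        warned = False

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            nonlocal warned
            if not warned:
                warnings.warn(
                    f"{func.__qualname__} is experimental ({api_version}) "
                    "and may change or be removed without notice",
                    FutureWarning,
                    stacklevel=2,
                )
                warned = True
            return func(*args, **kwargs)

        # Machine-readable marker that docs tooling could categorize by.
        wrapper.__stability__ = api_version
        return wrapper
    return decorator

@experimental(api_version="0.8.dev")
def auto_schedule(task):
    # Stand-in body for some unstable API.
    return f"scheduled:{task}"
```

A docs generator could then read `__stability__` to group APIs by stability level, which is the "put it in the docs, categorize by it" idea mentioned above.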
D: I don't know how we'd do it, but we probably need to do something like that. The Microsoft ONNX Runtime folks were saying that they're going to provide support based on opset versions. Maybe that's not perfect, but it gives a pretty strong indication of whether the ops are in a release or not. Right now it's sort of a free-for-all: if there's a compute definition, we technically support it. We do have support levels, but we don't necessarily communicate them to the end user very much.
E: With Relay IR, and specifically the new TensorIR, is it feasible to talk about defining those more strictly, or are they just too open for that to make sense? I mean, even Rust doesn't have that for core, or for what's under the hood of the Linux syscall API.
C: I think that we have this runtime PackedFunc interface, and we haven't really documented it very well. It's there, but a lot of the documentation is examples, so I think it would be helpful to think about whether or not we want to standardize on it. I personally think that would be good, because we have a lot of examples of using the runtime with different front ends, and I think that might be a common use case.
C: There's no documentation generator for it, like the Sphinx setup that generates an HTML page you can go to and look at, for every class, what the methods are and so on. For PackedFunc there's no way to go to a nice page that says: here's a module, here are all the functions you can call in it, and here are their arguments.
C
So
I
think
we,
I
would
like
to
see
us
move
towards
something
like
that,
as
for
maybe
a
limited
subset
of
the
whatever
we
think
is
stable.
Basically,
the
other
thing
I
would
say
too
is
you
know.
C
I
think
that
the
compiler
is
fairly
intertwined
with
python,
and
you
know
it
seems
like
it
might
be
an
easy
thing
to
I
mean
I
guess
just
if,
as
we're
kind
of
trying
to
figure
out
how
to
sort
of
what's
put
a
line
in
the
sand
and
say
this
side
of
the
line
is
stable
for
for
our
release.
C
You
know
the
compiler
is
fairly
intertwined
with
python
and
so
saying
that
the
compiler
python
apis,
some
subset
of
those
are,
are
the
stable
ones,
make
sense
and
then
sort
of
on
that
note,
then
it
would
be
nice
to
add
tests
that
you
know
actually
enforce
that.
I
think
our
apis
are
not
super
rigorously
tested
right
now.
I
think
we
could.
We
could
do
a
little
bit
better,
especially
around
things
like
user
errors,
or
you
know
just
like
you
know.
C
If
you
pass
like
an
integer
where
you
should
be
passing
a
string,
we
should
be
raising
a
particular
kind
of
error
and
asserting
that
we're
raising
that
error.
So
you
know
simple
things
like
that
and
whether
or
not
those
would
go
in
the
ci
and
maybe
that
one
could
but
other
more
complicated
things
might
go
into
release
tests.
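A test of that error contract might look like the following sketch. `set_target` here is a hypothetical stand-in for any typed API entry point, not a real TVM function.

```python
def set_target(name):
    """Hypothetical API entry point that validates its argument type.

    The contract under test: a wrong argument type raises a specific,
    documented error instead of failing deep inside the compiler.
    """
    if not isinstance(name, str):
        raise TypeError(f"target name must be str, got {type(name).__name__}")
    return {"target": name}

def test_set_target_rejects_non_string():
    # Assert both that the call fails and that it fails with the
    # particular error type and message the API promises.
    try:
        set_target(42)
    except TypeError as err:
        assert "must be str" in str(err)
    else:
        raise AssertionError("expected TypeError for non-string input")
```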
D: Related to that, as we audit them, we may end up slicing some APIs, because I feel like some of the design has turned into "let's make the Python function do everything," and that actually hurts our ability to be stable long term. It means the conversion code and the actual implementation, or three implementations, all sit behind the same API. So it might make sense to cut out a stable subset, stabilize it under one name, and then come back. Things like that.
C: Yeah, it would be good to audit each feature of TVM and figure out whether we can cut it down to a stable subset. In particular, for features that are undergoing heavy development, like quantization or new contributions, we probably don't want to set a high bar where those APIs are required to be immediately stable. So having a core subset of broadly useful APIs would, I think, go a long way.
C: The last thought I had about releasing, and how we should handle it, is that I would like to see work on the CI and dependencies: making sure that in our CI we're actually testing against the dependencies we'd expect people to install when they do a pip install or a conda install, and propagating those dependencies outward to the pip packages themselves as requirements.
D: I'd also actually audit what is optional and what is not, because some of the stuff is nominally optional but not really optional, because of the import structure. I ran into this literally last night, when I was about to message you at 11 p.m.: I tried to run some Rust code that calls some Python code, and the Python code implicitly imports some testing library. That means MXNet must be installed in order to do something completely unrelated to MXNet.
A
I
I
ran
into
that
exact
same
problem,
and
I
and
I
thought
it
was
I
I
thought
that
it
would
in
part
be
like.
Oh
it
just
like.
A
Well,
can
we
just
in
time
load
some
of
these
things
and
it
turns
out
that
we
do
because
I
wanted
to
go
in
and
and
and
compile
some
high
torch
and
and
tensorflow
models,
and
I
had
to
go
import
those
libraries,
and
so
it
was
interesting
that
you
know
that
we
have
a
dependency
on
mxnet
and
on
conda
I
wasn't
able
to
get
mxnet
to
to
solve
for
an
installation,
so
I
had
to
install
it
with
pip
right
and
so.
C: Right. One of our challenges along those lines is that when we package for conda or for pip, and we build binary packages for different machines, like Windows, Linux, or OSX, the available set of packages may be different in each of those scenarios. And yet we kind of want one set of dependencies in the CI. That's what makes the CI part a bit of a pipe dream, but anyway.
C: We do have a plan for part of that: in pip, anyway, there's a way to say that there are extras. We haven't really looked at the conda side, at least I haven't, to see how that's represented there, and maybe there you're right: you'd need a separate conda package.
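For reference, the pip "extras" mechanism looks roughly like this in a setup.py. The package and dependency names below are illustrative, not TVM's actual packaging metadata.

```python
# setup.py (fragment) -- illustrative names only.
from setuptools import setup, find_packages

setup(
    name="example-compiler",
    version="0.8.0",
    packages=find_packages(),
    # Hard requirements: what every install needs.
    install_requires=["numpy", "decorator"],
    # Extras: optional frontend importers chosen at install time, e.g.
    #   pip install "example-compiler[mxnet]"
    extras_require={
        "mxnet": ["mxnet"],
        "pytorch": ["torch"],
        "tensorflow": ["tensorflow"],
        "all-frontends": ["mxnet", "torch", "tensorflow"],
    },
)
```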
D: Yeah, I know people like using conda, but my hot take is that having robust pip packaging is key step number one. I talked to Sasha yesterday, who was posting on the message board a couple of days ago, and his literal number-two complaint was the packaging.
C: I definitely think that pip packaging would go a long way, but I don't want to discount conda support either.
C: The thing is that pip packaging doesn't require much more than what we already have checked into the repo; you can technically, I think, build a wheel from the TVM repo with the correct scripts. I know conda is also just a matter of scripts, but pip reads the same setup.py, and I don't remember what conda does.
D
The
thing
that
I'm
hammering
on,
I
guess,
is
more
just
the
internal
code
structure,
because
the
way
python
is
written,
it
has
no
real
strong
requirements
or
connection
to
package
manager.
So
it's
like,
we
could
say
mxnet
is
optional,
but
then,
if
someone
imports
it
in
the
wrong
place,
it
becomes
actually
non-optional,
which
I
think
is
happening
yesterday
is
it
people
are
using
like
absolute
imports
would
trigger
a
cascade
of
like
inits
to
be
imported
and
one
of
the
inits
also
imports
mxnet.
D
So
even
though
I'm
not
using
the
code
now
it
needs
to
be
in
scope,
and
so
I
think
there's
some.
I
had
the
same
problem
with
scipy
talking
about
platform
portability
like
I
was
running
on
an
m1
last
night
and
like
yeah,
it's
just
like
sci-fi
doesn't
exist
yet
yeah
and
or
is
not
as
stable,
and
so
there's
things
to
think
about
like
this,
where
it's
like.
Do
you
actually
need
sci-pi?
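The just-in-time loading pattern being described can be sketched generically like this. It is illustrative code, not TVM's actual frontend module.

```python
import importlib

def from_mxnet(symbol):
    """Convert an MXNet symbol, importing mxnet only when called.

    Importing inside the function body, instead of at module top level,
    keeps mxnet genuinely optional: users who never touch this frontend
    never pay the dependency, and nothing pulls it in via __init__.
    """
    try:
        mxnet = importlib.import_module("mxnet")
    except ImportError as err:
        raise ImportError(
            "the MXNet frontend requires the optional 'mxnet' package; "
            "install it with: pip install mxnet"
        ) from err
    # Real code would run the conversion here.
    return mxnet, symbol
```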
C: I have used static analysis tools in the past to at least generate a list: they spider a bunch of Python files and sort of pretend to be the Python interpreter importing them. That does work, as long as you don't have late imports, basically.
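A minimal version of that kind of static-analysis pass, collecting top-level imports from Python source without executing it, could look like the sketch below. As noted, function-local (late) imports are deliberately invisible to it.

```python
import ast
from pathlib import Path

def top_level_imports(source):
    """Collect root module names imported at module scope in `source`.

    Only the module body is walked, so late (function-local) imports are
    skipped: exactly the blind spot mentioned in the discussion.
    """
    tree = ast.parse(source)
    found = set()
    for node in tree.body:  # module scope only, not nested bodies
        if isinstance(node, ast.Import):
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            found.add(node.module.split(".")[0])
    return found

def scan_tree(root):
    """Spider every .py file under `root` and union their imports."""
    deps = set()
    for path in Path(root).rglob("*.py"):
        deps |= top_level_imports(path.read_text())
    return deps
```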
C
Up
that
you
know
and
like
in
particular,
I
think
the
c
runtime
gets
a
lot
of
attention
because
it
isn't
that.
But
at
the
same
time
there
are
a
bunch
of
use
cases
like
if
you're
just
running
on
a
raspberry
pi.
You
don't
necessarily
need
the
c
runline.
You
could
use
this
equals
question
time
and
use
something
more
full
featured,
but
we
don't
really
have
like
actually
I've
seen
in
a
lot
of
cases,
even
when
we
do
compile
against
the
sql
plus
runtime.
C
We
have
a
series
of
includes
that
includes
c
source
files
in
a
file
called
runtime.cc
and
so
like.
We
don't
even
necessarily
link
our
cmake
built,
shared
library
in
in
situations
where
we
could
do
that.
So
like
we
just
are
sort
of
doing
an
ad
hoc
deployment
flow,
even
in
our
own
source,
tree
yeah.
So.
D: I'm a big believer in shipping the release-train version, even once we figure out the timelines or whatever, but allowing people to opt into features, with a big warning that a feature is not stable, and letting people pick and choose. I think that's one of the biggest problems right now: we don't provide any messaging on stability.
D
The
the
stable
features
are
actually
pretty
stable,
but
you
have
no
ability
to
tell
what's
stable
and
not
so
it's
like
people
wander
in
and
they're
like.
Oh
I'm
trying
this
new,
auto
scheduler
thing,
and
then
we
post
on
forums
like
hey,
that's
not
actually
done
yet
and
we're
gonna
ship,
another
api
that
competes
with
it
and
the
way
that
you
know
again
just
harkening
back
to
when
we
work
on
rust
and
actually
ghc
and
a
bunch
of
other
compilers.
Do
this.
D
Now
too,
where
you
know
you
just
opt
into
experimental
extensions
and
sometimes
it
can
even
be
competing
ones
and
then
eventually
the
ecosystem
will
coalesce
and
after
a
couple
cycles,
they'll
become
stable
and
then
you
don't
have
to
turn
them
on
anymore.
But
the
nice
thing,
then,
is
that
people
know
they're
gonna
get
broken,
and
so
we
can
really
have
a
clear,
hard
line
on
like
where
we
draw
the
stability
boundary.
We
can
also
be
pickier
about
contributions
and
stable
features,
then,
which
I
think
is
something
that
we've
argued
about
before.
D
It's
like
stifling
research,
verse
or
like
experimentation
versus
stable
contributions,
like
I
think,
for
example,
like
the
graph
runtime,
compiler,
apis
and
runtime
apis
are
like
pretty
stable
and
so
like.
We
should
stabilize
those
probably
but
like
quantization's,
not
stable,
yet
or
like
micro
tvm's,
not
really
even
stable.
Yet.
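An explicit opt-in scheme of the sort described could be as simple as the sketch below. The registry contents and function names are hypothetical, not an existing TVM mechanism.

```python
import warnings

# Registry of feature -> stability level; entries are illustrative.
_FEATURES = {
    "graph_runtime": "stable",
    "auto_scheduler": "experimental",
    "micro": "experimental",
}
_enabled = set()

def enable_experimental(name):
    """Explicitly opt in to an unstable feature, with a loud warning."""
    level = _FEATURES.get(name)
    if level is None:
        raise ValueError(f"unknown feature: {name!r}")
    if level != "stable":
        warnings.warn(
            f"feature {name!r} is {level}: it may break or be replaced "
            "in any release",
            FutureWarning,
            stacklevel=2,
        )
    _enabled.add(name)

def require(name):
    """Guard an experimental code path behind the opt-in."""
    if _FEATURES.get(name) != "stable" and name not in _enabled:
        raise RuntimeError(
            f"{name!r} is not stable; call enable_experimental({name!r}) "
            "to opt in"
        )
```

Stable features pass `require` unconditionally; experimental ones fail loudly until the user has opted in, which is the "clear, hard line" being described.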
E: Do you think that would solve the problem? One issue I was going to bring up is that right now it appears that all implementations, all back ends, are equally mature, which is not the case. So is this carving out not only APIs but also implementations? Say OpenCL, CUDA, and x86 are all stable, but everything else is behind a feature flag, to indicate that WASM support might not be as great, or Vulkan, or Metal.
D: We ship stable releases, and the stable releases will warn people about this. I would wager, and we don't have everyone here to talk about this, but I would wager that we have effectively a bimodal user distribution. There are some people who try to use the releases because they want the stable product, and then there are the people like all of us, who are nightly users and don't care; we know everything's going to break every week, and we've crossed that threshold and bought into things breaking all the time.
C: And as a developer, too, the bar is some arbitrary thing that we have all come to know as we've been using the code: what's stable and what's not. And that's not really written down anywhere. So what I really like about feature flags is that they're a very explicit thing.
B
Another
thing
before
1.0
that
might
be
good
is
to
like
review
apis
before
they
become
stable
and
just
make
sure
that,
like
they
make
sense
and
like
get
rid
of
overloaded
terms
like
today,
I
was
looking
at
making
a
pass
like
transform
function
and
there's
ctx
in
there
and
that's
the
build
context,
which
is
completely
different
from
like
the
hardware
context
that
you
like
build
stuff
on.
And
it's.
D: Doing API auditing, I think, is really important. For example, I really want to delete build, because I just don't think it makes any sense; it's so vague, and it doesn't let us standardize. If we're going to have an AOT and a VM flow, we should have all the steps in every flow be identically named, with similarly shaped APIs. It's okay if one has extra arguments or whatever, but I think we really want to standardize around a compile function:
D
Thing
goes
in
with
some
compiler
configuration,
shared
module
or
whatever
android's
talked
about
yesterday.
Maybe,
like
a
you
know,
whatever
we're
going
to
call
the
new
module,
metadata
module
or
whatever
comes
out
and
then
you
know,
we
have
a
runtime
flow
which
takes
one
of
those
and
starts
executing,
and
then
we
have
that
for
the
vm
or
aot.
D
Maybe
it's
just
a
compiler
api
and
then
we
can
invoke
it
and
run,
but
the
api
is
for
the
same
flows
or
not
do
not
contain
the
same
set
of
steps
like
there
is
a
really
good
comment
from
terry
recently,
where
he's
like.
Oh,
this
is
so
ergonomic.
I
didn't
expect
it
to
work,
which
is
not
a
not
a
good
endorsement
of
you
know
of
the
the
other
api
surfaces.
D
If
you
feel
like
you
know,
it's
not
going
to
work,
and
so
I
think,
like
managing
expectations
is
something
that
we
want
to
do
there
were
to
take.
You
know,
I
think,
a
lot
of
the
like
language
designer
type
people
are
always
talk
about
this,
but
like
that
consistency
is
really
important
for
human
beings,
like.
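The uniform flow being argued for might be sketched like this. The signatures are purely hypothetical, meant only to illustrate identically named steps shared across executors, not TVM's current API.

```python
from dataclasses import dataclass

@dataclass
class CompilerConfig:
    target: str
    executor: str  # "graph", "vm", or "aot": same entry points for all

def compile_module(ir_module, config):
    """One front door for every flow: IR in, runnable artifact out.

    A real implementation would dispatch to graph/VM/AOT backends; the
    point is that the step names stay identical across flows.
    """
    return {"code": f"<{config.executor} build of {ir_module} for {config.target}>"}

def run_module(artifact, inputs):
    """Identically named execution step, whatever the executor was."""
    return f"ran {artifact['code']} on {inputs}"
```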
E: I think you split that conversation. First, as a community: what do we want from TVM 1.0, and when do we want it? Then we put that up as the milestone, and then we divvy it up and divide and conquer as a community.
E
I
don't
know
I
I
don't
think
we
can
do
it
any
other
way.
I
guess
this
one
like
we'll
go
back
internally
at
octoml
and
figure
out
which
parts
of
this
we
can
do
ourselves,
but
as
the
community,
I
think
we
have
to
take
it
kind
of
you
know,
definition,
declare
declaration
first
and
then
kind
of
crowdsourcing
implementation.
Second,
I
don't
know
curious
if
there's
other
thoughts.
D
Yeah,
I
would
assume
that
we'll
have
to
build
little
working
groups.
I
mean
like,
for
example,
api
auditing.
I
would
assume
a
lot
of
people
will
be
opinionated
about,
but
very
few
people
will
want
to
do
any
work
on
just
given,
like
my
historical
priors
or
error
messages,
so
I
think
the
only
way
that
that
stuff's
going
to
happen
is
if,
if
after
we've
set
the
milestone
and
agreed
upon
what
we
want
to
do-
and
I
kind
of
had
the
meta
discussion-
that
we
form
little
working
groups
and
go,
do
the
work
together.
D
Like
you
know,
I
know.
Other
people
in
the
community
who
aren't
here
today
are
interested
in
parts
of
this,
and
so
I
think,
coming
with
a
once.
We
like,
as
a
community,
decided
on
a
plan
and
like
trying
to
drive.
You
know
little
groups
of
people
to
go.
Make
some
of
this
stuff
happen.
I
think
really
to
me
also.
We
need
to
figure
out
the
ordering
is
really
important
so
to
me
like
adding
future
flags
or
things
like
the
ci
changes
and
like
requirements
and
stuff
txt.
D
Those
things
are
like
need
to
happen
first,
so
that
we
can
make
the
other
changes
happen,
because
I
think
the
ordering
is
very
important,
like
it's
really
hard
to
say,
stabilize
the
apis
if
we're
still
like
working
on
the
release
process
or
like
the
the
like
core
features
needed
to
allow
us
to
do.
Stabilization
like
when
I
was
working
on
rust,
like
there
was
a
the
way
that
we
had
done.
D
It
was
there's
a
huge
push
to
add
all
those
like
stabilization
markers
that
people
use
now
in
the
language
to
mark
apis
like
they
have
like
a
stable
sense
attribute,
and
they
spent
you
know
a
couple
months
doing
all
that
work
beforehand
and
then
then
they
did
the
great
purge
where
a
lot
of
stuff
was
actually
destabilized
and
jettisoned
from
the
standard
library,
and
I
I
feel
like
we
might
have
some
in
in
this.
We
might
need
to
do
that
where
we
like.
First,
we
add
this.
D
These
features,
we
add
all
the
requirements
and
then
we
go
and
we
shrink
the
requirements
txt.
We
eject
libraries
from
the
core,
we
eject
features
from
the
core,
and
then
we
work
to
stabilize
them
at
another
time
or
past
1.0
or
something
because
I
think
trying
to
stabilize
the
entire
api
surface
is
going
to
be,
is
going
to
be
too
time-consuming.
Probably.
D: Then we get the people who care about whichever sub-issues they want to work on for the release, get them together on some kind of cadence, like we've been doing with the Amazon folks, to meet about the stuff we're working on, and push it that way.
C
I
think
also
one
thing
too,
that
would
be
helpful
in
sort
of
just
even
driving
these
working
groups
would
be
something
more
like
more
of
an
rfc.
I
guess
around
all
of
this,
like
a
road
map
to
to
1.0
kind
of
a
thing,
and
I'm
sorry,
maybe
I'm
saying
this
and
I
haven't
read
something,
but
I'm
not
sure
if
something
like
that
exists
right
now.
No.
C
To
just
like
write
that
out,
basically
and
especially
a
brief
summary
of
like
what
are
feature
flags
and
and
kind
of
what's
the
scope
like.
Is
it
simple,
splash
python,
that
kind
of
a
thing
just
you
know
like
a
one
paragraph
about
like
what
what
is
it
we
actually
want
to
do,
and
then
you
know,
hopefully
that's
consumable
and
easy
to
read
so.
D
Yeah,
I
think
the
idea
would
be
like
to
take
all
of
this
and
synthesize
it
to
like
an
action
plan
and
circulate
that
as
like
here's,
the
meta
like
high
level
process,
we
think
we
need
to
take
to
1.0
like
like
I'm
just
spitballing
here,
but
it
would
be,
like
you
know,
stabilize
dependencies
stabilize
feature
flags
introduce
the
use
of
feature
flags
audit.
You
know
areas
abcdefg
et
cetera.
With
these
people
running
the
audits
agree
on
what's
stable,
not
stable,
you
know
vote
refactor.
D
You
know
like
that
kind
of
state
like
process,
I
think,
would
be
really
useful
and
that's
why
I
was
saying
we
ended
up
with
a
bridge
release
in
in
rust,
because
yeah.
E
D
The
process
it
was
discovered
that,
like
you,
couldn't
do
x
with
the
way
the
standard
io
library
was
written,
and
so
they
literally
just
deleted
it
and
rewrote
it,
and
it
was
like
a
three-month
setback
or
something.
Hopefully
we
don't
have
many
of
those.
But
we
should
be
aware
that
that
might
be
something
that
we
want
to
do
in
the
process
of
doing
this,
like,
for
example,
or
like
maybe
auto
tir,
takes
too
long
to
land,
and
we
want
to
stabilize
that
and
we
need
more
time
to
stabilize
it
and.
C
Yeah-
and
I
think
that
also
you
know
kind
of
beginning
to
move
in
that
direction-
I
guess
and
and
do
some
of
this
sort
of-
I
guess
starting
to
do
something
like
the
surveying
work
for
for
a
lot
of
these
things
like
what
what
feature
flags
might
we
want
to
have,
and
that
kind
of
thing
putting
together
some
proposals
for
that
would
kind
of
help
generate
interest.
I
guess-
and
these
kinds
of
things
so.
E
And
and
the
reason
she
actually
isn't
here
is
he
had
a
conflict
unfortunately
today,
but
the
I
I
do
think
it's
we
do
want
to
have
something
up
for
1.0,
even
though
0.8
is
the
next
party.
I
know
tianji
was
a
little
bit
saying.
Oh,
let's
do
0.8
first
and
but
I
think
you
know
towards
your
point
or
everyone's
points,
that
we
want
to
be
able
to
get
some
features
in
0.8,
for
you
know,
1.0
and
think
about
what
we
need
to
get
in
0.8.
If
1.0
is
more
stabilization.
D
It's
actually
just
a
shift
in
development
style
too.
That
is
going
to
be
harder
to
digest.
Maybe
where
it's
like,
when
you
move
to
the
stability
world,
you
actually
have
to
plan
like
much
further
out
and
pipeline
the
planning,
because
it
does
take
a
long
time,
and
so,
if,
like
we
want
a
release
on
the
docket
by
october
or
something,
then
we
probably
need
to
be
thinking
about
it
like
today,
in
my
mind,
because
there's
yeah
there's
just
a
lot
of
pipelining.
D
That
has
to
happen,
I
think,
and
and
especially
if
we're
doing
active
development,
I
mean
that
that's
like
you
know
again,
just
because
I
was
there
for
us
as
they
stabilized
like
that
was
like
the
whole
conversation
that
entire
year
was
how
to
go
from
a
project
where
you
could
change
literally
anything
at
any
time,
with
very
little
review
to
something
that
had
a
stable
release
process
and
yeah.
Just
it's
going
to
change
the
way
that
we
like
before.
We
can
just
throw
up
this
sort
of
feature
list
and
then
ship
it.
C: Oh, go ahead, Andrew. Oh, go ahead. I was going to say, I think one thing that's missing is that I would like to see a bit of a longitudinal roadmap towards 1.0 with an eye toward release stability, not necessarily features. We've talked a lot about features, but I guess maybe these are features in their own right.
C
What's
the
plan
for
kind
of
all
of
these
nice
properties
of
a
1.0,
I
guess
tlc
pack
wheel
outside
of
just
what
features
we
have.
So
you
know
how
are
we
going
to
make
sure
that
we
have
the
right
dependencies
listed
as
requirements?
What
are
our
packet
like?
What
are
the
sort
of
projects
we
need
to
undertake
to
get
the
packaging
story
kind
of
solid?
C
What
you
know
additional
things
we're
gonna
do
like?
Are
we
gonna
work
on
the
logging
subsystem
to
make
it
log
to
whatever
the
front
end
logger
does
and
in
the
runtime?
Are
we
gonna,
add
feature
flags
and
and
kind
of
what
all
right
we'll
be
expecting
rfc
there?
C: It would be nice just to organize everything we think we need to do as projects for a 1.0. That might pay off in its own right: if we post an RFC with a roadmap like that, we might find some new projects to work on from the community as well.
D
Think
the
idea
is
like
just
for
sake
of
like
initialization
or
whatever.
I
think
the
best
thing
to
do
is
actually
try
and
like
write
down
like
a
version.
What
andrew's
talking
about
sort
of
like
more
of
a
concrete
road
map
versus
notes-
and
I
think
we
push
that
out
and
then
try
to
get
feedback
on
that.
D: I don't know whether we go directly to an RFC or more of a discussion thread, but I think we should say: look, a lot of us are interested in stability this year.
D
Here
are
the
things
we
think
we
need
to
do
for
stability.
Like
here's
our
conversation,
here's
our
road
map,
because
I
would
imagine
that
people
will
come
out
of
the
woodwork
in
terms
of
there
are
a
lot
of
people
who
want
stability
features,
and
so
I
I
think
there
will
be
a
lot
of
people
who
show
up,
or
at
least
like
out
of
our
core
users.
So
I
actually.
A
I
would,
I
would
like
us
to
define
what
we
mean
by
stability,
because
stability
can
mean
very
different
things
to
different
people
like
like.
A
Is
it
stability
and
apis
yeah?
I
mean,
I
think
we
mean,
or
you
know,
or
in
the
in
the
code
generated
or
or
just
in
the
runtime
stability
like
like,
like
like
each
of
these
means
different
things,
and
I
think
it's
worthwhile
to
to
to
really
state
explicitly
what
we
mean
by
by
that
stability.
D
Complaint
is
like
people
want
to
be
able
to
correlate
failures
of
the
software
with
what
went
wrong
and
if
the
apis
and
the
signatures
and
things
change,
often
it's
very
hard
to
do
that.
The
messages
aren't
clear
and
tagged
it's
hard
to
do
that
if
the
api
location
moves
a
lot,
it's
hard
for
me
to
develop
code
against
tvm,
because
things
are
constantly
changing.
So
I
think
there's
tons
of
examples
like
this.
You
know
even
the
dependency
example
like
if
every
week
I
need
to
install
a
new
package
and
there's
no
check
for
it.
C
Would
like
maybe
I
agree
that,
like
this
isn't
really
like
you
can
pick
and
choose.
You
can
argue
that
all
of
those
are
part
of
stability,
but
I
would
argue
that
we're
talking
we're
thinking
about
a
one
point
release
the
point
of
one
point:
releases,
api
stability
and
wrapped
up
in
that
is
like
what
versions
of
packages
do
we
depend
on,
and
that
does
affect
our
api
stability
because
it
affects
how
the
apis
perform
like
what
like
you,
can
give
this
input.
Will
it
work?
C
Basically that's part of an API. But for the purposes of this roadmap, I think the point is that 1.0 isn't the end of the road; we're going to be doing more development on this. So the question is: what are the features we need to add to TVM that let us continue developing, but provide this API stability that we're talking about?
C
I think that if we go down the road of trying to think about how we stabilize more of TVM, like the code generation and things like that, it might be very easy to go down a rabbit hole, and you might be thinking about that for a long, long time. It's not that I don't think that's a worthwhile conversation.
C
But I don't know that it really impacts our packaging very much, because the easiest way to have reproducible output, assuming we think our algorithms are fairly deterministic, is to do a better job than we're ever going to be able to do in capturing the system dependencies. We're not going to be able to put every single C library dependency in a pip requirements specification. So if you really want actually stable output from TVM, even today, our current gold standard is Docker containers. Along that line, I would at least defer output stability to a different RFC, because I don't know that it impacts this that much.
D
To broaden a little bit: for me, one of the goals of 1.0, in conjunction with the CI efforts and so on, would also be creating some regression stability. Given the amount of time everyone, both outside OctoML and inside OctoML, has asked for or complained about nightly regressions or tracking information, that should be something we push in the 1.0 at some level for the end user.
D
How do people know if they should upgrade? Like, Carter, the problem is: how do we advertise to people that it's worth upgrading? I think that's actually an interesting question, because when you get outside the core users like OctoML or Amazon, a lot of people pick versions and stay on them for a long time. I saw someone yesterday who pasted a piece of code where the line numbers are not right anymore, which means they're on some wildly different version and that code hasn't changed in a while. I think we will find out that there are a lot of quiet users who are pinned, and if we don't communicate quantitative improvements, it will be hard to argue for people to upgrade.
A
This is standard across every open source project at some point, especially once you start making releases. Which is why even stating things like how long we will offer support is important, because for policy reasons, or for any number of reasons, people are going to want to use older versions; maybe it's the one that was security audited, and so it's the one that they can use in their environment. Yeah.
A
And this is important; it's actually an important aspect of how a project grows and matures and gets wider adoption.
D
Yeah, another good example, or a different kind of example, is Rails, the web framework. When they started to make these big 3.0, 4.0, 5.0 releases, a lot of the communication about upgrading, because it was a huge lift, was about performance and stability and security. But a lot of that was quantifiable, because they had big applications to use for their regression testing.
D
They could talk about before-and-after performance, and actually that's even simpler for us, because a web app is way more complicated than compiling a neural network in some ways. It's much easier for us to take a neural net and run it on two versions of TVM than it is to take a 500,000-line web app and port it. So I think we should really be able to communicate at least some kind of baseline information, and I know we're talking about it.
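The before-and-after comparison described here can be sketched as a small script. This is purely illustrative: it assumes you have already measured per-model latencies under two TVM versions, and the function name and tolerance are made up for the example, not part of any TVM API.

```python
# Illustrative sketch: given per-model latencies (in ms) measured under
# two versions of TVM, report which models changed beyond a tolerance.
# All names here are hypothetical.

def compare_latencies(old, new, tolerance=0.05):
    """Return {model: relative_change} for models present in both runs
    whose latency changed by more than `tolerance` (a fraction)."""
    changes = {}
    for model in old.keys() & new.keys():
        rel = (new[model] - old[model]) / old[model]
        if abs(rel) > tolerance:
            changes[model] = rel
    return changes

# Example runs: resnet50 regressed 25%, bert improved 20%,
# mobilenet is within tolerance and is not reported.
v07 = {"resnet50": 12.0, "mobilenet": 3.0, "bert": 40.0}
v08 = {"resnet50": 15.0, "mobilenet": 3.05, "bert": 32.0}
print(compare_latencies(v07, v08))
```

The same shape of comparison is what a nightly regression tracker would publish, just automated across many models and targets.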
D
To clarify, Jason: the work that we need to do for 1.0 is being roadmapped into 0.8? Because I think we've been talking about these as two separate epochs, but I think it's now clear after today's conversation, and maybe it was anyways, that there's a lot of prior work that needs to be done.
E
And then I think the second thing is: we want 0.8 to be a time-based release, but 1.0 probably not to be, and this is part of the broader planning, I think.
D
"We should keep doing it this way" is a bad argument to make for anything in the world, I think, but I do think we should consider adapting or iterating on the release process, because to me a stable release is a much different beast than an unstable, feature-based release, and a lot of projects I've seen migrate to new release processes in their stability period. Yeah, that'd be good to put in the RFC, the roadmap, I mean.
C
I also think that once you release a stable thing, it becomes a much more attractive runtime dependency, and so I think people might start to speak up more.
C
Right, I mean, I think that's a function of how many tests we can manage to write that encapsulate end-user use cases. Okay.
A
So I'm going to write this down as a concrete next step: a proposal for the release process. We only have a minute left, and we have other meetings that are happening at ten o'clock.
E
I would post these notes and point people to the video recording that we'll post of this, and then we can either have another round of discussion, or post an RFC inclusive of this and then have another round of discussion. Yeah, get more feedback.
A
Async or sync, that sounds great. All righty, well, we're at time. If there are any other parting thoughts that anyone has, we can make them quickly. Otherwise, thank you everyone for such a lively discussion.