From YouTube: Apache TVM - µTVM Community Meeting - May 26, 2021
Description
* Introductions
* Announcements / News
* Model Library Format PR landed
* Unified Static Memory Planning RFC
* New RFC Process Approved
* µTVM Roadmap - Andrew
* TVMC update - Leandro
* Update from Arm / Memory planning - Ramana / Manupa
A
We thought we'd go ahead and try to do something off on the side. Given the velocity of development, the pull requests, and the discussion in general, it seemed wise to try something on the side that would be dedicated wholly to microTVM. This is an experiment, so if we need to tune it, we certainly will, as Andrew rightly brought up at the top of the call while people were still joining.
A
But before we started the recording: we do want to be sensitive to time-zone issues. We think there is probably a need, to be fair to the people over in Asia or India, to have a meeting where those of us in Europe or the U.S. share the pain and hold something a little off-hours for us, but advantageous for people in Japan or China.
A
So we do have an agenda document; let me grab the link and put it into the chat for this particular meeting. I won't generally broadcast it verbally; it will be screen-shared during the course of the meeting. This is sort of our agenda today. What I'm going to try to do with each meeting going forward is lock down an agenda about a week before the meeting transpires.
A
So if you want to be on the agenda or anything like that, please post on the forum or get hold of me directly; talk to me one-on-one, or post something that says, "hey, I'd like to have some time," and we'll get things solid and carve out some time for you.
A
So, while we have an agenda put together, does anybody have any additions or changes they would like to see, something that's not on the agenda that they'd like to bring forward to the group?
A
Okay, hearing silence; that's okay! If you think of something during the course of the meeting, we'll have an any-other-business phase at the very end. Hello, welcome!
A
So let's go ahead and get started. From the announcements and news perspective, one of the things that was pretty exciting this week, which I'd like to highlight, and there's a link in the agenda that makes note of this, is that the Model Library Format pull request landed. Yay, congratulations! Thanks to everybody who contributed code to that or was part of the RFC and all that good stuff. It's great to see things move forward.
B
So there's been a ton of RFCs coming out, and additional follow-on work on the AOT, and it all seems like it's going in a good direction, which is great. I'm sure we'll get an update on that, and it sounds like we're starting to consider memory planning and things related to that. Overall, a great direction so far.
A
Okay, yeah. If you want to, could you put a link to that post in the news section? That'd be awesome. Thank you.
D
Yeah, and speaking of RFCs: just yesterday the steering committee approved the new RFC process, so we're going to start posting RFCs to a GitHub repository now, so that we have a central place to locate them, and then we'll have tracking issues inside the TVM repository to track the work that happens on those. I will post a link to that repository, so you can take a look and see the new process.
A
Good thing to highlight, Christy, yeah. Thank you for that.
E
Let me add some comments on your announcement about Model Library Format, for those who might not be following it closely. We are basically trying to get TVMC to work with micro targets, with microTVM, and the pull request Tom mentioned is basically the first step to get to that.
E
And the next step will be to work out the Project API, which Andrew has a prototype of and which we will probably be working on in the following days. So, just to give some context for those not following too closely: TVMC plus micro targets, that's where that pull request belongs. Feel free to reach out to me and Leandro.
A
That pull request sounds good; thanks for sharing that. Okay, in the interest of time, it's about ten after and we do have some things on the agenda, so let's keep moving forward. The very first thing we have on the agenda is Andrew, and he's got some slides to go through the microTVM roadmap and other things he's got on his mind. So, Andrew, please take it away.
B
Hey, great, yeah. I didn't want to drown us all in slides, so I basically put together something pretty similar to last meeting's deck, but I also wanted to chat about a few things. I'm going to quickly go over a project status update; let me just do this.
B
Here we go. Oops, hold on; I don't know what happened there. Okay. I just wanted to do a quick status update, and then I have some slides. Arm folks, if you want to cover the AOT part of this, let me know and I'm happy to let you chat about that. I just wanted to go over everything quickly, to bring everyone up to speed.
B
I won't spend very much time on this, because I think a lot of people are familiar with it, but the way we organize our project schedule is through roadmaps that we post on our Discuss forum, and here is the most recent one, the microTVM M2 roadmap. Generally speaking, so far we've been writing these so that the work spans around six months.
B
I think this one might wind up taking a little bit longer, but at the end of these cycles we'll come back and think about themes or areas that I think the community is interested in focusing on. This is a community-driven effort; it's not anything I'm dictating or anything like that. It's just sort of trying to capture
B
what's on everyone's mind and which projects we're working on. So here's the list of projects that we wrote out for this quarter; and actually, it might be good to quickly chat through the goals at the beginning of this roadmap.
B
We had a tool that you could use to run a model standalone on a device, and there were a couple of different deficiencies in that tool. One is that we had only really demonstrated it on one device; we wanted to support more runtime environments.
B
Another deficiency is that it's kind of hard to use, and this is still somewhat true: you have to write a Python script, basically, to drive execution, and it's hard to use it as a tool in your development flow. The third limitation is that it's still tricky to ask basic questions about performance,
B
and in particular about the resource demands of the model. So this cycle we wanted to enable these goals, and toward that end we have these projects; we list them out here, and you can read them on your own time, on your own computer, which will be much easier than me talking through them. On where things are going: the first project was this library generator project, and that's what we're referring to as Model Library Format; I'm putting a green check mark here.
B
We've landed the initial model-level support there. I have a follow-on PR under review to export operator-level, or just standalone, operators in Model Library Format; the goal is to allow Model Library Format to carry kernels that are being auto-tuned, and I'll talk more about that later. The next is auto-tuning support, and that was actually demonstrated a few months ago.
B
I've got a PR for that out, and the sticking point is all the interplay between Model Library Format and this project-level API that I'll discuss later. I'm leaving that PR unmerged until we finish those two things, because it dragged some extra dependencies into the TVM code base that are just going to get deleted later.
B
Next, we have this ahead-of-time runtime, which you're going to hear a lot more about, and we'll talk about that a little more later. We wanted to broaden support for other architectures to demonstrate that we aren't confined to just one architecture, so we've done some work toward RISC-V ISA support as a separate architecture; we basically just have something running in QEMU.
B
In terms of proper support for that, we don't really have schedules that take advantage of any intrinsics yet, and there's some community work, I believe coming from THU,
B
if I said that correctly, to check in schedules for that. And we have a comprehensive memory planner, which I think now deserves a clock symbol to indicate it's in progress. The idea here is that we currently do memory planning at the graph level for the tensors that serve to hold activations and outputs, but not for scratch-pad memory at the graph level, nor do we do the two combined; and actually, Manupa just mentioned that he's got an RFC on this.
B
The project-level API is an effort to drive build tools, like Zephyr's west build tool or the Arduino build tool, at a project level, and we do this basically to support auto-tuning in arbitrary embedded frameworks. This contrasts with what we do right now; it's an improvement that allows you to implement a project generator.
B
In other words: take a Model Library Format artifact from TVMC and produce, on disk, a firmware project that you could compile and run on a device.
B
Next is the TVMC integration, which basically adds support to the TVM command-line tool to do this, and there's been some work on that, as we discussed. The rest of these are more forward-looking things that we'll be able to get to as the previous projects finish.
B
Basically, things like pinning tensors into specific memory addresses, estimating memory footprint, exploring a new scheduler we're working on called the auto-scheduler, which has the potential to make it much easier to define accelerated schedules in TVM, and, lastly, support for multi-core CPUs and accelerator-based inference, which is sort of handling parallelism in the microTVM runtime. With that, I was going to discuss what the ahead-of-time runtime is. Does anyone feel like
B
I need to do that, or should we skip over it? I wanted to go through it for people who haven't necessarily been following the conversation, and I certainly don't want to leave anyone out, but I also realize we have a lot to discuss and I think we're going to have a presentation about AOT, so maybe I'll skip over this for now, if everyone's okay with that.
B
If anyone wants an overview of it, raise your hand; otherwise maybe I'll just briefly say that TVM's current strategy is this: it takes a Relay program and outputs three different pieces. It segments the program into operators; in this example, that's a conv2d followed by a max_pool2d. It then outputs an operator graph that explains to a runtime component how to fuse those together, as well as parameters that have been modified in the compilation process, hopefully simplified or made smaller. A more complex runtime then needs to consume all of these pieces and schedule them on the target devices, and AOT is basically a project to change that.
B
So I won't go over too much of this, but basically the idea is that this operator graph in the compilation pipeline is fed into sort of a post-compilation pass that generates what we call a TIR function, and this TIR function is our pre-codegen intermediate representation; that allows us to basically call the different functions.
B
Here's sort of an example function; you can see the different parts of it. You can see that we maintain a call stack for the TVM-level arguments, allocate intermediate tensors for holding activations, set up the call stack, and call the operator functions. This is all work that Manupa and Giuseppe and their Arm colleagues are working on.
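To make that concrete, here is a minimal, hypothetical C sketch of the shape of function just described: a generated entry point that allocates an intermediate activation buffer, then calls the operator functions in order with the arguments they need. All names, sizes, and signatures here are illustrative assumptions, not actual TVM codegen output.

    #include <stdint.h>

    /* Operator functions emitted by codegen (hypothetical names/signatures). */
    extern int32_t fused_conv2d(void* input, void* weights, void* output);
    extern int32_t fused_max_pool2d(void* input, void* output);

    /* Generated top-level entry point that mimics the graph executor's run. */
    int32_t tvm_run_model(void* input, void* weights, void* output) {
      /* Intermediate tensor holding the activation passed between operators. */
      static uint8_t activation[4096];
      int32_t status = fused_conv2d(input, weights, activation);
      if (status != 0) {
        return status;
      }
      return fused_max_pool2d(activation, output);
    }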
B
I don't want to speak too much to it, because I think they're the real experts here and they've been doing the implementation. Lastly, I don't know if you're going to talk more about the different project statuses, or whether anyone from Arm wants to say if it's worth me going through all of this; but hearing no complaints, I'll just quickly go through
B
where I think we are as far as the AOT sub-projects. The first couple of periods we had were to make a top-level TIR function to essentially mimic the graph executor's run. As I mentioned earlier, there are those three components that get fed into a runtime component, which we usually call the graph executor, and the first part of AOT
B
is this core piece where we define a code-generated function that calls all of the operator pieces in order, passing the arguments needed.
B
There are some follow-on projects in progress right now. There are projects to reduce stack usage: as you might have noticed in that earlier code sample, the intermediate tensor metadata tends to be allocated on the stack, and this does blow up the stack.
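As a hedged illustration of that stack-usage point (the exact generated code differs), each call through the packed calling convention builds per-tensor descriptors such as DLTensor, the metadata struct from DLPack, in local storage:

    #include <stddef.h>
    #include <stdint.h>
    #include <dlpack/dlpack.h>

    /* Hypothetical helper showing the per-call cost: each operator call
       materializes tensor descriptors and argument arrays on the stack. */
    static void describe_tensor(void* data, int64_t* shape, int32_t ndim) {
      DLTensor t;          /* tens of bytes of metadata per tensor */
      t.data = data;
      t.ndim = ndim;
      t.shape = shape;
      t.strides = NULL;
      t.byte_offset = 0;
      /* dtype and device fields are filled in similarly; with many operators
         and arguments these descriptors add up, which is what the
         stack-reduction and unpacked C API projects aim to avoid. */
      (void)t;
    }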
B
There are projects to create a C API which requires less metadata, so that also helps with the stack. And there are projects ongoing now to integrate with memory planning,
B
around the concept of memory pools, which is a way of saying that we'll tell TVM it's going to have a certain number of contiguous memory regions and we'll place memory in there. I'll let Manupa speak more about this, because he just released his proposal; I'll finish up quickly here and then turn it over to others.
B
Lastly, I have a survey of the outstanding RFCs, and I should add Manupa's memory-planning RFC here; these roughly map to the projects I went over, and they're on the slides here if you want links to them. Okay, so that was my update. I didn't want to take too much of the time, but just to give everyone a brief overview and hopefully bring people onto the same page.
B
Does anyone want to bring up anything at the project level or roadmap level? Then I'll turn it over to, I think, next on the agenda, Manupa and Leandro. Sorry, actually, let's talk about TVMC first. Yes.
F
My slide is basically just this, so no problem. Yeah.
F
Cool, so yeah, I'm here to talk a little bit about TVMC. I appreciate that we have quite a lot of people on the call, and some of you might not be familiar with what TVMC is; on your own time, if you want, you can click on that link, which will give you the introductory tutorial about TVMC, a command line for TVM.
F
Basically, it allows you to accomplish tasks that you can do with TVM, but instead of using the C++ API or the Python API, you'd be using a command line. That's something we have been implementing and contributing to since last year. We have basic support for compilation of models: you generate packages and you can run them; you can also access some tuning features; and we are constantly improving it and receiving PRs around that area.
F
Two recent, relevant improvements made by the community are the two PRs I listed there.
F
The top one takes the API that TVMC used to offer internally to TVM and makes it available, so that new users who want a Python API can have that simplified API and don't need to bother with much of the internal stuff; they can just access compilation features, such as generating packaged models for targets, as well as tuning and running models, and so on. This was contributed by, I think, Josh from OctoML.
F
It was a good improvement in terms of explaining the API to new users, so that was something relevant contributed recently.
F
The second one is Gustavo's patch; Gustavo is in this meeting as well, from Linaro. Basically, it integrates the Model Library Format, which is a way to distribute packages that contain source, aimed at this point mostly at micro targets; you get that package and you can integrate it with your project. Everything is interlinked with the talks of the other people so far in this call. That patch, the second one in the list, 8086, integrates with TVMC so that you can generate those packages using the command line.
F
So if you are interested and didn't know about TVMC, have a look; we are very interested in collaborating with other people, so please raise any suggestions, either as issues or as topics on the Discuss forum. If you're interested in getting in touch, I will be in this call, and me and the rest of the team will be updating about TVMC developments regularly. Thank you; that's me.
B
Great, thanks, Leandro. One other thing I'll add is that TVMC is a tool that's being developed broadly to support all kinds of TVM usage, micro and non-micro, and I think the micro use case has recently been pushed forward quite a bit by Gustavo with the integration of this Model Library Format, which is how we tend to export models to microcontroller firmware projects. But there's more to come as well.
B
I think there are still a few more pieces left there; Gustavo will correct me if I'm wrong, but I think there are a few more pieces along the direction of building, flashing, and doing host-driven execution of models on devices. So just a few more directions are coming.
E
Yeah, one of the ideas is that we will have a new context called micro, which will be integrated with the Project API that Andrew mentioned, and that will have new commands under the micro context which will allow one to build and flash the firmware and run on micro targets, that kind of stuff.
B
Yeah, and you mean subcommands, like tvmc micro build, right? And the only reason we're putting this build-and-flash into TVMC, and of course you can do it on your own as well, is that for auto-tuning we have sort of an end-to-end workflow that involves doing this in an automated fashion. So that's some of the background there.
F
Yeah, I guess the other benefit of doing it this way is that you can integrate TVM as a tool, without bothering with all the internals, and you can integrate it with your build system and your other tooling in a more transparent way, right, yeah. The other thing I just wanted to briefly mention at the end is that we are also planning to do some work around error handling.
E
Cool. Andrew, I've actually got a question for you regarding the Project API. I believe that one of the premises for having the Project API is auto-tuning, but I think, if I got you correctly in our last chat, you said that we could do that with AOT as well.
E
So I'm trying to understand if that still holds: whether the Project API, on the graph runtime to be more exact, is still essential for auto-tuning.
B
Well, okay, yes, there are a couple of different motivations for using the Project API. The current way we build code right now is a fairly tight integration with some off-the-shelf RTOS, in this case Zephyr, but it could be others, like Arduino, if you wanted it to be in the future.
B
Zephyr is just the one that we mostly support today, and it's been really great so far. I think that having this kind of broad target support through an RTOS has been really helpful in terms of targeting new hardware, since the thing that TVM wants to do is ensure that it's generating instruction opcodes, rather than become too mired in the business of configuring
B
each different hardware target. Now, TVM at some point does have to actually be able to build and execute code, so that it can try out different schedules on hardware targets and decide the correct loop order to use; that process is called auto-tuning. And yeah, on the auto-tuning front:
B
currently what we do is build a series of libraries, link them together into a single firmware binary, and then drive a timing flow, typically over the UART, from the host. That said, there are a couple of other things you could do once you've got this.
B
TVM, actually before microTVM existed, already had this binary RPC protocol defined, and so, to support a natural extension of doing host-driven execution on microcontrollers, we just ported the RPC protocol to work on microcontrollers. That buys you a few different things:
B
you can either run auto-tuning, which involves running just one operator of a graph at once, or you can run the whole graph, either remotely or locally. What I might have been talking about with AOT was that you could drive the AOT over the RPC server. That's work
B
we haven't really gotten to yet, because we're considering more of a deployment use case more immediately, and I think the Project API is more concerned with, basically, what the build process is and how we talk to the target. So I could have gotten myself twisted around, or I'm not sure I said the right thing earlier; I'm just wondering if I was thinking of something I'm not thinking of right now.
G
So, hopefully some of what I'm about to say, which is essentially a rehash of what I said at the developer summit at the end of last year, is gearing toward that. Frankly, I think the Model Library Format is good, because it standardizes the output and it sort of tells embedded users how to pick out the artifacts that come out of TVM; but really, the artifacts that come out of TVM should not be bound to any RTOS in any way.
B
No, no, this is a good take. Although I've been talking a lot about this project, I don't want to give the impression that we expect everyone to be using TVM as a build rule; that's certainly not what we want to do. It's more about having different pieces. At some level, someone is going to have to do auto-tuning; whether they then publish the schedules and the TVM end user consumes the schedules,
B
that's a different story, basically. What I've found so far in working with TVM on microcontrollers is that, in the process of developing the schedules, it's helpful to have a tool that does things like host-driven execution, so that you can easily test different implementations of a model operator, for example. But yeah, as Ramana said, I think he definitely hit the nail on the head; it's not clear-cut.
B
We want to have a sort of menu of options for people who are coming to TVMC as end users, and certainly, taking a model and producing Model Library Format is probably sufficient for a good number of embedded developers.
G
That would be their entry point into it, and as they become more power users, and they want to get to eking more performance out of TVM on their targets, they would get to auto-tuning. So we have to do both, but we need to do them in a sensible order; yeah, we can do that. (That's right, yeah.) And that's sort of what the various approaches in the microTVM project, or the micro part of TVM, are trying to achieve.
G
microTVM isn't really a separate project; it's really TVM for microcontrollers! (That's right, yeah!) So I think we should remember to keep that umbrella term always in our messaging and communication.
B
Yeah, absolutely. And one thing that hit me the other day, too, is that we haven't really written a spec doc yet for Model Library Format, and I think that's probably something I should write very soon, because I was starting to make a few revisions and I was thinking I should probably write this down somewhere; we don't have this written down anywhere. So I'll publish a PR that includes a spec for Model Library Format soon.
B
So, with that, I don't want to take up all the time for memory planning, so should we move on to talk about that? Are there any other concerns along this line that anyone wants to bring up before we do?
G
So this is sort of a cut-down slide from my talk last year; I think Leandro's already covered most of it. Are you able to see my screen? Okay, yes. So Leandro has covered the things we're doing around TVMC and the way we're looking to push that along. We view TVMC as really the entry point for a lot of embedded developers to compile and create artifacts they can link into other embedded projects.
G
Getting everything to run absolutely right through the TVMC flow is probably going to be really difficult, because you've got varieties of toolchains, architectures, and compilers; it's not going to be straightforward.
G
So that's the story on TVMC. We're very interested in what we can do with respect to packaging and deployment, so we're very interested in making sure that any improvements that land in TVM mean we can get TLCPack-based packages and binary packages that are easily usable. The end goal for us is that people should be able to do a pip install of tlcpack and get what they need; they shouldn't have to jump through hoops to build things from source.
G
We really don't think the graph runtime is suitable for embedded applications; we've got to at least start with something less heavy than that, because code size fundamentally matters. Then we've got the whole thing around memory planning, because dynamic memory allocation and fragmentation of memory is actually a pretty big problem in embedded devices. And we are interested in supporting not just our sort of Cortex-M architecture:
G
we are also interested in putting a first-class Ethos-U port into TVM, which is something the team has been working on, so hopefully that RFC should start appearing pretty soon. We're working toward that, and we should have some of that. So a lot of the work with AOT and with memory planning,
G
while it starts off looking at Cortex-M, is all geared toward being able to put in Cortex-M support and have that, but also toward being able to support the combination of our Ethos-U accelerator and Cortex-M in the TVM framework. In terms of why we like microTVM and TVM: it's because we've got this chance of accessing quite a lot of frameworks through TVM, and we like the way we can work in the TVM community.
G
So that's really why we're working in this area. I think I've covered most of this, and that gives you a flavor. We've talked a lot about AOT in the last two meetups, so I think it's probably worth yielding the floor to Manupa for him to go through the memory-planning side of things, how we see that fitting in, and how that works.
G
I'm going to stop here, and Manupa can probably have a go after this.
H
G
So we see the CMSIS-NN optimized kernels as something that's really useful for performance. That would go through a BYOC route: it will fit with the AOT approach and will end up producing an external call to a CMSIS-NN function, and then you can just link to that library and it will all just work.
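A minimal sketch of that BYOC pattern, with hypothetical names (the real partitioner output and CMSIS-NN symbol names differ): the compiler emits a thin forwarding call to an external function, and the firmware links the optimized kernel library that provides it.

    #include <stdint.h>

    /* Provided by the hand-optimized kernel library at link time
       (hypothetical, simplified signature). */
    extern int32_t cmsis_nn_conv2d(const int8_t* input, int8_t* output);

    /* What a BYOC-offloaded operator reduces to in the generated code. */
    int32_t tvmgen_model_external_conv2d(const int8_t* in, int8_t* out) {
      return cmsis_nn_conv2d(in, out);
    }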
G
We think this is useful for getting initially good performance on Cortex-M devices for frameworks like TensorFlow Lite, where we'd be able to match both the accuracy and the performance that CMSIS-NN gives us. I think that's a good place to start with, but our ultimate aim over a period of time, as we can spend time and work with the community, is to help improve the native code generation of TVM as well.
H
G
I think there are enough people, enough papers, that show it performs better today than the auto-tuned kernels. I've seen quite a lot of the papers out there, but we haven't done any sort of comparison yet, because we are still in the process of building up AOT; so it is likely that the handwritten kernels will be better to begin with.
B
Yes, and it's also worth saying that there's the AutoTVM, kind of v1, approach right now, and then there's potentially this auto-scheduler approach that we're working on for v2, which allows you to describe hardware intrinsics and let TVM produce schedule sketches around them. My standpoint on this is that we want TVM to allow you to use the library that makes sense for you, whether that's an auto-tuned or auto-scheduled kernel or a hand-optimized kernel.
B
I think there are definitely places where each of those makes sense. Certainly the search process can take some time, so, depending on your project timeline
B
or your particular hardware configuration, you might want to choose one of those approaches to fit your situation; so I think it makes sense to have that option.
C
Thank you. Let me share my screen.
C
Right, can you see my screen? (We can, yeah.) So this is an RFC I just posted, as I said, literally one hour ago. We have been working inside Arm on AOT and having a lot of discussions with Andrew and others upstream as well, so we've been capturing thoughts from that. The RFC being published covers the requirements, what a producer needs, and I think it's widely accepted that memory optimization has to be more aggressive to fit an embedded target. So I skipped the background.
C
I'll just start with the motivation. The idea of this brief run-through of the RFC is to invite people to have a read of it and express your thoughts, so we can integrate them into the design and make it a collaborative effort. The main motivations we identified: currently in TVM, as Andrew mentioned before, there is the operator-level IR, which is Relay, and then what's inside the operator.
C
One thing is that this kind of bottlenecks what could really be achieved, and it can lead to local optimization points if you just optimize inside the operator and then do a subsequent pass, known as graph plan memory, afterwards to plan holistically. So those are the main motivations for this work.
C
In terms of the goals, what we're trying to achieve is that by the end of this work we would not be generating TVMBackendAllocWorkspace calls, which are kind of like mallocs. For microTVM and AOT, what we have done so far is wire in a stack allocator: we just increment and decrement a pointer, to keep it lightweight for the minute. But hopefully, by the end of this work, the stack allocator will not be needed.
C
One thing captured from the ongoing discussion, and from other references in the literature, is that there are different memory-planning algorithms with different trade-offs, and the graph topology and how operators are scheduled for different targets can determine which memory-planning algorithm is suitable; there can be many debates over compilation runtime, over how aggressive you want to be, or whether a greedy algorithm is good enough for you, things like that.
C
So one of the considerations in this particular design is that the algorithm should be easily changeable. The other one is pooling support, and we included constants as well: the idea is that the user is able to provide pools, or buffers, from the application itself for TVM to use, both to hold the constants, which should be populated before you hand them over, and as workspace buffers that TVM can use as a scratch pad. So those are the goals.
C
I will not go into the technical details, which are referred to in the RFC, but I will just show the example use cases that we think could be useful. I should also mention this has been based on top of the work done with Chris and Giuseppe, who have been pushing the AOT and refining the C runtime APIs; this is kind of the final piece of that work, to fit everything together.
C
This kind of reflects those discussions as well: we tried to keep alignment with the currently proposed runtime APIs for the C unpacked API. Going to the basic use case, I think this is something reasonable that you'd want to compile to: you're just given the model, you specify that you want to use the AOT executor, the output format is Model Library Format, and you just give target c.
C
In this case, I'm only showing the generated artifacts that get affected by this work, which would be the metadata module's ones: we have one generated C source file, and I think we need to introduce a header that could get generated corresponding to the models.
C
In this particular use case, TVM can generate the workspace buffer and the parameters buffer, the constant data that includes the compiler-generated constants, which are not exactly the same as the model's original weights and biases, because they have undergone certain optimizations.
C
There's also this entry point that wraps the main function. So these things get generated, and the user application would look something like this: you have external linkage to the compiled model, and these structures are the generated ones; you pass in the input, and the output space where you need the output to get populated, and then you run tvm execute with the model.
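A minimal sketch of that basic use case, assuming hypothetical generated names (the actual structs, header, and entry point are defined by the RFC and may differ):

    #include <stdint.h>
    #include "tvm_model.h"  /* assumed generated header */

    int main(void) {
      /* Input and output space supplied by the application. */
      static int8_t input[TVM_MODEL_INPUT_SIZE];
      static int8_t output[TVM_MODEL_OUTPUT_SIZE];

      /* Generated structs holding one pointer per model input/output. */
      struct tvm_model_inputs ins = { .input0 = input };
      struct tvm_model_outputs outs = { .output0 = output };

      /* In this basic use case TVM generated the workspace and parameter
         buffers itself, so nothing else needs wiring up. */
      return tvm_model_run(&ins, &outs);
    }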
C
So this is the simplest use case that we plan to enable. The next one is where users want to pin the workspace buffer, or maybe share them. In that case, we plan for the user to specify: hey, I need this workspace buffer; give a name to it, and also give the target association with it. That's important: it says this workspace buffer can be used by these targets. So that's how it should be.
C
Then, when you compile these two models, it will similarly generate the same artifacts, but, compared to the previous case, the workspace buffer will not be generated. Instead, there will be a new struct created with workspaces with that given name, sram, and similarly for model 2 there will be another one with a workspace with the given name sram. So the user application would look something like this.
C
You have linkage to the two models, and here one could calculate the maximum of their sizes offline; alternatively, you could use a malloc to figure out the max, if that is permitted and decided in the system. The idea is that you give a workspace buffer that is large enough for both models, and you pass a pointer to it into the workspace struct that was generated and use it in tvm execute.
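A hedged sketch of that shared-workspace use case (all identifiers hypothetical): the application sizes one buffer offline for the larger of the two models' "sram" workspaces and hands the same pointer to both, which is safe because the models execute sequentially.

    #include <stdint.h>
    #include "tvm_model1.h"  /* assumed generated headers exposing */
    #include "tvm_model2.h"  /* the required workspace sizes       */

    #define MAX(a, b) ((a) > (b) ? (a) : (b))

    /* One buffer, sized for the larger of the two "sram" workspaces. */
    static uint8_t sram[MAX(TVM_MODEL1_SRAM_WORKSPACE_SIZE,
                            TVM_MODEL2_SRAM_WORKSPACE_SIZE)];

    void run_both(struct tvm_model1_inputs* in1, struct tvm_model1_outputs* out1,
                  struct tvm_model2_inputs* in2, struct tvm_model2_outputs* out2) {
      struct tvm_model1_workspaces ws1 = { .sram = sram };
      struct tvm_model2_workspaces ws2 = { .sram = sram };
      tvm_model1_run(in1, out1, &ws1);  /* the models run one after the */
      tvm_model2_run(in2, out2, &ws2);  /* other, so reuse is safe      */
    }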
C
In this way, the models get executed sequentially but can share the workspace buffer; the cleaning of the workspace is outside of TVM. I think that was one of the concerns I've seen in the discussion so far, so we can factor that in. The other one is how you could pin different memories; this is the more hardcore example.
C
Here I have included the accelerator as well; as Ramana mentioned, we'll be putting out another RFC to say how we separate those out. In this case, we need the user to be able to say which workspace buffers can be used by which targets, so that association is mentioned like this; and we also think that, when there are multiple workspace buffers involved, one could give a hint:
C
I would like this one to be around this size, so we can factor that into the planning when there are multiple buffers to play with. It will also generate a similar structure as before; the difference is that now there are two workspace pointers to be populated and two parameter pointers to be populated.
C
The idea is that in the user application they can pin them to the correct sections; if they have an alignment requirement, that can also be specified. Those pointers can be passed into the structs, and it can execute the same way. So this is what we think the user experience could initially be; feel free to jump in, express your thoughts, and ask questions, so we can make it a better design together.
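Finally, a sketch of that pinning use case: the application places each buffer in the desired memory section and honors the alignment the compiler asked for. The section names and attribute syntax are toolchain-specific (GCC-style shown), and all identifiers are again illustrative, not the RFC's final API.

    #include <stdint.h>
    #include "tvm_model.h"  /* assumed generated header */

    /* Pin the fast workspace into tightly coupled memory and the constants
       into flash, with the alignment the compiler requires. */
    static uint8_t dtcm_workspace[TVM_MODEL_DTCM_WORKSPACE_SIZE]
        __attribute__((section(".dtcm"), aligned(16)));
    static const uint8_t flash_params[TVM_MODEL_FLASH_PARAMS_SIZE]
        __attribute__((section(".rodata.params"), aligned(16))) = {0};
        /* populated with the compiler-generated constants before deployment */

    int32_t run(struct tvm_model_inputs* ins, struct tvm_model_outputs* outs) {
      struct tvm_model_workspaces ws = { .dtcm = dtcm_workspace };
      struct tvm_model_params ps = { .flash = flash_params };
      return tvm_model_run(ins, outs, &ws, &ps);
    }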
B
I was curious whether you'd thought a little bit about the runtime side. One thing we do with the C++ runtime is that when we allocate memory for a particular accelerator, that memory comes with a device context,
B
I think, and so the copy operations are basically handled either with memcpy or with a device-specific API call; if you're copying something into a GPU, there's a GPU driver that handles the DMA copy. I was wondering about the micro side.
B
Obviously, the CPU typically has a bit more access to different memories, but, depending on the architecture, it may be necessary to use some peripheral to initiate memory copies, or otherwise maybe involve sending it over some serial bus like quad SPI or something like that. Maybe that's a little bit outlandish, but
B
there may not be a direct linkage between the accelerator memory and the CPU, is I guess what I was getting at, and so I was wondering if you've thought a little bit about how we might model that at this level, as far as emitting instructions.
C
I think that is a reasonable thing to be expressing here, and I think it goes in line with the cache_read scheduling primitive: if the accelerator requires such an explicit copy because the memory is not accessible, that primitive has to be inserted, and I think that is something that will not be captured in the planning, because it's an explicit copy we cannot plan for.
B
Okay, so basically, and you're right, this is more of a device-side, or I guess codegen-side, thing, rather than something specific to the memory planner; the memory planner is mostly just trying to make sure that things fit, and to figure out where each buffer should live. Yeah.
C
This shows a certain access pattern: there is system SRAM that is accessible by both the Cortex-M and the accelerator, and there are tightly coupled memories that are only accessed by the Cortex-M55. That's representative of what I've shown in this example: you might need more than one workspace, and you might also need to partition the constants as well, to have two parameter buffers that you can pin in different areas.
B
Cool. I think we've got two minutes left, and I think we will probably be discussing this for a little while; AOT has been one of these things we've been working on for the last few months now, and I'm sure we'll be discussing it for quite some time as we go forward, so there will certainly be more discussions.
B
One thing I wanted to say, too: there are a bunch of people here, and I know we've had involvement from others in the community as well, and I'd really like to encourage everyone to participate in the forum discussions. If there are questions about memory planning and things like that, or questions on any of this implementation, we'd love to have more feedback from others.
B
If there are concerns, or even just a "hey, this design looks suitable for us," or "unacceptable," it'd be great to have some feedback as we continue to develop. I don't want to make it seem like we're the only people in the room here, so we certainly welcome more feedback and more contributions as we move forward with all this.
D
And along those lines: if you have any questions, or you want to interact with people one-on-one and have a synchronous, unofficial conversation with anyone, we also have a new TVM Discord server. It's kind of like Slack, and there's a fairly active microTVM channel in there.
D
So if you have questions, or things that aren't necessarily appropriate for the Discuss forum, feel free to drop in there and talk to anyone, and I'll post a link to how you can sign up for that.
A
Great, thanks, Chris. Okay, everyone, we are at the top of the hour, so we're a bit out of time, but before we go I want to thank those who led discussions today: Manupa, Ramana, Andrew, and Leandro. Thanks again, and we'll see you next week; sounds like it's going to be memory planning, memory planning, memory planning for a little bit, but that's all right.