From YouTube: Apache TVM - µTVM Community Meeting, July 7 2021
A
All right, here we go. Welcome, everyone, to this July 7th edition of the community get-together for microTVM in particular. Welcome, and thanks for being here. As always, this is a community get-together; the Discuss forums are the definitive place for discussions. So while we may have all sorts of fruitful discussion and reach consensus here, it's what's out on the Discuss forum that ultimately counts.
A
We've got what looks to be three items on the agenda today, besides the usual introductions, getting started, and announcements. One of the items was added and I'm not sure who it came from, so I'm just going to ask: who wanted to discuss the hardware-in-the-loop testing?
B
I think this was me, like a week ago. And I'm glad you brought this up, because I had been thinking about Project API for the last couple of days, since we came back from the weekend. I think I put that on here last week because we'd had some discussions about this, and to follow up on this with... I don't know if Theo's here yet.
A
Not yet, but tell you what, I'll ping him just to see if he's around. He should be, yeah. He probably is.
B
It might be good to sync in person now that people are starting to think about things, and we're working on this at OctoML as well, so it'd be great to sync, even if briefly. We could potentially leave that for the last agenda item, if you want to give Theo a couple of minutes to join.
B
Let me discuss that first, briefly. The idea is this: one of the limitations of the TVM open-source CI, the one that all the PRs run against, is that we rely heavily on QEMU, on simulation or emulation of the underlying hardware, in order to validate our code. That can leave some parts of the code untested: all the hardware communication logic, and in particular the code that's used for exercising the actual schedules on device. Sometimes that works in simulation and emulation, but it's not perfect. So while it's going to be very difficult to add hardware-in-the-loop tests (and "hardware in the loop" just means that when we're running the CI, or some suite of tests, it's running on physical hardware)...
B
...in the past, the TVM CI has traditionally relied on cloud hardware, both to make the CI accessible to everyone and to ensure that we're not requiring that someone who's not familiar with microTVM buy some unheard-of development board and become an embedded engineer just to fix a test and contribute to TVM.
B
However, that doesn't mean we can't run a test suite separately, basically to confirm performance numbers and make sure that we're staying green on real hardware. So, sorry, that's the topic.
B
Okay, thanks. Now Theo's here, so I guess we can decide what order we want to jump into things. Let me also see: I think Mehrdad has also been thinking about this lately as well. Let me ping him and see if he's back yet.
A
Okay, while you're doing that, I did have one additional note, which was to say that I have noticed, in the Google Doc that we're using for the Zoom link and setting the agenda and all that kind of stuff, the one thing I've not been doing (bad moderator, bad moderator) is going back and putting a link to the recordings in there. So that's something I'll definitely go in and do.
A
Okay. The next item on the agenda is: if anybody is new to the call and hasn't participated in one of these community meetings before, we do have a moment set aside for introductions. It's not required, but if you'd like to tell the group a little bit about yourself, what your interests are, and what you're up to, we'd love to hear about it.
E
So I will try, I will try it myself. Yeah, go for it.
E
I'm here from NXP. We are a semiconductor company developing devices based both on Cortex-A application processors and Cortex-M microcontrollers, with various accelerations for machine learning, both in terms of hardware and software.
E
We have a couple of solutions that we support, in particular for microcontrollers: we have eIQ TensorFlow Lite and also the Glow compiler. But we are also looking actively at TVM, and we have a small investigation effort to see if we can enable microTVM on one of our microcontroller devices.
B
Cool. And I heard from Mehrdad: yeah, maybe we could discuss the hardware in the loop towards the end of the meeting, and then we can start with the Project API and go through that. How's that?
A
All right, okay. Are there any announcements or news that we want to quickly cover? Pointers to RFCs, that kind of thing?
B
Yeah, I could say one thing quickly about announcements. One thing that came up: we've been working with the STMicroelectronics folks for a little while now, and they have raised a bit of an incongruity with the runtime API. In particular, at present we split our... we have these concepts of the C runtime API and the C backend API, and these are basically API functions.
B
You can use these functions on the microcontroller, and some of them are used strictly by the inference function, or rather by the compiled operators, the kernels. Those functions are strictly required for deployment in all scenarios. Others of these functions are used more in cases, for example, where you're autotuning on a device, and so they're not necessarily required. One thing that came up in the review of the ST code...
B
...was that not all of the functions that are, by the documentation, strictly required by inference functions in deployment are in fact placed in the correct file, if that makes sense. So it's hard to split these two files up at deployment time, and we've got an RFC to do a little bit of a reorganization there. I'll link it in the doc or the chat; I'm not sure which one is better, but probably the doc.
B
So, let's see if I can find that. One second while I'm just finding that topic, and then we'll start. Yeah, I'll post this at the end of the doc, and if you guys want to reorganize it, you're welcome to. Cool, okay.
B
Moving on to the Project API topic: I wanted to give a brief overview for everyone here of what we've been working on with Project API, what it is, and how it impacts microTVM. I had put together a quick demonstration, basically a working test, to give everyone a little bit of an introduction to how the Project API is organized.
B
I was going to give a quick demo of that with some hardware, but I wasn't sure if Gustavo also had a demo, so I didn't want to steal the spotlight.
C
Well, I do have one, but using TVMC. So yeah, I will build and compile and then flash on a device. I have recorded a demo for that.
B
Cool, okay. Well, actually, maybe more demos is better, just because it's more interactive, so we can go through different aspects of that. So yeah, let's see, we can start.
B
I might start just by sharing the RFC. The way the TVM development process has changed in the last couple of months is that we now have a formal RFC process. First we begin by discussing things in the Discuss forum, so you guys might have seen a Discuss post, and then what happens is, once things are solidifying on a topic...
B
...you open an RFC, basically a pull request against this tvm-rfcs repository here. The goal of the pull request is to write up the results of the forum conversation in some way that's going to be more stable and stick around. We've had quite a few forum topics that have ballooned over the years, and it becomes really hard...
B
...when you go back to read those forum topics, to understand exactly what was decided, if that makes sense. So we're switching to this sort of post-discussion write-up style to try to address that. I won't go through the whole text of this here, but to start off: what is Project API?
B
We did an integration last year with Zephyr, and it worked decently well, but there were a couple of problems with it, in particular a couple of scaling problems, that we wanted to address through a plug-in-style infrastructure. The gory details are written up in this RFC; I will link the pull request. Okay, let's see if I can do that quickly.
B
Let's see, we'll link this in our meeting doc here, if you guys want to read it for background while I'm discussing. So we did this integration with Zephyr last year, and one of the problems we ran into (there were a few) was that the API we used was pretty tightly integrated into the compiler. To give some background: as of last year, what TVM did is...
B
...we would produce a bundle of C files and then call out to external platforms' compilers from TVM to actually build code and then flash it onto microcontrollers. In doing this, what we were doing was creating a temporary project, if you will, in each platform's build system.
B
For example, we might create one project just for the purposes of compiling the model operators, then another project for the purposes of compiling the runtime, and then finally create an overarching glue project which was used to actually compile and flash things. As you can imagine, this got to be, one, really hard to debug and, two, just difficult to reason about, in terms of the state of things, without really understanding the guts of microTVM.
B
If you've got a build tool that you use for your embedded system and you need to customize or modify how things are built, Project API is basically a spec that you can use in order to allow TVM to build code with your platform RTOS. The next question you might want to ask is: why should TVM need to build code for your particular platform RTOS?
B
Because in some sense you could see TVM as a tool, right? TVM is supposed to take in a machine learning model, produce some sort of artifact, and then the user should be able to consume that artifact downstream. So one thing to start with saying is that, first of all, it does do that, and I think Gustavo will make this a bit clearer later on.
B
I don't want to speak for his demo, but hopefully our work towards this in TVMC will demonstrate that workflow a little bit. TVM can produce an artifact called Model Library Format. It's basically just a tarball that contains all of the code that implements the model, and this Project API starts with that Model Library Format. In fact, let me see if I can find an example of that on disk. Yeah, here's one.
B
So if we take a look at this Finder window... I don't know if this is big enough for everyone to see; if I shrink the window size, that will help. Does that help? Is that a lot bigger for everyone? Yeah, cool. So if you take a look at this (and this is actually diving into a bit more of the guts than the Project API itself), you can see this model directory here, and inside of it is the contents of this model.tar.
B
This is one of these Model Library Format artifacts. You can see there's this codegen/host/src, and we've got a bunch of organization here aimed at supporting heterogeneous execution and future, more complex exports from microTVM. But for the most part, what you'll get is these lib0.c and lib1.c files; I guess you can't see my preview here, but these contain the implemented operator code. The remainder of the archive is basically dedicated to...
B
...the graph here, and then also some parameters. And this src directory contains a relay.txt, which just shows you the source code in TVM's source format, Relay, which is sort of the canonical model representation format of TVM.
B
That explains what model is actually contained within this archive. Lastly, there's this metadata.json, which is a machine-parsable bit of metadata (let's see if I can size this up a little bit) that describes the tensors used and tries to explain the amount of memory that is needed.
B
It lists the targets involved, has a versioning key, and also describes the different runtimes (right now called executors) that TVM uses to drive full model inference. The idea of this artifact is basically to give you as much as you'd want to get out of TVM if you wanted to do some downstream integration, and so this is where we start from for the Project API.
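The Model Library Format layout described here can be sketched with a small mock archive. This is a minimal sketch using only the Python standard library: the directory layout (codegen/host/src, src/relay.txt, metadata.json) follows the meeting discussion, but the metadata fields shown are illustrative, not TVM's exact schema.

```python
import io
import json
import tarfile

# Helper: add an in-memory file to the tar archive.
def add_file(tar, name, data: bytes):
    info = tarfile.TarInfo(name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Build a tiny mock Model Library Format (MLF) archive in memory.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    add_file(tar, "metadata.json", json.dumps({
        "version": 5,                    # versioning key (illustrative value)
        "executors": ["graph"],          # executor(s) that drive inference
        "target": {"1": "c -keys=cpu"},  # targets involved (illustrative)
    }).encode())
    add_file(tar, "codegen/host/src/lib0.c", b"/* implemented operator code */\n")
    add_file(tar, "codegen/host/src/lib1.c", b"/* implemented operator code */\n")
    add_file(tar, "src/relay.txt", b"#[version = \"0.0.5\"]\n")

# Read it back, the way a downstream integration might.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as tar:
    names = tar.getnames()
    meta = json.load(tar.extractfile("metadata.json"))

print(sorted(names))
print(meta["executors"])
```

A real archive also carries the graph JSON and parameters; the point here is just that everything a downstream consumer needs travels in one machine-readable tarball.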
B
Let me jump back to this RFC. I'm not going to go through all the details here, but just to outline some of the workflows we can imagine you doing with the Project API: let's say that you start with this Model Library Format and you want to do something like generate a project, so that a user could begin working in a hello-world scenario with their model.
B
You can use Project API to do that. You can also use Project API to build and flash that project. It also contains enough API functions to support host-driven inference; that's where you've got a generic runtime compiled onto the device, communicating with your laptop over serial, basically UART, and this allows a user to, if you will, try out a model on their device.
B
So basically they can drive execution through TVM on the host device, and then do things like time operator implementations or validate the correctness of operators. That leads to the last use case here, which is AutoTVM: the case where we want to produce these tuning logs that tell TVM the best way to implement an operator on any particular device.
B
TVM does this by a sort of search-based optimization technique, and in order to do that, it needs to be able to compile a bunch of code, flash it on a device, and time its execution. So this Project API is really built around these four use cases, and here's how it works: you start by defining an instance of this ProjectAPIHandler class in a Python file inside of a template project. Let me see if I can find an example of that.
B
Among the project generators here, there's a couple. Let me start with the simplest one. The simplest example, the simplest target we have for microTVM, is sort of a POSIX emulation: all it does is launch a subprocess, and the subprocess accepts commands from TVM and then runs execution on those. That would be in src/runtime/crt/host.
B
This is something we use mostly for unit testing in the TVM CI. It doesn't depend on any hardware; it's simply a main that contains a bunch of the platform hooks needed to support microTVM, plus a small set of code.
B
This is basically a 178-line C file, so it's just doing the bare bones of launching this generic runtime in a POSIX subprocess and then accepting commands from TVM to allow it to drive inference. We use this a lot for end-to-end testing, somewhere between unit testing and integration testing, of microTVM.
B
So to start with, you define this subclass of ProjectAPIHandler (let me see if I can make this bigger) inside of the project, and there's a bunch of different functions that you can implement. All of these functions are stubs, basically, that allow TVM to drive these four use cases with an external RTOS.
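The shape of such a handler can be sketched as below. The method names mirror the hooks discussed in the meeting (generate a project from a Model Library Format archive, build, flash, and the transport family), but the class name and signatures here are assumptions for illustration, not TVM's exact interface.

```python
import shutil
from pathlib import Path

class MyPlatformHandler:
    """Illustrative Project API handler for a hypothetical platform."""

    def generate_project(self, model_library_format_tar, template_dir, project_dir):
        # Copy the template project and drop the MLF archive into it.
        shutil.copytree(template_dir, project_dir)
        (Path(project_dir) / "model.tar").write_bytes(
            Path(model_library_format_tar).read_bytes())

    def build(self, project_dir):
        # A real handler invokes the platform's build tool (e.g. Zephyr's).
        raise NotImplementedError("call your RTOS build tool here")

    def flash(self, project_dir):
        # A real handler programs the attached board with the built firmware.
        raise NotImplementedError("call your flashing tool here")

    # Transport hooks: how TVM talks to the on-device RPC server (e.g. UART).
    def open_transport(self):
        raise NotImplementedError

    def close_transport(self):
        raise NotImplementedError

    def read_transport(self, n, timeout_sec):
        raise NotImplementedError

    def write_transport(self, data, timeout_sec):
        raise NotImplementedError

# Minimal exercise of the generate step with throwaway directories:
import tempfile
with tempfile.TemporaryDirectory() as d:
    template = Path(d) / "template"
    template.mkdir()
    (template / "CMakeLists.txt").write_text("# template build file\n")
    mlf = Path(d) / "model.tar"
    mlf.write_bytes(b"fake-mlf")
    project = Path(d) / "project"
    MyPlatformHandler().generate_project(mlf, template, project)
    generated = sorted(p.name for p in project.iterdir())
print(generated)
```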
B
Alongside incorporating one of these Model Library Format artifacts, we've got the build and flash hooks that are used to build and flash code on the target device, and then lastly we've got a set of functions that all end in "transport". What these do, as part of the host-driven and AutoTVM use cases...
B
...is this: each platform has a way of communicating with the on-device code. For example, suppose you have decided to use UART to communicate with your on-device microTVM RPC server, the generic runtime I've discussed for doing host-driven inference or AutoTVM on your device.
B
You provide a function here, open_transport, and this allows you to tell TVM how to connect to the attached device to perform inference. There's basically an open, a close, a read, and a write. When you implement this file, TVM, to use this Project API, will actually launch this Python file in a subprocess and send it commands using JSON-RPC, and the file's job is to interpret the commands and perform the requested actions.
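The launch-a-server-in-a-subprocess pattern just described can be sketched in a few lines: the parent sends a JSON-RPC request over the child's stdin and reads the reply from its stdout. The "server" here is a stand-in for the Project API server; the method name ("build") and the one-JSON-object-per-line framing are illustrative, not TVM's exact wire protocol.

```python
import json
import subprocess
import sys

# A toy server: reads JSON-RPC requests line by line, answers each one.
SERVER = r"""
import json, sys
for line in sys.stdin:
    req = json.loads(line)
    # A real server would dispatch to generate_project/build/flash/transport.
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": "ran " + req["method"]}
    print(json.dumps(resp), flush=True)
"""

proc = subprocess.Popen([sys.executable, "-c", SERVER],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

# The client (TVM's role): issue one command and collect the response.
request = {"jsonrpc": "2.0", "id": 1, "method": "build", "params": {}}
proc.stdin.write(json.dumps(request) + "\n")
proc.stdin.flush()
reply = json.loads(proc.stdout.readline())
proc.stdin.close()
proc.wait()
print(reply["result"])  # → ran build
```

Because the channel is just stdio, the server process can live in a completely different Python environment than the client, which is the point made next.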
B
So it's sort of an RPC server, if you will, although it's doing inter-process communication here; we don't have any designs to run this over the network. Now, you might ask: why go through all this effort to place this in a separate Python file, invoke it in a subprocess, and use this RPC? The reason is that it's about dependencies.
B
This is a plug-in-style infrastructure, and the idea is that one of the biggest limitations, one of the scaling problems we had when we were integrating with Zephyr, was that Zephyr had a bunch of dependencies. There were a few Python dependencies that we were required to add to TVM as TVM's own dependencies: for example, parsing a YAML file, which for whatever reason we didn't have as a requirement for TVM. So if you wanted to use Zephyr with the current TVM integration, you ended up having to install a bunch of extra dependencies into your TVM virtual environment. One of the things that gets problematic about that is...
B
...when we're building TVM and beginning to button up the Python dependencies: Python dependencies tend to have conflicts and tend to work with a specific set of other dependencies. So if you depend on, say, TensorFlow 1.4.2, there are certain other libraries involved; if you install TensorFlow 1.4.2, it assumes that the corresponding Keras version is installed. I think I've got those version numbers a little bit off, so double-check me on those.
B
I don't want to claim that that's exactly right, but it's that kind of thing. When we're producing TVM, say as a wheel that contains a set of Python dependencies, we're designing it to work within TVM's own immediate dependencies, and when you then take that dependency set and try to add in dependencies from an arbitrary additional platform...
B
...it constrains both projects, because they can't work together unless they use a compatible set of dependencies. By forking a subprocess here and launching the microTVM API server in it, this API server can exist in its own virtual environment. So if there are dependencies like PyYAML...
B
...or a particular framework like TensorFlow, and you want to depend on a different version in order to implement these basic functions (generating a project, building, flashing, debugging), you can do that from here. One thing I haven't mentioned is that, to support this, there's the ability to launch this from a shell script, which has the opportunity to set up a virtual environment before the Python script is launched. Okay, so I've chatted through that. I hope what I said makes sense to everyone; I've been working on this for a little while, so I know you can tend to get a little bit of tunnel vision, but maybe just a brief pause to see...
B
...if anyone has questions or wanted to talk about anything in particular. If not, I can go and do a quick demo, just to show the workflow: here's a unit test that we were running before with the old workflow, and here's some of how the new workflow lines up. So, a brief pause, if anyone has questions.
B
It should be the right window. Cool. So I haven't run this this morning, but hopefully it works; it's worked since last night. What's happening in my demo here: I've got the microTVM reference virtual machine launched.
B
I've got one of these Nucleo boards attached, the one we've been using for our other demos, and I've got this branch checked out here in TVM. The Nucleo board is attached to the VM, and here I'm launching the Zephyr integration tests, which are the tests that we use to validate Zephyr in the CI, but we're launching them against the attached physical hardware platform.
B
Just one quick aside, since we were discussing the hardware-in-the-loop testing: in the CI we're using a QEMU-based platform here, and so we're saying that when you're running this Zephyr integration test, run it against QEMU rather than the physical hardware. But here, of course, we're overriding that and setting it to the hardware. I'm also going to send debug logs out to the CLI, so you guys can see what's going on.
B
Let me run that real quick, and let me show you a couple of new things here. You can see the logging from the Project API; here's the JSON-RPC...
B
...commands and responses that we're sending over to the API server. What we send is: we do a generate_project, which copies this template project into a new directory and integrates this...
B
...temporarily created model.tar (that's the Model Library Format), and we're going to create it in this directory.
B
Here, okay, we've finally caught up to ourselves. Then we'll connect to the API server in that newly generated project, and we'll issue a build command; you can see here it's running the Zephyr build tool, and there's a lot of output to skip through. Then we'll issue a flash command, so it's running the Zephyr flash command here, and you can see that finishes, and then we begin talking to the device.
B
Here we say open_transport, and we can then send write commands. If you've worked with microTVM, you'll see this is the familiar API traffic debug log, but now we've actually got two different debug logs: for each one of these writes you'll get a write_transport, and for each read you'll get a read_transport command here, and you can see this is encoded in that protocol.
B
So nothing super exciting to see here other than a green light, but this is basically doing a simple model inference on the device and reading the result off the device.
B
Lastly, I'll just show what happened on disk, which is actually the same thing I was showing before, but just to link the two up here. Let's see.
B
This directory here: I don't know if you guys paid much attention to it before, but these workspace directories are where we keep the generated code, and this Project API-driven test used a template project over in apps/microtvm/zephyr. So here's the Zephyr template project; you can see it's got a CMakeLists and a prj.conf.
B
This workspace directory has a similar structure, but now we've got this model directory that contains the parameters; actually, it contains the generated code as well as the parameters. The CMakeLists then knows how to find the source files from this host src directory and include them in the built artifact. So you can see here this is where we've built out the microTVM binary. Okay, so that's my demo.
B
Any other questions on that? I guess, just to wrap up here: I'd love for people to review the RFC, or try things out and see how things work locally. I'm going to try to push on this PR fairly soon, and that should unblock our ability to add the autotuning functionality into the main tree, which I kind of let be blocked because of the dependency problem I discussed earlier.
C
Okay, sure. Thanks. And Andrew, thanks for fixing this serial issue yesterday; I tried your new code. So, just to give some context to people...
C
...we were facing some performance regression when using the Project API, more specifically when we used the transport for it, and Andrew fixed that. This would affect you mostly when you're trying to transfer data from the device to the host and back and forth. What I would like to show, in addition to what Andrew has shown us, is TVMC. For those who have never heard about it...
C
...TVMC is a command line for TVM; the "c" is not related to the C language, it's for "command line", as far as I got it. The idea here is to show how TVMC can use the Project API to drive the process of taking a model, say in a TensorFlow Lite format, flashing it to a device, and finally running it on that device. So I'm going to share my screen.
C
I have actually recorded one session, so feel free to stop me at any time to ask questions.
C
Can you see my screen? Oh yes? Okay. So you start basically by downloading this model.tflite file. This is the same model you find under the micro tutorials, and it infers the value of the sine of a given value.
C
It's quite simple, and now we're going to try to run that model on a Discovery board. The first thing: you download the model, and then you use the classic, traditional compile command from TVMC.
C
You pass the model and you specify a target, and the output will be a Model Library Format archive; you have to specify the output format to be specifically a Model Library Format archive. Of course, you can pass config options, and you can also disable passes when compiling the Relay.
C
After that, we have a tar file, which is in Model Library Format, and after that is where things start to get interesting for micro. There is a new "micro" context here, which adds additional commands, like create-project, which is helpful to, based on a template directory, create a project directory from where you can build, flash, and run a model.
C
We use that create-project command in the micro context to create the very same directory that Andrew demonstrated, with files like the model directories uncompressed there; I'm going to show that as well. What you need for create-project is just to pass a template directory where you find your application; in that case, it's a Zephyr application.
C
Then you pass the output, which is the project directory itself, and you pass the Model Library Format archive which was generated by the compile command. Then we specify a board, and it will generate the whole structure, the whole tree necessary to build...
C
...the project. In that directory you will find the operators inside it, and also the application source code, which is necessary to build the whole project. Inside src, for instance, we have the main file, which is basically the code that will call the RPC server; this is the main loop which calls the RPC server. So the code is inside the project directory, and we also have the operators here, produced by the compile stage...
C
...in that project directory. So after you create that directory...
C
...you are able to build from there. Another thing: to save time, instead of specifying the whole path to the template project, it is also possible to specify a type, to use a default template project for that type. In that case, if you pass just the type "zephyr", it will bring in the same template I specified. And if you try to create a project in a place where one already exists, it will refuse.
C
So you need to force the recreation of the project; you can override it by passing the force option. In that case here, I passed force because it had refused to overwrite the previously created project directory, and now we have all the files, with the operators and the application code.
C
Starting from there, it is now possible to build the project itself. There is a new context called build, where you just pass again the project directory just created and the board type; you of course don't need the Model Library Format archive anymore. It will build the whole project, finally generating a zephyr.elf file ready to be flashed to the device.
C
There are some nice statistics at the end for the zephyr.elf file, generated in the build directory inside the project directory.
C
Another option is that you can pass an uppercase -V, so the build gets more verbose. Usually it will refuse to overwrite the build again, so you need to force it; so now we build it again, but getting more verbose output.
C
And finally, after we have a zephyr.elf file, we are pretty much ready to flash the image to the device. For that, we just use the new flash command and pass the project directory, and it will flash the generated zephyr.elf image to the device using the new methods from the new Project API. So at this point we have the model...
C
I mean, we have everything we need flashed to the device. The idea here is just to give an overview: the idea is to have a run command, a "tvmc micro run" command, which is not implemented yet, so I'm going to run from a script instead. I just would like to show...
C
The image is already flashed on the device, so we just need to open a session to the device, set the inputs, and ask it to run the model for us. So I just would like to show you how it looks from a script point of view: you just need to use the GeneratedProject class and its from_directory method, pointing to the project directory.
C
We have the image and everything else we've created, and once we have that, as Andrew said, there is a new transport. Just like before, we use the tvm.micro session to open a session against the device, and we create an executor for it. We set the inputs (in this case, since it is a sine model, we are just passing the value 0.5), and then we ask it to run, just like before, and we grab the results here.
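From a script, the flow just described might look like the sketch below. The project path is a placeholder, `graph_json` (the graph produced at compile time) is assumed to be in scope, and the helper names follow my reading of the Project API at this point in its development, so treat them as approximate.

```python
import tvm.micro

# Reload the project created earlier by `tvmc micro create`
# (the directory path here is illustrative).
project = tvm.micro.GeneratedProject.from_directory("./sine_project", options={})

# Open a session over the project's transport and run one inference.
with tvm.micro.Session(project.transport()) as session:
    executor = tvm.micro.create_local_graph_executor(
        graph_json, session.get_system_lib(), session.device
    )
    executor.set_input(0, 0.5)  # the sine model's single scalar input
    executor.run()
    result = executor.get_output(0).numpy()
```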
C
So if we run it from the command line, it should just return the value of sine of 0.5, and the value is, luckily, not much off from the real value, which is this one. So the inference just worked on the device.
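The sanity check at the end, comparing the device's answer against the host's own sine, is just this; the device output value below is hypothetical:

```python
import math

# Hypothetical value read back from the device for input 0.5.
device_output = 0.479

# Host-side reference value.
expected = math.sin(0.5)

# The device result should be close to the true sine.
assert abs(device_output - expected) < 1e-2
```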
C
So to sum it up, the run command is yet to be implemented in tvmc, and I intend to finish this probably this week, since the serial transport is now working properly after Andrew's fix for it. And yep, I think that's pretty much what I've got to show.
C
Let me stop sharing here. Okay, so that's what I've got; any questions or comments about it?
B
Just to say thanks for the demo, that was fantastic. And I think where we're going with this, too, is: right now TVM is often driven from these Python scripts, but Leandro and Gustavo
B
and others have been working on this tvmc command, basically as a more polished command-line interface, and as we move toward distributing TVM as a wheel, we think that this will provide a much easier entry point for people who want to try out models and get started with TVM without reaching deep into the internals to customize things. So yeah, I think this is great.
B
Yeah, we can get a bit of a start on this. I'm having to go back and remember exactly the context in which we were talking about this, but if I remember right, Theo was discussing some things on the discuss forum about trying to set up some CI for testing microTVM, and at OctoML we've also been working on this as well.
B
Basically, the idea being that we want to run maybe some nightly testing, basically to measure performance, verify that we're not regressing in performance, and make sure that things are working on device. So actually, I didn't know if Theo had anything he wanted to discuss; I think the reason I added this to the agenda was because you brought it up on the discuss forum, and the other thing is that Mehrdad has actually been working on some of these efforts from the OctoML side too.
B
So I don't know if either of you has brief updates on what you've been working on and on future directions. I'll turn it over to you guys.
F
Sure. So far I've mainly been trying to get up to speed on Terraform. Since we're not planning on using an AWS provider, I've been trying to parse the community providers and decide what we want to do, and to get up to date on how Terraform actually works. There's a lot less documentation on the community providers than on the standard providers like ADO or AWS.
F
There would be a Docker image with a lot more dependencies already installed, the agent would be set up through Ansible scripts, and then the microTVM tests would be set up through Ansible scripts too. And we plan to have several micro boards, M-class boards, as well as maybe a QEMU environment for testing.
F
I looked through your tlcpack CI. Unfortunately, I don't know much about crane, which seems to be what you're relying on heavily, so.
B
That's just another Docker container, actually.
F
So far, where I'm at with Terraform is that I've gotten Terraform to provision a Docker container: it creates the image and then creates the container. I'm working with an Ansible provisioner, which should take an Ansible playbook and then provision that Docker container. I'm working out some kinks here and there, because the guides and documentation work with staged connections, whereas currently I'm not doing that. So I'm making progress on that end. Obviously, it would be good to discuss how we can connect our efforts a little bit and see what we can do.
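For reference, the "Terraform provisions a Docker container" step might look like the fragment below, using the community Docker provider. The provider source, image name, and attribute spellings are assumptions on my part and vary across provider versions, so treat this as a sketch rather than the setup actually in use.

```hcl
terraform {
  required_providers {
    docker = {
      # Community Docker provider (not an official HashiCorp one).
      source = "kreuzwerker/docker"
    }
  }
}

# Build or pull the CI image, then run an agent container from it.
resource "docker_image" "microtvm_ci" {
  name = "microtvm-ci:latest"
}

resource "docker_container" "agent" {
  name  = "microtvm-agent"
  image = docker_image.microtvm_ci.latest
}
```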
C
Go ahead, because, I don't know, I was just about to say that after we get to that point that Theo mentioned, the idea is to attack the issue of having the devices,
C
you know, with the correct numbers inside the correct Docker images to be used, and the idea that Leandro mentioned of having an image type associated with a single device, so we can run the process in parallel. So that's the idea: basically start quite simple, have a host with a bunch of devices connected, with that infrastructure built as Theo said, so we can move it or replicate it on different hosts or data centers, whatever it would be, once it
C
has proven to be useful for us. And at some point, you know, maybe hook it up to the tlcpack CI, just as a kind of dry run to see how it goes. That's what we are looking for.
B
Yeah, I think it would be great to think a little bit about providing some of these results as a nightly thing. Well, I mean, I think we have to try things out and see what the throughput of the hardware-in-the-loop CI is. I guess the TVM CI isn't exactly the fastest thing in the world right now, but, you know, to keep the paradigm of sticking with publicly available stuff.
B
It's certainly something we can consider expanding, and I think that people in the community have expressed different opinions on whether or not we should just have all the hardware in the TVM CI and open it up, kind of open season, or whether we should have sort of a non-voting CI that still provides results. So, for example, if you uploaded a PR you could run this microTVM hardware-in-the-loop CI, but that CI might fail.
B
You know, this hardware-specific one might fail, but that doesn't necessarily mean you can't merge, that kind of thing. So we could explore ways to integrate the results, and I guess the simplest one is basically just having a nightly dashboard of, you know, whether it failed last night or passed today, somewhere in there, depending on the throughput and the reliability.
B
I think we can definitely see what makes sense there. In general, though, having this will help us make sure that we're staying green on real hardware, so that's great. One of the problems, just to highlight one that I had: you know, we have the CI QEMU Docker image, and I had originally tried to build basically a reference Docker image that was supposed to become,
B
whether it was Linux, Windows Subsystem for Linux, or OS X, you couldn't really assume how things were laid out on the host disk. Why that's important is that when you actually need to convince a Docker image to talk to real hardware, what I had found was that the cleanest solution was basically to mount the /dev volume from the host machine into the Docker image. That gets a little bit tricky if you're going to be doing things like flashing boards and resetting the USB devices that you'd like to talk to, because some of the devices are managed by a kernel driver.
B
So, for example, the serial ports are managed by a kernel driver, and then others of them, for example the OpenOCD programmers, can be managed by libusb. The libusb ones are fairly straightforward: you just mount /dev/bus/usb, or something like that, into the Docker container. It's been six months, so don't quote me on that path, but I think it's something like that.
B
The serial ports are a little bit trickier, particularly if you've got multiple devices connected to the same machine, because what you need to do then is mount /dev/ttyUSB0, 1, 2, or 3, or something like that, and then that node number needs to remain stable as you flash the device.
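A minimal sketch of what that mounting looks like on the docker command line; both device paths are illustrative, per the "don't quote me on that path" caveat, and the image name is a placeholder:

```shell
# Expose host USB devices to a CI container:
#  - /dev/bus/usb covers libusb-managed devices (e.g. OpenOCD probes)
#  - --device passes through one kernel-driver-managed serial port
docker run --rm -it \
  -v /dev/bus/usb:/dev/bus/usb \
  --device /dev/ttyUSB0 \
  microtvm-ci:latest
```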
B
So if you flash the device, you know, ttyUSB1 or whatever has to come back up as ttyUSB1 the next time it reboots, and most of the time that happens, but I've certainly run into cases where it didn't, depending on how the device rebooted and how the USB driver was feeling at that point in time. So yeah, I'm curious about that.
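One standard way to keep that node number stable (an aside, not something discussed in the meeting) is a udev rule that gives each board a fixed symlink keyed on its serial number; the vendor ID and serial below are placeholders:

```
# /etc/udev/rules.d/99-microtvm-boards.rules  (hypothetical)
# Give the board with this serial a stable alias, e.g. /dev/board0,
# so reflashing/re-enumeration cannot shuffle ttyUSB numbers.
SUBSYSTEM=="tty", ATTRS{idVendor}=="0483", ATTRS{serial}=="ABC123", SYMLINK+="board0"
```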
C
One of the things that occurred to me is: when you create the VM, are you setting the same number of CPUs that you are using on the Docker image? Because the Docker image will eventually share the number of CPUs of the host, and
B
Yeah, I mean, that's the area where it's harder to be flexible with the VM, right, because you have to set it in advance and it can't scale up and down as it needs more CPU. So yeah, I'd have to explain that.
C
I'm actually just asking because one of the drawbacks of using the VM is that it would be much slower in comparison to a Docker image, right? But I was wondering whether, when you did that test, or Mehrdad did that test, you were comparing the same number of CPUs to start with, you know.
G
Yeah, I can explain that. Okay, so I haven't used the same number of CPUs, and the reason was that I'm testing it on a shared server with others, and if I set it to the maximum, which Jenkins does, it will completely bog down the machine.
G
But I have a feeling that, even if you set it at that level, some of the slowdown that we see with the reference VM is because it tries to rebuild some stuff, like it tries to rebuild TVM and Zephyr, and that will increase the build time.
B
Yeah, definitely. Well, we're actually just about a minute over time, so I think it would be great to come back and have a longer discussion about this as we're making progress. So maybe we can put this on the agenda again in a week or two, and we can also talk about some of the things that we actually want to exercise in the hardware-in-the-loop tests, for example what we are planning to run regression tests on, and things like that. So, yeah.
B
That sounds good. Great, okay, that sounds great. I'll turn it back over to Tom.