From YouTube: ONNX20210324 V08 ONNXonMicrocontrollers
Hi guys, my name is Rohit Sharma, and shortly I'll talk about bringing ONNX models to microcontrollers and other IoT devices.
The agenda includes the challenges and opportunities of bringing ONNX models to MCUs, an open-source compiler framework for the edge called deepC, and applications thereof.
Tiny means very small, and ML means machine learning. So, for this talk, tinyML means running ML models and applications on tiny devices operating on batteries. Formally, tinyML refers to machine learning technologies and applications, including hardware, algorithms, and software, capable of performing on-device inference at extremely low power, targeting battery-operated devices.
A hardware comparison of high-performance servers with accelerators, single-board computers, and bare-metal microcontrollers is noteworthy. The compute resources of MCUs, running at a few megaFLOPS, fall short in comparison to several teraFLOPS on GPU servers. That is a whopping six orders of magnitude of difference. A similar order of difference is observed in the RAM capacity and storage size of these devices.
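The six-orders-of-magnitude gap can be checked with quick arithmetic. The specific figures below are illustrative assumptions, not numbers from the talk:

```python
import math

# Illustrative back-of-the-envelope comparison (hypothetical figures):
# a GPU server delivering ~10 TFLOPS vs. an MCU delivering ~10 MFLOPS.
gpu_flops = 10e12   # 10 teraFLOPS
mcu_flops = 10e6    # 10 megaFLOPS

ratio = gpu_flops / mcu_flops
orders_of_magnitude = round(math.log10(ratio))

print(f"compute gap: {ratio:.0e}x, ~{orders_of_magnitude} orders of magnitude")
```

The same calculation applies to RAM (kilobytes on an MCU vs. hundreds of gigabytes on a server) and storage.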
A six-or-higher order-of-magnitude difference in FLOPS, RAM, and storage presents significant challenges for application development on MCUs. It is noteworthy that IP and MCU providers continue to improve hardware resources, while the AI research community is working tirelessly to reduce model sizes. As a result of these two trends, the number of machine learning applications running on tiny devices is increasing at a rapid rate.
Despite these challenges, ML applications in energy management are growing at a rapid CAGR, and video and image recognition is expected to have the largest edge market size by 2026. Beyond growth, these applications also bring socio-economic benefits, including low energy, low latency, and enhanced privacy.
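For reference, CAGR (compound annual growth rate) is the constant yearly rate that carries a starting value to an ending value over a number of years. A minimal sketch with made-up market figures (not numbers from the talk):

```python
# Hypothetical market sizes, for illustration only.
start_value = 100.0   # market size in year 0 (arbitrary units)
end_value = 250.0     # market size after `years` years
years = 5

# CAGR: the constant yearly growth rate that turns start_value
# into end_value over `years` years.
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")
```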
Initial growth in these select areas will open disruption opportunities in other industries and verticals in the coming years. All tinyML apps are edge AI apps, but not all edge AI apps are tinyML apps. The difference is rooted in power consumption: tinyML apps typically consume less than 100 milliwatts of power to be able to run on batteries.
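The sub-100 mW budget translates directly into battery life. A rough sketch, assuming a battery capacity that is illustrative only:

```python
# Rough battery-life estimate (all figures are illustrative assumptions).
battery_capacity_mwh = 2400.0  # roughly two AA cells' worth of energy, in mWh
avg_power_mw = 100.0           # the tinyML power budget mentioned in the talk

runtime_hours = battery_capacity_mwh / avg_power_mw
print(f"estimated runtime: {runtime_hours:.0f} hours (~{runtime_hours / 24:.0f} days)")
```

Dropping average power from 100 mW to 1 mW stretches the same battery from a day to months, which is why the power ceiling defines the category.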
Another popular tinyML application is predictive or prescriptive maintenance based on anomaly detection in the industrial sector. Power consumption typically increases with increasing performance, resulting in increased cost, so power and cost economics are tailwinds for tinyML applications, while performance is a headwind. TinyML app development starts with gathering the data. The next two steps are typical ML development: design and train the model. The step after that is to export the ONNX model.
Why ONNX? Because it is an interoperability standard. Now, one can't simply serve ONNX models on microcontrollers, because of several challenges, including the limited hardware resources discussed before and the lack of an operating system. You need a compiler that can convert the ONNX format to a binary format supported by the MCU of your choice. The compiler must support all kinds of software development models, and needs to provide C, C++, assembly, or WebAssembly code, a static library, and an executable binary with compatible linking instructions.
It is available on GitHub, at the URL bit.ly/deep-c. deepC takes an ONNX model as input, creates a compute graph to optimize and schedule ML operations, and finally generates code in embedded C, embedded C++, or WebAssembly, a static library, a bare-metal binary, and an OS ELF. deepC is an open-source AOT compiler released under the Apache license. It supports Python and allows development of custom machine learning algorithms under an inference framework.
A no-code platform lets you choose, compile, and download from its app store gallery of over 70 tinyML applications. Low-code web apps let you customize a tinyML application to your dataset. For example, you can record your own voice dataset to train and compile a wake-word detection app in a matter of a few minutes.
A tinyML use case: iSpeak. iSpeak is a wearable glove PoC designed for mute people; it converts sign gestures into spoken words and sentences. Gestures are captured with gyroscopes and accelerometers placed on the fingertips, integrated with MCUs. This gesture-to-speech technology in the hand glove is integrated with a speaker to create speech, inferred with the machine learning model.
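To make the sensing pipeline concrete, here is a minimal sketch, with all names, figures, and the feature choice being illustrative assumptions rather than details from the talk, of turning one window of accelerometer samples into the small feature vector an MCU-sized classifier could consume:

```python
import math

def window_features(samples):
    """Compute mean and standard deviation per axis for one window
    of (x, y, z) accelerometer readings — a common tinyML feature set."""
    axes = list(zip(*samples))  # transpose into per-axis sequences
    feats = []
    for axis in axes:
        mean = sum(axis) / len(axis)
        var = sum((v - mean) ** 2 for v in axis) / len(axis)
        feats.extend([mean, math.sqrt(var)])
    return feats

# Hypothetical window of 4 accelerometer samples (x, y, z), in g.
window = [(0.0, 0.1, 1.0), (0.2, 0.1, 0.9), (0.1, 0.0, 1.1), (0.1, 0.2, 1.0)]
features = window_features(window)  # 6 values: mean/std for x, y, z
```

On the glove, features like these would be fed to the compiled gesture-classification model, whose predicted word or sentence is then sent to the speaker.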