From YouTube: 010 ONNX 20211021 Lyalin ONNX and the OpenVINO Ecosystem
Description
LF AI & Data Day - ONNX Community Meeting, October 21, 2021
ONNX and the OpenVINO Ecosystem
Speaker: Sergey Lyalin
Hello, everybody, my name is Sergey Lyalin, I'm working at Intel on the OpenVINO project. I will describe the role that ONNX plays in the OpenVINO ecosystem, connecting us to deep learning frameworks, and what services we are providing for ONNX in OpenVINO.
Let's start. First of all, what is OpenVINO? OpenVINO is the end-to-end solution for inference across Intel hardware.
The OpenVINO ecosystem includes a number of tools, but as we are talking about inference first, the central component is the Inference Engine. It provides an API, available in Python or C++, to run the inference. The Inference Engine as a component doesn't have any code for inference; instead, it relies on plugins.
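For illustration, a minimal sketch assuming the OpenVINO 2021.x Python API (IECore); the list of available devices reflects the plugins installed on the machine:

    from openvino.inference_engine import IECore

    ie = IECore()
    # Each entry corresponds to a device plugin (e.g. CPU, GPU, MYRIAD)
    # that actually executes the inference.
    print(ie.available_devices)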
Okay, let's extend the picture in the opposite direction, describing where the model comes from. The Inference Engine loads the model represented in the OpenVINO Intermediate Representation (IR); it is produced by a separate component called Model Optimizer. Model Optimizer can load various types of original models from deep learning frameworks and convert them to IR.
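As a rough sketch, assuming the openvino-dev package, which provides the mo command (file names here are hypothetical):

    mo --input_model model.onnx --output_dir ir/
    # produces model.xml (topology) and model.bin (weights) in OpenVINO IR format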
Besides this flow, where we can convert ONNX with Model Optimizer to IR just like the other file formats we support, we have something that is currently available for ONNX only: we can load an ONNX model directly into the Inference Engine runtime, skipping all the offline steps.
First, create the Core object, which is the entry point for the inference API. It allows you to read models and load them to plugins. Here we are reading the model from an ONNX file. After reading, you can do some model adjustment, like setting a new input shape for the model inputs, and then load it to one of the plugins to run the inference. In this example, it is the CPU device.
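A minimal sketch of that flow, assuming the OpenVINO 2021.x Python API; the model file name, input name, and shape are hypothetical:

    import numpy as np
    from openvino.inference_engine import IECore

    ie = IECore()                                # entry point for the inference API
    net = ie.read_network(model="model.onnx")    # read the model directly from an ONNX file

    input_name = next(iter(net.input_info))
    net.reshape({input_name: [1, 3, 224, 224]})  # optional: set a new input shape

    # Load to the CPU plugin and run inference.
    exec_net = ie.load_network(network=net, device_name="CPU")
    result = exec_net.infer({input_name: np.zeros((1, 3, 224, 224), dtype=np.float32)})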
The ONNX community can help us to grow faster, to build a really complete ONNX backend, by providing feature requests and highlighting existing problems in the current ONNX support. I encourage you to create tickets at the OpenVINO GitHub to achieve that. We are constantly monitoring such external requests, and it helps us to make the right decisions in the development.
The user can write the inference code for such custom operations in OpenVINO plugins. As we can read ONNX models directly in the inference application, without using the offline step that is Model Optimizer, you can call OpenVINO inference right in your PyTorch application to evaluate performance opportunities.
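As an illustration of that last point, a minimal sketch assuming torchvision's resnet18 as a stand-in model and the 2021.x IECore API: export the PyTorch model to ONNX and run it with OpenVINO in the same script to compare the results.

    import numpy as np
    import torch
    import torchvision.models as models
    from openvino.inference_engine import IECore

    # Export a PyTorch model to ONNX (model choice and file name are hypothetical).
    model = models.resnet18(pretrained=True).eval()
    dummy = torch.randn(1, 3, 224, 224)
    torch.onnx.export(model, dummy, "resnet18.onnx", opset_version=11)

    # Read the same ONNX file directly with OpenVINO, with no Model Optimizer step.
    ie = IECore()
    net = ie.read_network(model="resnet18.onnx")
    exec_net = ie.load_network(network=net, device_name="CPU")
    input_name = next(iter(net.input_info))

    with torch.no_grad():
        torch_out = model(dummy).numpy()
    ov_out = next(iter(exec_net.infer({input_name: dummy.numpy()}).values()))
    print("max abs diff:", np.abs(torch_out - ov_out).max())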