From YouTube: An Overview of the PyTorch-ONNX Converter
Description
This session will present an overview of the PyTorch-ONNX converter, its implementation, and recent improvements to support a wide range of models.
Bowen is a software engineer working on the PyTorch-ONNX converter. He's contributed over 400 pull requests to PyTorch since 2018. He's also contributed to ONNX and ONNX Runtime. https://github.com/BowenBao
Hello everyone, I'm Bowen, and I'm a software engineer working on the PyTorch-ONNX converter at Microsoft. So today I'm going to give a brief overview of the converter: basically what it is, what it does, how it does it, some interesting features, and the future roadmap.
Okay. So here's a brief overview of the architecture and flow of the converter. Throughout the talk I will use the words "export" and "convert" interchangeably.
The converter lives inside PyTorch; it's part of the PyTorch package, and its source code is in the pytorch repository on GitHub. So to start, you have a PyTorch model and some sample input, and together you pass them to the torch.onnx.export API. As the first step, we utilize the torch tracer and TorchScript to convert the PyTorch model to the Torch intermediate representation, or what we call Torch IR.
And, of course, there are other ways to do that. For instance, in some cases you might already have a specialized kernel in your favorite backend, and in that case you might want to capture a subgraph of the PyTorch model as a single ONNX node in the ONNX model.
So in this case, you can utilize this API to register a custom symbolic function, which basically tells the exporter how to export it. And of course, you can write it either as standard ONNX ops or as a custom op in a custom domain.
So I want to first introduce a little bit of the background and motivation. A common complaint we receive about exported ONNX models is that the nodes are all flattened, so in a sense the model does not reflect the hierarchical relationship between layers and modules. So, for instance, take this LayerNorm: back then it was not a standard ONNX op.
It was exported as a composition of different nodes, and they end up sort of flattened, mixed up with the other nodes in the ONNX model.
A
So
this
makes
it
like
visually
hard
to
figure
out
what
was
what
back
in
the
pie
switch
model,
and
it
also
it's
kind
of
hard,
sometimes
for
back-end
to
do
a
fusion
and
optimization.
A
This
node
has
type
function
and
the
previously
seen
subgraph
or
the
composition
of
ops
now
are
stored
as
the
function
body
inside
the
onyx
model
locally.
So
this
provides
a
means
for
back-end
to
still
be
able
to
run
this
node
or
model
without
like
having
a
specified
kernel.
Okay, mixed precision. So yeah, we support both torch autocast and NVIDIA Apex AMP. There's no extra step needed: you can export the decorated model just as you would any regular model, and you would have the cast nodes inserted.