Description
Event: LF AI & Data Day - ONNX Community Meeting, October 21, 2021
Talk Title: ONNX SIG Converters
Speakers/Co-Chairs: Guenther Schmuelling (Microsoft), Kevin Chen (Nvidia), Chin Huang (IBM)
Guenther Schmuelling: Okay, hello everybody, I'm Guenther from Microsoft, and I have some front-end converter updates.
Let me start with the easy ones first. The PyTorch exporter is supporting PyTorch 1.10 and up to opset 14. The team added support for more ops and enhanced shape inference, and the enhanced shape inference allows more models to be converted.
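As a hedged illustration of what that exporter update enables, a minimal export call targeting opset 14 might look like the sketch below; the toy model and file name are placeholders, not from the talk:

    import torch

    # A toy model standing in for whatever you want to export.
    model = torch.nn.Sequential(torch.nn.Linear(4, 2), torch.nn.ReLU())
    model.eval()

    dummy_input = torch.randn(1, 4)

    # Export at the opset mentioned in the talk (up to 14 with PyTorch 1.10).
    torch.onnx.export(
        model,
        dummy_input,
        "model.onnx",
        opset_version=14,
        input_names=["input"],
        output_names=["output"],
    )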
D
It's
supporting
offset
15
and
sklearn
1.0,
and
we,
the
team,
tested
more
models
like
under
25
models
that
are
now
converting
and
we
did
performance
measurements
and
for
batch
sizes.
Up
to
ten
thousand
one
x,
runtime
is
quite
a
bit
faster
than
sk
learn,
so
those
are
the
easy
ones.
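For reference, a minimal sklearn-onnx conversion at opset 15 could look like this sketch; the classifier and dataset are arbitrary stand-ins, and the ONNX Runtime call at the end is roughly how one would reproduce the kind of speed comparison mentioned:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from skl2onnx import convert_sklearn
    from skl2onnx.common.data_types import FloatTensorType
    import onnxruntime as ort

    X, y = load_iris(return_X_y=True)
    clf = RandomForestClassifier().fit(X, y)

    # Convert at opset 15, as mentioned in the talk.
    onnx_model = convert_sklearn(
        clf,
        initial_types=[("input", FloatTensorType([None, X.shape[1]]))],
        target_opset=15,
    )
    with open("rf_iris.onnx", "wb") as f:
        f.write(onnx_model.SerializeToString())

    # Score the same batch with ONNX Runtime to compare against clf.predict.
    sess = ort.InferenceSession("rf_iris.onnx")
    preds = sess.run(None, {"input": X.astype("float32")})[0]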
Now comes the complicated one: TensorFlow to ONNX. Internally we have limited resources, so we stopped active development on keras2onnx.
We moved all unit tests from keras2onnx under tf2onnx and tested a couple of hundred Keras models, so we are fairly sure we are not breaking much, and we expect very little friction in switching to the tf2onnx Python API. We improved that API, and it looks very similar to the one keras2onnx is using.
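As a sketch of that Python API (the toy model, tensor spec, and opset choice here are illustrative assumptions, not from the talk):

    import tensorflow as tf
    import tf2onnx

    # A toy Keras model standing in for your own.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(2),
    ])

    spec = (tf.TensorSpec((None, 4), tf.float32, name="input"),)

    # The tf2onnx call that takes the place of keras2onnx's converter.
    model_proto, _ = tf2onnx.convert.from_keras(
        model, input_signature=spec, opset=14, output_path="model.onnx"
    )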
So we hope that is not a problem, but there is one gotcha.
The big difference between keras2onnx and tensorflow-onnx is that keras2onnx picks up the model at a very high level, the Keras level. So if you have, for example, an LSTM in Keras, keras2onnx will see, oh, this is an LSTM, and it is very easy to map it to the ONNX LSTM. For tf2onnx it is very different: we ask TensorFlow to give us a graph for the Keras model, and TensorFlow does not really have an LSTM operation, so we see just control flow at a very low level and have no idea whether it is an LSTM or not. So our code needs to match part of the graph and say, oh, actually this is an LSTM, and you can replace it with the ONNX LSTM.
That code is very complicated, and we have had issues with it. We hope we fixed most of them, and it has definitely improved, but there might still be some issue where we do not find a TensorFlow LSTM and convert it correctly to an ONNX LSTM. The result is basically that the model will still work, but it is using control flow to implement the LSTM, just like TensorFlow would do, and you can take a performance hit because we are not using the ONNX LSTM.
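One hedged way to check whether you are affected is to count the node types in the converted model: if the graph shows Loop or Scan nodes where you expected an LSTM, the pattern matcher likely missed it. This sketch only inspects the top-level graph, and the file name is a placeholder:

    import onnx
    from collections import Counter

    model = onnx.load("model.onnx")
    op_counts = Counter(node.op_type for node in model.graph.node)

    # A matched recurrence shows up as an LSTM node; an unmatched one is
    # usually left behind as Loop/Scan control flow.
    print(op_counts.get("LSTM", 0), "LSTM nodes")
    print(op_counts.get("Loop", 0), "Loop nodes")
    print(op_counts.get("Scan", 0), "Scan nodes")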
So if you run into this, please file an issue and we will try to fix it.
Let me see. Then, tensorflow-onnx supports the latest version of TensorFlow, which is 2.6, and we support opset 15 in master. We have not released this yet because we are waiting for a fix in the onnx package.
The way tensorflow-onnx calls shape inference creates a problem in the onnx package; it is fixed, but the fix is not released yet, so we are waiting for that before we release opset 15 support. And then we added a new feature.
We added support for tensorflow.js models, so you can just take a tensorflow.js model and tf2onnx will convert that as well. We are now converting all styles: TensorFlow models, Keras models, TFLite models, and tensorflow.js models. We tested with pretty much every tensorflow.js model that we found in the TensorFlow model zoo.
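For illustration, converting a tensorflow.js model from the command line looks roughly like this; the file names are placeholders, and the exact flags should be checked against your tf2onnx version:

    python -m tf2onnx.convert --tfjs model.json --output model.onnx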
That's basically it, and with that I hand it over to Kevin to talk about TensorRT.
Kevin Chen: Thanks, Guenther. Hopefully everyone can hear me fine. I'm Kevin from NVIDIA, talking about the updates we made to ONNX-TensorRT. The major update here is that we have released the new TensorRT 8.2 version. The EA was released on September 30, and the GA is coming soon, either at the end of this month or early next month. The large updates here are that we have added support for seven new operators, and we have updated the operator support for a few existing ones as well.
So look forward to using the new TensorRT 8.2 version with more operators supported.
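If you want to try a converted model on the new release, one common starting point is TensorRT's bundled trtexec tool; the file names here are placeholders, not from the talk:

    trtexec --onnx=model.onnx --saveEngine=model.engine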
So that's the updates for ONNX-TensorRT. Chin can go ahead and provide the updates for the other back-end converters.
Chin Huang: The total coverage is standing at about 93 percent on the ONNX backend scoreboard. We are up to the latest TensorFlow, 2.6, and we added the ability to load ONNX models with external data in separate files.
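For reference, loading a model with external data looks roughly like the sketch below; the paths are placeholders, and which variant you need depends on where the tensor files live relative to the .onnx file:

    import onnx
    from onnx.external_data_helper import load_external_data_for_model

    # onnx.load resolves external data automatically when the tensor
    # files sit next to the .onnx file.
    model = onnx.load("model.onnx")

    # If the tensor files live elsewhere, load the graph first and then
    # pull the data in from the right directory.
    model = onnx.load("model.onnx", load_external_data=False)
    load_external_data_for_model(model, "path/to/tensor_files")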
So, a couple of general discussion topics here. First, we found it is getting more and more difficult to maintain code that supports all ONNX opset versions. Our unit tests are typically written at a point in time, so it is not very easy to rerun all the old operator tests when a new version is introduced.
So here are a couple of questions. Can we depend entirely on the ONNX version converter to do the job, that is, to bring models up to the newest versions? If that is the case, all back-end converters can focus mainly on the most recent spec and on working with the latest back-end frameworks. The other question is: can we somehow deprecate some older versions? I believe there are still opset 1 and 2 models in the model zoo right now.
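For context, the version converter being discussed is the one shipped in the onnx package; a minimal upgrade of an older model might look like this, with the target opset and file names as illustrative assumptions:

    import onnx
    from onnx import version_converter

    model = onnx.load("old_model.onnx")

    # Upgrade an older model to a recent opset in-process.
    converted = version_converter.convert_version(model, 15)
    onnx.save(converted, "model_opset15.onnx")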
For instance, a converter developer recently found a couple of broken tests and eventually fixed them, of course. However, this could have made it harder for others to understand and verify the meaning of these operators.
Yeah, as far as roadmap topics, the most relevant of course are the opset conversions and some particular converter improvements for better performance.
The version converter, we believe, should operate generally between opset versions rather than focusing on individual operators or certain models. So we are very happy to see the good progress already made toward making the version converter a key element of the whole ONNX ecosystem.
Some other items are also on our agenda to review, for instance the meta information, an end-to-end pipeline in one graph, and new operators for data processing, because eventually all of these will have to be converted from and to some frameworks. All right, that's it. Thank you very much.