From YouTube: ONNX20210324 V17 SIG Converts
A: Hello, I'm Guinta Schmidling from Microsoft, and I do have some converter updates for you. Let's start with the PyTorch exporter: it now supports PyTorch 1.8 and opset 13. The team has added support for 12 more new PyTorch operators and updated the support for 10 existing operators.
A: We noticed that more people are starting to use tf2onnx to convert Keras models to ONNX, and many of them seem to be using the Python API. Our old Python API wasn't very user friendly, so we wrapped our Python APIs to make this much easier.
A: We also made some changes to avoid having the converter use the GPU during conversion, so it doesn't interfere with a model that's already on the GPU.
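One common way to keep a conversion process off the GPU is to hide all CUDA devices before the framework is imported; this sketch only illustrates the motivation described above and is not necessarily how the tf2onnx change was implemented:

```python
# Hide all CUDA devices from this process so conversion runs on CPU.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # conversion sees no GPUs

# Frameworks imported after this point fall back to CPU, leaving GPU
# memory free for the model that is already loaded there.
```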
B: Thanks, Hunter. Hi everyone, my name is Kevin from NVIDIA, and I'll be giving a quick update on the ONNX-TensorRT backend.
B: Since the last update, we have moved to monthly container releases for ONNX-TensorRT. Some of the highlights over the past two releases are that we've added support for eight new operators, and we have also added support for updated opset definitions of existing operators.
C: Thank you, Kevin. I'm Qing from IBM. I will talk about the TensorFlow backend and share a couple of roadmap ideas we have discussed for the TensorFlow backend converter. We have added model testing for all Model Zoo models; it is part of our CI now, and the conversion results are published to our GitHub wiki, as you can see in the link. We just completed the opset 13 support and will have a new release pretty soon.
C: Another new feature is a simple training scenario that is based on the ONNX inference graph, including a few examples as well. As for the next items, we will look into opset 14, ONNX native training, and possibly TF Lite to some extent.

C: As for the roadmap ideas, the first one is a common question: how do we support the channels-last data format in ONNX?
C: This leads to two situations that can be optimized. One is if the model input is channels-last: some data transformation has to happen in the model. Two is if the backend prefers channels-last, which is common for many edge devices.
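A minimal sketch of the layout issue being discussed: ONNX convolutions conventionally use NCHW ("channels-first"), so NHWC ("channels-last") data has to be transposed somewhere in the pipeline. The tensor shape here is an illustrative assumption:

```python
# Channels-last (NHWC) input transposed to channels-first (NCHW).
import numpy as np

nhwc = np.zeros((1, 224, 224, 3), dtype=np.float32)  # channels-last input
nchw = np.transpose(nhwc, (0, 3, 1, 2))              # to channels-first

print(nchw.shape)  # (1, 3, 224, 224)
```

Whether this transpose lives inside the exported graph or in the backend is exactly the optimization question raised above.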
C: The last one on the list is that we would like to explore a sort of standard Model Zoo test for all backends to ensure high-quality conversion, so the community can easily see not only the operator-level coverage but also the success rate for converting real-world models. That's all we have today. Thank you.