Description
Event: LF AI & Data Day - ONNX Community Meeting, October 21, 2021
Talk Title: ONNX Steering Committee (SC) Update - Host Welcome, Progress, Governance, Roadmap, Release
Speakers: Rajeev Nalawadi (Intel), Wenming Ye (AWS), Alexandre Eichenberger (IBM)
A
So all the workshop presentations, the SIG and working group sessions, and the community presentations will be recorded and made available publicly after the conference, probably by Monday of next week. As for the logistics of the meeting, I will be asking everyone to stay muted except when presenting. You can post your questions in the ONNX #general channel or here in the Zoom chat, and we'll pick them up and relay them to the presenter.
A
So that is our ask here today. As for the agenda, we'll be starting out with the main logistics of the ONNX working groups and the community meetup, which we are going through now. Later on, we'll have Wenming present, followed by Alex, and then we'll go into the community presentations. We have about 11 community presentations here, which is very exciting.
A
We are looking forward to it. That will be followed by the third part of the agenda, in which the various SIG and working group members present what's happening in their specific groups. So with that, I will pass it on to Wenming here to cover the state of the state.
C
Thank you, Rajeev, and welcome to everyone. My name is Wenming Ye; I'm a research product manager at AWS AI. Some of the products that I cover include AutoGluon, which is an automatic machine learning framework, and also DGL, the Deep Graph Library.
C
That makes us a relatively large community of very active contributors. On GitHub, our stars have increased by about 16 percent, and the number of repos depending on ONNX has increased by 63 percent. The number of forks also went up. And I want to call out the two most exciting statistics here.
C
One
is
the
model
zoo
number
of
models
in
our
models,
which
has
continues
to
grow
in
a
very
healthy
clip
at
41,
going
to
about
55
models
and
as
the
ecosystem
grows,
we
would
love
to
have
more
models
in
our
model
zoo
and
the
most
exciting
stance
is
that
our
monthly
downloads
has
increased
almost
400
percent
to
1.6
million
per
month,
and
that
really
shows
that
the
ecosystem
is
growing
very
in
a
very
healthy
way
and
the
number
of
users,
including
you
know,
the
developers
which
are
you
know,
truly
starting
to
impact
the
the
the
and
users
that
is
getting
the
benefits
from
onyx
next
slide.
C
Of course, the growth of the ecosystem really depends on the contributions of tools and support from companies in this ecosystem, so I want to mention a couple of new participants.
C
One
is
djl,
which
is
one
of
the
deep
java
library,
deep
java
library
project
from
amazon,
we're
seeing
a
tremendous
amount
of
growth
from
end
users
to
be
able
to
do
a
one
command
line,
deployment
of
onyx
able
to
serve
deep
learning
models
using
onyx
and
also
the
it
really
has
built
a
very
good
experience
for
the
java
community,
essentially
kind
of
cross
communicating
from
the
onyx
projects
to
into
the
java
community,
not
just
limiting
it
to
the
c
plus
plus
or
the
python
community,
and
I
want
to
also
call
out
some
of
the
commercial
companies
that
are
helping
to
support
the
ecosystem
and
providing
the
opportunity
for
for
onyx
to
become
the
industry
standards.
C
Let's also talk about governance and welcome the new members of the steering committee: Alex and myself, Mayank from NVIDIA, and Rajeev from Intel are the newer members. Our special interest groups have also grown, now including Architecture & Infrastructure, Operators, Converters, Model Zoo, and Tutorials. Given the current state of where we are, I would love to call for additional participation, and over the next six months we would love to have people participate in a more specific way. Next slide, please.
C
The
roadmap
discussions
are
very
important
that
helps
us
to
move
in
the
right
direction
to
help
the
community
move
in
the
right
direction
together
and
also
invite
people
to
our
slack
channel.
So
currently
we
have
about
1100
participants
in
the
slack
channel
would
love
to
be
able
to
double
that
in
the
next
half
a
year
or
so
and
also
do
participate
in
the
q
a
on
github.
There
are
a
lot
of
github
issues.
C
A
lot
of
you
know
active
active
discussions,
but
we
would
love
to
have
more
questions
answered,
so
your
participation
is
really
appreciated.
How
would
you
contribute
so
for
someone
who
who's
like
me,
who's
new
to
the
onyx
community?
I
would
like
to
start
with
the
documentation,
so
that
requires
very
small
amount
of
your
effort
and
it's
a
very
low
entry
into
the
contribution
process
and
then
we'd
love
to
have
more
blogs
that
can
talk
about
onyx.
So
please
do
share
the
experience
from
your
development
experience
as
well.
C
Your
customers
experience
in
terms
of
using
onyx
and
also
also
we
would
love
to
have
more
community
members,
go
out
and
talk
about
onyx.
So
if
you
have
a
talk
that
you
are
you're
presenting
at
a
specific
conference
definitely
go
to
the
general
channel
and
let
everybody
know
about
it.
So
that
is
my
part
of
the
presentation
and
next
I'll
hand
that
over
to
alex.
B
Thanks for this wonderful description of the state. I'm going to follow up with the roadmap, which is, as mentioned, super important. I'm Alex Eichenberger; I work at IBM Research, and I'm responsible for the ONNX-MLIR compiler project, where we lower ONNX models down to binaries for CPUs and accelerators.
B
So here are the three topics presented in our first discussion. In the first presentation, Nakaikei from IBM proposed three new pre-processing operators to better support Kaggle-style pre-processing. With these three new operators, along with existing ones, they were able to cover a significant part of the Kaggle benchmark directly in ONNX. That is currently a proposal for pre-processing in the Operators SIG.
In the second presentation, Pocock from Oracle talked about a C API to enable model checking and modification. At Oracle, as in many other companies, they work predominantly in Java, and a C interface would be very useful to them; that is assigned to the Architecture & Infrastructure SIG. The third presentation also argued for better support for emitting models in languages other than Python, for example C# and Java, and that is for the same committee.
B
In the third presentation, Sika also discussed the importance for many vendors of having converters to ONNX emit the high-level operators such as LSTM, RNN, and GRU, as they are easier to optimize. We've seen great progress on this, but they see a little bit of work remaining to be done, both on the converters and on the operators.
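To illustrate why emitting a single high-level operator is easier to optimize, here is a toy sketch in plain Python (hypothetical, not ONNX's actual graph format or any real converter's code): a backend can map one fused node directly to a tuned kernel, whereas a decomposed graph must first be pattern-matched back together, as this fusion pass has to do.

```python
# A graph is modeled here as just an ordered list of operator names.
# DECOMPOSED_LSTM_STEP is an illustrative stand-in for the sequence of
# low-level ops a converter might emit instead of one LSTM node.
DECOMPOSED_LSTM_STEP = ["MatMul", "Add", "Sigmoid", "Mul", "Tanh", "Mul"]

def fuse_lstm_steps(ops):
    """Replace each occurrence of the decomposed pattern with a single
    fused 'LSTM' node - the recovery work a backend must do when the
    converter did not emit the high-level operator directly."""
    fused, i, n = [], 0, len(DECOMPOSED_LSTM_STEP)
    while i < len(ops):
        if ops[i:i + n] == DECOMPOSED_LSTM_STEP:
            fused.append("LSTM")
            i += n
        else:
            fused.append(ops[i])
            i += 1
    return fused

graph = ["Reshape"] + DECOMPOSED_LSTM_STEP + DECOMPOSED_LSTM_STEP + ["Softmax"]
print(fuse_lstm_steps(graph))  # ['Reshape', 'LSTM', 'LSTM', 'Softmax']
```

Real pattern-matching is far more fragile than this list comparison, which is exactly the argument for converters emitting the high-level operator in the first place.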
B
The goal here is to increase the robustness of converters and to better represent quantized models, and that covers the Architecture & Infrastructure and Converters SIGs, as well as Release, for better checking. In the second presentation, Sfs from Intel talked about developing and better supporting ONNX models for end-to-end distributed training scenarios, which is primarily an issue for the Models and Tutorials SIGs.
B
And here are the three topics presented in the third discussion. In the first presentation, MacArthur from Lightmatter talked about his company's need for shape inference with symbolic computation and possibly unknown ranks. There has actually been a good effort toward that in the current release, and that is for the Architecture & Infrastructure SIG.
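As a minimal sketch of what shape inference with symbolic dimensions and unknown ranks means, here is a toy example in plain Python (hypothetical, not ONNX's actual shape-inference implementation): a dimension is either a concrete integer or a symbolic name such as "batch", and an unknown rank is modeled as None.

```python
def infer_matmul_shape(a, b):
    """Infer the output shape of a 2-D MatMul whose dimensions may be
    symbolic strings or concrete ints. Returns None when either input
    has unknown rank, since nothing can be concluded."""
    if a is None or b is None:
        return None  # unknown rank propagates to the output
    (m, k1), (k2, n) = a, b
    # Only when both inner dims are concrete can a mismatch be detected;
    # symbolic dims are assumed compatible.
    if isinstance(k1, int) and isinstance(k2, int) and k1 != k2:
        raise ValueError(f"inner dimensions disagree: {k1} vs {k2}")
    return (m, n)

print(infer_matmul_shape(("batch", 784), (784, 10)))  # ('batch', 10)
print(infer_matmul_shape(None, (784, 10)))            # None
```

The symbolic "batch" dimension survives into the output shape, which is the behavior a compiler needs when the batch size is only known at run time.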
B
In the second presentation, Karumanchi from Intel discussed the use of metadata properties to improve the integrity of data and models. The goal is to carry integrity information from the data and modeling stages all the way to model inference, and that covers a wide range, from Architecture & Infrastructure to Models, pre-processing, and training, as well as runtime.
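One way to picture carrying integrity information from training all the way to inference is the following toy sketch in plain Python (hypothetical, not the actual Intel proposal or ONNX's metadata format): digests recorded as metadata when the model is packaged are re-checked before the model is served.

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest used as the integrity fingerprint."""
    return hashlib.sha256(data).hexdigest()

def package_model(weights: bytes, training_data: bytes) -> dict:
    """Record integrity digests alongside the model, in the spirit of
    key/value metadata properties attached to a model file."""
    return {
        "weights": weights,
        "metadata": {
            "weights_sha256": sha256(weights),
            "training_data_sha256": sha256(training_data),
        },
    }

def verify_before_inference(model: dict) -> bool:
    """Re-compute the weights digest at load time and compare it with
    the digest recorded at packaging time."""
    return sha256(model["weights"]) == model["metadata"]["weights_sha256"]

model = package_model(b"\x00\x01weights", b"training rows")
print(verify_before_inference(model))  # True
model["weights"] += b"tampered"
print(verify_before_inference(model))  # False
```

The point is that the same metadata travels with the model artifact, so any stage of the pipeline can detect that the weights or the training data no longer match what was originally recorded.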