From YouTube: ONNX 4/9/20 Workshop SIG & WG Update
A
And this is a special session for the SIGs and working groups to present their status. I would like to make a little bit of a change: I'd like to ask each presenter to hang on right after their presentation to start taking questions.
A
I know that for this portion of the workshop there's going to be a lot of discussion and engagement, because, you know, the topics cover specific work items, to-dos, what's next on the agenda, things like that. So I'd like the questions for each presenter to be asked right after they finish presenting, right?
A
So, you know, please hang on for the Q&A right after each talk. All right, so with that, I think, according to the plan, it looks like Infra and Architecture will be up next. So let me put up that slide.
C
This is Rama. Unfortunately, neither of the infrastructure SIG leads nor Lou is able to join us, so I will be making that presentation on their behalf.
C
This is work in progress, where a group is investing in building an MLIR dialect for ONNX, as well as other infrastructure to help lower the ONNX dialect into the standard dialect and other lower-level dialects, and so on. Next slide, please.

C
There's going to be a later presentation by Svetlana covering the training support in ONNX, so I will just give a very brief overview at this point.
C
Two key extensions that are used to enable this are a special operator, called the Gradient operator, for computing the gradient of some computational subgraph, as well as another operator, called the graph-call operator, that allows us to call another graph. In particular, this allows the training step to call the inference graph, essentially. And so, just a couple of points more.
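[Editor's note: as a concrete illustration, not from the slides: a minimal sketch of how the Gradient operator can be instantiated with the onnx Python helpers, assuming the ai.onnx.preview.training domain from the ONNX 1.7 training preview. The tensor names x, w, and y are hypothetical.]

```python
from onnx import helper

# Hypothetical names: "y" is produced from "x" and "w" elsewhere in the graph.
grad_node = helper.make_node(
    "Gradient",
    inputs=["x", "w"],             # values to differentiate with respect to
    outputs=["dy_dx", "dy_dw"],    # one gradient output per entry in xs
    domain="ai.onnx.preview.training",
    xs=["x", "w"],                 # differentiation targets (attribute)
    y="y",                         # the value being differentiated (attribute)
)
```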
C
As you all know, ONNX has been primarily a functional representation — that is, side-effect free, without the notion of assignments or side effects. Once we have training, of course, we effectively need to describe some side effects, because the training process is going to update the weights. As shown here, basically, we use a notion of a binding to specify updates to weights, and hopefully Svetlana's presentation will cover more details of the training support. Next slide, please.
C
So one of the dilemmas we face in the context of ONNX relates to the set of operators.

C
On one hand, we have new models, new applications, and new research driving the need for new operators; but there is a cost to adding new operators, in terms of the developer effort to support them in the backends, and if you want to optimize them to exploit hardware features, the implementation cost is even higher. The function construct was introduced in ONNX to help deal with this trade-off. Basically, it allows the semantics of an ONNX op to be described by a default implementation in terms of other, more primitive ONNX ops.
C
This gives backends the flexibility to add customized and optimized implementations at their own pace without compromising expressiveness. So the construct exists, and going forward we would like to make use of it more effectively. In particular, we'd like to identify a core set of primitive ops that can be used to express other ops as functions of these primitive ops, and we would like to leverage the learning from many implementations and frameworks, including the ongoing work in the MLIR space on compiling models down to hardware. This is also related to the ongoing work on onnx-mlir.
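[Editor's note: to make the function construct concrete, here is a minimal sketch assuming the make_function helper from recent onnx Python releases. The "Swish" function, its domain, and the tensor names are hypothetical; it expresses one op in terms of the primitives Sigmoid and Mul.]

```python
from onnx import helper

# Hypothetical function: Swish(X) = X * Sigmoid(X), built from primitive ops.
nodes = [
    helper.make_node("Sigmoid", ["X"], ["S"]),
    helper.make_node("Mul", ["X", "S"], ["Y"]),
]
swish = helper.make_function(
    domain="custom.example",                  # hypothetical function domain
    fname="Swish",
    inputs=["X"],
    outputs=["Y"],
    nodes=nodes,
    opset_imports=[helper.make_opsetid("", 12)],
)
```

[A backend that recognizes Swish can run a fused kernel; any other backend can fall back to evaluating the two-node body — exactly the trade-off described above.]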
C
So let me now briefly describe the extension to the function construct I mentioned earlier. The new API allows us to define functions where the function body depends on statically available context information, such as the values of attributes or the types of inputs.
C
So, finally, there is interesting and important work to be done, and we welcome your contributions and participation.

C
Other ongoing work includes improving the testing and checking infrastructure, improving the build and setup process, the release process, CI, and so on. So that's the end of my presentation. Thank you.
A
So, Rama, maybe I can ask a question. You talked a lot about onnx-mlir. Is that going to be a part of this Infra and Architecture SIG, or will it form a separate one? You know, maybe given the level of interest now with MLIR, would that be a separate working group or SIG to focus on it?
C
Onnx-mlir? That's a good question, actually. Maybe Prashant can take it.
D
The repository is owned by the Infrastructure and Architecture SIG, so it's part of this SIG.
A
Okay, all right. So we'll be working closely with Rama and the team, then.

A
All right. I guess, if there's nothing else — yeah, if you have any questions later on, you know, please feel free to post them in the chat window. So with that, I think we can — let's move on to the next presentation.
I
Okay — thanks, everyone, for coming. I am Emad Barsoum, from the operators SIG.
We have five primary goals that we are trying to achieve. First, keep up with the latest AI progress and make sure that we have a list of operators covering new models that are ready for production. Second, improve the quality of ONNX operators; this is very challenging, and I will discuss later why. Third, reduce ambiguity and increase clarity: for a lot of operators, if the description is not well defined or the unit test is not comprehensive, different runtimes might interpret them differently, and this is something we are trying to address. Fourth, avoid bloating the ONNX spec.
The goal of ONNX is not to be the kitchen sink of all possible operators from all possible frameworks. ONNX is designed to be an interchange format, for inference and for training, focusing on operators that are already in production. You can add custom operators on top of it, but the idea is to keep the spec small while covering a lot of primitive ops from which you can compose more complex ops. Fifth, keep up with PRs and open issues — and here we need all the help that we can get.
Okay. We have a lot of participants attending our SIG meetings, so feel free to join. For our communication, we have a channel in Gitter, the operators channel; we discuss all the issues on this channel as well. We announce our meetings there, with a link to the Zoom meeting itself, for every SIG meeting we do.
I
It's open to everyone, so feel free to check the channel and check any announcements. Also, if you have any issue or question about an operator, follow up in the channel; you can also open PRs and issues regarding operators and the operator documentation. All the meeting notes are under onnx/sigs, operators. At the last workshop we discussed the criteria for adding a new operator; this has been ratified and updated during the SIG meetings, and the final doc is the add-new-op document, which covers both adding a new op and updating an existing one.
I
ONNX 1.7 will be released pretty soon, and it covers a lot of new updates since the previous release. The major update, as Rama said, is the training support. So now we have loss functions like softmax cross-entropy, negative log likelihood, and mean squared error. There are operators for training, and operators not only for training.
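[Editor's note: a small illustration, not from the talk: a sketch of instantiating one of the opset-12 loss ops named above. The tensor names are hypothetical, and these loss ops are defined as ONNX functions rather than primitive operators.]

```python
from onnx import helper

loss_node = helper.make_node(
    "SoftmaxCrossEntropyLoss",
    inputs=["scores", "labels"],   # hypothetical graph tensor names
    outputs=["loss"],
    reduction="mean",              # "none", "sum", or "mean"
)
```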
I
The biggest operator that got added, which was highly requested by a lot of people, is Einsum — Einstein summation notation — which is available in most frameworks and is very flexible for expressing almost any mathematical tensor operation.
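[Editor's note: for readers unfamiliar with the notation, a brief sketch, not from the talk. The same equation string drives both the ONNX op and NumPy's einsum; the example below expresses a plain matrix multiplication.]

```python
import numpy as np
from onnx import helper

# ONNX node: "ij,jk->ik" contracts over j, i.e. a matrix multiplication.
matmul_node = helper.make_node(
    "Einsum", inputs=["A", "B"], outputs=["C"], equation="ij,jk->ik",
)

# The same notation in NumPy, for comparison:
A, B = np.ones((2, 3)), np.ones((3, 4))
C = np.einsum("ij,jk->ik", A, B)   # equivalent to A @ B
```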
There are other operators, like greater-or-equal and less-or-equal, and inverse; some of those operators are actually functions.

I
The biggest one of those, besides Einsum, is Gradient, and this is for the training: the Gradient operator actually generates the gradient of a graph. For quantization, we already have quantization operators in ONNX, and we updated MaxPool, ReduceMin, and ReduceMax to support int8 for quantization. For training, we added Momentum, SGD, and Adagrad for the optimizers.
I
The rate of incoming PRs and issues is higher than the rate at which we close them, so please help as much as possible in that regard; but we are trying to address, or at least look at, most of the PRs and issues. Some of the PRs propose "I want this op" — feel free to submit a PR for the op, assuming it meets the requirements for adding a new op: it needs to at least be available in a framework, and there should be models that use the op.
I
Any PR for an operator gets the "operator" label, so it's easy to filter and see all the PRs and issues that are related to operators. A contributors group will review the PR according to the adding-new-operators guideline, and based on the feedback, if it's signed off, we will merge it. For any big PR or big change proposal, though, we will usually invite you to attend the operators SIG meeting to review it and have an open discussion before we move forward.
I
We also have a lot of open discussions about operators in GitHub, and we are active in GitHub to discuss any operator issue or PR, and on Gitter. For some of the discussions, we decided that once the discussion is done they should be closed; we are looking for the best way to triage the large number of open issues.
I
Feel free to propose any improvement to the operators or the process — for example, better testing, validation, or coverage for ONNX operators — and feel free to propose better documentation. Even submitting a new operator involves some manual steps, so feel free to propose or submit a PR to automate some of that work. For any big proposal, as I said, you will be invited to the SIG meeting to present it.
I
At the last workshop we discussed dynamic shapes. Currently the ONNX standard supports dynamic shapes, and has loops and variable-length inputs; it depends on the model, and this is problematic for accelerators and for IoT devices with limited memory. So at the last workshop there was an open discussion about whether we should add a flag or a hint to address this, and we followed up after the workshop in a meeting with a lot of our partners. There are three cases:
I
For example: ops whose output shape is inferable; ops whose output shape is not inferable; and ops with dynamic output — for example the NonZero op, where the output size depends on how many non-zeros there are. Should we add a hint? Should we have a mode, an attribute, to fix the size? The discussion boiled down to what should be part of the spec and what should be a runtime implementation concern, for instance if we added a maximum-size hint. ONNX, in the end, is a file format.
I
Once you load it, you can convert it and add any restrictions you want. If we add a max-size hint, who decides its value? The researcher who created the model, without knowing which hardware it will run on? The engineer who is deploying it to the hardware? The size usually is not fixed; it depends on the target hardware and its limitations. And if it's a hint, it can be ignored — so if something can be ignored, should it be part of the spec, or should it be an implementation detail?
I
Even if some op can output a variable size, the underlying implementation can have a fixed output size, and anything beyond that can be clipped or error out. So eventually the operators SIG decided that fixed sizes shouldn't be part of the spec and should be a runtime implementation concern.
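[Editor's note: for context, a minimal sketch, not from the talk: this is how a dynamic dimension is expressed in the spec today — a symbolic dim_param instead of a fixed integer — with any fixed-size limit left to the runtime, per the decision above.]

```python
from onnx import helper, TensorProto

# "batch" is a symbolic (dynamic) dimension; the remaining dims are fixed.
X = helper.make_tensor_value_info(
    "X", TensorProto.FLOAT, ["batch", 3, 224, 224],
)
```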
I
The next item is coding style. We have an op-conventions markdown file that describes the coding-style conventions for submitting a new operator to ONNX. Currently, how many people actually know that we have this conventions file? If you look at some of the operators, we are doing a lousy job — even in PR review — of adhering to this convention.
I
In some people's opinion, some contributors simply don't adapt to this convention. So should we enforce it as part of the CI? Should we add a code check — that you are using camel case, capitalization, and so on — so that when you submit a PR it will fail otherwise? This is something we will discuss soon: automating coding-style verification on any PR that gets checked in.
I
Okay, another issue: PRs and issues. A lot of the time we see people opening PRs; we ask a question and we don't get any reply. So if the author does not follow up in X weeks, should we close the issue — and what is a good value of X?
I
We want to start asking people for help, and see how we can achieve a better turnaround in reviewing PRs and issues. We are also open to ideas on how to improve the process: if anyone has a proposal or a better idea, it's all welcome. Feel free to propose it in Gitter, or feel free to attend the SIG meeting and propose it there.
I
Thank you for coming. Please watch the Gitter channel for any announcements, and also feel free to ask questions there. Operator artifacts and documentation are all under our SIG group. Thank you. Any questions?
A

I
That's a great question. There are two answers to this, and Rama already mentioned it in his talk: we have functions. We are trying to keep operators primitive, but any complex operator we can write as a function, and this gives you the best of both worlds. A converter can still convert to a function instead of an operator — it's up to the converter; of course, it can also choose to convert directly to the primitive ops.
I
The function itself has a graph attached to it, composed of primitives that implement the function, and at runtime it's up to the backend: if it doesn't want to implement the high-level op as a fused op for performance, it can simply evaluate the attached graph; but if it wants to improve performance — for example, if it has hardware acceleration that implements this as a single function — it can ignore the attached graph and implement the function as one fused op.
I
So we are trying to make operators more primitive, to give you more flexibility to implement complex ops. However, some complex ops will be submitted as functions; similarly, the loss functions that you see for the training are all functions — none of them are operators. So it's up to the runtime to optimize them, and this helps because the converter does not need to convert everything down to the low level.
A

C
So, basically, the ONNX format has support for representing sparse tensor constants in the model file itself.
C
Sorry, I didn't completely follow the question. Is it about using sparse representations of tensors in a backend at runtime, or is it about sparse tensors in the format?
A
All right, final question? If not, I'd like to keep moving; after operators comes converters.
J
First off, can you do a quick poll if possible? We will review the results. Let's move on to the front-end converters — so, Guenther or Chin, please go ahead.
E
Yeah, I can take it. Hello, I'm Guenther from Microsoft, with a few converter updates. For the PyTorch exporter: the current PyTorch version, 1.4, supports opset 11, and the team has been working very hard on adding new operators. The main new feature is that we can export models larger than two gigabytes.
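[Editor's note: a minimal sketch of what such an export looks like, assuming a PyTorch version of that era exposing the external-data flag; the tiny model below is a hypothetical stand-in.]

```python
import torch

model = torch.nn.Linear(4, 2)      # stand-in for a large model
dummy_input = torch.randn(1, 4)

torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    opset_version=11,
    use_external_data_format=True,  # store weights beside model.onnx,
)                                   # sidestepping the 2 GB protobuf limit
```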
E
You would need that for GPT-2, for example. There's more integration with the ONNX checker, to make sure the ONNX models coming out of the exporter are correct, and there are big improvements to out-of-the-box conversion.
E
For Keras: the released version of the keras2onnx converter supports opset 11, and TensorFlow 2.0 and 2.1. Master supports TensorFlow 2.2. Bidirectional RNNs are fully supported, and there has been a lot of out-of-the-box model conversion testing, for pretty much all the Keras applications and for Hugging Face transformers. For the sklearn-onnx converter, we support opset 11.
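[Editor's note: minimal usage sketches for the two converters just mentioned; the models below are hypothetical stand-ins, and opset 11 matches the support level described above.]

```python
import numpy as np
import tensorflow as tf
import keras2onnx
import skl2onnx
from skl2onnx.common.data_types import FloatTensorType
from sklearn.linear_model import LogisticRegression

# Keras -> ONNX
keras_model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(4,))])
keras_onnx_model = keras2onnx.convert_keras(keras_model, target_opset=11)

# scikit-learn -> ONNX
sklearn_model = LogisticRegression().fit(
    np.random.rand(20, 4), np.random.randint(2, size=20)
)
skl_onnx_model = skl2onnx.convert_sklearn(
    sklearn_model,
    initial_types=[("input", FloatTensorType([None, 4]))],  # batch dim dynamic
    target_opset=11,
)
```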
E
So they see quite a bit of improvement in the number of nodes — like a 50% reduction in nodes — and quite a bit of performance gain because of this. Next is tensorflow-onnx.
E
The released version supports opset 7 to opset 11, and TensorFlow 1.5 to 1.14. After 1.14, the TensorFlow 1.15 release is a bridge between TensorFlow 1 and TensorFlow 2, so quite a few changes were required to get TensorFlow 1.15 to work; that support is in master, and it also includes experimental support for TensorFlow 2.1 and 2.2.
E
So, in theory, you should be able to take a TensorFlow model in eager mode and export it directly to ONNX.

E
The unit tests for those are not all enabled yet, but we are working very hard on this. And I want to pass along a big thank-you to all the people who are working so hard on the converters, because it's so crucial to get models into the ONNX ecosystem.
K
Okay, for the ONNX-to-TensorFlow converter: we support both TensorFlow 2.x and 1.x. In our next release you will see two versions, one converting to TensorFlow 2.x — let me check I'm still sharing the screen.

K
If you want the latest code, please go to our repository to download it. The master branch supports the conversion for TensorFlow 2.x; if you are looking for support for TensorFlow 1.x, you need to use our tensorflow-1.x branch.
K
We are also planning to let users save their converted model as a SavedModel. We are making good progress on supporting opset 11; you can see the latest op support on the support-status page in our repo. Another thing we added to our converter is dynamic-input-shape tests. A dynamic-shape input means you don't know the value of the input to the operator at graph conversion time; you only know the data type and the rank of the input.
A
Maybe we lost Chin for some reason. Let me see.
J
Yeah, we had some discussions on whether we, you know, do training or not, and in which way, right?
This is the first scenario: we believe that, with the current ONNX inference model and graph, we should be able to take it to a backend — either a framework, via a converter, or a runtime — to execute it. Of course, to have this scenario working, the front-end converters do not have to do anything: you just take the existing model and, you know, make it train in the backend or the runtime.
J
Of course, some kinds of assumptions or defaults will be needed for the training hyperparameters, loss functions, and optimizers. We heard MATLAB already did that in some way, so I'd like to hear about the experience there; maybe, you know, others can do the same. Okay, so that's the first scenario; it's quite straightforward.

J
We can just, you know, use existing models. Can we go to the next one?
J
The second scenario actually involves both the front end and the back end. As you can see in the chart, the front-end converters would produce this ONNX trainable model — you know, including all the training information — and then the back-end converters either take it and train in the framework, or train in ONNX Runtime or some other runtime, basically. Okay.
J
As for the framework converters, we have not decided whether, you know, they are going to be able to produce a trainable ONNX model; but for sure, the framework should be able to just run or do the training in that particular framework. So let's move on to the next one.
J
Right, I think that's the reason I put the vote there — the poll. I want to see whether most converters have, or plan to have, support for training. As far as I can see — and I think you can see it as well — it's actually split in five, you know, from completely supporting to no plans.
J
Okay, so: do we have more training APIs in the core to help, let's say, the back-end converters and runtimes validate that it works? For these kinds of questions we will continue in our SIG discussions, so please join our meetings. Okay, thank you. Next one, please — I think we'll move on to the interesting new project, onnx-mlir. So, Doru, please take over.
G
Yes, hello — can you guys hear me? Yes? Excellent.
So, yes: we started this project towards the end of last year, and considering our expertise in, you know, LLVM — in both LLVM and ONNX — we decided, well, MLIR provides us with a way to connect the two really nicely. So, if we go to the next slide — thank you.
So one of the first things that I'd like to say is that MLIR opened the possibility for us to programmatically define new representations, as well as transformations on these representations. One of the things that we wanted to do was to create a representation for ONNX inside this infrastructure, which was released into LLVM about the middle of last year.
G
We ingest, essentially, the ONNX specification, and based on that we automatically define what MLIR calls a dialect, which is a family of operations that captures the breadth of ONNX models. So, basically, we support all these operations: we are able to define them automatically and also to perform transformations on them.
G
MLIR makes it easy for us to do this because it supports a type of representation called TableGen, which can be used as, let's say, a blueprint for defining these operations very succinctly; from that, it can automatically generate all the code — all the classes, all the C++ code — that is used to define these objects. So we absolutely leverage all that functionality to define our operations, as well as to define transformations on these operations. So we have an ONNX dialect, which captures the semantics of the ONNX specification.
G
There are several stages to this transformation, and one of the first ones is to infer the shapes for all operations. So, presented with a model, we can transform it into its ONNX-dialect counterpart, infer the shapes, and once that is done we can actually perform transformations on this code — transformations both within the ONNX dialect itself, and/or lowering it to other, lower-level abstractions.
G
We have defined our own kernel dialect, but there are also other out-of-the-box dialects that MLIR offers, one of them being the LLVM dialect itself, to which we lower all the ONNX operations that the model contains; and then we are able to actually produce the LLVM IR corresponding to that. So, basically, this is the end-to-end picture of what our tool does. If we can go to the next slide, please.
G
This is how MLIR encodes an operation; as you can see, it's very similar to how ONNX is represented, with, of course, a few representational quirks that MLIR adds. It keeps track of all the inputs, the types of the inputs, and all the different attributes — shown here for the convolution operation — and all we do, basically, is use this to generate an ONNX-dialect version of the input.
G
If you can go to the next one, please: this is an example of how TableGen is used. As I said, MLIR uses this to succinctly define operations. So, basically, what we do is automatically transform the operators.md input into this type of representation, where each operation has its own definition, its own input and output types specified, and several other attributes and traits — for example, that it supports shape inference, or that it has no side effects. All of this can be succinctly represented, and from it the MLIR infrastructure can actually generate the code. So if we can go further, to the next slide, please.
G
So that was an example of how we can define the operation itself, but we can also define transformations using TableGen. This is a very simple merge optimization, where we can merge a matrix multiply with an add, all using the pattern-matching powers of TableGen. All we need to do is specify the particular pattern that's in the middle of the screen, and then MLIR will do the rest.
G
Basically, it will perform the transformation — the conversion, if you will — from the top form to the bottom form automatically. Next, please. So, where we currently are with the development: we are supporting several operations end to end — quite an extensive list, which allows us to actually run MNIST.
G
We're currently working to run other models, such as ResNet and several of the models that ONNX has in its model zoo, and we're slowly adding that functionality. We're looking to add more tests in the future, and for any external contributions — if you would like to contribute, or would like to learn more about the project, please don't hesitate to participate.

G
Our project is now part of the ONNX organization, as you know, so it makes it very easy for people to interact with us. So please get in touch.
A
Yeah — so, Michael... I think Michael has a question.
A
So — yeah, okay, all right. So, Michael, we'll answer your question in the chat or take it offline. All right, okay! So with that, let's move on to the next speaker: this is the model zoo.
L
Yes — would you like me to share my screen?
L
Hello, everyone. I've recently taken over the leadership of the model zoo and tutorials SIG, and I'm really excited to be here with you today. First, I'd like to provide some updates and some information about what we're doing, since this is the first time the model zoo and tutorials SIG has been running.
L
We kicked off in late January, and we've come up with a formalized charter on what we represent. Specifically, we're responsible for the collection of state-of-the-art ONNX models — the ONNX model zoo and the onnx/models repository — and we are also responsible for making it easy for users to get started with ONNX and the ecosystem around it. This involves tutorials, as well as the tutorials located in onnx-docker.
L
So I have some updates to share with you. Specifically, I first want to talk about what models we have as part of the model zoo. As you saw in Harry's original presentation, we have 31 models in the model zoo, which is up 24 from last year. We specifically have 28 vision models — of those, 15 are image classification, nine are object detection and image segmentation — and we have four other image models, covering areas including gesture analysis and image manipulation.
L
With our efforts, we continually aim to add more models as we go, and we welcome community participation in this as well. Some updates on what we've been working on in the last few months: if you go to the onnx/sigs repository, you'll find a mission statement with all of the projects that we aim to work on, as well as our ten core priorities, both on the model zoo side and on the tutorials side. So we encourage you to take a look at that.
L
Other than that, we've proposed several models, and we're compiling a list of state-of-the-art models that we would like to consider adding to the model zoo; you can also find that in the onnx/sigs repository, in our folder. We've also decided to start tracking upcoming community ONNX events, and we hope to include these on the onnx.ai website going forward. Now, one of the main technical projects that has been assigned to us in our charter is developing the model zoo CI.
L
This was originally proposed, I believe, by Huawei a while ago, but we've decided to implement it in a slightly different way. The first step is to move all of the ONNX models in the model zoo to Git LFS, which is the large file storage system.
L
The idea is that if we include our model zoo models within the repository itself — instead of including links to various file servers from various companies, whoever the model implementer is — then, one, we have the ability to download all models at once, as well as to implement a model zoo CI far more easily; and two, community users might find it much easier to select specific subsets of models, or to find their locations, without having to individually follow links in the README.
L
You can see our progress towards this initiative in issue 271, which is tracking it. Half of our models have been moved to LFS already, and we're working on the other half; after that, we'll start work on the CI. The next main update is in regard to user experience: we've developed two new ONNX Docker containers, onnx-base and onnx-dev, along with our third offering, onnx-ecosystem.
L
These three Docker containers will enable users to get started quickly with the published ONNX package from PyPI, as well as with a Jupyter notebook environment for getting started quickly with ONNX models. In there we have all the different notebooks for the various converters — how to convert a model to ONNX and get started with inference using ONNX Runtime and other inference engines.
L
So you can definitely check all of that out in onnx-docker, and these images are also published on Docker Hub. On our horizon in the next few months are our guidelines: specifically, we've proposed some model zoo and tutorial entry guidelines, which we're iterating on, to essentially standardize the experience of models in the model zoo. Additionally, we're working on discoverability.
L
We want to expose these materials on the ONNX website — this covers the community events, but also the tutorials — making sure users can find all the information in one place. We're also working on analytics: we're tracking model zoo and tutorials GitHub usage and various other metrics, we're looking at solutions for that, and it's one area we're focusing on, as well as participation. So, like I mentioned, our first meeting was January 30th, and we've had a few meetings since then.
L
All of our meeting notes and recordings are located in our SIG repository, so you can always follow up with that, and we have great members from Microsoft, IBM, and NVIDIA. We are always looking for more, and at the end of this presentation I will show you how you can get involved.
L
That being said, I wanted to share some quick numbers in terms of model zoo traffic. I just pulled data from the last two weeks, based on GitHub traffic: we've had around 5,000 unique visitors with 20,000 views, which means each visitor is visiting multiple pages in the model zoo. From the most common pages, the most popular models that can be inferred are ResNet, MobileNet, and Tiny YOLOv2.
L
Additionally, onnx/tutorials is seeing a lot of traffic as well: we've seen 46 clones in two weeks, and over 10,000 views with 3,000 unique visitors, and we can see that the tutorials regarding PyTorch, TensorFlow, MNIST, and model visualization are the most popular. More analytics are coming, hopefully, in our updates at the next meetings. But what we'd like you to do is join us on Gitter and join us on GitHub. Our next meeting is next week, on April 13th at 1 p.m.
A
All right. I know that we're going to run a little bit over — I know it's coming up on 12 noon right now — but let's move on to the next one. Thank you for your presentation. Okay, and so, Lana,

A
I think you're next, and this is going to be the last update for the SIGs and working groups.
M
Sorry about that. Yes — hello all, again. I'm Svetlana, and I'm leading the ONNX training working group. Today I will — well, I know we are already out of time, so I will be very brief.
M
So, the working group was created more than a year ago, and I have been leading it, but the main work was actually done mostly by Wei-Sheng Chin from Microsoft. So, basically — yeah, we have a number of pull requests, which Wei-Sheng mostly created with some input from other people, and this is going into the ONNX 1.7 release, as has been mentioned already. But ONNX Runtime and the converters do not yet support this ONNX training spec, so it's more like a preview feature for now. Why do we need ONNX training?
M
So this is a brief summary, from the PR, about the training proposal. Fortunately, this was already described today by Rama, so maybe we don't need to spend too much time on it. Basically, we now have this new message, TrainingInfoProto, which includes an initialization graph, an algorithm graph, initialization bindings, and update bindings, and all of this goes into the ModelProto.
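[Editor's note: a minimal sketch, assuming the ONNX 1.7 training preview, of how these pieces fit together in the Python API. The empty algorithm graph stands in for the Gradient and optimizer nodes a real training step would contain, and the model below is a hypothetical placeholder.]

```python
import onnx
from onnx import helper

# Hypothetical training-step graph; a real one would compute "W_new" from "W".
algorithm = helper.make_graph(nodes=[], name="train_step", inputs=[], outputs=[])

info = onnx.TrainingInfoProto()
info.algorithm.CopyFrom(algorithm)
binding = info.update_binding.add()
binding.key, binding.value = "W", "W_new"   # write "W_new" back into weight "W"

inference_model = onnx.ModelProto()         # stand-in; normally onnx.load(...)
inference_model.training_info.append(info)  # TrainingInfoProto lives in ModelProto
```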
M
Those are not operators but functions, because each is a combination of a number of operators; and there are new optimizers as well — I'm not sure whether those are operators or functions — but we have the optimizers. The last one, Adam, I think, did not get into the 1.7 release — or I'm not sure exactly about that one — but the first two definitely got in. All these new operators and functions are in the new training domain, ai.onnx.preview.training, and you can find them there.
M
So, in terms of the next steps: one of the issues that Chin, my colleague, raised was that we need some examples of ONNX training models, so that the converters can work with them.
M
So we will work with Wei-Sheng to create some examples. And in terms of gradients: well, as you know, by the chain rule we have to have gradients for all the operators in ONNX to be able to compute the gradient of a composite function, so we'll have to divide this work, because, as you know, ONNX has more than 100 operators. We need to create those gradients for probably all of them, or most of them.
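[Editor's note: for reference, the chain rule being invoked here: the gradient of a composite function is the product of the gradients of its pieces, which is why every operator on a differentiation path needs a defined gradient.]

```latex
% For y = f(g(x)):
\frac{\partial y}{\partial x}
  = \frac{\partial f}{\partial g} \cdot \frac{\partial g}{\partial x}
```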
M
That will be a lot of work: we need to add to the documentation, for each operator, what its gradient is. We also need to create helper functions in ONNX so that it's easy for people to create the TrainingInfoProto — so that they can take a model that has just the inference information and create a training model out of it, and vice versa as well: if you have a model with training, maybe you want to use it just for inference.
M
In that case, we need to be able to remove the training info. So we will work with the converter teams to help them support ONNX training, and see if they find any problems with the current spec; then we need to update the spec to make sure it actually works well. And also, we need your answers about whether we need to include autodiff in the ONNX package, or whether autodiff would be done by the converters or the runtime.
M
Well, that's basically it, I think. Any questions or comments?
A

M
That's a great question. So I think, currently — I believe there is some place in ONNX where some metadata can be placed, and, depending on where you are exporting from, the model producers might be putting some of that information into the model, for instance; but I'm not so sure whether they do.
N
I guess the question is: do we preserve the optimizer state, like momentum, or the number of training iterations that have been conducted?
F
I don't specifically mean the state; I mean the hyperparameters of the optimizer — like what the momentum parameter was set to, rather than the momentum values during training — because of tracking a model from training all the way through to inference: then you know what the parameters were. When you get some random model, you have to try to recover how to train it.
M
Guys, this question is not really about model training; I think it's more about just model metadata. That already exists in ONNX, right?
C
I think the model format allows a user to store this information, but it is up to the model creator and exporter to make sure it stores the necessary information.
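[Editor's note: concretely, a minimal sketch, not from the discussion: the free-form metadata mechanism being referred to is the model's metadata_props key/value pairs. The keys and values below are hypothetical, since ONNX does not standardize training-hyperparameter keys.]

```python
import onnx
from onnx import helper

model = onnx.ModelProto()        # stand-in; normally onnx.load("model.onnx")
helper.set_model_props(model, {
    "optimizer": "SGD",
    "momentum": "0.9",
    "epochs_trained": "90",
})
```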
A
All right, okay. So thank you, everyone. I guess that brings us to the end of the workshop.
A
The recordings will be made available sometime soon; expect that we will send out an email to follow up on this, as well as share it on the mailing list.
Okay, so please stay tuned and check for that. If there's one thing I would like to ask of all of you, it is to please stay engaged and continue to contribute to ONNX and help make ONNX successful. Great.
B
Quick question on the posting of the recordings: I know it's going to be on the ONNX GitHub, as well as on the LF AI event page, yeah?
A
Yeah, yeah, we will let you know. Okay. And then, please remember to, you know, use the different ONNX resources listed here: go to our website, onnx.ai, go to GitHub and Gitter — but, more importantly, join the mailing list.
I think that's where you can get the information that we want to announce to everyone, all right. But, more importantly, please, again: stay engaged and continue to contribute. All right, so with that, on behalf of, I guess, IBM, LF AI, and also the ONNX steering committee — thank you.