From YouTube: ONNX Roadmap Discussion #5 20201001
A
Well, let's go ahead and get started. For this week we wanted to review the impact analysis spreadsheet that was set up after last week's call. This was basically a way to capture different people's thoughts on what is important, so thanks to Jim for suggesting it and thanks to Harry for setting it up. We listed out all the different features that were suggested in the previous meetings, and then we had scores from different people.
A
This was posted, I think, at least in the Slack channel. So we have folks from the steering committee who added their thoughts, as well as a number of other folks from the community, and you can still go in and add yours after this meeting as well. onnx.ai/impact is a short link to get to the spreadsheet; you can add your own column there.
A
One of the things we noticed is that there are a number of features where everyone seems to agree it's either really important or not important at all. There are some things, like reducing the number of primitive ops, improving the tutorials in the model zoo, or expanding the model tests to all the models used, where everyone seems to say these are really important; they have the highest scores here.
A
There are a few others we noticed where there's a polarized reaction from the community. For example, pre-processing has some really high scores and some really low scores. I think the format for op reference implementations is another one where we have a couple of ones and a couple of threes, so it's split across the board, on opposite ends. So we wanted to use a little bit of time to talk about the ones that are split and get more discussion on those. Then, if we have time, it's probably also useful to talk about next steps for the ones that we feel are important, the ones that we've more or less unanimously agreed are important; we should figure out the next steps for those. Does that sound good? Is there anything else that people want to make sure we cover today?
B
One suggestion: if you look at the averages, I think we need to think about how we want to set up the criteria for which ones we consider high. Obviously three is the highest score, but we need to give a little bit of a fuzzy boundary in terms of what we consider, quote unquote, important. So that's one suggestion.
B
We should try to figure out what's the range that is acceptable to be considered high priority, I guess, or high impact.
A
Yeah, I was thinking to sort it in descending order and then basically work down the list. Right now, anything that's 2.5 or higher is probably high.
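A quick sketch of the sort-and-threshold idea just described. The feature names and scores below are invented for illustration; they are not the actual spreadsheet values.

```python
# Sort features by average score (descending) and flag anything at or
# above 2.5 as high priority. The data below is made up for illustration.
scores = {
    "Reduce primitive ops": [3, 3, 2, 3],
    "Improve model zoo tutorials": [3, 2, 3, 3],
    "Centralized IR optimization library": [1, 3, 1, 2],
}

averages = {name: sum(s) / len(s) for name, s in scores.items()}
ranked = sorted(averages.items(), key=lambda kv: kv[1], reverse=True)
high_priority = [name for name, avg in ranked if avg >= 2.5]

for name, avg in ranked:
    print(f"{avg:.2f}  {name}")
```

The 2.5 cutoff is exactly the fuzzy boundary being debated; it could be tuned, or replaced with a cutoff on rank.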
A
So it's more just guidance. Certainly, if there's something that goes against the principles and is considered a bad idea, we want to eliminate those. But if there's something that is reasonable, maybe not as high priority for the overall community, but someone is willing to do it, that should be fine. This is just to give a rough idea of what the community feels is important.
A
All right, so I was working my way up from the bottom to find some items that were fairly bimodal, with both high scores and low scores. If I'm looking at this right, I think the first one I see is this one: change ONNX IR to multiple IRs, and centralized IR optimization libraries. It looks like we have a couple of ones, a couple of twos, and a couple of threes. Do you have some comment on this?
E
Yeah, the speaker is Alex Eichenberger. Right, yeah, and we can talk about the scoring.
A
Do you want to speak briefly about what you feel the advantage of doing this is, just to make sure everyone's level set on that?
E
Well, I think there was a feeling that, in terms of the ONNX IR, there were a lot of things that could be reduced to a concatenation of simpler ops, and a multi-level IR would assist in doing that. I think maybe it was Harry that made this proposal on the onnx-mlir GitHub, I think so, but yeah.
A
And when we say IR, are you referring to the protobuf definition plus the implementation that's in ONNX today, or something else?
E
But, you know, whoever suggested it could maybe clarify that.
C
That was my interpretation, I guess. You can also look at it and see if it matches your interpretation.
A
Okay, thanks for starting that discussion. I guess we can follow up there and continue discussing this one. Since we're splitting it, I know people may want to change their scores for the second part of this, the centralized IR optimization library.
A
Again, I don't know which of the previous scores reflected this part as opposed to the other one. For me, my score was partly determined by this, and my thinking is that core ONNX doesn't have optimization libraries, right? We actually have a separate optimizer repo that we split things off into, and things can be done there. If it's important to someone to add these there, that's fine, but it's not part of the core ONNX spec.
F
We do not mandate that the optimizers be updated as well, and this has led to a lot of the optimizers being in a buggy state right now.
B
Okay, I know that there are a lot of optimization libraries out there. I think when we went to China last year there were two talks about optimization libraries that people had built for the ONNX IR. So maybe we don't have to centralize it, but maybe we can have a place where we can show that these are the tools available to the people who are going to use ONNX, instead of trying to centralize all the optimization libraries.
A
So it sounds like there's a lot of agreement that this is lower priority, and people can enter their scores to reflect that. All right, working up the list, let's see other items that are divergent. Okay: introduce a format for op reference implementations. I believe this is specifically about requiring ops to be implemented in a specific way when adding the op to ONNX, like saying there needs to be a Python implementation of it, or something like that. Part of that already exists today.
G
Yeah, so to add new ops today we need to provide a Python implementation for the added op, yes. But the problem here is that they don't have a unified format.
G
Style, or what? It's a combination: a standard interface and a coding style.
G
Currently the only shared property is that they are Python functions, and that's it.
G
It's for testing, and also to tell you what the text description in the spec means. Sometimes the text description can be ambiguous, and the implementation can complement it.
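As a sketch of what a unified format might look like, here is one hypothetical shape. The `OpReference` class, its fields, and the `evaluate` method are invented for illustration; ONNX does not define this interface, and real reference implementations operate on tensors rather than flat lists.

```python
# Hypothetical unified format for op reference implementations:
# a standard interface (inputs in, outputs out, attributes as kwargs)
# wrapping the plain Python function that ops already ship today.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OpReference:
    op_type: str          # e.g. "LeakyRelu"
    since_version: int    # opset version the implementation targets
    impl: Callable[..., list]

    def evaluate(self, inputs: list, **attrs) -> list:
        """Run the reference implementation on a list of inputs."""
        return self.impl(*inputs, **attrs)

def leaky_relu(x: List[float], alpha: float = 0.01) -> list:
    # Element-wise LeakyRelu on a flat list, for illustration only.
    return [[v if v >= 0 else alpha * v for v in x]]

LEAKY_RELU = OpReference("LeakyRelu", since_version=16, impl=leaky_relu)
print(LEAKY_RELU.evaluate([[-1.0, 0.0, 2.0]], alpha=0.1))
```

The point is not this particular class, but that every implementation would share one calling convention, which is what makes automated testing against the spec text possible.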
A
So I know earlier, in one of the other meetings, there was a discussion about whether this should be a requirement or not. The fact is, it is currently a requirement, and I think this line item is basically a tweak on that: it's already a requirement, but when you submit it, it needs to look a certain way. Is that right? Yeah? Okay. In that case, I think I would probably change my personal score here.
G
Yeah, and if we have a universal format for all of them, we could actually have a small, non-optimized runtime that executes ONNX.
F
Yeah, I think as long as we are just talking about having a reference implementation, it completely makes sense to me, for the reasons you already mentioned. But the moment we start talking about having a simple runtime, I think we make this very complicated, because now you don't just need an implementation for all these operators; you need an execution flow, and you need an internal IR which can process the model, and everything, right? I don't know if ONNX requires that sort of complexity. I don't know what use case we would solve by having this IR implementation.
G
It's really just for testing programs. It doesn't...
F
So I think everything related to the op itself, such as resolving ambiguity or ensuring correctness, can be achieved by op-level testing, function testing. Also, we've spoken about this before: it can be achieved by using ONNX Runtime to validate the expanded functions.
F
That is to say, can we still use ONNX Runtime for testing something like this?
F
ONNX Runtime also has a way of plugging in Python ops, right? I'm not entirely familiar with how, or whether, it can be used in the current scenario. But since we are already mandating that we have a Python implementation for every op, we could use that implementation to plug in as a custom Python op.
F
And continue using it, because the main issue is that when we add operators to ONNX, they are not available in ONNX Runtime, or any other runtime for that matter. So if it's just a matter of one or two ops that we are adding in a new opset, then I think we should see if the Python op can be utilized here.
G
Yep, I'm totally okay with the idea of using ONNX Runtime to execute some complete graphs. I just want to make sure the definition of the op is correct.
G
Yeah, so if we want to use ONNX Runtime to execute the Python here, the Python code needs to meet the requirements of ONNX Runtime's Python op.
F
For the Python op, actually, we will have some prototype, but this is a good idea, that we can use a Python op to validate the op. The only limitation I can think of is that the Python op is based on the custom op API; it doesn't include every schema that's in ONNX, so schema validation may be the missing part of the whole pipeline. For other purposes it's totally fine.
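A rough sketch of the validation flow being discussed, using pure-Python stand-ins. `runtime_impl` here is a hypothetical placeholder for running the same single-op graph through ONNX Runtime; the actual custom-op / Python-op API is not shown.

```python
# Validate a new op's Python reference implementation against a runtime,
# by running the same inputs through both and comparing numerically.
# Both functions below compute LeakyRelu, written two different ways.
import math

def reference_impl(xs, alpha=0.01):
    # The Python reference implementation mandated when the op is added.
    return [x if x >= 0 else alpha * x for x in xs]

def runtime_impl(xs, alpha=0.01):
    # Hypothetical stand-in for executing the single-op graph in a runtime.
    return [max(x, 0.0) + alpha * min(x, 0.0) for x in xs]

def validate(xs, alpha=0.01, tol=1e-6):
    ref = reference_impl(xs, alpha)
    got = runtime_impl(xs, alpha)
    return all(math.isclose(r, g, abs_tol=tol) for r, g in zip(ref, got))

print(validate([-2.0, -0.5, 0.0, 3.0], alpha=0.1))  # → True
```

The schema-validation gap mentioned above is exactly what this comparison does not cover: it checks numerics, not input/output type constraints.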
A
A good item to follow up on, to see how we can make use of these reference implementations. All right, so the other item I see with very divergent numbers is pre-processing. This is one I think we also talked about briefly in one of the other meetings. I can explain my score, and then I'd love to hear from others as well.
A
For pre-processing today, you basically have to go re-implement it on different platforms and optimize it to work there, and so on. From that perspective, it'd be nice to have more pre-processing capabilities inside the ONNX graph, so it's more...
H
Portable. So I have a question about that: how wide a set of pre-processing things are you considering?
A
Exactly, and that's why I don't think it's possible to cover all possible data transformations, but there may be some common ones. For image-based ones, scaling and things like that, some of those are already in ONNX, but there may be some others that we need to look at adding. We need to take a closer look at the types of image models out there and see what kind of pre-processing they're doing, especially with things like bounding boxes.
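As a minimal illustration of the kind of pre-processing that lives outside the graph today, here is the common scale-and-normalize step many image models require. The mean/std constants are the widely used ImageNet values, shown only as an example; plain Python lists stand in for tensors.

```python
# Typical image pre-processing that callers must currently re-implement
# per platform: scale pixel values to [0, 1], then normalize per channel.
MEAN = [0.485, 0.456, 0.406]  # common ImageNet per-channel means
STD = [0.229, 0.224, 0.225]   # common ImageNet per-channel stddevs

def normalize(pixels):
    """pixels: list of (r, g, b) tuples with integer values in 0..255."""
    out = []
    for px in pixels:
        out.append(tuple(
            (c / 255.0 - MEAN[i]) / STD[i] for i, c in enumerate(px)
        ))
    return out

result = normalize([(255, 0, 128)])
print([round(v, 3) for v in result[0]])
```

Moving a step like this into the ONNX graph is exactly what would make the model portable without per-platform re-implementation.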
A
I don't think we handle those well. And then on the text models, yeah, tokenization is a much bigger, more complicated set. I haven't looked into it in depth; I don't know if there are some common things that we might be able to abstract and make available, or whether it's just so independent for every scenario that it's not.
J
Hi, it's Nick here, from IBM. I know recently the TensorFlow community had an RFC for text processing, including BERT-style tokenization; it's pretty comprehensive. So that's something we might want to look at as a potential guide, specifically for the text pre-processing and tokenization aspect. It may be that we can support, again, as I said, not everything, but a fairly wide range of common use cases.
J
Yeah, definitely. I think the assumption would be that they're all export-only. Obviously, if it's simple tokenization, like whitespace or regex or character-based, then that's fine, but for the BERT style it would be more like exporting the pre-trained tokenizer.
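The "simple tokenization" cases mentioned here are easy to sketch in plain Python; the snippets below are illustrative only and are not ONNX operator definitions.

```python
# Three "simple" tokenization strategies of the kind mentioned above:
# whitespace, regex, and character-based. A BERT-style tokenizer would
# additionally need its learned vocabulary exported with the model.
import re

def whitespace_tokenize(text):
    return text.split()

def regex_tokenize(text, pattern=r"\w+|[^\w\s]"):
    # Splits into word runs and individual punctuation marks.
    return re.findall(pattern, text)

def char_tokenize(text):
    return list(text)

print(whitespace_tokenize("ONNX roadmap call"))  # → ['ONNX', 'roadmap', 'call']
print(regex_tokenize("pre-processing ops"))      # → ['pre', '-', 'processing', 'ops']
```

These are stateless, which is what makes them plausible as graph ops; the pre-trained case carries state (the vocabulary), which is the harder export problem.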
E
I'd just like to say that pre-processing and the data-frame-as-input/output item at the top are probably also linked. I mean, at least some of the interest that we have in the data frame is precisely for some kind of pre-processing before your model.
J
Ops, let's say, for example, a tokenization operator, which is effectively going to work row-wise across the column of a data frame. But the data frame abstraction itself, which has a bag of columns, or cells, or whichever way you want to look at it, is something slightly different, and it would have a map operator that maps the pre-processing operator across the rows of a column. So I guess, at a high level, that's the separation of abstraction there.
E
Yeah, I guess I see that, but I just think that some of the rationale for both of these things is similar, and maybe we should look at them in conjunction.
J
Yeah, very closely linked.
A
So we're at time for this session. Thank you all for the input and the discussions; feel free to add your own columns to this with your scores. I think an action item is to turn this into something more actionable: probably some of these need an RFC, or something written describing what exactly we're proposing. So I'll work with Harry and the other steering committee members offline to turn this into something like that.
A
That way we can start making progress on some of these things, and also follow up with, for example, the model zoo folks, to see what the plans are there. Okay, we're one minute over, so we'll end the meeting at this time. Thanks, everyone.