From YouTube: ONNX Roadmap Discussion #2 20200909
A: Yeah, okay, hopefully everyone can hear me now. So thanks to everyone for joining the second meeting of the ONNX roadmap discussions. We had the last meeting last week, and the recording of that has been posted. We're also recording this so that we can have it available for future reference. So today, Harry helped put together a couple of topics that we want to cover, primarily around op definitions, shape inference, and the IR itself in the roadmap documents.
A: We had a number of comments from several people about these topics, so these are the ones we wanted to kind of dive into in the half hour today. So the first one is around shape inference, and I want to start the conversation with shape inference before moving on to the other ones.
A: Hey Roshan, can you hear us? Hey, hi. Yes, hey, how's it going? So thanks for providing this feedback and these comments in the roadmap document. Do you want to say a little bit about what you're proposing here?
B: So yeah, basically, in the past I had lots of issues with shape inference, so I was referring to making or building a generic shape inference infrastructure where, whenever something changes in the IR, we recompute the shapes and make them up to date with all the new information. So it was very generic towards shape inference.
B: Right, that's one aspect of it. For example, let's look at the hybrid case where we have a model and, say, an Upsample op, and we don't know the shape; but we do copy propagation, and now we have some information available, and with this new information, what we can do is...
B: Let's say we know the scale now. For example, a very common case: earlier, when exporting to ONNX, the exporter used to add extra ops for Upsample; for the scaling, it used to add multiplication and division ops feeding the Upsample. So when we do optimization, we remove those particular ops, and at that point we know the value of the scale.
B: So I was referring more in that direction: how we should ensure that shape inference and optimization go hand in hand, and basically we improve optimization as well as shape inference together.
D: So this has to do more with how best we can write optimizations so that they can also aid in shape inference after the optimization, right?
D: But for the current shape inference, it's difficult to achieve that, because you don't know what optimizations can be applied.
D: So the way I'm seeing this, it applies more to optimizations than directly to shape inference. Am I right?
B: Correct me if I'm wrong. So we have optimization passes, right, and we have shape inference, which I'd treat as a separate entity. I was referring to making generic improvements in shape inference, but I was also pointing out that maybe this can also be connected to optimizations so we improve both together. But I agree that if you keep it segregated, then these become two independent issues.
A: All right, so it looks like there's probably some low-hanging fruit around, you know, updating the model checker and things like that, that can be done, and then there are some other things that might require more discussion.
D: As Rama put it earlier, even today what is being requested does exist, in the sense that once you run optimizations, if the unknowns are now known after the optimizations, then you can always run shape inference again and use that to propagate the shapes. So say, after optimization, the scale is now known for Upsample; then just running shape inference again should work.
B: I think what I have proposed is already there; it's just that we need to call both in consecutive loops to achieve that. But the infrastructure is already in place.
C: Yeah, so I think this is also related to the point Prashanth was making, in the sense that, the way the code is structured, we have the shape inference methods for each operator in place, but a different runtime could choose to invoke the shape inference for nodes at a suitable point in time.
C: So in principle, I mean, it is already possible for a runtime to do optimizations and compute known values and then call the existing shape inference method, which exploits that information. But, as Prashanth said, the way the code is structured now, the optimizations belong to a different framework; that framework, though, can call the shape inference to make use of the information that the framework has computed via optimization.
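
The pattern described above, re-running shape inference once optimization has made new values (such as an Upsample scale) known, can be sketched in a few lines of Python. Only onnx.shape_inference.infer_shapes below is the real ONNX API; the optimization passes and the number of rounds are placeholders standing in for whatever constant folding or copy propagation a tool actually applies.

```python
# Sketch only: alternate optimization passes with ONNX shape inference so that
# shapes which become computable after a pass get propagated into the model.
import onnx
from onnx import shape_inference

def optimize_then_infer(model: onnx.ModelProto, passes, rounds: int = 2) -> onnx.ModelProto:
    """`passes` is a list of callables ModelProto -> ModelProto (e.g. constant
    folding or copy propagation implemented elsewhere); they are assumptions
    for illustration, not part of the onnx package."""
    for _ in range(rounds):
        for apply_pass in passes:
            model = apply_pass(model)
        # Re-run the standard per-operator shape inference over the whole
        # model, e.g. so a now-constant Upsample/Resize scale yields a shape.
        model = shape_inference.infer_shapes(model)
    return model
```
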
A: All right, so I want to make sure we cover some of the other topics as well. So for this one, I guess, if there's more discussion to be had, I think the Slack channel for the architecture/infrastructure group or the discussions board on GitHub would be great places to flesh out some more of these details about what is needed, and then we can take it into account for roadmap work items.
A: So for the operator definitions, we had a number of folks suggesting to reduce the number of primitive operators, since other operators can be composed from lower-level operators, and also a couple of comments about reference implementations.
C: It is, but yeah, I mean, I think it is a requirement. It is a requirement right now to add a reference implementation for the op, but the point I was making was that it is not organized in a fashion where you can conveniently call these different ops' reference implementations in order to evaluate a sequence of ops, for example if you want to evaluate a subgraph or a...
D: So, Rama, you mentioned that, so yeah, I do agree that we've already made it mandatory to check in a reference implementation for new ops. But to the point that you're making, being able to run subgraphs, right, and especially for validating function definitions, I was...
D: I was actually thinking of adding some runtime, like ONNX Runtime, to the CIs to do just this, because usually we construct functions from primitive operators which are pre-existing, which means that any runtime which claims to support all operators, for example ONNX Runtime does claim that, does have implementations for those primitive ops.
D: So we should be able to verify, we can verify the function subgraph using ONNX Runtime, and we use that for the validation, instead of going this route where you not only add a reference implementation for individual operators but also make it possible to run entire subgraphs.
B: And adding to this: for example, when, let's say, the reference implementation is not runnable, then it's very easy for it to get outdated. For example, if something changes, it's possible that the runtime implementation might diverge from what the author is expecting it to do. So it's also easy to maintain it as, let's say, a unit test, and we ensure that the runtime is not diverging.
B: Not diverging from whatever the reference implementation is. And let's say tomorrow the runtime changes, or somebody makes changes to the runtime; then we always have that standalone reference implementation to validate the parity.
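
The standalone, runnable reference kept as a unit test that B describes might look roughly like the sketch below. LeakyRelu is only an illustrative op and the test itself is an assumption for this discussion; onnx.helper, numpy, and onnxruntime are the real APIs used.

```python
# Sketch: a pure-numpy reference kept as a unit test, so a runtime change that
# diverges from the op's intended semantics is caught automatically.
import numpy as np
import onnx
from onnx import helper, TensorProto
import onnxruntime as ort  # any runtime under test could be substituted here

def reference_leaky_relu(x: np.ndarray, alpha: float = 0.01) -> np.ndarray:
    """Standalone reference implementation living next to the op's spec."""
    return np.where(x > 0, x, alpha * x)

def test_runtime_matches_reference():
    node = helper.make_node("LeakyRelu", ["X"], ["Y"], alpha=0.1)
    graph = helper.make_graph(
        [node], "leaky_relu_parity",
        [helper.make_tensor_value_info("X", TensorProto.FLOAT, [3, 4])],
        [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [3, 4])],
    )
    model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])
    sess = ort.InferenceSession(model.SerializeToString(),
                                providers=["CPUExecutionProvider"])
    x = np.random.randn(3, 4).astype(np.float32)
    (y,) = sess.run(None, {"X": x})
    # If the runtime ever diverges from the reference, this assertion fails.
    np.testing.assert_allclose(y, reference_leaky_relu(x, alpha=0.1), rtol=1e-5)
```
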
C: So, just to respond to Ashwini's earlier comment: if there is a reference implementation based on that, of course, that's also an alternative. I mean, that is...
D: What I was mentioning was directly using ONNX Runtime in the CIs for validating the function. So when we write a function, we create two test cases, right: one with the expanded function body and one with the actual function itself as the node. So we can use ORT to validate the function body itself, because this is based on the assumption that the function is composed of primitive ops which exist in ONNX.
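
As a rough illustration of the CI check D is describing, the sketch below builds a made-up composite (y = x * scale + bias) out of the pre-existing primitive ops Mul and Add, runs it with ONNX Runtime, and checks it against the intended semantics. The composite and the check are assumptions standing in for a real function body and its test; onnx.helper, onnx.checker, and onnxruntime are real APIs. The companion test case that uses the function as a single node would be built the same way.

```python
# Sketch: evaluate an expanded "function body" made only of primitive ops with
# ONNX Runtime and compare against the intended semantics.
import numpy as np
import onnx
from onnx import helper, numpy_helper, TensorProto
import onnxruntime as ort

scale, bias = np.float32(2.0), np.float32(0.5)

body = helper.make_graph(
    [
        helper.make_node("Mul", ["X", "scale"], ["scaled"]),
        helper.make_node("Add", ["scaled", "bias"], ["Y"]),
    ],
    "affine_expanded",  # stand-in for a function's expanded body
    [helper.make_tensor_value_info("X", TensorProto.FLOAT, [2, 3])],
    [helper.make_tensor_value_info("Y", TensorProto.FLOAT, [2, 3])],
    initializer=[
        numpy_helper.from_array(np.array(scale), name="scale"),
        numpy_helper.from_array(np.array(bias), name="bias"),
    ],
)
model = helper.make_model(body, opset_imports=[helper.make_opsetid("", 13)])
onnx.checker.check_model(model)

# Any runtime that supports the primitive ops (Mul, Add) can execute the body.
sess = ort.InferenceSession(model.SerializeToString(),
                            providers=["CPUExecutionProvider"])
x = np.random.randn(2, 3).astype(np.float32)
(y,) = sess.run(None, {"X": x})
np.testing.assert_allclose(y, x * scale + bias, rtol=1e-6)
```
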
D: But we can't use ONNX Runtime for new operators. If you were to have a subgraph which includes new primitive ops which are added in the new release, then we can't expect any other runtime to have that implementation, because ONNX...
D: So I think the first point is that we do need a reference implementation for every op. We've already made it mandatory that when you are checking in a new operator, the reference implementation for it should be included, and that reference implementation is in Python, and the way we do it today is we add it with the test case itself.
D: What we are discussing is: should we have a way of also running a complete subgraph using these reference implementations?
D
A
lot
of
other
work
and
for
this
particular
and
rama,
mentioned
the
use
case
of
being
able
to
verify
function.
Bodies
to
that
I
was
saying
instead
of
creating
something
different
and
onyx,
we
can
just
use
onyx
runtime,
based
on
the
assumption
that
all
the
ops
inside
the
function
body
will
be
the
pre-existing
primitive
ops.
E: Sorry, so I would like to chime in a little bit. Actually, doing this kind of thing, we are making ONNX very complex. We do not need any kind of reference implementation; indeed, we do not need that. What we need, what we are talking about here, is ONNX compliance, right?
E: When we define an ONNX operator, we want to let people know what it means, so that if he or she wants to implement this op, then his or her logic is correct.
E: Who will judge the logic, right? How can you? I mean, honestly, technically I understand what you guys are saying, I understand that, but step back a little bit, right? Honestly, a spec means that I have a description of all the operators, and in the description I also let you know the algorithm, I mean, the logic of this operator, right? Basically, it's in English, right?
C: All right, wait, so the point of the reference implementation is, first, to remove the ambiguity in the English text, right? So it's not...
E: I mean, we can advocate for it. So in our current test cases, right, in our current test cases, we are now advocating for people to say: okay, can you write a piece of code, actually, instead of just listing data there saying, okay, this is the input and that is the output, this is the test data; can you also write a piece of logic to be able to randomly generate some test cases there, right? We're advocating for that, but I don't...
H: To support that point here: I don't think ONNX as a spec needs to require a reference implementation. I think if you look at standard APIs in general, they come with a conformance test in the spec, but they don't come with a reference implementation, you know.
H: ...the cases in our spec. I think, rather than trying to solve this problem with a reference design, shouldn't we make the spec more clear and make sure that the compliance test that goes with the spec actually tests all these corner cases that we're worried about?
E: I agree, I agree. The reference implementation is actually there, and the reason people are trying to ask for it is because our spec is not clear. So, I mean, remember, honestly, the standard, the spec, is a very, very cheap thing; it's not heavy engineering stuff, so the spec should be clear enough.
D: So I think 'reference implementation' sounds like a very heavy term, and that may be the reason why you think we are complicating this. But actually, we've seen examples where the English description or definition that we are talking about has not been enough. In an ideal world, yes, what you are suggesting should be enough, but...
C: So, sorry, maybe there are two things; let me try to distinguish them. One is something we already have as a requirement, which is some implementation for the op.
C: I assume there is no suggestion that we should remove that requirement, that we are going to continue to have that requirement, right? But then there is the other thing which I was suggesting, which...
C: Well, as noted in the comment up there, I guess this incompleteness is in terms of earlier operators, I guess, and then I mentioned the need for testing, so that is an extension there. I understand, I can buy the argument if you're saying the extra implementation effort is not worth it; I guess I understand that. But for the individual ops themselves, you need something formal, which is what we already have as a requirement, right, and that seems to make sense to me.
D: Yeah, I think, in my opinion, we should keep that.
D: We have some reference implementation for the op which is being checked in. Data, yes, but, I mean, if a function takes the input for your operator and generates the output, then you have to write the code to implement that operator, right? And we are calling it a 'reference' because there is no sense of perf optimization or anything expected from it.
E: Actually, I'm also advocating that. I'm not against people writing some Python code to generate the test data when they propose an operator or an operator update; I'm not against that. But I am definitely against saying that having a reference implementation is a must when proposing an operator spec; I'm definitely against that.
E: But let's say, when I'm proposing an operator spec, when I get to the test data part, I can write a Python function to read random input and then generate the output, and then I say this is the test data; I can do that. I can also just give you a fixed data set and say: okay, this is a fixed list of inputs and the corresponding fixed set of outputs.
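
The two options E contrasts can be written down concretely; the sketch below uses Softplus purely as an illustrative op, and the dictionaries stand in for however a proposal actually stores its test data.

```python
# Sketch of the two ways to provide test data for a proposed op.
import numpy as np

def compute_softplus(x: np.ndarray) -> np.ndarray:
    """Generating logic: softplus(x) = log(1 + exp(x))."""
    return np.log1p(np.exp(x))

# Option 1: generate data from random inputs with a fixed seed; the generating
# logic stays visible and more cases can be produced on demand.
rng = np.random.default_rng(0)
x_random = rng.standard_normal((3, 4)).astype(np.float32)
case_random = {"input": x_random, "expected": compute_softplus(x_random)}

# Option 2: a single hard-coded input/output pair with no visible logic.
x_fixed = np.array([[-1.0, 0.0, 2.0]], dtype=np.float32)
case_fixed = {
    "input": x_fixed,
    "expected": np.array([[0.3132617, 0.6931472, 2.1269280]], dtype=np.float32),
}
```
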
D: No, the problem with the second case that you mentioned is that it's just not enough for someone who's implementing this operator in a runtime to properly test their implementation, because what we have seen is that there are one or two test cases, or even if there are five or seven test cases, there are operators, like Resize and others, that are so complicated, they have so many different attributes which change the outputs, that those cases are not enough for you to test your implementation. Instead, if you have a method, a function which implements this, then someone who's implementing this in a runtime has the ability to properly test what they're writing.
D: I have also come across this issue where I've just seen already-generated data for the test, and it has just not been enough, and it's difficult to find the author of the operator and go ask questions, right? So having some simple code available just solves this problem for everybody.
F: Yeah, clearly, since you asked which operator: I would say start with Resize and see how many test cases, how many variations, we can make with the current code.
D: Okay, I implemented Resize in ONNX Runtime, so I can tell you: Resize fortunately had a Python implementation, but without it, it would have...
H: Actually, one of the reasons I don't like a reference design is: if you have a spec and well-defined test cases that cover all the corner cases that we care about, and there may be a corner case that's not covered by these test cases, then a runtime may decide to, you know, it's basically undefined behavior at that point, right? And runtimes should have the freedom to do whatever they want if it's an undefined-behavior case.
F: How can we know what the expected result is, based on, you know, five different attributes, if there's no reference implementation for...
A: Hey folks, this is a really great discussion, but we are over time. So, if folks are okay with going on for a few more minutes, I think we can, but probably in about 10 minutes we're going to want to wrap this up, and then we'll probably want to continue offline. So go ahead, and we'll continue for a few more minutes.
E: Okay, so I just want to add one more thing here. It's kind of like, the reason that you want to add a reference implementation here is that you think these test cases are not good enough, the coverage is not enough. So think about it: if we are saying that we want people to have some test cases there, how do they generate those test cases, right? Suppose that they do have some logic offline; I believe they do have some logic offline.
E: The only thing is, I don't want to add the workload and also the burden here of, I mean, kind of converting their offline code, maybe written in some other language, maybe in Go, into this kind of Python thing.
C: So if you think that's not a good idea, we should actually go and change it, because the existing requirement says that when you add a new op, we should provide a reference implementation in Python. But I agree that I think most of us are aligned on the high-level goal that the spec has to be complete and unambiguous in whatever ways we can make it; I think that's clear, but yeah.
D: And to add to what Rama just said: yes, the spec should be complete, and we all agree on this point. Now the next question is, how do we enforce it, right? Because, yes, over here we all agree, but the reality is the spec is there, there are issues, and that's why we decided, that's why we came to the conclusion, that we want some reference implementation, right?
D: Yes, so usually ops go into ONNX first and then a runtime implements them. So an implementation in ORT is not always available; it's only available if we have...
E: So the spec thing, I mean, by design, it is something coming after the industry engineering frameworks, and an industry runtime is actually something after that. Unfortunately, that's the fact. So that means, actually, when people have some practice and also see that, okay, this is a really common thing, and this is widely used in our models...
D: No, no, so one of the requirements for adding a new op is also that it should be implemented in a well-known framework. What I meant is, it cannot always be, it may not always be ONNX Runtime.

E: Regardless of where this kind of operator proposal is from, wherever it's from, I believe, and this is what I'm going to trust, that the one who proposed the operator already has a very, very good implementation in his or her runtime or framework, and actually he or she also has very good usage of it, and it has been proven to people that the operator is useful, and then he or she goes to ONNX to say: now I'm going to propose it for the spec.
D: I would say all of this, except that there's one more requirement: that there also be a reference implementation in Python. Now, this is already approved, and I think we can use the operator SIG meeting to go over this again, because right now I don't think Nikal or Ahmad are here. So we should use the operators SIG meeting to have further discussion, if people don't mind, because this is something which already exists.
A: Right, if the proposal is to change the policy for adding a new op, the operator SIG is the right place to do it. So I want to wrap up the meeting for today, since we're actually already 15 minutes over. Thank you for the discussions and thank you for the inputs. I think this is a really interesting and important topic to resolve.
A: I would suggest that, in addition to the SIG conversation, you can also start a discussion on the GitHub discussions board, so that we can get more community members' thoughts on this. I think this really boils down to, you know, are we talking about reference implementations or compliance tests? Kind of getting to the root issue here, rather than the structure of the reference implementation and whether we can run some graphs on it.
A: I think we can get down to the more fundamental question that we probably need to answer first. So anyway, I want to say thanks to everyone, and in the interest of time, we'll wrap up here, and we'll see everyone next week. Next week the meeting is on Thursday, and we'll see you all then, to continue discussions on the roadmap.