From YouTube: ONNX Roadmap Discussion #3 20200916
A
Right, and so I think we can alternate. This might be too many topics for just one 30-minute session, so I think we might want to talk about, say, large model support later, but I thought we could at least start with three other topics: error handling, then revamping or enhancing the Model Zoo, and also the quantization side. That one was also tied to some of the Model Zoo side of things, but it's basically implementing more quantized operators, or also hosting quantized models on the Model Zoo. So those are the topics that were suggested by the community, and I wanted to have this chance to talk about them with the people who actually suggested them. Shall we go through the list? Any questions or comments before we get started?
C
Can I ask a general question first? Yeah, I know we have a series of meetings to go over roadmap items. I wonder, what would be the impact on release 1.8? Are we going—
A
Yeah, so we actually talked about it right before this particular roadmap session. Basically, we want to utilize the next three sessions to talk about the cost and impact of these, and that will give us a sense of prioritization: which of the items suggested by the community we should work on first.
A
Ideally, whatever has the highest impact and lowest cost, and so that will follow. The three sessions we've done so far, including this one, were meant to cover the horizontal perspective, the vertical perspective, and the core of ONNX. Those are the three sessions we wanted to utilize to get all the items that were suggested by the community and talk about them, and then for the next three sessions—
A
We want to apply the cost and impact assessments to figure out the prioritization, and then we also wanted to map out who might be interested in working on the ones we identify as high-impact, low-cost projects. So that's what we talked about prior to this meeting.
A
I forgot that the question was tied to 1.8 for these. So thanks, thanks.
C
So essentially, everything we are talking about here will be post-1.8? If not all, most of them will be?
A
Yeah, no worries, thanks for bringing that up, and I'll also try to post this so that it brings a bit more clarity. This is the first time that we've opened up the discussion to the public, so there are some things that we have to do better, and having that feedback really helps. So let us know what we can improve, and then we can implement it. And so I think we can start with the first item, error handling.
A
I did see Adam Pocock, who suggested one of the items here. So Adam, would you be able to talk through the particular item that you suggested?
E
Sure. Sorry, so it's mainly—
E
You know, we want to use the runtime to load in models and serve them. At the moment the runtime calls through to the ONNX model-loading code and the protobufs inside, and if those protobufs are malformed in any way, it segfaults during checking, and then it takes the whole runtime down, and the JVM that was attached to it, because we're working in Java. And that's not great. So it's really just the model checker and the loading code underneath it.
E
We really don't want it to ever terminate, ever. I'm fine if it throws an exception or returns some error code or something, but segfaulting is really a horrible user experience for us, because we have to bring the web server that was serving things back up again, and that's obviously very difficult and not great for deployment.
E
So it's really just ensuring that the checker is robust to a wide variety of models. If we want to load in something that's not especially trusted, that has its own issues, right, but if we want to do that, we really don't want the untrusted model to be able to take the whole runtime down.
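One practical way to get the isolation Adam is asking for, while the checker itself is hardened, is to run validation in a throwaway process so a native segfault can never kill the serving process. This is a sketch: `onnx.checker.check_model` in the embedded snippet is the real ONNX Python API, but the wrapper, its status strings, and `safe_check` are illustrative, not anything from the meeting.

```python
import subprocess
import sys

def run_isolated(snippet: str, argv=(), timeout=60.0) -> str:
    """Run a Python snippet in a child interpreter and classify the result.

    A segfault in native code (e.g. a malformed protobuf crashing the
    checker) kills only the child, so the calling process survives.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", snippet, *argv], timeout=timeout
        )
    except subprocess.TimeoutExpired:
        return "timeout"
    if result.returncode == 0:
        return "ok"
    if result.returncode > 0:
        return "invalid"   # snippet exited deliberately with an error code
    return "crashed"       # killed by a signal, e.g. -11 = SIGSEGV on Linux

# The child validates the model; a Python-level exception maps to
# "invalid", while a hard native crash maps to "crashed".
CHECK_MODEL = """\
import sys
import onnx
try:
    onnx.checker.check_model(sys.argv[1])
except Exception:
    sys.exit(1)
"""

def safe_check(model_path: str) -> str:
    return run_isolated(CHECK_MODEL, argv=[model_path])
```

The same pattern works from the JVM by launching the checker as an external process and inspecting its exit status instead of calling into native code in-process.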
A
Okay, thanks for your suggestion. I think what was suggested on that topic also reflects what you're mentioning here, hardening the code and preventing crashes, and I think he also wanted to have it be exception-free.
A
But I know that there were a lot of comments from the community that were a bit against this one, and it also touched on something that other people talked about last time, which is basically the behavior that users expect and that we need to keep it that way, and so forth. But at least on making sure that the crash doesn't happen, I think that is something we really do need to look into.
C
Just to clarify: I believe ONNX Runtime has somewhat different checking from the ONNX checker. Is that correct?
E
I believe so. This model that we had, that was fuzzed, also crashed the checker when you ran it through Python, and so you get that error message saying the checker just went away, and here is some error message. I think that's probably the same root cause, where the checker might be in a different process, and it faulted, and the Python process went, "oh, it died."
E
So I think that improving the error handling of the loading code in general will help. I'm not clear on how the internals of ONNX Runtime actually load models, but when I raised it, they pointed—
D
I don't think ONNX Runtime directly uses the ONNX model checker or the ONNX protobuf loader, but I would be curious to dig into that more. Have you just tried using the ONNX code to load an invalid model, and that crashes somehow?
E
Yes, well, so you can load the model in, and then you run the model checker on it, and it looks like the model checker itself crashes.
E
It's not clear to me how lazy that loading is, yeah. I haven't done a full dive into that, but when I raised the issue on ONNX Runtime, it occasionally got pointed back to the ONNX protobufs and some code in there that was failing. I'm not really a C++ programmer, so I didn't follow the exact details of that.
D
Well, maybe, I don't remember: is the ONNX model checker written in C++, or is it written in Python? ONNX Runtime uses C++, so, right.
D
Prasad? Yeah, I mean, we can probably follow up with him on what he means. If he's talking about the same thing that Adam is reporting, then it's probably related. If it's something else, I don't know, and what does he mean by "crash", I guess? Okay, he's not on the call, and I'm not sure the ONNX Runtime developers are on the call, but we can follow up with them.
A
Okay, so we can move on to the Model Zoo then. Adam, you also commented on this one, so would you explain the suggestion that you have here?
E
Sure, sorry, just let me—I'm just pasting something into the chat for the previous issue. So, I wrote a library called Tribuo, which launched this week, which has provenance and tracking information as one of the core components of the machine learning system, and we interact with the ONNX ecosystem via ONNX Runtime. We serve ONNX models using that system, and we'd like to expose the metadata fields, and we'd like to record the metadata fields in the near future.
E
When we get around to exporting ONNX models from Tribuo, which currently we don't do but are planning to, Tribuo models expose a huge amount of metadata, and the model metadata schema that's encoded in the ONNX format is pretty lacking, right? There's that custom field, and you can put lots of stuff in there, but unless there's some sort of schema on that custom field, it's difficult to know what you're reading.
E
So I guess my point here is mainly that it would be nice if there were a bunch of predefined schema fields, like there are for version and provider and a couple of others whose details I forget; I think I listed some of them. It would be nice if there was a field for, say, the timestamp when this model was created, whatever training algorithm was used for this model, maybe what source library was used to generate it.
E
Those can all be optional, and I wouldn't expect everybody to fill them in, but we'd fill them in, and then we'd know we'd be able to read the right fields out. And I feel like, for other people serving ONNX models, it would be useful to them if it was just denoted that this is a valid metadata field.
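ONNX models already carry free-form string key/value pairs in the `metadata_props` field of the model proto; what Adam is asking for is a set of predefined keys with declared types. A minimal sketch of what checking such a schema could look like; the key names below are hypothetical illustrations, not keys defined by the ONNX spec.

```python
from datetime import datetime

# Hypothetical predefined metadata keys. All metadata_props values in
# ONNX are strings; "datetime" means the string should additionally
# parse as an ISO-8601 timestamp.
METADATA_SCHEMA = {
    "model_author": "string",
    "model_license": "string",
    "creation_timestamp": "datetime",
    "training_algorithm": "string",
    "source_library": "string",
}

def validate_metadata(props):
    """Check a {key: value} dict of metadata against the schema.

    All schema keys are optional and unknown keys are allowed, matching
    the discussion above; a known key whose value does not parse as its
    declared type is reported as an error.
    """
    errors = []
    for key, value in props.items():
        kind = METADATA_SCHEMA.get(key)
        if kind is None:
            continue  # not a predefined key: anything goes
        if not isinstance(value, str):
            errors.append(f"{key}: metadata values must be strings")
        elif kind == "datetime":
            try:
                datetime.fromisoformat(value)
            except ValueError:
                errors.append(f"{key}: not an ISO-8601 timestamp: {value!r}")
    return errors
```

With the real library, the `props` dict could be built from a loaded model as `{p.key: p.value for p in model.metadata_props}`.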
E
Yeah, I think that will probably be sufficient, and if you can type them as well. I mean, I know they're strings, but if it was, say, "this metadata key will be a string, but it should be able to be parsed as a date-time", that kind of thing, I think that'll be sufficient. It's just having some schema on it, so that a library that wanted to write to it would understand the format of it.
D
Right. I think there's actually a document in the docs folder that describes some metadata fields that have been proposed. We should probably update that with the additional fields that you're looking for.
A
And then the second topic within the Model Zoo; although, as Adam says, it's not super related to the Model Zoo here. I unfortunately just put it under the Model Zoo, so sorry for the miscategorization; I'll try to expand it in the responses. At the same time, there was also another suggestion on the Model Zoo, and I thought Vinitra also had some ideas, but it's basically that we need to have the model tests.
A
I don't think he's here today, but he wants to have the model tests extended to cover all the models in the Model Zoo; right now there are only a few models that are covered. And he also seconded, for the Model Zoo, that it should have good examples and it should be high quality. I think Vinitra was saying that she wanted to list out how the data should be ingested for the models on the Model Zoo, that that example should be shown. But I think that we should talk to—
A
Okay, yeah, so I think we should follow up. For the folks who are on the call: Vinitra is going to Switzerland to get her PhD, so she's handing it over for someone else to take care of the Model Zoo, and that's why we're talking about it here.
A
All right, let's move on to quantization then. So we collected two comments: from Channing, and then also Xiaomi was mentioning that we need to have quantization-specific optimizations, and I think there's also an example that he posted here from Glow. So he wanted to include quantization-specific optimizations in ONNX, and then he also wanted to, in general, expand the Model Zoo so that it hosts quantized models. I think we do offer the script for quantizing a model for ONNX, but I don't think we have quantized models in the Model Zoo yet, if I'm not mistaken.
D
Yeah, that's correct. There have been several requests to do that. Hopefully we can get some contributors.
A
I think we work with Hugging Face; is that something that could be posted on the Model Zoo, or—
A
It can be, I mean—
D
You know, we published a blog where the quantized ONNX model from Hugging Face does pretty well. It's possible to post it here; I mean, I would probably have to get Hugging Face's permission to do that.
A
I see, okay. That would make a really good example, because that's a model that can be run in production settings, and also Microsoft and Hugging Face both worked on it, so everybody knows that it's very high quality.
A
And I guess, just in general: I know the article you guys published was talking about one particular model, but I was wondering, since Hugging Face's Transformers library has a lot of models in there, maybe the top ones—
A
And I unfortunately didn't get a chance to look at the quantization-specific optimizations that are in Glow.
A
Runtime, quite possibly; runtime, because it's coming from Glow, but you'd have to check. So sorry, sorry for not being more specific.
B
Regarding the optimization: the ONNX spec itself does not really specify how to optimize the graph. That's—
B
—left up to others, yeah. But there was some discussion on: if you do, say, an op-level test with the Q and DQ in the middle of the graph, do we need to enforce that op-level conformance, or can the conformance be done at the graph level, because the runtime can do optimization of the graph?
A
Yeah, I think this will be an interesting topic to talk about on the IR side as well.
D
Yeah, I mean, in general I think optimizations are best left out of the ONNX spec's scope. If there's some sort of constraint checking that needs to be added, that could be part of the ONNX checker, just to make sure all the pieces are set up correctly.
A
Okay, how about we follow up with him and see what he thinks on this particular topic?
A
Okay, so I thought that we actually had a lot of things to talk about; we have five minutes left. Any questions? I know that there are a lot of people participating in this discussion, so any questions from the community, or comments on the topics that we've discussed so far?
C
I noticed that the Model Zoo models are at different opset versions. Is there any attempt to move them to the most recent ones?
D
Yeah, that was discussed a long time ago. Originally there was an attempt to always, whenever we do a release, also update all the ONNX models, and that's what the version converter was meant to be used for: so you don't have to go to the source framework, wait for them to update the converter, and re-export them all; you can just upgrade the model. But I don't think the version converter is necessarily fully up to date, so we haven't been able to do that.
D
And then the second issue has been: it's not clear whether that's necessary. I mean, this is not true for every runtime, but at least ONNX Runtime supports all the versions of ONNX, so even if you have an older model, it'll still run. So you don't need a newer-opset model unless you're using an op that's in the newer opset.
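That upgrade policy (convert only when a model's opset is older than what you actually need) can be sketched as below. `onnx.version_converter.convert_version` is the real ONNX Python API; the `convert` parameter is injectable here only so the sketch stays self-contained, and the helper names are illustrative.

```python
def highest_onnx_opset(model) -> int:
    """Largest opset version the model imports from the default ONNX domain."""
    return max(
        entry.version
        for entry in model.opset_import
        if entry.domain in ("", "ai.onnx")
    )

def upgrade_if_older(model, target_opset: int, convert=None):
    """Return the model at target_opset, converting only when needed.

    Runtimes such as ONNX Runtime accept older opsets, so conversion is
    only required when you need ops introduced in a newer opset (or when
    the model uses since-deprecated ops like Upsample).
    """
    if convert is None:
        from onnx import version_converter  # real ONNX API
        convert = version_converter.convert_version
    if highest_onnx_opset(model) >= target_opset:
        return model  # already new enough; leave untouched
    return convert(model, target_opset)
```

Typical use would be `upgrade_if_older(onnx.load("model.onnx"), 12)` before re-publishing a Model Zoo entry, keeping in mind the caveat above that the converter itself may not cover every op.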
C
Yeah, we found one whose version included Upsample, which is deprecated.
D
That's true, there have been a few cases where they're using deprecated ops, and those models, yes, those need to be updated. But in general, if the model contains ops that have not been deprecated, but it's using an older opset, it's strictly not required to switch to the new opset.
A
And that echoes what Vinitra brought up, I think a couple of days ago, on the ONNX documentation. So yeah, thanks for bringing that up, Tim.
E
The other thing I'm interested in is different language bindings for the ONNX toolset in general. I haven't had a chance to do a lot of research on this particular topic yet, so it might just be that I've not found the appropriate bit, but obviously most of the tooling is in Python at the moment. I'm interested in having tooling from Java, so I can write models out from Java, or from the JVM more generally.
E
Presumably the scope of the current ONNX code base is just to support Python and C++, and I guess: what's the interest of people in different platforms, and in supporting those?
F
Hey, hi, it's Nick Pentreath here from IBM, and I think it's also worth noting that JVM support in particular is quite interesting for potential support for Spark and Spark ML.
F
I know at the moment the Python API is supported, but not the Scala or JVM/Java side, so yeah, I'm also interested in the plans for a potential Java API or bindings, which would make things like Spark support a lot more, you know, native.
E
Yeah, I think I'd like to be able to load in a model file and just inspect it, look through it, and sort of validate it from Java, just so I know what's in it. And then, yeah, I want to be able to write a converter from my library into the ONNX format, and at the moment, without sort of Java tooling, I think I'm going to have to do that all in native code, and it'll be significantly more painful than it would be otherwise.
D
I mean, as a community, someone has contributed an R library, onnx-r, I think, but yeah, no one has contributed a Java one yet.
F
There may be—the approach that a few other libraries have taken is to use the JavaCPP Presets, which kind of auto-generate stubs based on the C API, and that might be a good starting point. But agreed, it's a non-trivial undertaking. It would make things a lot easier for JVM land, for example, again, in Spark; just speaking from my experience as a committer on that project.
F
In Python in particular, for machine learning and the deep learning side, Python is kind of great, but typically the way you do things is to implement them on the Scala side and then provide a Python wrapper around the Scala or Java side. So it definitely is a more native way of doing things, and I think it would ultimately be more sustainable for Spark converters.
E
Because there are things coming down the pipeline in new Java versions that will make native interop significantly easier, like automatically generating bindings for you. But yeah, if we need to target Java 8, then that's a lot more work, sadly.
F
Spark is, well, I mean, I suppose it's both positive and negative, but it's such a large piece of infrastructure code that backwards compatibility is super critical, so yeah.
A
Okay, so I think this would be interesting to see when we do the mapping, if there will be enough interest. I mean, it is a serious undertaking, so if there's enough interest in it, if the community is willing to work on it, then I think this would be a good thing to explore. So thanks for bringing this up, Nick.
A
Adam, we are officially out of time, but is there anything more before we go?
A
Okay, thank you guys, thanks for joining, and thanks for submitting the feedback as well; really appreciate it.
D
And the recording will be posted later today.