From YouTube: ONNX Roadmap Discussion #1 20200903
A: So, Harry, I'm going to turn it over to you to kick off the roadmap. Okay, sounds good, so let me share my screen. Oh, could you make me host, Jim? And sorry, there's a truck outside, so you might hear some noise. We're good. Okay.
A: Okay, so thank you, everybody, for joining and also for contributing to the roadmap document. I read it through and there are a lot of good suggestions. The roadmap document that I'm referring to, if you haven't seen it, is this one, and there's a shortcut where you can access it pretty easily: if you type onnx.ai/roadmap into your browser, you'll be able to access this document.
A: It's open to the public, and a lot of folks have already put in their ideas for the next version of ONNX. We are hosting this roadmap discussion so that we can get direct feedback from our contributors and then actually implement the changes that our contributors want. We are going to have six roadmap discussions, hosted weekly, and by the time we finish, we'll probably have a community meeting, which is planned for October 14th. We wanted to divide the roadmap discussions into three main parts.
A: I mentioned it in the introduction, but the three main parts are basically: one, the ONNX core; two, extending ONNX along the machine learning pipeline, which is expanding horizontally; and three, making ONNX go deeper vertically by getting closer to the hardware. Those are the three main topics.
A: I thought it would be nice to start with the second topic, which is extending ONNX along the pipeline. There are two specific things we wanted to discuss for this meeting: first, everything involving data, how you either ingest it or pre-process it; and second, a contribution from folks at IBM who are interested in extending ONNX onto horizontal pipelines such as Kubeflow. Those are some of the topics we'd like to talk about.
A
But
if
you
have
any
other
suggestions,
please
let
us
know
as
well,
and
please
also
let
us
know-
because
this
is
our
first
time
hosting
a
weekly
discussion.
We
would
like
to
hear
from
you
whether
this
is
the
right
way
to
host
this
roadmap
discussion,
and
so
any
feedback
would
be
welcome
and
you
can
either
voice
your
opinion
here
or
you
can
also
put
your
thoughts
into
the
vote
doc
and
then
we
will
definitely
read
it
and
then
make
actions
accordingly.
A: Okay, silence is golden. So what I have done here is take out some of the suggestions pertaining to data and the machine learning pipeline and put them on GitHub. If you go to the ONNX steering committee roadmap, you'll find this markdown file, so you'll be able to see what we are going to discuss today and in the coming weeks.
A
We'll
also
try
to
put
in
the
markdown
files
in
advance
so
that
you
can
check
out
what
the
discussion
topics
will
be
and
so
with
without
a
further
ado,
we'd
like
to
start
with
the
the
data
portion,
and
so
I
can
give
you
a
brief
summary
of
what
people
suggested.
I
thought
that,
starting
with
a
suggestion
that
came
from
taquia
from
ibm,
I
know
he's
going
to
actually
attend
the
meeting.
A: That would be useful for things like pre-processing, and this pre-processing idea was seconded by somebody else as well.
A: I think he also entered a line about wanting ONNX to support things like audio spectrogram processing or fast Fourier transforms. So those are some of the things that were suggested. A little bit broader topic: Sheng has been working with the Python data API consortium, and he also suggested that we need to align the NumPy operator definitions with ONNX. That's another related topic, although now that I think about it, it's not super related to data.
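As a rough illustration of the audio pre-processing mentioned above (spectrograms via the fast Fourier transform), here is a minimal NumPy sketch of the kind of step that currently has to happen outside an ONNX graph. The frame and hop sizes and the test tone are arbitrary choices for the example.

```python
import numpy as np

# One second of a 440 Hz sine wave at a 16 kHz sample rate (stand-in audio).
signal = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)

# Slice into overlapping frames, window them, and take the magnitude of the
# real FFT of each frame: a basic magnitude spectrogram.
frame, hop = 512, 256
frames = np.stack([signal[i:i + frame]
                   for i in range(0, len(signal) - frame, hop)])
spectrogram = np.abs(np.fft.rfft(frames * np.hanning(frame), axis=1))

print(spectrogram.shape)  # (n_frames, frame // 2 + 1)
```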
A
Maybe
it
is,
but
there's
a
data
layout
format,
obviously,
and
yuankai
from
alibaba
wanted
to
have
broader
support
for
data
layout.
So,
instead
of
only
supporting
or
a
kind
of
detect
was
supporting
nchw
format.
A
We
wanted
to
expand
the
format
so
that
the
format
that
is
being
supported
better
for
intake
or
flood
16
data
types
should
be
supported
in
onyx
as
well,
and
then
this
also
will
lead
into
some
of
the
core
discussion
that
we'll
probably
have
in
the
future,
where
folks
have
suggested
that
we
need
to
have
quantized
models
in
the
model
zoo.
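To make the layout and precision point concrete, here is a small NumPy sketch (shapes are illustrative only) of moving a batch between NCHW, the layout ONNX convolutions default to, and NHWC, and of casting down to float16:

```python
import numpy as np

# A batch of 8 RGB images in NCHW layout (batch, channels, height, width).
x_nchw = np.random.rand(8, 3, 32, 32).astype(np.float32)

# Reorder axes to NHWC (batch, height, width, channels); the values are
# untouched, only the axis order changes.
x_nhwc = np.transpose(x_nchw, (0, 2, 3, 1))

# Down-cast to float16: half the memory, reduced precision.
x_fp16 = x_nchw.astype(np.float16)

print(x_nhwc.shape, x_fp16.dtype)  # (8, 32, 32, 3) float16
```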
A
And
lastly,
I
wanted
to
kind
of
reserve
some
time,
maybe
10,
to
15
minutes
on
the
proposal
from
ibm
from
chin-
and
I
think
svetlana
also
mentioned
this
too
in
the
box,
but
basically
trying
to
extend
the
ml
pipeline
or
extend
onyx
influence
on
the
ml
pipeline.
So
those
are
kind
of
the
things
that
were
mentioned.
A: On pre-processing, we actually had a working group going, but we were not able to follow through, and so the working group got disbanded without much progress, unfortunately, I think last year. But first of all, supporting data frames within ONNX, and also having the operator expansion, is in my opinion a good thing to do, and it would also align us with the greater Python data API consortium.
A
I
wanted
to
open
up
to
the
floor
and
hear
more
about
what
you
guys
think
about
it.
Maybe
shang.
If
you
want
to
give
us
what
you
what
you
think
that'll
be
really
helpful.
B: Yeah, sure. I should maybe introduce a bit of the Python data API consortium effort for those who are not familiar with it. There is a working group trying to define the future of data APIs in Python, for the purpose of fixing the fragmentation across frameworks.
B
So
right
now
lots
of
the
deep
learning
frameworks,
as
well
as
the
the
numpy
ecosystems,
are
evolving
independently
and
lots
of
the
missed
opportunity
comes
from.
You
know
slightly
different
definition
of
array,
api
and
the
lack
of
interoperability.
B
So
the
goal
of
this
group
is
really
to
build
together
a
data
api
standard
for
array
and
data
frame
so
that
the
libraries
that
each
one
build
could
inter-operate
with
each
other.
So
in
terms
of
timeline,
I
think
the
array
api
standard
would
have
a
draft
pretty
soon.
B: So there's a bit more work to do there, I think. Any questions on the Python data API consortium effort?
A
So,
in
terms
of
the
operators,
things
that
were
suggested
by
taquia
san
from
ibm,
things
like
mapreduce
group
by
those
operations
are
so.
I
know
that
we've
kind
of
talked
about
how
we
want
to
have
the
diff
in
terms
of
the
numpy
operator,
definitions
and
onyx,
but
with
those
because
I
haven't
really
looked
into
all
the
operators
sets.
That
numpy
has
would
those
actually
belong
into
some
of
the
things
that
were
going
to
be
supported
with
the
new
pi
data
consortium.
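As a point of reference for the gap being discussed: a group-by aggregation of the kind mentioned has no single ONNX operator today, but it can be emulated with NumPy-style primitives. This sketch (the data is made up) shows a group-by-sum built from unique plus a scatter-add:

```python
import numpy as np

keys = np.array([0, 1, 0, 2, 1])              # group label per row
values = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # value per row

# unique returns the distinct keys plus, for each row, the index of its
# group; add.at then scatter-adds every value into its group's slot.
groups, inverse = np.unique(keys, return_inverse=True)
sums = np.zeros(len(groups))
np.add.at(sums, inverse, values)

print(groups, sums)  # [0 1 2] [4. 7. 4.]
```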
B: So you're talking about aligning the operators, I guess, right? Yeah, so I'm currently working on the diff, and it looks like there is some more ground to cover in ONNX so that it can be as expressive as the data API standard. So what I'm proposing here is that we should consider, as a data exchange format, supporting and adopting the Python data API standard.
A
I
see,
would
you
recommend,
then
I
guess
that
the
consortium
is
going
to
put
up
the
standard
for
the
operators
you
submit
september
and
also
the
data
frame
around
november.
Do
you
think
that
it'd
be
better
to
align
that
time
schedule
so
that
we
actually
align
with
the
the
greater
consortium
and
then
better
be
easier
for
users
to
actually
use
onyx.
B
Yeah,
I
think
so
so
for
these
standards
there
are
still
lots
of
discussions
that
are
happening
and
for
array
api.
I
think
it
would
come
out
earlier,
so
we
should
consider
maybe
taking
the
array
api
standard
into
account
and
for
data
frame.
We
can
probably
wait
until
the
the
other
rfc
comes
out.
B
A
Okay,
any
thoughts
from
any
other
thoughts.
C
I
have
a
general
question
for
for
this.
It
sounds
like
there
is
a
barrier
for
onyx
to
adopt
traditional
data
structure
like
data
thread,
but
the
underlying
mathematical
operations
are,
I
guess
they
are
already
well
defined,
using
current
onyx
operators
like
min
max
or
reduce
sound
or
something.
C
So
I'm
not
sure
if,
if
we
want
to
extend
the
onyx
definitions
or
have
like
what
the
authors
say
have
more
better
integration
with
I'm
not
sure
what
is.
Is
it
not
python
data
api?
C
I'm
not
sure
if
it's
an
api
problem
or
it's
a
operator
and
definition
problem,
so
yeah,
that's
my
sure.
So
myself,
yeah.
B
Sorry
for
the
python
data
api
standard,
it's
supposed
to
be
a
standard
that
library
authors
could
adopt
so
that
it
guarantees
interoperability
among
all
other
libraries
that
are
of
the
same
standard.
B
Of
you
hear
me:
yes,
okay,
so
yeah
so
for
onyx.
I
think,
given
that
other
libraries
are
adopting
this,
I
think
by
adopting
the
same
standard
you'll
make
you
know,
model
exchange
with
onyx
on
such
libraries
very
easy.
So
that's
the
motivation.
B
In
terms
of
approach,
I
think
for
array:
api
lots
of
the
operations
are
already
quite
low
level,
so,
instead
of
using
the
current
operator
standard,
I'm
sorry
instead
of
using
the
current
onyx
operator,
set
to
try
to
build
for
those
operation.
I
think
we
should
just
introduce
them.
As
you
know,
primitive
operations.
B
A
Any
other
thoughts,
so
this
is
this
session-
is
business.
This
is
the
first
one
we're
not
going
to
try
to
make
a
decision
here.
Basically,
we
want
to
hear
the
ideas
and
then
we
would
like
to
think
about
it
and
then
maybe
in
the
following
session
for
the
that
particular
kind
of
three
main
areas,
we
can
kind
of
have
some
closure
in
the
second
meeting,
so
yeah.
D
Is
there
any?
Is
there
any
spec
for
this
bison
data
api
but
because
it's
it's
a
little
bit
vague
right
like?
Can
we
read
this
up
somewhere.
A
There
is,
I
think,
if
I'm
not
mistaken,
I've
seen
it.
I
think
it
should
be
readily
available.
If
I'm,
let
me
see,
is
that
correct.
B: I guess, okay, so lots of it is still work in progress, so I don't have something that's ready to share. I posted a link in the chat to the blog for this consortium. The RFC for the array API standard is due to come out in about two weeks, and at that time we'll be able to see what the definitions for the APIs are.
A: So we only have 10 or 11 minutes left, so let's move on to the next topic, the data layout format. Any thoughts on this? Or we can park the discussion until we actually talk about the ONNX core, where we can also talk about supporting quantized models in the model zoo; that could be another good discussion topic there.
A
Okay,
if
not,
we
can.
We
can
talk
about
this
collectively
with
a
quantization.
I
think
so
chen
we
have
about
10
minutes
left.
Do
you
could
you
kind
of?
Let
us
know
what
your
thinking
here
is
for
implementing
onyx
or
within
qflo.
Could
you
give
us
some
detail
as
to
what
you're
thinking
about.
E
Sure
the
first
thing
is
the
concept
of
a
pipeline
right
in
world.
It
means
to
have
a
workflow
that
includes
all
components
right
to
work
together
and
to
to
make
sure
the
end-to-end
ml
development
can
be.
You
know,
organized
and
orchestrated
and
orchestrated
I
mean
so.
A
pipeline
in
kuflow
in
particular,
includes
two
parts.
One
is
the
in
the
content
like
the
model,
the
data
and
then
the
second
part
is
the
infrastructure.
E
So
coop
flow
combines
these
two
together
so
that
the
end
user
can,
you
know,
work
on
their
application,
their
model
and
their
issues
and
not
having
to
worry
about
too
much.
You
know
about
the
you
know
underlying
you
know,
infrastructure.
You
know,
distribution
cloud
and
so
on
right.
So
I'm
trying
to
see
how
we
can
bring
onyx
into
this
flow.
You
know
pipelines
so
that
the
end
users
can,
you
know,
start
leveraging
onyx
in
a
broader
context.
E
We
know
onyx
is
a
good
standard
to
specify
models
and
so
on,
but
the
goal
here
is
to
see
how
we
can
you
know,
make
it
easier
to
be
consumed,
especially
if
they're
like
users
using
kubernetes
as
their
infrastructure
and
environments
using
cool
flow
already.
Can
they
leverage
onyx?
E: Let me share one quick chart so people will understand the idea of what a pipeline in Kubeflow is, and what we might have if we introduce ONNX into that space.
E
Oh,
I
think
my
probably
jim
has
to
do
it.
Okay,
I
think
that's
okay,
so
so
I
can
talk
about
that
quickly.
So,
right
now
I
have
seen
one
kuflow
that
works
was
amnest.
Basically,
this
has
like
three
steps
right
from
the
hyper
parameter
tuning
to
training,
to
influence,
basically
very
simple,
three
steps.
E
You
see
my
chart
now,
yes
yeah,
so
essentially,
this
is
the
example
in
kuflow
to
you
know,
get
mnist
right
from
tuning
to
training
to
model
deploy.
So
I
put
onyx
here
just
to
see
whether
you
know
we
can
go
end
to
end
using
onyx
because
of
course,
right
now,
codeflow
is
using
tensorflow.
E
Okay,
let's
do
example.
I
know
kai
torch
is
working
into
kuflow
as
well.
What
is
it
working
into
is
to
build
certain
constructs
and
components
so
that,
for
instance,
training
can
be
distributed
in
kubernetes
clusters?
E
Okay,
so,
as
we
all
know,
onyx
model
can
describe
the
inference.
So
the
last
part
here
is
no
brainer,
and
once
we
have
training
all
described
in
onyx,
we
have
honest
runtime
or
some
kind
of
back
ends
supports
onyx
training.
We
can
achieve
this
one
as
well
right
so
similarly
for
for
the
hyper
parameter
tuning,
we
can
use
combination
of
training
and
you
know
influence.
So
I
think
we
can
do
this
eventually.
Okay,
this
the
last
one
I
want
to
show
is
with
a
more
comprehensive
or
complex
scenario.
E
Here
you
see
a
data
processing
here
right.
Can
we
describe
that
right?
Now
we
cannot.
That
means.
Onyx
model
is
not
going
to
work
right
now,
but
this
is
a
common.
You
know
use
case
right.
You
have
some
issue
in
data.
You
want
to
go
through
some
machine
learning.
You
know
process
to
make
sense
of
it
and
then
we
go
through
pre-processing
transformation.
E: It makes sense. So as far as what specifically we need to develop, I have some ideas. One is that, if we go back to this example here, the model training is done by a component called TFJob.
E: Right. So as far as the angle we want to take, I would like to see whether someday we can have a very clear path where, from the issue, from the data, you use ONNX as early as possible, rather than doing a lot of things in a framework and only converting to ONNX for inference. So that's the idea behind this proposal. Any questions or comments on this?
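A framework-free sketch of the pipeline shape being proposed, with tuning, training, ONNX export, and serving, may help. Every name here is illustrative, not a Kubeflow API; in Kubeflow, each function would be a containerized component:

```python
# Each step is a plain function so the data flow between pipeline stages
# is visible; in Kubeflow, each would be a component (a Docker image).

def tune(search_space):
    # Stand-in for hyperparameter tuning: pick the smallest learning rate.
    return {"lr": min(search_space["lr"])}

def train(params):
    # Stand-in for training: return a "model" tagged with its settings.
    return {"weights": [0.1, 0.2, 0.3], "lr": params["lr"]}

def export_to_onnx(model):
    # Stand-in for a real converter such as tf2onnx or torch.onnx.export.
    return {"format": "onnx", "payload": model["weights"]}

def deploy(artifact):
    # Stand-in for pushing the artifact to a serving backend like KFServing.
    return f"serving an {artifact['format']} model ({len(artifact['payload'])} tensors)"

status = deploy(export_to_onnx(train(tune({"lr": [0.01, 0.1]}))))
print(status)  # serving an onnx model (3 tensors)
```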
A
So,
in
this
case,
what
you're
suggesting
is
we
expand
qflo
to
have
onyx
job
to
enable
distributed
training,
and
the
next
slide
is,
if
you
were
to
expand
even
beyond
that
you
need
data,
processing
and
then
transformation
to
tie
up
to
the
to
get
to
the
model.
Training
part.
E
Right,
okay,
right
without
anything
right
now
we
can
do
the
last
part
we
can
somehow
convert
to
alex
model,
and
you
know,
deploy
to
a
you
know.
Kf
serving
onyx
has
a
runtime
there
already.
I
believe,
that's
what
that's
what
we
can
already
do
right
now,.
E: ONNX Runtime itself supports training transformer models from PyTorch today.
A
I
see
okay,
so
I
think
here
there
are
two
things
I
guess
in
terms
of
when
you
look
at
this
there,
data
processing
parts
that
a
lot
of
people
mentioned
before,
which
is
a
slide
five
and
then
site
four,
basically
strengthening
the
the
training
part
and
then
also
having
it
implemented
under
keyflow.
G: Right, yeah. I guess a lot of it relates to the details, like pre-processing.
A
Would
it
make
sense
to
start
with,
so
we
have
the
models
in
the
model
zoo,
and
we
now
have
a
lot
of
models
on
nlp
as
well,
and
so
maybe
starting
with
the
the
models
that
we
have
in
model
zoo
and
see
how
the
data
needs
to
be
pre-processed
and
transformed
for
the
models
that
are
on
the
zoo.
And
then
that
would
be.
That
might
be
a
good
starting
point.
If
you
were
to
think
about
pre-processing
right.
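For image models this pre-processing is fairly uniform across the zoo. A hedged NumPy sketch, using the commonly documented ImageNet mean/std values (the exact numbers vary per model):

```python
import numpy as np

# Fake input image: height x width x channels, 8-bit, as an image decoder
# would typically produce it.
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)

# Scale to [0, 1], normalize per channel, then reorder HWC -> CHW and add a
# batch axis, giving the NCHW float tensor most zoo image models expect.
x = img.astype(np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
x = (x - mean) / std
x = np.transpose(x, (2, 0, 1))[np.newaxis, ...]

print(x.shape, x.dtype)  # (1, 3, 224, 224) float32
```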
E: No, Kubeflow is much higher level. It allows you to define your own components; for instance, each box here is a component. It's a combination of software, and usually it's a Docker image, because, like I said earlier, it's more about deployment and distribution.
A
I
see
okay,
I
noted
your
proposal
in
the
roadmap
document,
but
if
you
could
send
me
this
slides,
I
can
also
link
it
to
the
the
roadmap
document
as
well.
A
Yeah
this
was
a
very
interesting
proposal,
thanks
for
putting
this
together,
yeah
no
problem,
okay,
so
we're
a
little
bit
over,
but
I
think
in
the
subsequent
meetings.
I
think
that
we
will
be
able
to
assess
the
impact
and
then
we'll
try
to
also
have
a
bit
more
closure
on
the
what
we
want
to
implement
at
least
horizontally,
and
so
thank
you
so
much
for
participating
and
please
look
for
the
next
discussion.
I
will
have
five
more
coming
up.
So
any
last
questions
or
comments
before
we
move
on.
A
Okay,
please
let
us
know
if
you
have
any
feedback,
we
will
definitely
take
it.
You
can
put
it
on
the
roadmap
or
you
can
also
kind
of
submit
as
a
pr
on
the
steering
committee.
So
thank
you
so
much.
Thank
you.