From YouTube: ONNX Roadmap Discussion #4 20200923
A
Let's go ahead and get started with the roadmap discussion. Harry, I guess I'll turn it over to you and you can kick it off.
B
Know
two
things
because
I
see
people
are
still
joining
for
the
roadmap
discussion,
goodbye
just
a
little
bit
more
time,
so
I'm
gonna
update
just
so
everybody's
aware.
I'm
gonna
be
updating
the
zoom
url
because
going
forward
we
have
to
have
a
passcode
built
into
the
zoom
url.
So
just
I
just
want
to
make
sure
everybody
is
aware,
probably
in
the
next
few
meetings.
B
If you use the old URL — the URL that you used to get into this meeting — it won't work, so just be aware that you're going to need an updated Zoom URL with the passcode. I'll get that distributed and announced on the various channels before next time. I just wanted to get that out. Harry, over to you.
C
Okay, sounds good. Thank you, John. I think Buchan, Rama, and Jackie just joined, and I think Ashwani was here when we were talking about it. Basically, so far we've covered three sections of the roadmap discussion. The first was kind of the horizontal plane, where we talked about the ML pipeline, data processing, and so forth. The second part was about how things like op definitions or IRs are considered somewhat vertical, because they're getting close to the hardware. And then the third—
C
The third discussion was around the core of ONNX, so we talked about the model zoo, error handling, and then competition. Those are some of the topics that were covered, and I think we had enough coverage in those three meetings. I actually didn't actively advertise this meeting, because we wanted to have a smaller group and to go through the features that we know of now and do some impact analysis. The starting point was basically as presented.
C
If
you
could
scroll
down
to
the
one
that
we
were
looking
at
before
I,
this
is
kind
of
the
roadmap
document
and
then
and
towards
the
end.
I
just
added
a
table
that
had
the
list
of
the
features
that
community
suggested
and
then
I
was
kind
of.
I
only
put
a
steering
company's
name
here,
but
definitely
we
want
to
expand
it
so
we're
talking
about
expanding
so
that
we
have
a
better
tool.
C
So it's not just four or five people voting — we actually wanted to hear from the community which ones they thought were important. This is the first iteration, and I went through the features that I thought were pretty important. This is actually quite coarse-grained; we're not—
C
We think that right now a coarse-grained analysis should be good enough to get to the next stage, because doing something super detailed is going to be very time-consuming. The high/medium/lows are basically an indication: high means must-have, medium is good to have, and low is something we could have, but not at this moment. That was how I rated these things in the impact analysis. We wanted to spend the next 30 minutes discussing why some of these things were rated high and so forth.
C
So that's what we're going to do. Any comments or questions before we get started?
D
C
Right, yeah, you're correct. I thought about putting in what that actually meant, because there are three different things that were suggested. I guess I should break them down into the specifics, but I thought that maybe you could go back and refer to it in the document. If we were to come up with an actual tool for this, then we'd definitely need to break it down into more specifics.
D
Yeah, I think it's good. For a few of these items or features there are multiple things suggested. For example, shape inference: we may not want each of them to be rated as high. There is a request for symbolic shape inference, and also for other fixes to shape inference. So it would be good — especially for topics like shape inference, which are pretty central and important to ONNX — to break it down.
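The request above distinguishes static from symbolic shape inference: with a dynamic batch dimension, a purely static checker has nothing to propagate, while a symbolic one carries the unknown dimension through the graph. A toy, stdlib-only sketch of the idea — `infer_matmul` and its tuple-of-ints-or-names convention are invented for illustration and are not part of the ONNX API:

```python
# Toy symbolic shape propagation for a single MatMul-like op.
# A shape is a tuple whose entries are ints (known) or strings (symbolic).

def infer_matmul(a_shape, b_shape):
    *batch, m, k1 = a_shape
    k2, n = b_shape
    # Concrete inner dims must agree; a symbolic dim is assumed compatible.
    if isinstance(k1, int) and isinstance(k2, int) and k1 != k2:
        raise ValueError(f"inner dims disagree: {k1} vs {k2}")
    return (*batch, m, n)

# The dynamic "batch" dimension survives inference instead of blocking it:
out = infer_matmul(("batch", 128, 256), (256, 10))
print(out)  # ('batch', 128, 10)
```

ONNX's built-in `onnx.shape_inference.infer_shapes` works per-op in a similar spirit; the symbolic-shape request discussed here is about propagating named unknown dimensions like `batch` end to end.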
C
Yes, I agree with you, so I'll go back and do this again to make it more specific. Initially, when I was doing this, I was thinking it'd be too much detail, but I think you're correct. Thanks for the suggestion.
E
C
Yeah, so that's a part that I didn't mention, which we actually talked about right before this meeting — thank you for bringing that up. Basically, after this impact analysis, we were going to go back to the SIG leaders and try to get the engineering cost estimated for each item. Once you have the impact and cost analysis, then you can do at least a coarse-grained prioritization. So that's the thinking here. Okay, yeah, thank you.
C
Okay, so I don't think I'll be able to cover everything in 23 minutes, but I can at least get things started. This is the first time we're doing it like this, so suggestions are definitely welcome, because we want to make sure that the community's voices are reflected here.
C
I can just get started going through these line by line and explaining why I thought each was important. Would that be a good way to handle it? I'm definitely open to other approaches.
B
Yeah, just get through as many as you can. I think it would be good to make sure that you cover a couple of lows as well as the highs, just so we understand your thinking on them.
C
Yeah, sure. The first entry is data frame support, and I think this is pretty crucial. I think this is also tied to the initiative that Shang was driving on the PyData side. Basically, having data frame support will be tremendously valuable for data scientists to be able to use ONNX, and that's why I rated it high. This also has a connection to pre-processing.
C
If there's a library that does pre-processing, and you're able to read that pre-processed data, then you should be able to support pre-processing using an external library whose output can be read into ONNX — so it might solve both problems at the same time. That's what I was thinking here.
A
For the data frame support, are you seeing that needed for certain types of models more than others?
C
Right. The suggestion — if you were to scroll up — I believe, if my memory serves, there were two implementations they were suggesting. One was kind of grouping a sequence — oh yeah, it's from Takuya, I think.
C
Okay, so I guess to answer your question: I don't have the specifics, actually, but I will definitely go back and look into it.
A
F
On types and tensors: isn't this just a bunch of tensors? Why do we still need a tensor list? From a lower-level perspective we already have some sort of tensor list — we just put some tensors into the model.
E
Yeah, but I think that's different from what a data frame is. A data frame is more a heterogeneous collection of tensors. To some extent you can do things like a data frame by just flattening it out into a collection of tensors, without having explicit support. But if you do want to add support, you could add a new type — though it's more like a tuple or a record or a table; it's not really a sequence.
H
And
also
sequence,
right
now
is
with
the
same
data
type
data
frame
can
have
different
data
types.
E
Yeah, yeah — it's different from a sequence, I think. All right.
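The distinction just drawn — a sequence holds tensors of one element type, while a data frame is a named, heterogeneous collection of columns — can be sketched with plain Python/numpy stand-ins. These are illustrative structures only, not actual ONNX types:

```python
import numpy as np

# Sequence-like: a homogeneous list of tensors, all the same element type.
sequence = [np.zeros(3, dtype=np.float32), np.ones(3, dtype=np.float32)]

# Dataframe-like: each named column carries its own dtype.
dataframe = {
    "age":    np.array([31, 45], dtype=np.int64),
    "income": np.array([52.0, 61.5], dtype=np.float32),
    "member": np.array([True, False]),
}

# "Flattening it out into a collection of tensors": without a dedicated
# dataframe type, a model would simply take each column as a named input.
model_inputs = list(dataframe.items())
print([name for name, _ in model_inputs])  # ['age', 'income', 'member']
```

The flattened form works today, which is why explicit dataframe support is a design choice rather than a hard requirement; a dedicated type would mainly keep the column names and dtypes tied together in the model signature.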
G
C
There's this — so yeah, talking through this actually helps with assessing the importance.
A
I
think
it'll
be
useful,
like
for
each
of
the
features,
probably
for
us
to
kind
of
articulate
like
what
is
the
this.
Is
a
nice
capability,
but
like
what
will
it
enable
are
we,
you
know
going
to
be
able
to
support
new
classes
of
models
like
what
type
of
classes
and
models
are
we
going
to
be
able
to
support?
A
C
So basically the thinking was that we wanted to spend maybe a session on getting the impact figured out, and then we have two additional sessions that are scheduled. But we wanted to reach out to the SIG leaders to figure out the engineering cost — so that was the thinking here. Okay, oh yeah.
H
I guess it would be good to have a line or two for each item as the ROI.
C
In the way that it's done, how would you implement it? I have a hard time seeing it — could you explain what you mean?
H
What
I'm
trying
to
say
is
once
as
a
group
we
decide,
one
feature
is
high.
Maybe
we
want
to
provide
a
line
or
two
to
to
highlight
why
this
is
high,
why
this
is
low?
I.
C
E
Yeah, one comment about the pre-processing — maybe I didn't fully understand it, but in the context of something like training, it is valuable to have all the necessary steps captured within an ONNX model, rather than relying on some external libraries or something else to do the pre-processing steps, because that can have an impact on performance in a training context, right?
G
We will be hitting a lot of type-related scenarios with pre-processing — data frames are kind of symptomatic of that. There's heterogeneous data, and how it gets processed into a tensor is a big funnel. There are a lot of different common types that people use: the Pillow image library produces a certain type, and OpenCV's JPEG decoders produce a certain type. Converting those to tensors could at least be standardized, if the universe of pre-processing is, in general, going to encompass all of those types.
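The funnel described here might be sketched as a single standardization point: decoder-specific layouts (Pillow's RGB HWC, OpenCV's BGR HWC) converging on one tensor convention. `image_to_tensor` is a hypothetical helper for illustration — not a proposed ONNX op — and the decoder outputs are simulated with plain numpy arrays:

```python
import numpy as np

def image_to_tensor(pixels: np.ndarray, channel_order: str = "RGB") -> np.ndarray:
    """HWC uint8 image -> CHW float32 tensor in [0, 1], RGB channel order."""
    if channel_order == "BGR":            # e.g. OpenCV's JPEG decoder output
        pixels = pixels[:, :, ::-1]       # flip channels to RGB
    return np.transpose(pixels, (2, 0, 1)).astype(np.float32) / 255.0

# Simulated OpenCV-style decode: a 4x4 pure-red image
# (red is the *last* channel in BGR order).
fake_bgr = np.zeros((4, 4, 3), dtype=np.uint8)
fake_bgr[:, :, 2] = 255
t = image_to_tensor(fake_bgr, channel_order="BGR")
print(t.shape, t[0].max())  # (3, 4, 4) 1.0  -- red ends up in channel 0
```

The design point is that only this one conversion step needs to know about each decoder's conventions; everything downstream sees a single tensor layout.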
F
That's not only the engineering cost: it's easy to standardize the mathematical equations in deep learning models, but it's hard to standardize, or even formulate, the decoding process of JPEG files. Yeah.
E
F
But even if we consider operators for pre-processing, it's still hard, right? There are a numerous amount of image formats, and we'd need a lot of operators to cast images to tensors if we want to standardize pre-processing.
C
It was also suggested in the document — I forget by whom — that some of the pre-processing examples they wanted were audio spectrograms and the fast Fourier transform, so it's not just vision-related processing either. Well, an audio spectrogram, I guess, is converting to an image — but the scope can be really broad, and figuring out what you want to support is, I think, also very difficult.
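The audio example mentioned here — a spectrogram built from short-time Fourier transforms — is small enough to sketch in numpy. This is just the standard windowed-FFT recipe, not anything ONNX currently standardizes; window and hop sizes are arbitrary:

```python
import numpy as np

def spectrogram(signal: np.ndarray, win: int = 64, hop: int = 32) -> np.ndarray:
    """Magnitude spectrogram: shape (n_frames, win // 2 + 1)."""
    # Slice the signal into overlapping Hann-windowed frames...
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    # ...and take the one-sided FFT magnitude of each frame.
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

# A 440 Hz tone sampled at 8 kHz, turned into an "image" of frequency energy.
t = np.arange(1024) / 8000.0
spec = spectrogram(np.sin(2 * np.pi * 440.0 * t))
print(spec.shape)  # (31, 33)
```

This is why the speaker notes a spectrogram "is converting to an image": the resulting 2-D array of frequency energy over time is typically fed to vision-style models.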
C
Okay — at least having this discussion actually really helps, so thank you guys so much for going through the items and participating. We'll definitely expand this column, or we'll try to come up with a tool where we can get your feedback on the impact analysis, so that we can actually hear from you. Thank you.
C
I
This is just one comment about pre-processing: it's not that we need to support every data type out there — for example, OpenCV and so on. If we have a standard that says we support image-to-tensor conversion, then basically we support image pre-processing.
I
So we pick one format — let's say we support the PIL image format — and then we basically provide all the utilities to process those images. Now if the user has, say, OpenCV or some other image format, they just have to convert it to a PIL image, and from there onwards we support all the pre-processing that we can do on the image.
G
I would largely agree with this. One way to approach pre-processing would be to figure out which are the heavy-hitting steps and then decide on a case-by-case basis how to standardize them — maybe tokenization is a common pre-processing step for NLP models that could be standardized. But one thing to be careful about here is that in a lot of these cases this can be a big effort; for example, images getting resized is a pretty common pre-processing step.
G
So I agree that there may be a way forward here by collecting common pre-processing steps and addressing them first. My comment is more on the engineering cost: even things that seem very simple for pre-processing might take significant effort to standardize.
C
Peninsula also mentioned in the document above that she wanted to consider the pre-processing for the models in the model zoo, so I think that's shared with your sentiment.
C
Good. Okay — any more comments on the pre-processing?
C
Okay, so if there are no more comments, then the end-to-end machine learning pipeline. I don't really want to rely on just my personal choice anymore, so any comments here? My idea was that there were so many other items that I thought had much higher impact at this point, and that's why I ended up on the lower end of the spectrum — just my personal opinion — but I would love to hear from the community.
F
C
Right — and I think, Chin, if I'm not mistaken, when you recommended this you were talking about pre-processing as well, as a part of the end to end, and that could be a starting point.
H
This is to make ONNX part of an end-to-end pipeline or environment, so people realize in which places and which phases ONNX can play a role. It's sort of an example, and also an illustration, of how ONNX can be applied in general. Yes.
A
Sure, I mean, I think that'll be useful to do, but it doesn't really involve changes to ONNX itself. It's more like building an end-to-end example, or something like that. And I don't think it's just ONNX, right? You need to bring in other components — ONNX Runtime, for example.
H
Exactly, yes — we need to have ONNX Runtime. And like you said earlier, if we have pre-processing with data frames, then we can have a more complex pipeline utilizing more of ONNX. Otherwise, you can imagine, we only deal with the inference phase.
C
The idea might be that you could at least start a little bit smaller. Chen, I know that you're working on the training part; basically, if you were to allow an ONNX model to be fine-tuned and then applied afterwards, that could be one component to start with, and then you can gradually expand out from that compute cycle.
H
Yeah, yes. As you can imagine, there will be multiple phases or stages; the last one is inference, which is what we can do right now. Once our training support becomes more mature and ready, we can add another piece into the pipeline using ONNX, and once we have pre-processing we can add another one, right? So eventually the goal is to be able to do multiple phases using ONNX.
C
Okay! So if you were to start on something for this end-to-end machine learning pipeline, what would be the immediate first step? What would be your recommendation?
H
Well, that's something I asked earlier: whether the engineering effort is part of this equation. If it is, then I'll have to go back, do some more research, and see how much, and what, we might need to do to get this realized. I think it's the same for the other items: eventually, if you're saying go or no-go, you want to know — from whoever can comment — how much effort is needed.
C
Okay, we're actually approaching the top of the hour. This discussion was really helpful, but I don't think we'll be able to get through all the items, unfortunately — we've only scratched the surface. So we'll try to come up with a tool to facilitate this discussion even further: not just five columns, but everybody contributing to the impact analysis. We'll follow up.
C
So
please
look
out
we'll
try
to
post
it
on
social
media
and
as
well
as
probably
on
the
onyx
website
as
well.
But
thank
you
for
so
much
for
your
discussion.
Any
comments
before
we
go.