From YouTube: ONNX 4/9/20 Workshop Partners Update
A
To kick that off, it's my pleasure to have Stephen Elliot, IBM VP in the Chief Data Office, share with us his thoughts on ONNX adoption and its use cases inside IBM. So, over to you.
C
Since I joined IBM three years ago, we've gone through numerous changes. I can tell you that in the early beginnings of this we were already talking about and developing a common tensor format, and we jumped on the ONNX bandwagon really, really quickly.
C
I had discussions well before ONNX's formation with many of the founding companies — Facebook, Microsoft, Amazon — and it was such a wonderful experience to jump on that. Now we have the Linux Foundation involved, and there are even brighter days ahead. This has definitely made what we have internally so much better than we could ever have envisioned.
C
So today, within IBM, we have pretty wide adoption across the company. We're always trying to push it further, though, and we do have certain obstacles — I'll talk a little bit later about how the community can potentially help us — but we have wide support, from Watson Machine Learning to Power Systems and all the way down to our IBM Z mainframes.
C
Now, there are a number of important focus groups within the ONNX community, from edge inference to one that I love and is close to my heart, training, and more. But for wider adoption within IBM and beyond, our clients are demanding it, and the use cases are heavily weighted towards non-deep-learning methods. Providing broader functional support in ONNX-ML would basically ensure the triple crown for ONNX within IBM, and I think the broader community is now starting to dive into that a little bit.
C
I can't share specific stories, but I can tell you that this has enabled our clients to train deep learning models in the modality of their choice — which is essentially the founding statement of why ONNX exists — and to effectively use the data where they want to. They can train in Watson, or train elsewhere, and run inference on the mainframe, on their Power Systems, or on x86, whatever it may be. The scoring has to be close to the data, and some of these are transaction-based systems.
C
It's got to be close to the data, and this has absolutely enabled those clients, and we are appreciative to the community for that, because, like I said, it would have been much more difficult to go down those paths alone. So my hat is tipped right now — those who know me will typically see me in a hat; I don't have much hair, so I typically wear one — and I definitely tip my hat to the community. We really appreciate all the open collaboration I think we have.
C
Thank you so much for the help on this. If you have any questions, I'll open it up right now if you want to know what we're doing within IBM, or just send me an email afterwards. But that's the basis of it: we're trying to adopt it and use it more, and if we could hit the machine learning umbrella more generally, that would really enable us. That's about it, Thomas. Thank you.
A
Yeah, thank you, Steve. All right, I think we're doing good on time, so let us keep going. Let me move to Zhipeng. Let me stop sharing and find the next one.
D
Okay, perfect. Hello, everyone, I'm Howard from Huawei. We actually worked with Microsoft to host the very first ONNX China workshop two years ago, we also participated in the second one last year, and I'm really glad to be here for the third consecutive year. Hopefully we'll have another workshop, whether online or face-to-face, again in Q4 this year.
D
Our team, and Huawei as a whole, has always been a big believer in ONNX from the beginning, and we are also very excited to see ONNX move into LF AI. Okay, so today I'm going to talk about MindSpore and ONNX.
D
MindSpore is a new open-source deep learning framework from Huawei. We open-sourced it, I think, less than two weeks ago. It's licensed under Apache 2.0, and you can check out our code on Gitee, or we have a mirror on GitHub.
D
MindSpore is a new framework designed to be used in mobile, edge, and cloud scenarios, and it is also highly optimized for Huawei's Ascend AI processors. This is a really high-level overview of the general framework architecture. As you can see, we have a Python front end that lets a developer write a graph, and then we have the MindSpore IR. Basically, we provide source-code-based auto-differentiation, and we also provide Ascend-powered auto-parallelism.
D
Currently you can run a MindSpore model either on the cloud — on CPU, GPU, or Ascend 910 — or on the edge with Ascend 310, or on your mobile phone.
D
We were hugely inspired by the ONNX governance model, so we also set up a technical steering committee, composed of 14 delegates from across the globe, and we have special interest groups and working groups, all modeled after ONNX governance. We are slowly but gradually starting to create the SIGs and helping them run open meetings and open development, and we have various community partners.
D
As you can see from the figure, if you use MindSpore to train a model, you can choose to export an ONNX model if you want to run inference on, say, a CPU or GPU of your choosing. So in this example, after you have the ONNX model, you can use, for example, ONNX Runtime to deploy your ONNX model on a CPU, GPU, or FPGA.
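As a rough sketch of the flow Howard describes — not code from the talk — the MindSpore export API plus ONNX Runtime deployment might look roughly like this, assuming `net` is a trained MindSpore `Cell` and the 1x3x224x224 input shape is just a placeholder:

```python
import numpy as np
import mindspore as ms
import onnxruntime as ort

# Export the trained MindSpore network to ONNX (`net` and the input shape are assumptions).
dummy_input = ms.Tensor(np.zeros((1, 3, 224, 224), dtype=np.float32))
ms.export(net, dummy_input, file_name="model", file_format="ONNX")  # writes model.onnx

# Deploy the exported graph with ONNX Runtime on whatever backend you choose.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
outputs = sess.run(None, {input_name: np.zeros((1, 3, 224, 224), dtype=np.float32)})
```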
D
MLIR is another really interesting area for us. We've been quietly lurking and following the ONNX-MLIR development recently. From our understanding, ONNX-MLIR provides a front end which can generate an ONNX dialect that is then progressively lowered through the lower-level MLIR dialects. So by having an ONNX front end, we can basically have ONNX models leverage the MLIR infrastructure.
D
What this means for MindSpore is that if customers or users have new hardware that comes with MLIR support, they can have MindSpore export an ONNX model and, with the help of the ONNX-MLIR project, run that model on the new hardware, which would be a very exciting feature. I think another new area that we are really interested in is ONNX training.
D
As was just mentioned, with 1.7 we already have a tech preview of ONNX training support, and from the MindSpore community, if there is any experimental work we can do to help either validate or try out the ONNX training model, we definitely want to help out with that.
D
You can join our Slack and also our mailing list. So, in conclusion, we really like the ONNX community and we want to collaborate further. Okay, thank you.
A
All right, so with that, I guess we have Emma Ning from Microsoft up next. Let me see if you can see the new screen. Cheers — so, Emma.
E
Okay, sorry. Okay, good morning, everyone. This is Emma from Microsoft. I want to share our recent optimizations in ONNX Runtime to accelerate transformer inference on both CPU and GPU. Bing has been using cutting-edge NLP techniques to better understand user queries, web pages, and other documents, and with transformer models Bing has delivered its biggest quality improvements to search users.
E
Let's see what Bing did with transformer models. Previously, Bing fine-tuned a 12-layer BERT, which improved precision and coverage a lot, but due to the significant computation required, inferencing a 12-layer BERT at high scale could not meet the strict latency constraints. Bing first used knowledge distillation to condense the original model to three layers; then, to optimize it further, Bing re-implemented the entire model using the TensorRT C++ APIs to take full advantage of the NVIDIA GPU architecture.
E
Okay, then how can we optimize BERT in ONNX Runtime? We identified some opportunities. First, transformer models like BERT consist of a lot of small elementary operators. Each one requires memory reads and writes, which results in a huge amount of overhead. But one good thing is that BERT is made up of multiple transformer cells, and the high-level design of a single transformer cell is actually simple — you can see the design on the right. Our goal is to optimize the graph to be as simple as that design.
E
On the other side, ONNX Runtime itself offers two major acceleration approaches in its architecture: graph optimizations and execution providers. Execution providers are used to integrate different hardware accelerators; each accelerator has specific optimized kernels for acceleration on its target hardware. So, based on the ONNX Runtime architecture, we optimized the BERT model by fusing key subgraphs into single kernels, and we implemented the related optimized kernels for both the CPU execution provider and the GPU execution provider.
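Emma's slides don't show code, but as a hedged, minimal sketch of how these BERT fusions are typically applied from Python — the file names, head count, and hidden size are illustrative assumptions, not Bing's actual configuration — the flow might look like this:

```python
import onnxruntime as ort
from onnxruntime.transformers import optimizer

# Offline pass: fuse attention / LayerNorm / GELU subgraphs of an exported BERT into single kernels.
opt_model = optimizer.optimize_model("bert-base.onnx", model_type="bert",
                                     num_heads=12, hidden_size=768)
opt_model.save_model_to_file("bert-base.opt.onnx")

# Online: ONNX Runtime also applies its built-in graph optimizations at session creation.
so = ort.SessionOptions()
so.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_EXTENDED
sess = ort.InferenceSession("bert-base.opt.onnx", sess_options=so,
                            providers=["CPUExecutionProvider"])
```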
E
The last one is skip layer normalization. Next, here are all the graph optimizations we added into ONNX Runtime for BERT. You can see they cover both the basic level and the extended level, but the majority of them are at the extended level.
E
On top of the optimized graph, we also implemented the related kernels, making the most of the hardware capabilities on both CPU and GPU. For example, on GPU we leverage Tensor Cores to take advantage of the GPU architecture; on the CPU side, we significantly increased parallelization and fully leveraged the available CPU cores for the attention and self-attention kernels.
E
With all these optimizations, here are the latency and throughput numbers we got with ONNX Runtime. From the numbers on the left, with the public BERT SQuAD model, we could get 1.7 milliseconds for the 12-layer model and around 4 milliseconds for the 24-layer model. On the right, with Bing's 3-layer BERT, we achieved a 17x performance speedup on CPU, and on GPU we could even increase the batch size from 4 to 64.
E
After Bing, the Office team also adopted ONNX Runtime for the BERT model they ship — also a 3-layer model — and with ONNX Runtime the P50 latency was reduced by three times compared to the original solution, and the development cost was also reduced significantly.
E
And lastly, I would like to share some data about ONNX Runtime adoption. ONNX Runtime has been integrated into internal platforms like Windows as well as external platforms. A variety of products like Bing, Office, and Azure Cognitive Services are powered by ONNX Runtime, and not only for transformer models — ONNX Runtime can also accelerate other kinds of machine learning models. We have observed up to an 18x performance improvement compared to previous solutions.
A
Thank you, Emma, for the presentation — this is very interesting. Thank you. So with that, I think we're going to keep moving here, and next we have Yaman to talk about FINN. Let me switch to the presentation.
G
So good morning, good afternoon, good evening. I'm Yaman, and today I want to tell you about our project FINN and how we've been using ONNX to make it possible. Next slide, please. I'm a member of Xilinx Research Labs in Dublin, where our mission is to quantify the value proposition of Xilinx devices in machine learning. You may have heard of Xilinx, the inventor and largest maker of field-programmable gate arrays, or FPGAs. Xilinx makes adaptable hardware solutions that balance the performance and efficiency of custom chips with the flexibility of programmable platforms.
G
The FINN project is all about exploring what you can do by customizing both the algorithmic side of the neural networks and the underlying hardware, which is made possible by the flexibility of FPGAs. On the hardware side, we're specifically interested in what we call streaming dataflow architectures. Next slide. In these architectures, we allocate compute resources to each layer according to its compute requirements and run all of them in parallel. Next, please. By allocating these computational resources in proportion to what each particular layer needs, we can keep more weights and activations on chip, which overcomes the external memory bandwidth bottleneck and greatly increases performance. Next slide, please. We've been exploring the combination of few-bit QNNs and streaming dataflow architectures for the past few years, and these are a few highlights from some of our prototypes.
G
On simpler datasets like MNIST, we can achieve inference performance in the millions of frames per second with sub-microsecond latency. For instance, on a, let's say, middle-class FPGA from the previous generation of our product line, we could do about 12 million frames per second with 310-nanosecond latency for a single image.
G
If you can just go on next four times, I'll skip the animation. This is how FINN was reborn as an open-source project, and what you see here is the summary of our mission statement. In essence, what we want to do, through the power of open sourcing and by tapping into existing open-source ecosystems like ONNX, is to enable a wider community to explore few-bit QNNs and their custom hardware deployments on FPGAs. Next slide, please.
G
This diagram gives you a quick overview of the FINN stack. At the top of the stack we have Brevitas, our PyTorch library for training neural networks with quantized weights and activations — being aware of the quantization at training time is essential to achieve high accuracy with these few-bit networks. Unfortunately, due to time constraints today, I won't be able to tell you much more about Brevitas, but please do check it out. It's also open source on GitHub, and we have a few example networks with training scripts and pretrained models.
G
Then we have the FINN compiler in the middle. The FINN compiler is responsible for going from a Brevitas-trained network in Python down to an FPGA bitstream. It makes heavy use of ONNX, which I'll be talking about in the next few slides. And at the bottom we have PYNQ platforms for deployment, which is another open-source project from Xilinx Research Labs.
G
For a quick overview of the FINN compiler: in general, it's not unlike many of the other DNN compilers out there. We have a Python library of graph transformations, and we manipulate a graph representation via transformations, step by step, until we get to a bitfile. In our case, what is perhaps most interesting for the ONNX community is our intermediate representation.
G
Our intermediate representation is essentially ONNX with a few custom annotations and operators, and that is what all those graph transformations operate on. Next, please. The particular backend target for our compiler is our own Vivado HLS (high-level synthesis) library with templated data types and configurable parallelism, so you can explore different aspects, scale the performance up and down, and explore different precisions. Next slide, please.
G
Next, please. Another thing we do, which is probably not that exotic, is to use custom op types to represent the non-standard ops we need at different levels of abstraction. This can be anything from high-level abstract ops, like Im2Col or a different data-layout version of MaxPool, all the way down to implementation-specific ops.
G
We also have Python wrappers for these layers to provide implementations of their execution, for correctness checking and code generation. Next, please. We have a hybrid runtime that lets you mix and match different execution modes on different nodes. These include Microsoft's ONNX Runtime for the standard ONNX ops, our own Vivado HLS C++ implementations for certain layer types, custom Python implementations, and even Verilog using PyVerilator. I should stress at this point that this hybrid runtime is only for checking functional correctness while using the compiler; it's not a high-performance deployment by itself.
G
That is the output product of the compiler. Next slide, please. To continue with some more FINN-ONNX highlights: as I was saying, we have a library of Python graph transformations, all operating on ONNX representations. This is quite similar to what the ONNX optimizers that are part of core ONNX already do, except it's all in Python. We have transformations ranging from constant folding to convolution lowering, plus shape inference, which is a mix of our own shape inference routines and the standard ONNX shape inference. Next slide, please.
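FINN's actual transformation passes live in the FINN repository; purely as a hedged, minimal illustration of what "a Python graph transformation operating on an ONNX representation" can look like, here is a simple pass (the file names and the choice of pass are mine, not FINN's) that removes Identity nodes using only the standard onnx Python API:

```python
import onnx

def remove_identity_nodes(model: onnx.ModelProto) -> onnx.ModelProto:
    """Rewire consumers of each Identity node to the node's input, then drop the node."""
    graph = model.graph
    graph_outputs = {o.name for o in graph.output}
    # Only fold Identity nodes whose output is not itself a graph output.
    identity_map = {n.output[0]: n.input[0] for n in graph.node
                    if n.op_type == "Identity" and n.output[0] not in graph_outputs}
    for node in graph.node:
        for i, name in enumerate(node.input):
            while name in identity_map:      # follow chains of Identity ops
                name = identity_map[name]
            node.input[i] = name
    kept = [n for n in graph.node
            if not (n.op_type == "Identity" and n.output[0] in identity_map)]
    del graph.node[:]
    graph.node.extend(kept)
    return model

model = onnx.load("intermediate.onnx")       # illustrative file name
onnx.save(remove_identity_nodes(model), "intermediate_cleaned.onnx")
```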
G
Another thing we do, which we need for few-bit tensors, is custom data layouts and packing. You know, if you have eight separate one-bit values, you don't necessarily want to carry them around separately; you want to pack them into as few bits as possible to decrease your memory footprint. So we use these custom data layouts, plus packing and unpacking utility routines, to do things like packing eight separate one-bit values into a single byte. Next, please.
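FINN has its own packing utilities for its custom data types; purely as an illustration of the bit-packing idea Yaman describes, NumPy's packbits/unpackbits do the byte-level version:

```python
import numpy as np

# Eight separate one-bit values, wastefully stored as one byte each.
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

packed = np.packbits(bits)        # array([178], dtype=uint8): one byte instead of eight
unpacked = np.unpackbits(packed)  # back to the original eight 0/1 values

assert np.array_equal(bits, unpacked)
```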
G
Last but not least, since ONNX is our intermediate representation, we use Netron for visual debugging and inspection of graphs at any intermediate stage, which has been hugely helpful for checking things like: we've transformed the graph a couple of times — does the topology still look like what we expected? We use Netron for that.
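Netron can be used as a desktop app or straight from Python; a two-line sketch of the latter (the file name is just the illustrative one from the transformation sketch above):

```python
import netron

# Serves an interactive, zoomable view of the graph in the browser.
netron.start("intermediate_cleaned.onnx")
```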
G
We have hugely benefited from ONNX's open ecosystem, as you've already seen in this presentation, and we are keen to find out how we can contribute back to the ONNX community in terms of few-bit quantized models, graph transformations, and more. So if any of this sounds interesting, please do get in touch with me. And next slide.
A
Thank you so much, Yaman, for your presentation. Let me stop sharing and move to the next one.
J
Yes, so hello, this is Kishwar. I'm going to give you an update on our project where we use ONNX for genome analysis with nanopore sequencing. Next slide, please.
J
The way you do genome analysis is: the first step is sequencing, so you collect a sample, run it through a sequencing machine, and then you get a lot of short reads. Next slide, please. From the reads, what you can do is take all the reads, align them to each other, and then derive a de novo assembly. This process is really expensive, and for human population analysis, de novo assembly is essential. Next slide.
J
Last year we published a paper — which has now been accepted in Nature Biotechnology — showing that we can do efficient de novo assembly of human genomes using nanopore sequencing and the toolkit we developed in-house in collaboration with the Chan Zuckerberg Initiative. Next slide, please.
J
The whole pipeline looks like this: you have a sequencing machine, then the data goes through a basecaller, then an assembly and polishing step, and what we showed is that you can do a population-level human genome assembly within nine days. Next slide, please. The first tool is the Shasta assembler. This is the most expensive step of doing human genome analysis.
J
At this point, what you do is take all the reads, do run-length encoding, align all of the reads that match each other, and then try to derive a linear sequence that represents the underlying sequence of a genome — it can be human, microbial, viral, or any other genome that you just sequenced with a sequencing machine.
J
Next slide, please. The assemblies we get from the Shasta assembler are highly erroneous due to the underlying errors you get from nanopore sequencing, so we use a graph-based assembly polisher that takes the assembly and creates a partial order alignment graph, and this tool that we developed in-house can also output the state of the graphs in a tensor format. Next slide, please. That graph state in tensor format then becomes the input to the next tool, which is HELEN, where we use a deep neural network — a two-layer recurrent neural network — that walks through the state of the graphs and predicts the correct base and the number of times that base is repeated at each position in the genome. This is where we use the deep neural network, and this entire pipeline produces a high-quality human genome assembly.
J
Next slide, please. This is the base-level accuracy we get from the assemblers. They look pretty good — they're almost 99 percent accurate most of the time. These are four assemblers that are available out there, and you can see that most of them generate assemblies with similar base-level accuracy. But 99 percent is not very good for doing sensitive analysis of gene expression or any kind of transcriptomic analysis. Next slide, please. So this is what we do with MarginPolish and HELEN.
J
If you use the polisher, then the quality of the assembly is much higher — from roughly 98 to 99.1 percent we go up to about 99.2 percent, which is highly accurate — and we showed that if we polish the genomes with this polisher, then we can do sensitive analysis. Next slide, please. In terms of runtime, Shasta is really fast: it can do a human genome assembly within six hours and it costs only 70 dollars, compared to the other assemblers, which are really expensive.
J
Next slide, please. For the polishing, you can see that the polisher we developed is also really fast compared to the current state of the art. The trouble was that it was optimized for GPU, and we had a lot of users around the world who wanted to try this polisher on their CPU machines, so recently, this year, we incorporated ONNX Runtime — the model was in PyTorch.
J
So what we do now is: every time you want to do CPU-based inference, you export the PyTorch model into ONNX and then run it with ONNX Runtime. We can easily parallelize the ONNX Runtime sessions, then collect all the predictions, stitch them together, and generate the polished sequence. Next slide, please.
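As a hedged sketch of that export-and-parallelize flow — not the actual HELEN code; the model, window shape, and tensor names below are all placeholders — it might look roughly like this in Python:

```python
import numpy as np
import torch
import onnxruntime as ort
from multiprocessing import Pool

# `model` stands in for the trained two-layer RNN polisher; the dimensions are made up.
model.eval()
dummy = torch.zeros(1, 1000, 10)
torch.onnx.export(model, dummy, "polisher.onnx",
                  input_names=["windows"], output_names=["base_probs"],
                  dynamic_axes={"windows": {0: "batch"}})

def run_chunk(chunk: np.ndarray) -> np.ndarray:
    # One independent CPU session per worker; chunks (e.g. per contig) run in parallel.
    sess = ort.InferenceSession("polisher.onnx", providers=["CPUExecutionProvider"])
    return sess.run(None, {"windows": chunk})[0]

chunks = [np.random.rand(8, 1000, 10).astype(np.float32) for _ in range(4)]
with Pool(processes=4) as pool:
    predictions = pool.map(run_chunk, chunks)  # stitch these into the polished sequence
```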
J
And so far, what we have seen with the genome model is the result of this modification — the incorporation of ONNX models into the pipeline — and we are also trying to see how much of an improvement we can get if we want to do a population-level run, CPU-only or perhaps with GPU, of the assembly and polishing pipeline with this framework. We haven't done that part yet. So, next slide, please — I think this is the end of the slides.
K
J
Yeah, we have a huge group and we collaborate with a lot of people. If you think this work is interesting and you want to know more about us, please let me know — email me. And thanks for all the work that you guys do; it's been really helpful, from our viewpoint, to have a platform where we can support CPU-based bioinformatics usage. Thank you.
A
All right, let's see — we have Young Chen on.
I
Hey, can you hear me all right? Cool, thank you. Hello, everyone. I'm Young Chen from Microsoft, from Azure Cognitive Services. I work on the OCR team, and for over a year now we've been working with the ONNX Runtime team to use ONNX Runtime to take our state-of-the-art OCR engine to production. So today I'd like to share that journey.
I
This is just a quick introduction to our product. Microsoft Cognitive Services is a set of cloud-based AI offerings that includes things like speech, vision, language, and so forth. Our team specifically works on the Read API, which is a state-of-the-art optical character recognition engine — commonly called OCR. This engine actually comes from Microsoft Research, so it's really built from scratch.
I
It uses deep learning, and the thing that's unique about it compared to more traditional OCR is that we internally call it "one OCR": we just need one OCR engine to deal with different kinds of scenarios. For example, it can deal with printed text, it supports recognition of handwritten text, and, as you can see from the samples on the right, it actually works pretty well for real-world images.
I
So no matter whether it's a product label you capture with a mobile phone, a quick snapshot you take of a slide, or a receipt at a different angle, it all works pretty well. Feel free to search for Cognitive Services Computer Vision for more detail. Next slide, please. All right, so I'd like to talk about our journey to ONNX. It actually started in late 2018.
I
At that time, we're an engineering team working with the researchers: we take the model the researchers give us, and then there's a homebrew inference engine that we run inference on that model with.
I
But, as you know, PyTorch was getting really popular at that time, so the researchers wanted to use PyTorch to continue their research work, and then we had this mismatch between research and production. The other thing was the performance challenge: although our homebrew engine has the advantage of knowing the model closely, so we could try to optimize it, the truth is that since we are not a team that really focuses on inference engines, it started to lag behind the state of the art in that area. Our first try was actually to see if Caffe2 could meet our requirements, but unfortunately, from some prototypes, the performance just wasn't there. Next slide, please. Okay, so this is where we started to switch to the ONNX model.
I
Now there's a modern framework both for research training and for inference, and this really allowed the researchers to start using PyTorch and then use PyTorch to export an ONNX model to us, which we can just use. In this case, some of their code would be in Python; we would then convert that Python code, take the exported ONNX model, and bring it into our production C++ code. And because ONNX Runtime is really optimized, it actually works well for the service deployment, since this is a cloud service. We were also very lucky to have a close working relationship with the ONNX Runtime team to support this journey. Next slide, please.
I
All right, so our porting experience was actually pretty straightforward. Our Read 2.0 version is a Windows-based cloud service. Because we had done some initial work with Caffe2, we were able to use Caffe2 to export the model to ONNX. We do have some custom ops, so we implemented those in C++ for ONNX, and then we used the ONNX Runtime API to do the inference; after that, we just did the testing on performance and on the service deployment.
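The production path here is the ONNX Runtime C++ API; as a hedged Python stand-in that mirrors the same CPU-only serving flow (the model path, thread count, and tensor shape are illustrative, not the team's real settings):

```python
import numpy as np
import onnxruntime as ort

opts = ort.SessionOptions()
opts.intra_op_num_threads = 4                       # cap per-request CPU threads in a service
opts.graph_optimization_level = ort.GraphOptimizationLevel.ORT_ENABLE_ALL

sess = ort.InferenceSession("ocr_model.onnx", sess_options=opts,
                            providers=["CPUExecutionProvider"])

image = np.zeros((1, 3, 224, 224), dtype=np.float32)  # preprocessed image crop (shape assumed)
outputs = sess.run(None, {sess.get_inputs()[0].name: image})
```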
I
Next slide, please. We were actually very happy to see the performance gain that we were able to achieve. Originally, our goal was to achieve a 3x latency improvement, but after using ONNX Runtime we actually over-achieved it: instead of 3x, we achieved 3.7x, and that really helped reduce our cloud cost. Not only that, we have been very happy with the reliability and also the memory consumption of ONNX Runtime in our service environment.
I
Next slide, please. All right, so after we shipped 2.0, we knew that we wanted to switch from our Windows-based cloud service to Kubernetes, because Kubernetes, compared to the original Windows service we had, provides better scalability and reliability.
I
On the other hand, we also had a lot of customers who, after using our cloud service, came to us to say they actually wanted a Linux container, so they could deploy our state-of-the-art OCR in their own network environment.
I
So this is why we started the work of porting from Windows to Linux, and because we use ONNX, and due to its cross-platform nature, that really made this journey easy. Frankly, most of the changes were actually in our own code; it's really not in the ONNX inference code. And one good surprise, although it's not that unexpected, is that by switching to Linux with ONNX Runtime we actually got better performance.
I
We are seeing about a 1.2x performance gain after moving to Linux. And one thing I actually should have mentioned at the very beginning is that, for cost reasons, all of our inference is CPU-only, and even for CPU, ONNX Runtime is really well optimized for our purposes.
I
Next slide, please. All right, so overall we are really happy with our ONNX journey. Just as a summary: it really helped us to have a good pipeline from research to production — there is no gap anymore in terms of the model — and the runtime itself is easy to develop with and easy to deploy.
I
We got great performance as well, and of course it works well for the Kubernetes deployment. As a matter of fact, all of our new research and future models are now all standardized on ONNX and ONNX Runtime, so we're definitely looking forward to keeping improving this state-of-the-art engine using ONNX. That's all from me. Thank you.
A
L
Okay, cool. So thank you, Thomas and Prasanth, for giving us the opportunity to talk today. I'm Shounak Mitra, the product manager for deep learning at MathWorks, and I'm with Ting Su, who is the development manager for deep learning. Today we'll be talking about ONNX with MATLAB. In essence, we'll be covering three main topics, starting with our investments in ONNX, then how MATLAB users use ONNX, and our current goals.
L
So MATLAB has a deep learning library — if people don't know, it's called Deep Learning Toolbox — that can be used to design and train deep neural networks using automatic differentiation, custom training loops, and all that fancy stuff. What we have seen in the recent past, for about a year or so, is that collaboration in the AI ecosystem has been greatly enhanced with the inception of ONNX. ONNX has opened up communication channels for us to talk to frameworks that are used extensively by users in conjunction with Deep Learning Toolbox. In terms of our support for ONNX, MATLAB can both read from ONNX and write to ONNX.
L
We started supporting ONNX a little over two years ago, and over that period of time we have supported a lot of operators and layers for ONNX. That has, in turn, enabled us to import state-of-the-art pretrained models from the ONNX Model Zoo into MATLAB and make them available for our users, so that they don't have to start from scratch — they can already use some reference models to start with. Now, the ONNX importer workflow is an important part of this.
L
For example, users choose any framework that they want to work with, convert their deep learning models into the ONNX model format, and one of the ways to import that into MATLAB, when all the layers and operators are supported, is via importONNXNetwork.
L
Once you import that, what you get is a MATLAB neural network model, which is basically a native MATLAB model. This is the ideal scenario: once you import your ONNX model into MATLAB, you can make use of all the downstream goodies that MATLAB provides, like code generation and system integration, which we'll go through when we talk about the use cases.
L
This is true for deep learning models that are highly curated, where the layers and operations are supported by all the frameworks. Now, what happens when not all the layers and operators are supported? This is something that we have seen being quite predominant among users, because they want to customize their deep learning networks.
L
So in that case, we still allow you to import ONNX models into MATLAB: we allow importing of the ONNX layers, and we introduce placeholder layers in place of the layers that we do not support.
L
Now, if the user knows the layer specs and the mathematics behind the layers, they can author a custom layer and replace the placeholder layer with that custom layer. Once they do that, it's as good as a native MATLAB neural network model and can make use of all the downstream workflows, including code generation for custom layers, which is something we are actively working on supporting.
L
What we have typically seen is for users to start with something called a labeling step, wherein they bring their images or data into an app which is an annotation tool. In this case, what you're looking at is the Image Labeler annotation tool. You drop in the images — say you have hundreds of frames or images — you label a couple of frames or images, and you would want to automate the entire process.
L
The labeler app allows you to do that by defining regions of interest and drawing bounding boxes, and once you're happy with the bounding boxes, you can automate the labeling process using the algorithms that we provide, or you can also write your own custom algorithm. So this is a typical workflow that users follow to quickly label a dataset. And we provide labeling apps not only for images but also for signals, for lidar point clouds (for ADAS kinds of applications), and for video as well.
L
Next up: users have used MATLAB for decades for their engineering applications across various domains like audio, communications, controls, lidar, etc., so this is a strong area where users use MATLAB. Then comes the training part of the workflow, wherein users use a thing called the Deep Network Designer app.
L
Now, this is the export-to-ONNX workflow; there's also an import-from-ONNX workflow. Users typically use MATLAB as a platform for code generation, visualization, retraining the models with some of their custom datasets, and Simulink — which is also a flagship product of MathWorks — for system integration. We'll go over each of these in a little bit of detail here. So this is the workflow that you have already seen: you have an ONNX model, you import it with importONNXNetwork into MATLAB, and what you get is a MATLAB neural network model.
L
One of the other reasons is also visualization and debugging. It's mostly related to why a model is picking a particular inference result — more regarding the explainability and interpretability of models.
L
Our customers also bring ONNX models into MATLAB to analyze the network, visualize it, and see the activations and what issues are going on with the network — similar to what Netron does — and they can also retrain using automatic differentiation, custom training loops, or weight sharing. Now, the last point is kind of an important one, and it's something that we have been seeing quite a bit in the last few months or so.
L
This is in line with what Stephen from IBM mentioned: ONNX is being used for non-deep-learning purposes as well, and this is something that we have seen too. For example, if you were to build or simulate a highway lane-following application, deep learning is a very small component of the perception module of that ADAS system. There are various other subsystems or subcomponents, like the vision detector, sensor fusion, lane-following decision and control, vehicle dynamics, and so on and so forth. That all needs to work as a system, so deep learning in its essence is a very, very small part of that system.
L
Now, talking more about system integration: the integration of model-based design and AI is central to some of our users. For example, Denso Ten, which is a huge automotive electronics manufacturer from Japan, has used AI and Simulink for complex vehicle control problems.
L
What they did was integrate the AI model into their existing control systems and use the apps for quick network construction, so that they could do rapid prototyping to see whether a workflow works or not, with MATLAB and Simulink working together — the language of technical computing together with the model-based design approach — to build a system for an AI application.
L
To discuss our current goals: our current goals for ONNX are threefold. Firstly, we aim to import 90 percent of the models released on the ONNX Model Zoo. That is something we take very seriously, and we treat it as a benchmark that we can import the latest models shared on the ONNX Model Zoo.
L
Support for quantization and multi-platform code generation for imported ONNX models has always been a strong point for us over the last year and a half, and we will continue to support that, so that our users keep engaging with us on ONNX and the other code-generation workflows. And we also aim to export 90 percent of the deep learning models trained in MATLAB to ONNX. It's 90 percent because there are some operator mismatches — things we support that ONNX doesn't support, and things like that. We want to keep increasing that and making sure that we support exporting almost all the models trained in MATLAB to ONNX, to make it an open ecosystem for people to collaborate and work in. And that was it for me.
L
Can you guys hear me?
L
Sorry, sorry — I couldn't hear your question. Can you please repeat it?
B
Could you say a little bit more about the operator mismatches — which specific operators?
F
Yes. So currently, most of the mismatched operators are operators that involve changing the number of dimensions, like Scatter or Gather, which MATLAB doesn't really focus on supporting; those models and operators are the direction we're working on.
F
L
Yes, we do. We use our own inference engines via MATLAB Coder and GPU Coder, which help you target C/C++ and CUDA code. So yeah, thanks. Yep.
A
All right, I think we can keep going. This is the end of the partner and end-user stories, so we can open it up for the next five to ten minutes for Q&A. I guess you're all free to chime in and ask questions of any of the presenters.
A
Yeah, for the benefit of everyone, if you don't mind, just share that with everyone here — I'm not sure if that chat went to a private chat or to everyone.
B
L
So once you import the ONNX model into MATLAB, you can set your own training options, you can set your own hyperparameters like learning rate and batch size and other things, and you can also add layers that our Deep Learning Toolbox provides, maybe do some transfer learning, and retrain the ONNX model on your own custom dataset that you might have labeled or preprocessed in MATLAB.
E
L
So they are using our own inference engine — that's mainly for the importing workflow. For the export workflow, when users train in our ecosystem, they export to ONNX Runtime for some of the targets that we don't support, for example FPGAs; those are the cases where people use it, or for low-compute, low-power DSPs.
L
E
Okay, and by that runtime you mean the ONNX Runtime developed by your team, right?
E
My question is — he mentioned that the customer is using ONNX Runtime — I just want to make sure whether that runtime is developed by your company or...
F
I believe when Shounak said ONNX Runtime, what he meant is the runtime developed by Microsoft. So basically, at MathWorks we provide two directions: we provide the flexibility to allow our users to convert models, like MATLAB deep learning models, to ONNX ones, and we also allow them to bring ONNX models into MATLAB, so they can go either way.
E
Okay, all right. So those models are mostly classical machine learning models, right?
F
A
M
Hi, this is Srivaddhi. How do we compare the PMML format with ONNX — or do we?
N
Hello, this is Svetlana from IBM. Yes, I have worked on PMML for many, many years and now I'm working with ONNX, and I'm working on a document comparing the two. So basically, PMML has very good support for traditional machine learning but does not support deep learning, while ONNX supports deep learning pretty well and has support for traditional machine learning, at least for scikit-learn.
N
M
I'll wait for that comparison report, thank you. And how big are the datasets we have tried in all these examples — what comes to mind that the various companies have tried?
L
So yeah, I can speak on behalf of MathWorks. We have worked with gigabytes and terabytes of data for image-based workflows, which can be used via datastores that integrate well with ONNX on our side.
H
Yeah, for Microsoft, we use ONNX with large datasets, and we have large models — like large GPT-style models and others — and we also use it on gigabytes of data.
M
H
Sorry — ONNX is a standard; it does not constrain you to a single machine, a single accelerator, or multiple accelerators. So are you talking about ONNX or about ONNX Runtime?
N
Actually, I have a question for the MATLAB team: are you going to support ONNX training?
L
ONNX training — so that is something that we had briefly discussed, but it's something that we are currently not focusing on. Ting, you can correct me if I'm wrong here. This is something that we haven't gotten many requests for from users yet, and it's very user-driven, so it is something we're aware of, but it's something that we might not be actively working on. But Ting, you can add your comments here.
F
We hear some requests for supporting PMML for classical machine learning models, but I don't think we support PMML for now.
N
Okay — and you said you don't... do you export ONNX-ML for traditional machine learning?
F
I
M
A general question before you close: what are your priorities — what are you looking for from community contributors like us? I am new here.
A
Perhaps Prasanth, Harry, and Joe from the steering committee will answer that question.
O
Yeah, I guess in the next section each of the different SIGs and working groups is going to give an overview of their current work and upcoming work, and that's going to be the place where you can learn about the kinds of things that are going on and how you can best get involved. At the highest level, the SIGs are focused on specific areas of ONNX; in general, we encourage you to join the different SIGs.
O
You can participate in the steering committee meetings, which are open to everyone. You can also facilitate adoption and usage among your customers and in your products. So there are a number of ways that you can contribute to the ONNX community.
A
B
K
Okay, this is Chin. I just have one sort of request: because I heard the conversation about ML and also about who's going to support training, I'll put a link in the chat. I'd like to have people working on converters — or anyone who has an interest — provide your views on where you are in terms of ONNX-ML support and also training, and we will look at the poll results later. Okay, thank you.
A
All right, so with that — I guess we've been going for almost two hours now — maybe we should take a short five-to-ten-minute break. All right, so that gives us a chance to take a quick break.