From YouTube: ONNX20210324 V04 PopART support ONNX on IPU en
First of all, let me introduce my company, Graphcore. Graphcore develops AI processors specifically designed for AI and machine learning workloads, called the IPU. We provide hardware, software, and platforms as part of a complete solution to enable innovators to solve current and future machine learning problems.
We have announced our second-generation IPU processor, the GC200. Each GC200 processor has nearly 1,500 processor cores, and inside the processor we have been able to fit 900 megabytes of high-speed RAM. We have also announced our new product, the IPU-M2000 system, which consists of four GC200 IPU processors. Multiple IPU-M2000s can be directly connected to build a large-scale IPU-POD system.
Once we have created the session, we use initAnchorArrays to initialize the host buffers that store the results. The important one is line 14, where we call prepareDevice. It does a lot of work, including performing graph optimization, compiling, and loading the graph onto the device once compilation has completed.
Then we invoke the session.run method and give it a PyStepIO. The PyStepIO is used to map the input and output tensor names to the host data buffers, so that the call will execute on the IPU and send the input data from the host to the IPU. Finally, we get the result; the result is stored in the host buffer.
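The inference flow just described can be sketched with the PopART Python API. This is a minimal illustration, not the exact code from the talk's slide: the tensor names ("input", "output"), the input shape, and the use of the simulated IPU Model device (so it can run without IPU hardware) are all assumptions.

```python
import numpy as np
import popart

# Describe which outputs to fetch on each step
data_flow = popart.DataFlow(1, {"output": popart.AnchorReturnType("All")})

# Acquire a device; the IPU Model simulator avoids needing real hardware
device = popart.DeviceManager().createIpuModelDevice({})

# For inference, the ONNX file can be passed directly into the session
session = popart.InferenceSession(
    fnModel="model.onnx", dataFlow=data_flow, deviceInfo=device
)

# prepareDevice() performs graph optimization, compiles the graph,
# and loads it onto the device
session.prepareDevice()

# Host buffers that will receive the results
anchors = session.initAnchorArrays()

# PyStepIO maps input/output tensor names to host data buffers
x = np.zeros([1, 3, 224, 224], dtype=np.float32)  # assumed input shape
stepio = popart.PyStepIO({"input": x}, anchors)

# Executes on the IPU: input data is streamed from host to device,
# and results are streamed back into the anchor buffers
session.run(stepio)
result = anchors["output"]
```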
Finally, this update node will also be added into the graph. In this way we finally get the graph for training. This is the code example for training, and you can see that the code structure is pretty much the same as for inference. This time, rather than passing the ONNX file directly into the session, we use the Builder class to load the model.
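A corresponding training sketch, again under stated assumptions: the Builder loads the ONNX model, a loss and optimizer are attached, and PopART adds the gradient and weight-update nodes to the graph. The tensor names, shapes, and the choice of NLL loss are illustrative, not from the talk.

```python
import numpy as np
import popart

# Load the model through the Builder rather than passing the file to the session
builder = popart.Builder("model.onnx")
out = builder.getOutputTensorIds()[0]  # assumed: one output (log-probabilities)

# Add a label input and a loss on top of the model output
label = builder.addInputTensor(popart.TensorInfo("INT32", [1]))
loss = builder.aiGraphcore.nllloss([out, label])

data_flow = popart.DataFlow(1, {loss: popart.AnchorReturnType("All")})
device = popart.DeviceManager().createIpuModelDevice({})

# Supplying a loss and an optimizer is what causes the gradient and
# weight-update nodes to be added into the graph
session = popart.TrainingSession(
    fnModel=builder.getModelProto(),
    dataFlow=data_flow,
    loss=loss,
    optimizer=popart.ConstSGD(0.01),
    deviceInfo=device,
)

session.prepareDevice()
anchors = session.initAnchorArrays()

x = np.zeros([1, 3, 224, 224], dtype=np.float32)  # assumed input shape
y = np.zeros([1], dtype=np.int32)                 # assumed label shape
stepio = popart.PyStepIO({"input": x, label: y}, anchors)
session.run(stepio)
```

Note how the structure mirrors the inference example: the same DataFlow, prepareDevice, initAnchorArrays, and PyStepIO steps apply; only the session class and the Builder-based model loading differ.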
So multiple batches of data can be executed in parallel, and after the warm-up stage, the IPU utilization reaches 100 percent.
Okay, that's all. By the way, PopART is already open source. You can get more information through the Graphcore website and GitHub, and you can also send me an email if you have any questions. Thank you.