Description
LF AI & Data Day - ONNX Community Meeting, October 21, 2021
ONNX as a standard format for institutions with legacy systems
Speaker: Haixuan Xavier Tao (Banque de France)
We are also in charge of the financial stability of the financial system, and recently we have been trying to improve our activities with artificial intelligence. For example, we are trying to increase the detection of fake banknotes. One of the problems with banknotes is that they can be really dirty, really dry, or really wet, and this creates a lot of noise; but thanks to computer vision, we have been able to reduce this noise and increase the signal-to-noise ratio for fake banknote detection, for example.
Most of our transactions are around millions of euros, so each percent increase in fraudulent transactions detected is going to have a huge impact on the bank. And lastly, we are also trying to improve our retail bank supervision capabilities.
So the thing is that, with NLP and computer vision, we can be really agile and provide software that is able to work on different banks with different IT systems and different types of data. This is really new: we have been able to provide software that can reliably work across varying banks, and this has been really great. I want to deep-dive a little bit into how ONNX is helping us on this last case.
So let's say we're going to supervise a bank together. Let's say we want to check for credit supervision: we want to make sure that the bank is extending credit for the right reasons. Another example is terrorism financing or money laundering. For all the things we want to supervise, what we do at Banque de France is send an agent to a bank to collect the bank's database: who are their customers?
You will be able to increase the speed of the deep learning model with OpenVINO or DirectML, and if it's an NVIDIA laptop, you will be able to speed it up with NVIDIA CUDA. This is great because the faster the deep learning model can run, the bigger the client pool we can actually cover, and the larger the coverage of banks we have been able to achieve.
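The idea above can be sketched in a few lines: ONNX Runtime lets you list execution providers in preference order and falls back toward the CPU. This is an illustrative pure-Python sketch of that fallback logic, not the actual ONNX Runtime API; only the provider names (`CUDAExecutionProvider`, `OpenVINOExecutionProvider`, `DmlExecutionProvider`, `CPUExecutionProvider`) come from ONNX Runtime, while `pick_provider` is a hypothetical helper.

```python
# Illustrative sketch (hypothetical helper, not the ONNX Runtime API):
# pick the first execution provider the host actually supports, falling
# back to the CPU, mirroring how ONNX Runtime tries providers in order.

PREFERRED_PROVIDERS = [
    "CUDAExecutionProvider",      # NVIDIA GPUs
    "OpenVINOExecutionProvider",  # Intel hardware
    "DmlExecutionProvider",       # DirectML on Windows
    "CPUExecutionProvider",       # always-available fallback
]

def pick_provider(available):
    """Return the first preferred provider present in `available`."""
    for provider in PREFERRED_PROVIDERS:
        if provider in available:
            return provider
    return "CPUExecutionProvider"

# e.g. on a machine with only Intel acceleration:
print(pick_provider({"OpenVINOExecutionProvider", "CPUExecutionProvider"}))
# → OpenVINOExecutionProvider
```

Each extra provider in this list is another dependency to install and maintain per machine, which is exactly the maintenance concern discussed next.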
But the thing is, from a maintenance perspective, having a lot of execution providers carries the risk of installation issues, maintenance issues, or issues when you want to do upgrades. So we have been thinking about building a kind of unique ONNX execution provider that would work on all platforms, and the way we are trying to do this is with WebGPU. WebGPU, theoretically, could work on Windows, Linux, and macOS, and there is actually a Rust-based implementation of WebGPU, called wgpu, that supports Vulkan, Metal, and DX12, and therefore supports almost all platforms. In terms of performance it is actually pretty performant, being faster than WebGL, so it's really promising.
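The platform coverage claimed above can be summarized in a small table: one WebGPU implementation fronting a different native graphics API per OS. This is an illustrative sketch of that mapping, assuming typical backend choices; `backend_for` is a hypothetical helper, not part of the wgpu API (wgpu selects its backend internally).

```python
# Illustrative sketch (hypothetical helper, not the wgpu API): which native
# graphics API a WebGPU implementation such as wgpu can target per platform,
# which is what gives a single execution provider near-universal coverage.

BACKENDS = {
    "linux": "Vulkan",
    "windows": "DX12",   # Vulkan is also available on Windows
    "darwin": "Metal",   # macOS
}

def backend_for(platform):
    """Return the native API wgpu would typically target on this platform."""
    return BACKENDS.get(platform, "Vulkan")  # Vulkan as a common default

print(backend_for("darwin"))  # → Metal
```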
So I tried that: I built a research project called wonnx, which is built on Rust and WebGPU, and I tried to see if it could be done and if it could work on all platforms. I implemented about 40 of the ONNX operators, and I successfully built an inference session, took an MNIST ONNX model, ran it, and got the right results, and did a similar thing with SqueezeNet. So it can be done, and I think I have done the proof of concept. But now the bottleneck is the optimization of the provider.
As of now, the SqueezeNet inference time with wonnx is about the same as on the CPU with the Microsoft ONNX Runtime provider.
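A comparison like the one above is usually made by averaging wall-clock time over many inference calls. This is a minimal sketch of such a harness, assuming a stand-in `run_model` function in place of a real wonnx or ONNX Runtime session call; none of these names come from either library.

```python
# Illustrative sketch: average the wall-clock time of repeated inference
# calls, the way one would compare a wonnx run against the ONNX Runtime
# CPU provider. `run_model` is a stand-in for a real inference call.
import time

def run_model(x):
    # Stand-in workload for a real session-run call.
    return [v * 2.0 for v in x]

def mean_inference_ms(fn, inputs, runs=100):
    """Average wall-clock time per call, in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn(inputs)
    return (time.perf_counter() - start) * 1000.0 / runs

latency = mean_inference_ms(run_model, [1.0] * 1000)
print(f"{latency:.3f} ms per inference")
```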