From YouTube: ONNX20210324 V07 ONNXonPaddlePaddle
We started from scratch last year, and it is already an 11k-star repo with more than 25 multilingual text-recognition features for developers all over the world. We have accumulated more than 71k stars across our Paddle family, as well as 18k forks, with many repos on GitHub trending lists, such as PaddleDetection, PaddleGAN, and PaddleX. We were also named a top-two open-source project in China last year, not to mention that more than 5k developers have contributed to our community. We have a lot of self-organized study groups in cities and colleges, and have hosted AI contests with over 500 colleges.
So, aside from our success in community and academic growth, PaddlePaddle has also been widely adopted commercially across all kinds of industries and enterprises. Take one of our leading customers, CATL, for example: CATL is one of the world's top three battery providers, and by adopting our platform and algorithms, CATL greatly improved its production efficiency and has been one of our biggest customers ever since.
First of all is X2Paddle. X2Paddle equips ONNX developers to easily convert models into Paddle format and leverage Paddle's high-performance inference engines. It supports frameworks including ONNX, TensorFlow, Caffe, PyTorch, and so on. The other part is Paddle2ONNX.
So what is X2Paddle all about? It is about using a simple command to convert all kinds of third-party framework models into Paddle format, with two kinds of outputs. The first is the inference model, which you can deploy on any of the inference engines Paddle has; the other is the Python code, which you can use for secondary development. We have also done a small benchmark here; take our ultra-lightweight, high-performance inference engine Paddle Lite, for example, as you can see below when it comes to floating-point models.
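The "simple command" described above can be sketched with the x2paddle CLI; this is an illustrative sketch, assuming the tool is installed from PyPI as `x2paddle` and using placeholder model file names:

```shell
# Install the converter (assumed PyPI package name).
pip install x2paddle

# Convert an ONNX model into Paddle format.
# --save_dir receives both outputs mentioned in the talk:
# the deployable inference model and the generated Python
# code for secondary development.
x2paddle --framework=onnx --model=model.onnx --save_dir=pd_model

# Other supported frameworks are converted similarly, e.g. TensorFlow:
x2paddle --framework=tensorflow --model=model.pb --save_dir=pd_model_tf
```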
So users can easily transform their Paddle models into ONNX format and deploy them to as many backends as they want, as long as ONNX Runtime supports them. Beyond that, we have done lots of native integration between Paddle2ONNX and some popular accelerators, such as Intel's OpenVINO and NVIDIA's TensorRT and Triton Inference Server, so users can seamlessly deploy their Paddle models onto any of these accelerators without pain, starting from Paddle 2.0.
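The export path in the other direction can be sketched with the paddle2onnx CLI; the flags and file names below are illustrative and assume a Paddle inference model saved as `model.pdmodel` / `model.pdiparams`:

```shell
# Install the exporter (assumed PyPI package name).
pip install paddle2onnx

# Export a saved Paddle inference model to ONNX format.
# The resulting model.onnx can then be run on ONNX Runtime,
# OpenVINO, TensorRT, or Triton, as mentioned in the talk.
paddle2onnx --model_dir inference_model \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --save_file model.onnx
```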
So for more information, please check it out on our GitHub. You can explore more about our projects, and if you think they are beneficial to your projects, please star them. You can also visit our website for more information.