From YouTube: Webinar: MindSpore and Cloud Native Ecosystem
Description
In this webinar we introduce MindSpore, the newly open-sourced deep learning framework for mobile, edge, and cloud scenarios, and how we use the cloud native ecosystem, with projects like Kubeflow and Kubernetes, to make deploying MindSpore simple.
Presenters:
Zhipeng Huang, Open Source Community Manager @MindSpore
Yedong Liu, Open Source Engineer @Huawei
Moderator: I'd like to thank everyone for joining us today. Welcome to today's CNCF webinar, "MindSpore and Cloud Native Ecosystem." I'm Kristin, a consultant and a CNCF ambassador, and I'll be moderating today's webinar. I would like to welcome our presenters today, Zhipeng Huang and Yedong Liu, open source community manager and open source engineer at Huawei. Please bear with me on pronunciation, and correct me later.

A few housekeeping items before we get started. During the webinar, you are not able to talk as an attendee. There is a Q&A button at the bottom of your screen, right below the presentation; please feel free to drop your questions in there, and we will get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful of all your fellow participants and presenters.
Zhipeng Huang: Okay. MindSpore is a new open-source deep learning framework. So, think TensorFlow, PyTorch, you know, MXNet: MindSpore is a new addition to the slew of open-source deep learning frameworks. We open sourced it last Saturday, so it is fresh out of the oven. MindSpore is designed for developers and users to use easily in mobile, edge, and cloud scenarios.
Hopefully we can provide a very friendly design for developers to use, and also efficient execution for scientists. MindSpore is highly optimized for the Ascend AI processor, but we also support general hardware like CPUs and GPUs. You can visit our official website, and we provide both Chinese and English versions of the website and the main repo.
MindSpore has a Python front end, so data scientists can write machine learning and deep learning models in Python quickly and easily. Then we have a C++ backend implementation of several key features, and we also have another module called GraphEngine. GraphEngine is sort of the backend engine for MindSpore: it provides many of the low-level optimizations, pipeline parallelism, and on-device execution. For example, you can actually upload an entire graph through GraphEngine onto the Ascend AI processor, so you can get the maximum performance out of it.
So, several key features that MindSpore brings to the world. The first one is automatic differentiation. Automatic differentiation is not a new thing per se, but MindSpore offers source-code-based automatic differentiation; for those of you who are familiar with compilation technologies, it is source-to-source compilation.
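The idea behind automatic differentiation can be sketched with a toy forward-mode implementation using dual numbers. This is only an illustration of the concept; it is not MindSpore code, and MindSpore's actual approach is a source-to-source transformation rather than the operator-overloading trick used here:

```python
# Minimal forward-mode automatic differentiation via dual numbers.
# Illustrative sketch only -- not MindSpore's implementation.

class Dual:
    """A value paired with its derivative (a 'dual number')."""
    def __init__(self, value, deriv=0.0):
        self.value = value
        self.deriv = deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def grad(f):
    """Return a function that computes df/dx at x."""
    def df(x):
        return f(Dual(x, 1.0)).deriv
    return df

# f(x) = 3x^2 + 2x, so f'(x) = 6x + 2 and f'(2) = 14
f = lambda x: 3 * x * x + 2 * x
print(grad(f)(2.0))  # -> 14.0
```

The point is that the user writes only the forward computation; the framework derives the gradient code automatically.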
When you are writing the model, you can actually just add one line and switch between a static graph and a dynamic graph. Static graph versus dynamic graph is kind of an ongoing struggle in the deep learning community: for production, people usually prefer the static graph, but for debugging and development, people usually prefer the dynamic graph. So MindSpore provides both: just add a one-liner and you can switch between the two modes.
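To see what that switch means, here is a toy sketch of the two execution modes. In MindSpore itself the one-liner is a context call such as context.set_context(mode=...); the classes below are not MindSpore API, just an illustration of eager versus staged execution:

```python
# Toy illustration of dynamic ("eager") vs. static ("graph") execution.
# Not MindSpore code -- it only shows what the two modes mean.

class Eager:
    """Dynamic graph: every op executes immediately -- easy to debug."""
    def add(self, a, b):
        return a + b
    def mul(self, a, b):
        return a * b

class Graph:
    """Static graph: ops are recorded first, then the whole graph is
    executed at once -- which is where a compiler can optimize."""
    def __init__(self):
        self.tape = []  # recorded operations
    def _record(self, fn, a, b):
        self.tape.append((fn, a, b))
        return ("node", len(self.tape) - 1)  # handle to a future value
    def add(self, a, b):
        return self._record(lambda x, y: x + y, a, b)
    def mul(self, a, b):
        return self._record(lambda x, y: x * y, a, b)
    def run(self, out):
        vals = []
        for fn, a, b in self.tape:
            deref = lambda v: vals[v[1]] if isinstance(v, tuple) else v
            vals.append(fn(deref(a), deref(b)))
        return vals[out[1]]

def model(x, ops):
    # the same model definition runs under either mode
    y = ops.add(x, 1)
    return ops.mul(y, 2)

print(model(3, Eager()))  # runs op by op -> 8
g = Graph()
out = model(3, g)         # only records the graph
print(g.run(out))         # executes the recorded graph -> 8
```

The one-liner switch in the talk means the same model definition can run in either mode without being rewritten.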
Another thing MindSpore brings is auto parallel. Typically in deep learning we have data parallelism and model parallelism. That means in distributed training you can have either the data distributed across the cluster, or the model distributed across the cluster. Sometimes you can use hybrid parallelism to take advantage of both data and model parallelism. MindSpore supports both types of parallelism, and, similar to the static graph and dynamic graph switch, it is also a one-liner change.
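What data parallelism means can be sketched in a few lines: shard the batch across workers, compute local gradients, then average them so every worker applies the same update. MindSpore's auto-parallel automates this kind of placement; the function names below are illustrative, not MindSpore API:

```python
# Toy sketch of data parallelism: the batch is split across workers,
# each worker computes a gradient on its shard, and the gradients are
# averaged so every worker applies the identical update.
# Illustrative only -- not MindSpore API.

def local_gradient(w, shard):
    # dLoss/dw for mean squared error 0.5*(w*x - y)^2 over one shard
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def data_parallel_step(w, batch, num_workers, lr=0.1):
    # 1. shard the batch, one shard per worker
    shards = [batch[i::num_workers] for i in range(num_workers)]
    # 2. each worker computes its local gradient (in parallel, in reality)
    grads = [local_gradient(w, s) for s in shards]
    # 3. average the gradients across workers (the allreduce step)
    g = sum(grads) / num_workers
    # 4. every worker applies the same update
    return w - lr * g

# Data follows y = 2x, so one step moves w from 0.0 toward 2.0:
batch = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]
print(data_parallel_step(0.0, batch, 2))  # -> 1.5
```

With equal shard sizes, the averaged gradient matches what one worker would compute on the whole batch, which is why data parallelism scales out without changing the result.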
So, technology aside, we also embraced an open governance model that we learned from CNCF and also Kubernetes. For example, we have a technical steering committee set up with 40 members from various universities, companies, startups, and institutions, actually across the globe, from China, Europe, the UK, and the US. We want to make sure the community has a truly open and global technical governing body.
We welcome the establishment of further SIGs and working groups if there is a need, for example a research working group or a security working group. The establishment of all the SIGs and working groups will be approved by the TSC, and everything across the community will be done accordingly. With this governance structure we want to guarantee that we have an open development procedure.
B
We
also
have
community
partners
that
not
necessary
you
involved
in
one
spoors
community
governance
per
se,
but
could
like
collaborate
in
open
source,
for
example
like
the
DJI
lab,
which
is
really
good
at
graph
neural
networks
and
open
source
project
from
alpha
I
like
Nova's,
which
is
a
very
great
project,
providing
the
vector
processing,
and
so
we
can
build
a
index
searching
engine.
Basically.
Yedong Liu: If we take a look at other deep learning frameworks, including TensorFlow, PyTorch, and MXNet, these frameworks benefit from implementing the TFJob, PyTorchJob, or MXJob custom resource definitions, or CRDs, and using these CRDs to create and manage deep learning jobs in a Kubernetes cluster, mainly for distributed training. As Zhipeng mentioned, MindSpore has some highlighted technical features, including automatic differentiation and auto parallel.
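A CRD-based deep learning job is declared as a short manifest. As a reference, a minimal Kubeflow TFJob looks roughly like this (the image name is a placeholder, and the manifest is trimmed to the essentials):

```yaml
apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
  name: dist-mnist
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: tensorflow
              image: my-registry/dist-mnist:latest   # placeholder image
              command: ["python", "/opt/train.py"]
```

The operator watches for these objects and spins up the pods, services, and restart logic the distributed job needs.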
If MindSpore can also leverage the resource allocation and management capabilities of Kubernetes, distributed training becomes much easier and more controllable to achieve in a containerized environment, and monitoring the job is also visible through operators. So an MS Operator is something we want to achieve in a short time. You can see it as: Kubernetes plus MindSpore gives us the MindSpore Operator.
The MS Operator is part of MindSpore's code right now. Since MindSpore is, you know, very young, only four days old, right now we have only finished a proof of concept of training the simplest model using CPUs in Kubernetes. Hopefully we can see distributed training with multiple backends, including CPU, GPU, and Huawei's Ascend processor, in the near future, with more demos. Next slide. I want to talk about MindSpore and the Kubeflow ecosystem.
Kubeflow just announced its major 1.0 release recently, with the graduation of a set of core applications, including Kubeflow's UI, the Jupyter notebook controller and web app, TFJob, PyTorchJob, kfctl, and so on. Kubeflow is, in our eyes, a very mature community to cooperate with, and we can use their powers together with MindSpore to push both of us forward.
The MindSpore community is also driving to collaborate with Kubeflow, as well as making the MS Operator more complete and well organized, and keeping its dependencies and packages up to date. All these components will make it easy for machine learning engineers and data scientists to use cloud assets, both public and on-premise, for machine learning workloads. MindSpore is looking forward to enabling our developers to use Jupyter, which is one of our tasks, to develop models.
In the future, developers can use Kubeflow tools like Fairing, the Kubeflow Python SDK, to build containers and create Kubernetes resources to train their MindSpore models. Once training is completed, we can also use KFServing to create and deploy a server for inference, so that we can complete the machine learning lifecycle. Another thing I want to talk about is distributed training; distributed training is another field that MindSpore will be focusing on.
There are two major distributed training strategies nowadays: one based on parameter servers, like TensorFlow's, and the other based on collective communication primitives such as allreduce. The MPI Operator is already implemented and used in the Kubeflow community. The MPI Operator is one of the core components of Kubeflow, and it makes it easy to run synchronized, allreduce-style distributed training on Kubernetes. The MPI Operator also provides a CRD for defining a training job on a single CPU or GPU, multiple CPUs or GPUs, or even multiple nodes.
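The allreduce primitive that this style of training is built on can be sketched in a few lines. This simulates only the semantics; real implementations such as MPI or NCCL use bandwidth-optimal ring or tree algorithms so no single node becomes a bottleneck:

```python
# Simulate the allreduce collective: every worker contributes a local
# value and every worker ends up with the same reduced result.
# Semantics only -- real MPI/NCCL implementations use ring/tree
# algorithms to spread the communication load.

def allreduce(local_grads):
    """Every worker receives the elementwise sum of all workers' gradients."""
    total = [sum(col) for col in zip(*local_grads)]  # elementwise reduce
    return [list(total) for _ in local_grads]        # "broadcast" to all

# Four workers, each holding a 2-element local gradient:
grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
reduced = allreduce(grads)
# every worker now holds [16.0, 20.0] and can average it locally:
avg = [g / len(grads) for g in reduced[0]]  # -> [4.0, 5.0]
```

Synchronized allreduce-style training is exactly this: each step, every worker's gradient is reduced, and every worker applies the same averaged result.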
It also implements a custom controller to manage the CRDs, create dependent resources, and reconcile them to the desired state. So, with the MPI Operator together with MindSpore's multiple backends, including Huawei's Ascend chips with their high-performance interconnects, I think it is possible that MindSpore will bring distributed training to a new high level. All right, next slide. This is the MS Operator workflow I imagine in the future.
This is a high-level set of tasks needed to run a MindSpore job on Kubeflow. First, we write or reuse the Python training code. Then we build the YAML file based on the CRD definition of MSJob, describing the training job: the container image, and the program, the training file we wrote in step one, to execute with our parameter settings. Then we build a Docker container image containing all the code and dependencies. The last step is just sending the job YAML file to the cluster for execution, which is a simple kubectl command.
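To make those steps concrete, here is a sketch of what such a job YAML might look like. The MSJob CRD is still a proof of concept, so every field name below is a guess at the eventual shape, modeled on the existing TFJob/PyTorchJob style, with placeholder image and command:

```yaml
# Hypothetical MSJob manifest -- the MSJob CRD is a proof of concept,
# so this schema is illustrative, not final.
apiVersion: mindspore.org/v1
kind: MSJob
metadata:
  name: msjob-lenet
spec:
  replicaSpecs:
    Worker:
      replicas: 1
      restartPolicy: OnFailure
      template:
        spec:
          containers:
            - name: mindspore
              image: my-registry/mindspore-lenet:latest  # placeholder
              command: ["python", "/opt/train.py"]
```

The final step in the workflow is then a single kubectl apply -f msjob.yaml against the cluster.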
So, out of the box, Kubernetes doesn't understand how distributed MindSpore works. Kubernetes only needs to understand where the pods are running and how they talk with one another.
Actually, there are some fun facts about the installation issues. As we mentioned, MindSpore has only been open source for four days, so it's super young, and most of the issues we encountered in our open source community are about installing or building, because many developers want to build from source, but many fail: they cannot build from source, sometimes there are compile errors, sometimes their environment is not suitable.
C
Sometimes
they
want
to
install
on
Mac,
but
right
now
my
sport
cannot
support
just
directly
build
on
Mac
Macintosh,
but
we
have
some
alternative
solutions,
so
we
prepared
my
spawn
docker
images
for
users,
posts,
effusion
and
GPU
version.
Actually
it
turned
out
that
this
is
a
great
solution
to
these
installing
issues
here,
as
you
can
see,
he's
a
right
button,
one
of
our
developers.
She
said
it
was
more
comfortable
installing
from
doctors
and
building
from
source.
This
is
a
translation,
no
pain,
installing
no
pain
running
the
demo.
The
starting
experience
is
fantastic.
C
I
strongly
recommend
everyone
is
story,
my
small
by
doc
Creek.
So
that's
the
power
of
doctor
and
cognitive,
okay.
Next
one:
okay
in
this
demo,
I
recorded
a
video
of
training
early
net
with,
and
this
dataset
using
my
Sephora
see
you
on
single
node
in
kubernetes
cluster.
How
can
you
go
to
the
YouTube
okay.
Zhipeng Huang: Okay, so as we mentioned, this is a brand-new project, and we definitely want every developer who's interested in deep learning development to participate. There are a lot of ways to participate in the community. You can check out the code: as I mentioned, our main development happens on Gitee. Gitee also has English support, which is very nice and easy, but if you still prefer GitHub, you can still use our mirror repo.
For using Docker, this is probably by far the most convenient way we saw. We prepared, actually with the help of another developer, someone we didn't know but who just helped answering issues, the build instructions for the CUDA Docker container, so we prepared two versions of the CUDA container. And in April, for the China region, we will actually open up the whole cloud community service for the Ascend backend cluster, so you can experiment with the Ascend backend via Docker as well. For discussion, we are on Slack, sorry for the long link, but you can join our discussion on Slack, or we strongly advise you to register or subscribe to our mailing list if you have any questions or other things you want to discuss with the community.
Moderator: Okay, sounds like that's it. So, yeah, thanks again for the great presentation, Zhipeng and Yedong, and that's all we have for today. Thanks everyone for joining us again. The webinar recording and slides will be online later today. We are looking forward to seeing you at a future CNCF webinar. Have a great day, and thank you so much.