From YouTube: Deep Learning For Science
Description
Steven Farrell of NERSC presents a talk on Deep Learning For Science. Recorded live via Zoom at GPUs for Science 2020. https://www.nersc.gov/users/training/gpus-for-science/gpus-for-science-2020/ Session Chair: Yan Zhang.
A
So our final speaker topic is Deep Learning for Science, from Steven Farrell. Steven Farrell is a machine learning engineer at NERSC. He supports scientific deep learning workflows on HPC systems through software development, benchmarking, user support, and training. His research interests include applications of deep learning to high energy physics and proteins, as well as deep learning methods for structured data such as graphs. Steven is co-chair of the MLPerf HPC working group and was previously a member of the ATLAS experiment at CERN. So welcome, Steve.
B
Thank you again, thanks for the nice introduction, and thanks to the folks who are still here despite it being time for lunch, at least for those of you in my time zone. I think my talk is a little bit different than the others, because it's going to be less about how to use GPUs and how to do things on GPUs, and a little more high level about deep learning for science, with GPU stuff in there, demonstrations on GPUs, and a little bit about our Cori GPU system. So let's just get into it.
B
So the three main ideas here that I'm going to try to get across are: first, science is getting bigger, solving bigger, harder problems with bigger data sets. Meanwhile, there's this new tool set called deep learning, which provides powerful new tools that can work pretty well for these big science problems. And third, these new emerging deep-learning-for-science workloads need large compute resources, and we can address that, at least partially, with GPUs, increasingly with HPC systems, and with NERSC.
B
So, for some brief relevant background about deep learning: I assume we've all got some familiarity with it. We probably know deep learning is powering many recent technologies. It's transforming a lot of companies from the ground up. It's appearing in things like language translation and speech recognition; it's powering the captions that you see on the bottom of the screen.
B
For example, people get excited about applications in healthcare and autonomous driving, and then there's cool stuff too, like applications to art and games, and many more of course. So deep learning, of course, is this subset of machine learning and AI which is basically powered by deep neural networks.
B
So that's, you know, neural networks that have several layers of computation, usually lots of parameters, and a lot of capacity for learning mappings or functions from inputs to outputs. The idea of neural networks is not at all new: the perceptron was proposed back in the 50s, and then things were popularized in the 80s, but clearly we're in this deep learning revolution nowadays, since around 2012-2013. The reasons for that are a few fold.
B
The first one, of course, is availability of data: we have a lot more large curated data sets available, and deep learning methods, you've probably heard, can do better than traditional machine learning methods or other types of methods if you have enough data. And then the second one, which is particularly relevant for us today, is the availability and usage of GPUs for these applications.
B
The plot here is from the fairly famous ImageNet competition: the error rate is in red, and the blue is the usage of GPUs over time, around 2012 and 2013. I think it was 2012 when deep learning first won, and from then on it was just always winning. At the same time, GPU usage was really taking off: deep learning methods started using GPUs and started winning competitions like this one.
B
In particular, it really built the hype around deep learning, and then a lot of people started getting into it, trying out ideas and pushing on this third area here, which is the algorithmic advances. I'm not going to go into any of that stuff, but just to call out that there's more going on than just the first two.
B
So to expand a little bit on that GPU thing: why GPUs? Why are they chosen for deep learning? They're definitely, at least as of today, the accelerator of choice for implementing deep learning workloads. And why is that? This may have been mentioned yesterday or today, I'm not sure, so apologies, but neural networks have a lot of potential parallelism in them at various levels.
B
So usually you're sampling from a data set, and different samples of the data set can be processed mostly in parallel. But even just for the processing of one sample in a neural network, there's a lot of parallelism in the computation itself, and that computation is fairly regular in some sense: it's mainly just linear transformations, like big matrix multiplications, and pointwise functions, like non-linearities such as the rectified linear unit.
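To make that concrete, here's a minimal PyTorch sketch of those two motifs, a dense matrix multiplication followed by a pointwise non-linearity (the sizes are arbitrary, just for illustration):

    import torch

    # A single fully-connected layer is a big matrix multiplication
    # followed by a pointwise non-linearity (here, the rectified linear unit).
    device = "cuda" if torch.cuda.is_available() else "cpu"

    x = torch.randn(1024, 4096, device=device)  # a mini-batch of inputs
    W = torch.randn(4096, 4096, device=device)  # layer weights
    b = torch.randn(4096, device=device)        # layer bias

    h = torch.relu(x @ W + b)  # dense matmul + pointwise ReLU: ideal GPU work

Both operations map naturally onto the GPU's many cores.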
B
So much like how GPUs are great for graphics processing, GPUs are great for deep neural networks. The computation patterns, these motifs, are fairly simple and similar enough. GPUs have many simple cores compared to a CPU, and they have higher memory bandwidth to feed those cores and keep them churning. These plots I took from a blog; they show the peak FLOP performance on the top.
B
But getting deep learning to run on GPUs is not at all the end of the story. It's not like deep learning and machine learning practitioners were suddenly able to run on GPUs and never needed anything else; that's never the way anything works when it comes to computing. Obviously folks started applying deep learning to more and more complex tasks, and to do that they're using larger and larger models, and this translates into requiring more and more compute.
B
So this plot on the right is from a blog post from OpenAI, which shows on the axis basically the amount of compute needed to train various well-publicized deep learning results. Notice how this is a log scale, so this is really an exponential explosion over time in the amount of compute needed to train these models. And this doesn't even include the biggest stuff from the last year; in particular, OpenAI recently put out a humongous language model with over 100 billion parameters.
B
So the usual way we tackle things like this is to throw more hardware at the problem, right? We want to throw more GPUs at training these models and do some parallelization of the training.
B
So there are different ways to do parallelization of training of deep neural networks. The common approaches fall into two categories: data parallelism and model parallelism. In data parallelism, basically what you're doing is partitioning your data across the devices and replicating the model across the devices. So if you're doing stochastic gradient descent and you're sampling mini-batches of data from your training set, you take that mini-batch and split it up across your GPUs, and each GPU will have essentially the same model and process its local subset.
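As a rough sketch of that pattern in PyTorch (assuming a single node with one process per GPU, launched with something like torchrun; the model and data here are toys):

    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader, TensorDataset
    from torch.utils.data.distributed import DistributedSampler

    # One process per GPU; each rank holds a full replica of the model
    # and sees a disjoint shard of the data.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank)

    model = DDP(torch.nn.Linear(128, 10).cuda(), device_ids=[rank])

    dataset = TensorDataset(torch.randn(4096, 128),
                            torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(dataset)  # partitions samples across ranks
    loader = DataLoader(dataset, batch_size=32, sampler=sampler)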
B
In contrast, with model parallelism, what you're doing is actually partitioning the model across your devices instead, and even that can be done in a few different ways. You could do it layer by layer: you could have different layers of a neural network on different devices. Or, as this little illustration down here shows, you can actually split up the linear algebra operations within a layer and have them be distributed across devices.
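A minimal sketch of the layer-by-layer flavor (assuming two GPUs are available; sizes are illustrative):

    import torch
    import torch.nn as nn

    # Layer-wise model parallelism: the weights live on different devices,
    # and activations move between devices during the forward pass.
    class TwoDeviceNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.stage0 = nn.Linear(1024, 1024).to("cuda:0")  # first half
            self.stage1 = nn.Linear(1024, 10).to("cuda:1")    # second half

        def forward(self, x):
            h = torch.relu(self.stage0(x.to("cuda:0")))
            return self.stage1(h.to("cuda:1"))  # hop to the second GPU

    out = TwoDeviceNet()(torch.randn(32, 1024))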
B
Generally it's called model parallelism if the weights of the model are somehow distributed or partitioned across devices. So yeah, there are various subcategories of these two, and there are ways to combine them. In practice, the most widely used technique is what's called synchronous data parallel training, which is basically data parallelism, but with everything happening in sync. So all the workers are processing their local chunk of the mini-batch, and then at the end of that training step
B
they do a synchronization of the results: an all-reduce of the gradients. Then each does its own update, and all processors have the same set of model weights throughout the rest of training.
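Hand-rolled for illustration, that synchronization step looks roughly like this (in practice a library such as DDP or Horovod does it for you):

    import torch.distributed as dist

    # After the local backward pass, average gradients across all workers;
    # every rank then applies the same optimizer update.
    def allreduce_gradients(model):
        world_size = dist.get_world_size()
        for p in model.parameters():
            if p.grad is not None:
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                p.grad /= world_size  # average, matching one big global batch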
So to apply this to big problems, to really scale this up, what folks are doing in practice to try and make training as fast as possible is training with the largest possible batch sizes and the largest possible learning rates. These are kind of related: with large batches you can parallelize better.
B
You can spread it across GPUs, and with large learning rates you can try to converge to the answer in the fewest possible steps, because you're taking larger steps. Basically, large batches let you use larger learning rates, but only to a certain extent. This is not at all a free lunch, and in particular, numerous algorithmic challenges arise when you're doing this at really large scale: basically instability and overfitting. I won't go into any more details of how this works in practice, but we do have a tutorial, and there are some resources at the end here.
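One common recipe for that large-batch regime is the linear learning-rate scaling rule with warmup (a sketch of the general idea with illustrative values; this is a widely used technique, not a NERSC-specific prescription):

    def scaled_lr(base_lr, base_batch, global_batch, step, warmup_steps):
        """Linear scaling rule with linear warmup (illustrative only)."""
        target = base_lr * global_batch / base_batch  # scale LR with batch size
        if step < warmup_steps:
            return target * (step + 1) / warmup_steps  # ramp up to tame instability
        return target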
B
So now I'll start to tie this back into deep learning for science. Deep learning can certainly transform science, and I think we're seeing that happen now because of the powerful capabilities of deep neural networks. They can automatically learn patterns from your high dimensional data.
B
They can encode inductive biases and symmetries, which can be really important for science problems. For example, convolutional operations are translationally equivariant, so if you have that kind of symmetry in your data, that's valuable; or, since they use localized kernel patches, if your data has localized features and can build hierarchical representations of those features, then it's a good type of model. And there are lots of things like that in deep learning.
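You can check that equivariance directly in a few lines (a toy demonstration; circular padding is used so the shift symmetry is exact at the borders):

    import torch
    import torch.nn as nn

    # Translational equivariance: shifting the input shifts the conv output.
    conv = nn.Conv2d(1, 8, kernel_size=3, padding=1,
                     padding_mode="circular", bias=False)
    x = torch.randn(1, 1, 32, 32)

    shift = lambda t: torch.roll(t, shifts=2, dims=-1)  # translate horizontally
    print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-5))  # True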
B
So there are many possible application areas, but some of the ones that I think have been emerging as particularly promising lately, in these still early days, are things like analysis of large scientific data sets. For example, the Large Hadron Collider at CERN is producing lots of data, and you can use deep learning to find new physics signals in this data and potentially get more out of the raw data features than you can with handcrafted, traditional physics-based criteria.
B
The next one is to accelerate expensive simulations. A lot of science domains have this problem of very computationally expensive simulations, so various fields are looking at generative models and things like that to supplement or replace those simulations with something faster. And then the third is real-time control and design of experiments.
B
So adoption is on the rise in terms of using deep learning for science; the science communities are definitely drinking the Kool-Aid. There's a growing number of papers, both investigating and proposing methods, but we're also starting to see papers in peer-reviewed journals that are real applications using deep learning, which is great. We've still got a long way to go. In terms of conferences,
B
you see a really growing presence of machine learning and deep learning applications, both in the pure machine learning conference space, things like NeurIPS, which is an immensely popular conference now, but also in the domain science conferences, where you see various tracks and more and more machine-learning-based contributions every year. And there's also been recognition of achievements in AI, with awards like the Turing Award and the Gordon Bell Prize in 2018.
B
Additionally, the DOE is drinking the Kool-Aid in terms of these new methods, so there have been several funding calls in AI for science over the last year. Last year we had this big town hall series across four national labs that produced this 300-page report, so a lot of work went into that, and there's this anticipated ECP-like program on AI for science.
B
These are kind of cherry-picked examples from things that folks at NERSC have been working on, showing that you could use deep learning for, you know, generative models in cosmology and things like super-resolution problems, and high energy physics applications down here, both for reconstruction and for inference with simulations. This one on the left, the climate analytics one, is particularly interesting to note: it's a climate segmentation case run on climate simulation data, large data, and it was scaled up on Summit to around 27,000 GPUs.
B
It broke the exaflop barrier in FP16, and for that it shared the Gordon Bell Prize in 2018. It really shows what you can do with GPUs for deep learning and HPC. Some additional examples are coming out of our NESAP for Learning program at NERSC.
B
So, things like deep learning for catalysis and thermochemistry, generative networks for high energy physics to replace expensive simulations, generative models for turbulent flow, reinforcement learning for controlling light sources, and spatiotemporal modeling on really large data sets like brain imaging and climate.
B
We have prioritized support for the most popular frameworks, for example TensorFlow, Keras, and PyTorch, and for distributed training libraries that we've investigated and know map well onto our systems and have good performance, like Uber's Horovod, native PyTorch distributed, and the Cray plugin. We also support, at least in a prioritized sense, a couple of hyperparameter tuning libraries. This is something that all folks need to do, and HPC is good for it because of the large compute resources.
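For reference, the usual Horovod pattern in PyTorch looks something like this (a sketch of the standard idiom from the Horovod documentation, not a NERSC-specific recipe):

    import horovod.torch as hvd
    import torch

    hvd.init()
    torch.cuda.set_device(hvd.local_rank())  # one GPU per process

    model = torch.nn.Linear(128, 10).cuda()
    optimizer = torch.optim.SGD(model.parameters(),
                                lr=0.01 * hvd.size())  # scale LR with workers

    # Wrap the optimizer so gradient all-reduce overlaps with backward.
    optimizer = hvd.DistributedOptimizer(
        optimizer, named_parameters=model.named_parameters())

    # Start all ranks from identical model and optimizer state.
    hvd.broadcast_parameters(model.state_dict(), root_rank=0)
    hvd.broadcast_optimizer_state(optimizer, root_rank=0)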
B
Cray has an HPO tool, and Ray Tune came out of the RISE Lab. We also support or enable workflow solutions through things like Jupyter, Shifter, etc. For example, we have NVIDIA containers that folks can use via Shifter on our GPU systems. So this is the stuff that we prioritize support for, but we also really try to make sure that users can deploy their own frameworks, their own tools, whatever.
B
After the software, there's the hardware, of course. I think Cori GPU and Perlmutter were already described yesterday and today, but these are the kinds of systems that we're looking at, and Perlmutter, of course, is coming a little bit later. For now we have the Cori GPU system, which is very nice for us; we're using it to prepare our machine learning workloads for Perlmutter.
B
So that means we're developing and tuning the software stack, we're using it to understand performance and do benchmarking, and we're also doing some cool research projects as well. If you look at how the system is being used now, machine learning is actually the dominant workload, which is interesting: in terms of system hours, it's something like greater than 75 percent.
B
This plot on the right shows some cumulative usage over time, but also jobs are running on many GPUs, so we see some jobs that are even requesting about full system scale, which is around, you know, 16 or 17-ish nodes times 8 GPUs each. So this is nice, and I think it's a promising indicator of the enthusiasm for machine learning and deep learning on Perlmutter, and obviously we're excited about that.
B
So to assess the performance on Cori GPU, we use a variety of benchmarks. We're testing different frameworks and different kinds of models; we're comparing hardware, our CPU systems and the GPU systems, which you see down here in the lower left for PyTorch; we compare different communication libraries; and we look at scaling.
B
So it's not very surprising, but of course we see very good acceleration running things on our GPUs compared to the CPU systems. This table here is the training throughput, so higher is better. On the right we have scaling plots of just a few examples, with PyTorch up here and TensorFlow down here, and the TensorFlow one shows comparisons of using the optimized NCCL libraries from NVIDIA, compared to the yellow, which is just a very unoptimized MPI library.
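Training throughput here means samples processed per second; a toy sketch of how one might measure it (not the actual benchmark code) could look like:

    import time
    import torch

    # Rough training-throughput measurement (samples/sec) for a model
    # and a fixed batch already resident on the GPU.
    def throughput(model, batch, n_iters=100):
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(n_iters):
            opt.zero_grad()
            loss = model(batch).square().mean()  # dummy loss, for timing only
            loss.backward()
            opt.step()
        torch.cuda.synchronize()  # drain queued GPU work before stopping the clock
        return n_iters * batch.shape[0] / (time.perf_counter() - start)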
B
So, in summary, I think the system is performing well and we're going to keep working on it. Then, just really quickly to wrap up: we also do a bit of outreach and training events, kind of like this one, but specifically for deep learning for science. The first one I'll mention is the Deep Learning for Science School we had last year. We did a week-long event with a very comprehensive program; there were hands-on sessions and posters, and you can check out all the material online here; the videos and the code examples are there.
B
We had folks running on our Cori GPU system, actually, and that worked out really well. This year, because of COVID-19, we're not doing an in-person event, so we're doing a more spread out weekly webinar series, but it's going to be a really great program. There's already a lot of stuff being put together on the agenda, which you can see here, and I encourage you to register at this link to get the connection details.
B
Additionally, we have a deep learning at scale tutorial that we've done at a bunch of conferences, jointly organized with Cray and NVIDIA; there's again material you can find here. It's accepted again at SC this year, so check it out; I don't know, maybe it'll be virtual, but check it out anyway. We also have the seminar series, which sometimes has more educational stuff. So I think I'll just let you read the conclusion. Thank you for listening.
A
I have one question for the users. Say I'm a new user, I'm not a DL expert or a machine learning expert; I just want to try some model on my data sets. Does NERSC provide a list of simple, non-sophisticated deep learning models for plug and play?
B
Well, we do have some examples in our tutorial programs, for example at the Deep Learning for Science School, and there's code there that folks can try out and use. We don't have a library of simple examples that folks can just plug into their problem at the moment, but it's stuff that we've talked about doing and would like to potentially provide, and we have, you know, consulting services that can help folks with getting things started.
A
I see there's another question: what sort of scaling difference have you seen between different distributed backends, Horovod and PyTorch distributed?
B
Right, so I don't think we've explored everything on Cori GPU. For example, with TensorFlow we're mainly looking at Horovod, and I don't know that we've done real performance comparisons to the built-in TensorFlow distribution strategies; I think we might need to look into that. And for PyTorch, also, we're mainly just looking at the native distributed library, but looking at the different communication backends.
B
So I don't have it here, but I have compared NCCL to Gloo to things like MPI, and we at least have those numbers, and we know NCCL is the best. Horovod performs well, and it performs especially well with NCCL rather than MPI, but there are other things that we could probably look at more closely.
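For context, switching the communication backend in native PyTorch distributed is a one-line choice (a sketch; NCCL is generally the fast path for GPU tensors, Gloo is the CPU fallback, and MPI requires a PyTorch build with MPI support):

    import torch.distributed as dist

    # The same training code can run over different backends.
    dist.init_process_group(backend="nccl")  # or "gloo", or "mpi"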
A
I see another question from the audience: how is the work on MLPerf giving you insight for your work at NERSC?
B
Yeah, so one thing is that we're actually using our MLPerf HPC benchmarks to assess the performance of our systems here, both our CPU and our GPU. In fact, I'm one of the top users on Cori GPU, or of the staff users, because I'm always running the MLPerf HPC benchmarks there. So it's part of our benchmarking strategy and helps inform us on how well things are doing. I think that's mainly it; there may be other more subtle things that I'm not sure on.
A
B
Yeah, so I don't know that we're going to specifically say you should absolutely use just this one solution, but we at least have sets of recommendations that are kind of reflected in what you see here on this plot. So for PyTorch, you know, we'd recommend using either PyTorch distributed or PyTorch plus Horovod, definitely with NCCL as the communication backend. And then with TensorFlow,
B
well, we mostly look at Horovod, but I think the Cray plugin also performs well, and if you're using Horovod, it should use NCCL. So there are sets of recommendations that we have.
A
I see. So two questions are very similar: how do you measure the deep learning workload on Cori GPU, and what is the major performance bottleneck, like memory, load balance, insufficient parallelism?
B
Right, yeah, so I think these are kind of asking different things. "How do you measure the DL workload" is more about how we know,
B
how we're inferring what the jobs are and who the users are, and we do have ways to collect that data. Some of it is the way we instrument our module loads: when folks load our software, we log a message and we know sort of who they are and what the job is, and we can use that to figure things out. We've also been tracking, as we add users to Cori GPU, who is actually doing machine learning
B
and what kinds of workloads are running. This is possible now because the system doesn't have, you know, thousands of people on it; I don't know the number, but it's manageable at this point, so we can actually go through the spreadsheet and say: there's machine learning, there's machine learning, there's machine learning. With these things we're able to understand roughly how much machine learning is actually running. As for the performance bottlenecks, it really just depends a lot.
A
I see, cool. Thank you. Thank you, Steve. I think right now, as Steve showed, deep learning is getting really, really popular, with a lot of applications here in HPC, so yeah, we hope we can continue providing better services for the users. Thank you, Steve.