From YouTube: TensorFlow.js - Bringing ML and Linear Algebra to Node.js - Ping Yu & Sandeep Gupta, Google

Description
No Python required - this session will highlight unique opportunities opened up by bringing ML and linear algebra to Node.js with TensorFlow.js. The speakers will show how you can get started using pre-trained models, train your own models, and run TensorFlow.js in various Node.js environments (server, IoT).
Okay, I guess we can get started. Thanks again — thank you so much for joining us. My name is Sandeep Gupta, I'm from the TensorFlow team at Google, and this talk is about machine learning and JavaScript. I also have my colleague Ping here, and he will present with me. So again, thanks and welcome.
Okay, so a relatively small number of hands — many of you are somewhat new to the field. Let me just take a couple of minutes and introduce some terminology, and this will give you a flavor of why these kinds of methods are becoming so popular. In most classical programming, when we are trying to solve a problem — or write a program for a computer to solve a problem — the way we have usually done it is that we first come up with rules, and then we try to write explicit code to codify those rules.
So, for example, if you are trying to write a program that takes images and tries to detect whether the activity is, let's say, walking or running or bicycling, you might come up with some features or rules to describe that. One way to do it might be to measure the speed of the person: if the speed is less than some number, we call it walking; if the speed is more than that number, maybe it is running. You come up with these kinds of rules, and you implement a computer program to solve the problem you are trying to solve. The issue with these approaches is that they very quickly run into limits.
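A hand-written rule like the one described above might be sketched as follows — note that the speed thresholds here are invented purely for illustration:

```javascript
// Hypothetical rule-based activity classifier, in the spirit of the
// example above. The thresholds (6 and 20 km/h) are made-up numbers.
function classifyActivity(speedKmh) {
  if (speedKmh < 6) return 'walking';
  if (speedKmh < 20) return 'running';
  return 'bicycling';
}

console.log(classifyActivity(4));  // walking
console.log(classifyActivity(12)); // running
console.log(classifyActivity(25)); // bicycling
```

Every new edge case (walking uphill, a slow cyclist) means another hand-written rule, which is exactly the limit the talk describes next.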
You encounter situations where your rules no longer work, and even if your rules work well for the problem you are trying to solve, they are generally not generalizable or extensible to a slightly different problem that you want to solve another time. That's where classical programming runs into its limits. Machine learning turns this whole concept on its head.
It turns it upside down. The way you approach it is to ask: what if I had some examples where I already know the answers? What if I could feed a lot of these examples and answers — we call this the training data, and the answers on that training data — into a computer program, or model? This model has the property that it can learn from the examples it has been fed, so that it can come up with what the rules are. These rules may be in a form that humans can understand, or they may be in some abstract form that the computer program chooses in order to describe the problem.
This becomes a very generalizable way of solving a problem. In practice, the way you do it is that you collect a lot of training data and you have that data labeled — these are called human-generated labels. You feed it into a machine learning program, or model, and out come these rules, or a trained model. This is the training phase of the machine learning process. Once you have trained a model, you have some representation of the model.
Now you can feed new data into this model, and the model is ready to give you new answers, or predictions. This is called the inference phase. So at a high level, conceptually, this is what a machine learning way of solving a problem looks like. To look at this a little bit visually, let's say you are trying to classify images.
This is roughly what's happening there: a model is a collection of layers, and all these layers are just computational blocks. Each layer, and each element in that layer, is doing a very simple math operation. It takes in some numbers, multiplies those numbers with some other numbers, and produces a new output. You feed this forward, and you wire the model up so that when an image is fed in, it produces an output that is close to the output we expect.
If it is not "cat", then we calculate an error, or difference, and we propagate that error back through our model, and we tweak or adjust all the parameters of the model until we get the right answer. We keep doing this for lots and lots of examples, and then we have a trained model for the particular task we are trying to solve. So this is roughly how the machine learning approach to solving a problem works.
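The compute-the-error, adjust-the-parameters loop just described can be sketched in plain JavaScript with a one-weight model, y = w·x, trained by gradient descent — the data and learning rate below are invented for illustration:

```javascript
// One-parameter model y = w * x, trained to fit the rule y = 2x.
// Loss is mean squared error; each step nudges w against its gradient.
const xs = [1, 2, 3, 4];
const ys = [2, 4, 6, 8]; // the known answers ("labels")

let w = 0;               // start with a wrong guess
const learningRate = 0.01;

for (let step = 0; step < 500; step++) {
  // Gradient of the mean squared error with respect to w:
  // mean over examples of 2 * (w*x - y) * x
  let grad = 0;
  for (let i = 0; i < xs.length; i++) {
    grad += 2 * (w * xs[i] - ys[i]) * xs[i];
  }
  grad /= xs.length;
  w -= learningRate * grad; // adjust the parameter to reduce the error
}

console.log(w.toFixed(2)); // 2.00
```

A real network does the same thing with millions of parameters instead of one, which is why frameworks handle this loop for you.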
Now, the reason machine learning is really taking off, and has become such an important part of problem solving today, comes down to three main things. First, as we just saw, it relies on the availability of a lot of data — and not just quantity of data, but good-quality data: data that's labeled and curated, and that represents the full variety of the situations you will encounter in real life.
The good news is that there are now lots of very large, publicly available datasets, which make it easy for any developer to get started and train powerful machine learning models. The second aspect is that these models can be computationally quite expensive. Although the computations they run are simple, they run millions and millions of them, so you need very significant compute power to run these models in a practically useful time frame. And now there is custom hardware.
There are GPU advances and new types of accelerators coming up, which have made it very practical to run machine learning models in a very reasonable amount of time. And lastly, research in the field of ML has been growing and advancing at a very fast pace — new publications come out all the time, with new ways of solving problems.
So basically, all of these things now give everyone the ability to do this for whatever problem you are interested in — and this is where frameworks like TensorFlow come in. TensorFlow is an open-source library for doing machine learning. What that means is that a lot of the mechanics and logistics — training a model, doing this kind of backpropagation and adjusting all the weights, running your experiments, creating a model which can then be deployed and used in production —
— all of these things are managed for you by a library like TensorFlow, so you don't have to reinvent all of this. It also has a bunch of pre-trained models available that you can use off the shelf. But TensorFlow was written with a Python front end, and that is what most of the machine learning tools out there today require one to learn.
Motivated by that, we released this library called TensorFlow.js, which is basically a version of TensorFlow that is JavaScript-native. It lets you run machine learning models — and even train machine learning models — in the browser. You can run it in web browsers through JavaScript, you can run it server-side with Node.js, and, as we will show you, you can run it on a variety of other platforms where JavaScript can be used.
This library is GPU-accelerated. We will talk a little bit more about performance later, but we use WebGL acceleration in the browser, so it is very performant for the common types of machine learning models. And it is completely open source: it's open for the community to use, build on, extend, and contribute back into the library.
When we released this, our hope was that it would give web developers, JavaScript developers, and back-end developers an easier way to get started with ML, and we have been very happy to see the uptake. Here is an interesting example: this person gives a lot of very influential talks in the JavaScript community, and he gave a talk recently at Nordic.js where he highlighted exactly these motivational points that we had.
His point was that the JavaScript community has been missing out on this whole excitement around ML, and that now, with TensorFlow.js, there's an easier way to get started and be able to use machine learning. In fact, he says down there that you can now bring the power of machine learning to your web application or JavaScript application with ten lines of code. We love this testimonial — except that it's five lines of code.
It's not even ten lines of code — I'll show you. Here, for example, is the rough template for bringing machine learning into your application. The first two lines of code are basically importing our library — here we are showing the Node example on top, so you import the tfjs-node package — and the second line imports one of our many pre-trained models.
You pass in the image object, and you get back your predictions. You can see that image on the right: it has identified a cup and a phone and a mouse, and it puts bounding boxes on those objects. You get back a JSON object which tells you the names of those objects, the coordinates of where they are, and also a probability for how confident it is in each prediction. So: a super-powerful model, five lines of code.
Your web application can now be using an object detector, and you can do similar things with text and speech and lots of other types of models. So why is this a good idea? There are many advantages to running machine learning client-side, in the browser. It gives you a lot of interactivity, and easy access to sensors like the webcam and microphone, so you can immediately take all that sensor data and put it into your machine learning model.
In the object detection case, for example, the images could be coming from a webcam stream. There is nothing to install: to share with your users, you just share a URL, and they get a web page which has the model in it — they download that, and they are using the model directly from that URL.
It has huge privacy implications, because you are running these models locally, client-side — no data is going to the server. So for healthcare, or any other privacy-sensitive type of application, this has enormous implications. It also reduces server-side costs, because you don't have to stand up a complicated serving architecture.
[Projector trouble] — oh really? Okay, I think the connector was a little loose; let's hope this works. And then, lastly, as I mentioned, because we use WebGL acceleration, you get really good performance. On the server side, you can run more powerful models that may not be practical to run client-side in the browser, and you can take full advantage of whatever hardware you have — a multi-core machine, GPUs, or even other custom hardware.
There is a very large NPM package ecosystem, so if you are using machine learning in Node, you can benefit from this whole ecosystem of Node packages and bring machine learning directly into your Node stack. You don't have to have a separate Python data science team and a separate back-end Node team who aren't really talking to each other: you can bring machine learning directly into Node and have a single stack. And because we bind directly to the TensorFlow C library, you get native performance.
There are three main ways of using this library. One — and the object detection example was an example of this — is that you can take an existing machine learning model, whether it's a TensorFlow.js model or one of your Python models, bring it in, and run it with TensorFlow.js. The second way is to take a pre-trained model, but then customize it on your own data — which you often have to do to solve a particular problem.
Because JavaScript is such a versatile language and runs on many platforms, you can use TensorFlow.js in all these places: you can run it in the browser; you can run it on native mobile via hybrid platforms such as React Native — we just recently announced integration with React Native, and you get first-class support with WebGL acceleration through it; you can run it with Node; and then on desktop.
As I said, we pre-packaged a bunch of models for common ML tasks, and here are some examples. There are models for image classification and object detection; there are models for recognizing human pose — we had some of those demoed at our booth, and you're welcome to stop by afterwards and see more of them. We do pose detection, and there's a very nice model for audio commands.
If you speak words, it can recognize the spoken words, and you can use that to drive actions. And then we are increasingly doing more and more around text. Text has a variety of use cases, like sentiment and toxicity. All of these models can be used either with a script tag sourced from our hosted scripts, or you can npm install them.
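The hosted-script route might look like this — a sketch using the toxicity model; the confidence threshold and example sentence are invented for illustration:

```html
<!-- Load TensorFlow.js and the toxicity model from hosted scripts -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs"></script>
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/toxicity"></script>
<script>
  // Predictions below a 0.9 confidence threshold are left undecided.
  toxicity.load(0.9).then(model => {
    model.classify(['you are amazing']).then(predictions => {
      console.log(predictions); // per-label results with probabilities
    });
  });
</script>
```

The npm route is the same API with `require`/`import` instead of script tags.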
Here are some more examples. Using these models as building blocks, you can build applications that solve these types of problems — accessibility, sentiment analysis, conversational agents, and a variety of different things. All of the examples you see on the right are instances of models running client-side.
So let me take a couple of minutes and show you a very quick demo, just to show you how easy it is to retrain a model like this. This is an application called Teachable Machine — has anybody seen Teachable Machine so far? This is something I would encourage you to try out on your own time after this as well, but let's see how it works. I'm going to skip this tutorial.
We're building a classifier, and this web session has already loaded a powerful image classification model called MobileNet, running in my browser session. We are going to modify this model to do a very simple rock-paper-scissors classification. We're going to output the word "rock" for my first class, "paper" for the second, and "scissors" for the third. Now we're going to record the training images: this first class will be rock, this one will be paper, and this one scissors, and we'll just record some images from my webcam.
The nice thing about this is that you can train a speech model or a pose model in a very similar way. And now, with the new version of Teachable Machine, once you have trained the model you have the ability to save it — you can put it on your shared drive somewhere, or you can download a TF.js-compatible model that you can run offline. So it's a very, very approachable way of getting started with machine learning. At this point, I want to turn it over to my colleague Ping.
[Ping] TensorFlow.js also gives you fine-grained control: you can see inside the neural network and control how it's trained. As Sandeep mentioned, we support execution both server-side and client-side. On the client side we use WebGL, which also gives you automatic GPU acceleration, and recently we released an alpha with WebAssembly support, which gives you better acceleration on the CPU side as well.
On the Node side, we use a Node.js binding directly into the TensorFlow C library, so you can utilize TensorFlow inside your Node runtime environment. We also have GPU support: on machines with NVIDIA GPU cards, you can use the tfjs-node-gpu library. And we also have something that gives you headless WebGL: it exposes the WebGL API inside Node — it sounds a little weird, but you can go that way if you want to. Alright.
So we do give you a lot of models, but the fact is that you may have your own models — maybe you have an ML department that builds its own models, or you have seen some nice model out there that you want to bring into your JavaScript application.
That's for the browser side. But for the server side, we just announced a new API: if you want to run a Python-trained model inside Node, you now don't need to convert it — you can run it directly using our C library binding, which means you get better op support. Our converter supports about 200 ops — the core ops of TensorFlow — but TensorFlow actually has about a thousand ops.
So this gives you the full thousand ops, and you can run really powerful machine learning models inside Node right now. It supports both TensorFlow 1.x and 2.0 models, and it gives you better performance. Why is that? Because Python TensorFlow actually goes through a Python layer at runtime, which adds delay — and V8 is much faster than Python. That's why we are slightly better than Python when you run a TensorFlow model directly inside Node.js.
Here are some performance numbers for MobileNet — the image recognition model Sandeep was showing. TensorFlow.js in Node gives you about 20 milliseconds of inference time, meaning every recognition of an image takes about 20 milliseconds, which gives you about 50 frames per second. If you're building a real-time application, that's plenty to play with. And TF Lite, if you don't know it, is a native implementation — another Google open-source project — that runs on mobile devices.
Are you guys ready for some code? Alright. So you can load your model — you can load our pre-trained models — but what about building your own model? I want to show you how to do that. First, let's tackle the high-level Layers API.
This is loading our Node packages. Basically, behind the scenes we have the C binding into the TensorFlow C library — we also package the C library inside the Node NPM package — and you can also load the GPU version if you have a GPU-enabled card. Let's go back to the image recognition model shown earlier. As Sandeep mentioned, a typical neural network is constructed layer by layer: each layer extracts certain features from the previous layer and passes the result on to the next layer.
I'll show you how to do that — this example builds an image recognition model using the Layers API. It might look like we're writing a lot of code, but to be honest, it's really not bad: if you consider using something like Keras, it's not more complicated than that.
The first thing is that we use a sequential model. What is a sequential model? Basically, something like 90% of the models out there are sequential: it means you stack the layers one by one. Next, we add a couple of layers — very typical convolutional layers. The conv2d layer gives you feature extraction, and max pooling gives you a kind of zooming in, so the network can look at the detail of the image. At the end, we flatten the image into a one-dimensional vector, so the last layer can output the classification for the classes we want: what this last layer outputs is a probability for each class. And that's about it.
That's the model we built. Then we use the compile method to basically set up how we will train this model. In particular, we set the loss function to categorical cross-entropy — a long name, but the fact is you don't really have to care; you just copy it. The optimizer is gradient descent, "sgd" — also easy to copy, just three letters. Take that model, and now you're ready to train. Training is very simple, just one method: fit. You give it x, which is your input, and y, which is the output from your dataset, and epochs is how long you are going to run this training session.
You can, of course, look at the details — we have other methods to show you what the details of the training are, what the accuracy is, and all the other good stuff I didn't show here. After training is done, you can save the model to a file — that's for Node — but you can also save it to local storage in the browser, or ship it to some server through an HTTP request.
So that's the high-level API. What is the low-level API? With the low-level API, you actually drill down inside the layers. Each layer is really many, many smaller operations — for example, a matrix multiply, some kind of cost function, a cosine or sine function, or some kind of activation function you would use. I want to show you how you can do that with TensorFlow.js.
Let's say we have a dataset we want to fit. You can see those dots — they are the output of some kind of polynomial function, but it's noisy, so you cannot really know exactly what the function is. We want to estimate the parameters of that function, so a, b, and c are what this model is going to estimate. We do the same thing: we load our library, and we create three variables with an initial value of 0.1. What is a variable? It's a tensor that the model is going to update — the training session tunes it to make the output match the data we gave it. Here we use the TensorFlow.js low-level API, things like add and multiply, and with those low-level ops you can construct a tiny neural network graph. This looks a little bit crazy — so much code for a tiny little function — so we have a better way to do it.
We can use chained methods to express the same thing more concisely. Here we define a loss function ourselves: remember, last time we just said "categorical cross-entropy"; here we define our own, because we want more control. It's a mean squared error: it calculates the difference between your model's output and your dataset's output. Then we use the same SGD — gradient descent — function as our optimizer. At the end, we manually run the minimization of the loss function for a number of epochs. So at the end of the day, it's the same as what you just saw happening internally in the high-level API. That's about it.
TensorFlow.js is also part of the TensorFlow ecosystem: we don't only give you JavaScript, we also integrate into the TensorFlow world. We have tfjs-vis, a visualization tool in the spirit of TensorBoard, to give you a visualization of how the training happens — it gives you a lot of charts showing how the accuracy increases while the training runs. The last line is how you plug into the visualization, so you can basically see it live when the visor opens — oh yeah, here we go, we do have a graph. That's what you see while the training from the previous example happens.
We have been very happy to see the download statistics, and also to see more and more people join as contributors — more than 200 people actively contributing code to TensorFlow.js — and we invite you all to become part of this. Many developers are also building really powerful extensions and libraries on top of TensorFlow.js. There are some examples there, which let you do specialized custom things on top of the library; let me show you three examples.
Just in closing, I wanted to show this: if you have machine learning needs — if you, as JavaScript developers or Node developers, have certain needs or requirements from the library — we would love to get your input. There's a UX research study that we are doing, so please feel free to join us and give us your feedback; we would love to hear from you. And here are some links that are very useful for getting started. Of course, tensorflow.org/js is our main website.
It has all of the examples, documentation, and tutorials, and the link to the GitHub repositories is up there, where again we have a lot of these examples built. We also have a mailing list linked there: joining the mailing list is an excellent way to interact with other developers and directly with us on the TensorFlow.js team.
One thing I wanted to show you: if you go to the Google Codelabs — and we have these running at the Google booth outside — and search for TensorFlow.js, all of our examples are available as interactive notebooks. You can click through them, get up and running, and explore all the different features we talked about here today.
Lastly, there is a new textbook that has come out, which is a really nice way to learn the basics of machine learning from a JavaScript programmer's point of view — all the examples in the book are written in JavaScript — so that's worth checking out. We also have an overarching, comprehensive TensorFlow course on Coursera, which has a TF.js module that was just released earlier this week. So there are plenty of resources to get started, and we are looking forward to your involvement in the community.