Hello everyone, my name is Jason Mayes. I'm the developer advocate for TensorFlow.js here at Google, which basically means that if you're using machine learning in JavaScript in some shape or form out in the wild, there's a good chance we'll cross paths at some point. If that's today, I'm going to talk to you about using machine learning in JavaScript, of course, so let's get started. First up, I want to talk about how machine learning has the potential to revolutionize every industry, not just the tech ones, but all of them.
In fact, we could be standing right here at the beginning of a new age. We've already been through industrial and scientific revolutions, but what about the future? That could be a machine learning one too, and we could be at the very beginning of that right now. This is a really exciting time to start learning about machine learning, as you can jump on the bandwagon early, really get involved, and have impact. Of course, before I get started on that:
What's the difference between artificial intelligence, machine learning and deep learning? I'm sure many of you today have very different backgrounds, and it's important to understand what all this is about, where it comes from, and what all these key terms mean, so we can understand what we're going to be making later on. First off, I want to start with artificial intelligence, also known as AI.
This is essentially the science of making things smart, or more formally, human intelligence exhibited by machines. But this is a very broad term, and right now we're actually in a place of narrow AI. This basically means that a system can do one or a few things just as well as a human counterpart in that niche area, such as recognizing objects. A great example of that is when people in the medical industry are trying to understand what brain tumors look like.
Nowadays, experts use machine learning to work alongside them, to help point out what parts of an image may contain a brain tumor, for example, and this leads to better results, because sometimes it's just too grainy for the human eye to see. But ML can pick up on these fine differences, which leads to better results for both the patient and, of course, the doctor. Machine learning, on the other hand, or ML for short, is an approach to achieve the artificial intelligence we just spoke about on the previous slide.
Now, the key part about these systems is that they can be reused, and this is done by creating systems that can learn to find patterns in the data presented to them. This is at the implementation level, if you will. So if you have an ML system that is trained to recognize cats, you can use the same system to recognize dogs, just by giving it different sample training data.
Think about how spam filters used to work: a bunch of hand-written rules, for example, if the email contains a certain word, mark it as spam. That's not very efficient, because a spammer can just change the word slightly and get around those conditional statements. Now fast-forward to today, and machine learning programs essentially get tons of emails to classify, which are marked as spam by you, and they try to find, all by themselves, what attributes of those emails led to them being classified as spam. So now there's no battle between programmer and spammer, and instead the programmer can concentrate on making great software.
So what common use cases are there? Well, actually, there's quite a few. These are the typical use cases where I see machine learning being used; there are others, of course. We've got things like computer vision, like the object detection example we just spoke about. We've got numerical things like regression, predicting a number. We've got natural language, for example text toxicity or sentiment analysis.
We've got audio for speech commands, for example, and my personal favorite is generative, which is essentially things like style transfer and the creative kinds of applications of ML. You can see on this slide an example from NVIDIA where they are generating human faces, and these faces do not actually exist in the real world. It's been trained on celebrities in this case, and you can see how this research can now produce very cool imagery. So what about deep learning?
Essentially, deep learning is a technique for implementing the machine learning we just spoke about on the previous slide, and one such deep learning technique is known as deep neural networks. So you can think of deep learning as the algorithm you might choose to use in your machine learning program. If you haven't heard of deep neural networks, don't worry; essentially, these are just programs too.
All of these terms actually link together: we have deep learning that feeds into machine learning (the algorithm that goes into the implementation), and machine learning gives us this grand illusion of artificial intelligence, which is what we're trying to aim for in the longer term. These terms actually go back to the 1950s and 60s; it's not anything new. It's just that now we have the power, with all the cheap processors and memory, to actually make use of these techniques at scale, with all the data that we now have. This was previously impossible in the older days.
So how do we train machine learning systems? That's a great question. Essentially, we need features and attributes, and you can see here from this example, if we just pretend to be farmers for a second trying to classify apples and oranges, two features or attributes you might want to use would be weight and color. These things are easy to measure digitally and can be captured at scale. So once you've got those, you can plot them on a graph and try to separate the two fruits with a straight line.
This is actually a very naive form of machine learning, if we could get a computer to figure out the equation of that line. Because if we now want to classify a new piece of fruit, we take its weight and its color and we plot it on the same graph. If it falls above the line, we can say with some level of confidence that that piece of fruit is an orange, and if it falls below the line, we can assume it's probably an apple. And that's kind of what's going on in all of these systems.
A bad choice of features, however, could lead to a scatter plot like the one you see on the chart right now, where there's no easy way to separate the data with a straight line, or even a curved line for that matter. This is a good example of a bad choice of features, and you might ask, well, Jason, why would you ever choose such things? But it's not always as simple as apples and oranges. Imagine those brain tumors we were talking about earlier on: what features and attributes would you really use to be able to distinguish a positive from a negative result?
In that case, it gets very hard very quickly, and this is known as feature engineering: finding the set of features and attributes that give you the best separation in the data, and that's what data scientists get paid a lot of money to figure out properly. But what about higher dimensions? In our simple example we had just two dimensions; let's assume we had three.
Now, it's actually interesting to note that most machine learning problems use many more dimensions than three. Unfortunately, our human brains just can't comprehend what that looks like, but you'll have to trust me: the math is actually the same. Instead of using a plane, we use something called a hyperplane, which just means a boundary with one dimension less than the number of dimensions you're working with. The math works out the same: we take this high-dimensional space and divide it up in much the same way.
So it should be easy, right? We've got a dog, we've got a mop, what could possibly go wrong? Well, some dogs look like mops and vice versa, and my point in bringing this up is that you've got to be aware of the bias in your training data. One of the biggest challenges you'll face is finding enough training data that is unbiased for the situations you want to use it in.
So, for recognizing something as simple as a cat, you might need ten thousand images of cats: different breeds, different stages of the lifecycle, different shapes and sizes, in different environments, in different lighting conditions, taken on different cameras. All of this is required to have the best chance of understanding what cat pictures actually look like, and without that you may end up having biases in your machine learning model, which would be very bad. The other point to note here is that data is not always imagery.
So why would you want to do machine learning in JavaScript? That's a great question too. In fact, JavaScript can run pretty much everywhere: in the web browser, on the server side, on desktop, on mobile, and even on the Internet of Things. If we dive into each one of those, you can see many of the technologies that you already know and love. On the left-hand side are popular web browsers you might use; on the server side we have Node.js; and there are options for mobile, desktop and IoT too.
You can do everything here that you could do in Python, if you're familiar with machine learning in Python, and that allows you to basically dream up anything you might want, from augmented reality to gesture and sound recognition to conversational AI, whatever it might be. You can do that in JavaScript now as well, giving you superpowers in the browser and beyond. So there are three ways you can use machine learning in JavaScript, and I'm going to go through all of those now. The first one is pre-trained models.
These are essentially really easy-to-use JavaScript classes for common use cases, and you can see we have many of these already: object detection; body segmentation, which allows you to find where the body is in an image; pose estimation, to detect the skeleton; speech commands; and much, much more. In fact, among some of our newer models on the right-hand side there, you can see we now support Face Mesh, which can recognize 468 landmarks on the human face.
Object recognition here uses COCO-SSD, which is the name of the machine learning model that we're using to power this, and that has been trained on 90 object classes, such as these dogs on the right-hand side. So 90 common objects can be recognized out of the box. Now, just as important, you can see that this also gives back the bounding box data, which allows you to localize the object in the image, and that's why we call this object recognition instead of image recognition.
Image recognition is where you know the thing exists, but you don't know where it is. So this is a pretty cool one to start with, and I'm going to show you how we can write code to make this actually work ourselves, so let's dive into the code now. First up, let's look at the HTML. This is pretty boilerplate stuff: very simply, we're going to import a style sheet, style.css, and then comes our main body.
In the main body we're going to have a demo section that is initially invisible (you can see the class "invisible" is set at the very beginning there), and then we have some images that we want to be able to classify on click. These all have the class "classifyOnClick", with an image contained within that containing div; these can be any images you want. Then at the end there you can see we simply have three script imports. The first one essentially brings in the TensorFlow.js bundle.
The second one brings in the COCO-SSD machine learning model, and the third one is, of course, the JavaScript we're going to write to get all of this working. So, looking at the first lines of JavaScript: first of all, we're just going to define a constant called demosSection, and that's just going to get a reference to the demo area where all of our images are living.
We're then going to set a variable, modelHasLoaded, and set it to false, and also define a variable for the model, to store it once it has loaded. Next, we need to load the model, of course. All we need to do is call cocoSsd.load, and because this is an async function, we use the then method to call back an anonymous function.
Finally, we remove the invisible class from our demo section to make sure it's now visible and not grayed out like it was before. Next, we get a reference to the image containers, i.e. all the divs that have that classifyOnClick class. We can then loop through all of those and essentially add a click handler to each, so that we can decide what to do when each image within them is clicked. And here we go.
Here's the handleClick definition. We simply check if the model has loaded; if it hasn't, we return straight away, because there's no point doing anything unless the model is available to use. If it is available to use, we essentially call model.detect and pass it the image that was clicked, so event.target in this case. Then again, as this is an async operation, we use then to call our other function, handlePredictions, once it's ready.
In handlePredictions, you can see we loop through the predictions and create a new paragraph element for each, setting its text to what we saw along with its confidence. Then we also set the margin of this paragraph so it sits nicely at the bottom of the bounding box. And then, of course, this highlighter is essentially the bounding box element that I've created, and we're just setting the x, y, width and height coordinates of that element so that it sits in the right place in the context of its parent div.
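That prediction-handling logic can be sketched as two pure helpers. This assumes COCO-SSD's documented prediction shape, { class, score, bbox: [x, y, width, height] }; the exact label wording, rounding, and CSS properties below are my own choices, not necessarily what the demo in the talk used:

```javascript
// Build the label text shown at the bottom of a bounding box.
function predictionLabel(prediction) {
  const percent = Math.round(prediction.score * 100);
  return prediction.class + " - with " + percent + "% confidence";
}

// Build inline CSS for positioning a highlighter div over the image,
// relative to a parent with position: relative.
function highlighterStyle(prediction) {
  const [x, y, width, height] = prediction.bbox;
  return (
    "left: " + x + "px; top: " + y + "px; " +
    "width: " + width + "px; height: " + height + "px;"
  );
}

const example = { class: "dog", score: 0.87, bbox: [30, 40, 200, 150] };
console.log(predictionLabel(example));  // "dog - with 87% confidence"
console.log(highlighterStyle(example)); // "left: 30px; top: 40px; width: 200px; height: 150px;"
```

In the real page these strings would be written into a paragraph's textContent and the highlighter's style attribute before appending both to the clicked image's container.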
Then, of course, we just add these two elements to the DOM, and the effect should now be visible. And finally, the CSS is pretty self-explanatory, covering the various moments when we're changing the GUI. So if we put it all together, this is what we get. As you can see, this is the code running, and I can now click on one of these images and instantly get results coming back, with bounding boxes showing the items it has found in each image.
I've actually added a little extra bit of code here to do the same thing but with the webcam, and if I enable this, you can see that I can now see myself too. Notice how the performance is pretty cool: it's running at a high frames per second, and all of this is running live in the web browser, which means, of course, that your privacy is also preserved, because this data is not being sent to a server for classification. So the next thing I want to talk about is Face Mesh.
You can see here how it can recognize 468 unique points on the human face, and it's just three megabytes in size. In fact, many people are starting to use this in creative ways, such as ModiFace, which is part of the L'Oréal group, who are using it for AR makeup try-on. As you can see from the image on the right, this lady is not wearing any makeup on her lips.
In fact, the lipstick is being chosen dynamically at runtime in the browser, and then we are applying it because we know where the lips are from Face Mesh. Pretty cool. But let's see it running for real using my face, so I can explain a little bit more. Okay, so now you can see my face in the web browser, and as I open and close my mouth, you can see it reacts really well. It's running at a high frames per second, but this is just running on the CPU.
I can hit the switch at the top right and we can get even better performance by running on my graphics card. Now, in addition to doing the machine learning in real time, because JavaScript is obviously great at graphics, we're also rendering a 3D point cloud that we can tinker with at the same time. As you can see, I can move my face around in the free-viewpoint view. So you can use this to make pretty much anything you want. Next up is body segmentation.
This model allows you to distinguish 24 unique body areas across multiple bodies in real time, as you can see from the animation at the bottom here. You can see how it segments the bodies, and it even gives you an estimation of the pose of each body, that is, where it thinks the skeleton is, which can be used to do gesture recognition and much, much more. Now, models such as BodyPix can be used in really delightful ways.
They let you build experiences in a much more frictionless way, and, of course, all of this runs in the web browser, so my privacy is preserved: none of these images are going to a server. And of course all this can give you superpowers too. What if you combine TensorFlow.js with something like WebGL shaders? In that case, you can get an effect like this, made by one of the folks in our community in the USA, which can shoot lasers from your mouth and eyes, all in real time at a buttery-smooth 60 frames per second.
But let's not stop there. If we combine it with WebXR, a very new, emerging web standard, you can now even project people from magazines into your room in real time. This guy is using this on his phone, and then you can walk up to the person and kind of meet them in real life, virtually speaking. That's pretty cool, and I thought, well, if I can do this, then why not go one step further and combine it with WebRTC to teleport myself in real time?
You can see here how I can project myself from my bedroom into another living space. It could be somewhere else in the world, to meet my friends and family, such that I can be closer to them even when I'm not. Having tried this myself, it actually does feel better than a regular video call, because you can walk up to the person and move around them and all this kind of stuff, which you just don't get with a regular video call.
The next way you can use TensorFlow.js is transfer learning. This is where you retrain existing models to work with your own data, and this is the next logical step after using our pre-trained models, to make things more customized to your needs. Now, if you are an ML expert, you can of course code all this stuff yourself, but I want to show you two ways today to do this in a super simple fashion. The first one is Teachable Machine.
This is a website created by Google that allows you to retrain models in the web browser for very common tasks, like recognizing an object, speech recognition or pose estimation, for example, and in just a few clicks you can make your own ML model. So let's try this out right now and see how easy it is to use for something like a prototype.
So here's Teachable Machine. We can click on the image project to start, and I can click on webcam, and you can see now that I'm just going to take a few samples of my pet in front of the webcam. Then I'm going to do the same thing for class two, and we take a similar number of samples, but this time I'm going to use this deck of cards.
Once we've got a similar number of images, as you can see, I'm now going to click on Train Model, and essentially that means it's retraining the top layers of the model that we're using, so that we can classify new data using things it learned from before. In just a few seconds, this process will be complete, and we can now see a live prediction coming from the webcam, and hopefully we can see that each class is recognized correctly.
So do try that out in your spare time, and you can use this in prototypes: simply hit Export Model at the top right there, and you can save the JSON files that you need to then load this model in your own custom website later on, to do something more useful. So maybe I can show a deck of cards and reveal a YouTube video, or whatever I want to do. Now, on to the next method I want to show you.
If you want to do something for a production use case, which is more than just a prototype, you might have a lot more data, and of course in the web browser you're limited by the RAM you can use in a single tab in Chrome. So if you have gigabytes of data, you can use Cloud AutoML, and this allows you to train custom vision models in the cloud, which you can then export to TensorFlow.js, just like we did before. So here you can see I've just uploaded lots of data of flowers.
In this case, lots of different folders of different types of flowers. All you need to do is then specify whether you want to train for higher accuracy or faster predictions (of course, with machine learning there's always a trade-off between these two things, but you can choose what you prefer). You click Next, and then, after a few hours of training, it will give you the option to export to TensorFlow.js, as you see on this slide, and it's super simple to use this exported JSON file.
In fact, here's the code, all on one slide. All we need to do is include the TensorFlow.js library at the top here, then we include our AutoML library as well, and then, below this, we have a new image that the model has never seen before. This is just a daisy image I found on the Internet, and we can then essentially use this as the image we want to classify. Then, in just three lines of JavaScript below, we can classify the image.
We can then loop through and see all the predictions that came back from the ML model for that single image, and, of course, you can call model.classify multiple times once the model is loaded. So if you were to use this with a webcam, you could do that instead and have it running in real time on webcam data.
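Once that predictions array comes back, picking the best guess is plain JavaScript. This sketch assumes the AutoML export's result shape of { label, prob } entries per class; the flower labels and probabilities below are invented for illustration:

```javascript
// Return the single most probable prediction from a classify() result.
function topPrediction(predictions) {
  return predictions.reduce((best, p) => (p.prob > best.prob ? p : best));
}

// Hypothetical output for the daisy image example.
const predictions = [
  { label: "daisy", prob: 0.91 },
  { label: "tulip", prob: 0.06 },
  { label: "rose", prob: 0.03 },
];

console.log(topPrediction(predictions).label); // "daisy"
```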
And the third way to use TensorFlow.js, of course, is to write your own code from scratch.
This is for the machine learning experts out there, or people who want to go more hands-on and low-level. Of course, going into that would be too much for a 30-minute presentation today, but there are plenty of tutorials on our website, which I'll share with you later, to get started with this. Today, though, I'm going to give you a tour of the superpowers and performance benefits you can get by running in JavaScript and Node, for example.
On one side we have the Layers API, a high-level, Keras-like API, and on the other we have the Ops API, which is much more mathematical, a bit like the original TensorFlow, if you will, and that allows you to do all the funky linear algebra and all this kind of stuff. So, depending on which way you want to go, there are two flavors of TensorFlow.js you can use here, based on your experience and capabilities.
So you can see how this comes together: essentially, we've got our models at the top, which are based upon the Layers API, and that sits upon the Core (or Ops) API just below that.
Now, that can talk to different environments, such as the client side, and within the client side you might have different environments as well, like the browser, WeChat or React Native, for example. Each one of these environments knows how to talk to different backends, such as the plain CPU backend, which is always available, but also other things like WebGL.
WebGL gives you graphics card acceleration on the front end, or there's WebAssembly if you want better CPU performance. And there's a similar story, of course, for the back end: on the server side we have Node.js, and here it's important to note that we have the same performance as in Python land, because here we're actually calling the same TensorFlow CPU and GPU bindings that Python has, down to the C libraries that TensorFlow itself is written in.
And that allows us to get the same CUDA acceleration and AVX support for the processor, to make sure things are running as fast as possible. In fact, if for some reason your machine learning team is still using Python, then of course you can load in saved Python models: via the Layers API if they're using Keras, and via our Ops API for the TensorFlow SavedModel format, loading directly back into Node.js without conversion. You can just take a SavedModel and use it in Node.js.
Now, if you want to use one of those saved models on the client side, then you have to use our command-line TensorFlow.js converter, and that will convert the model into the JSON format we need to run in the web browser. So let's look at performance. Here is TensorFlow.js vs. Python running MobileNet, and these are the inference times, that is, how long it takes to classify the thing we're looking for in the image, at the top there.
You then get further performance increases in Node.js because of the just-in-time compiler of JavaScript itself. In fact, we've seen the folks at Hugging Face, who are quite famous for making natural language processing models, get a two-times performance boost just by switching to Node.js for their machine learning pre- and post-processing. So now, if we focus on the client side for just a second, here are five superpowers you get which are hard or impossible to achieve on the server side.
The first one is privacy. As I kind of hinted at before, all of these machine learning models are running in the web browser on the client machine. That means at no point is any of the sensor data going to a third-party server for classification, and that's really important in today's world, where privacy is always top of mind. With TensorFlow.js you can get that for free.
Of course, linked to this is lower latency: because no server is involved when you're running on the client side, we don't have that round-trip time from, let's say, the mobile device to the server, which could be 100 milliseconds or more on a bad mobile network connection. And, of course, that leads to lower cost. If you have a reasonably popular website, you might be spending tens of thousands of dollars on graphics cards and beefy processors to run those machine learning models.
By running on the client side, essentially anyone can click on a link in the web browser and have the machine learning model load for free, versus trying to do this in other ways on the server side, where you would be required to first of all understand Linux and install it, then install the TensorFlow stuff and the drivers for CUDA from NVIDIA, then install a GitHub repo, compile it, and make sure it runs with the environment on the server side. All of that hassle goes away when you're running on the client side.
Next, consider that this can get you more eyes on your research in machine learning, which could be very valuable if you're a researcher, for example. Maybe that means 10,000 people can try your model out instead of the five people in your lab, and that can maybe uncover bugs or biases in your model that you can then fix before it sees primetime. Now, flipping to the server side for just a second, there are also some benefits there too, of course, if you choose to use Node.js. Obviously, we can use the TensorFlow SavedModel without conversion.
As we spoke about, we can also run larger models than we can on the client side, due to the per-tab memory limitations in Chrome, and of course it allows you to write code in just one language, which is of course JavaScript, which, needless to say, a lot of devs use. According to the Stack Overflow survey 2019, I believe 67% of developers are now using JavaScript in some capacity, which is pretty cool. And then there are the performance benefits.
Of course, you can get the just-in-time compiler boost in Node.js over using machine learning in Python, for example. So with that, I would like to talk to you a little bit about the resources you can use to get started, if you're interested. If there's one slide you want to bookmark today, let it be this one, and the next one, actually. Essentially, here are some tutorials you can use to get started; these are code labs.
Here's our website to get started. The models that you've seen in this demonstration, and many more, are available on our GitHub there, and we have a Google Group to answer any more technical questions that you may have or may be thinking about later on. Then, finally, we have CodePen and Glitch, which have boilerplate code you can use to get started. Now, on the right-hand side is my recommended reading material: this is a great book that covers everything.
Even if you have no machine learning background at all, that's completely fine; as long as you know some basic JavaScript, this book will take you through everything you need to know to build your machine learning chops up from scratch. And with that, please come join our community. In fact, here are just a few more examples of what people have been making in just the last few weeks, and this is growing every week.
If you check out the #MadeWithTFJS hashtag on Twitter or LinkedIn, you can find what people are making right now, and please do contribute for a chance to be featured at future show-and-tell sessions, or even conferences and such in the future. So the final thing I want to leave you with is this last demo, from a guy in Tokyo, Japan.
He is actually a dancer, and he's now used machine learning and TensorFlow.js to make his next hip-hop video, as you can see here, and it's really great to see creative folks starting to embrace machine learning as well. It's no longer just for the one percent of people with PhDs; it's now for everyone, and hopefully TensorFlow.js can make this even more accessible to all in the future. I'm really excited to see what you will make, and please do tag us with #MadeWithTFJS.