From YouTube: DevoWorm ML: Week 6 (TensorFlow 2.0 tutorial)
Description
Sixth DevoWormML meeting, October 9. Attendees: Bradly Alicea, Richard Gordon, Jesse Parent, Vinay Varma, Ujwal Singh, and Abraham Kohrman
A: Okay. Actually, I'll have Ujwal introduce himself when we get started here. My usual: are you fine, sir? Oh yeah, glad you could make it. Okay. Well, let's get started, and if more people come in we'll say hi to them as they come in. So welcome to our current version of the DevoWormML meeting.
B: TensorFlow provides a low-level module, which is the tf.nn module, and also a high-level module, which is the tf.keras module. In both there are convolutional layers, batch normalization layers, max pooling layers. All these layers are already implemented in tf.nn, but with tf.nn it's more involved, because you'll have to take care of all the parameters yourself. For example, you need to give your own gamma and beta values when you are using tf.nn batch normalization. This is, of course, for advanced users: when someone wants to experiment with a new kind of architecture, then the tf.nn or tf.layers module will be very useful. If that's not the case, and someone wants to make a quick prototype of what they have in mind, they can use the tf.keras module.
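(For reference, a minimal sketch of the contrast the speaker describes, assuming TF 2.x; this is not from the slides. With tf.nn you compute the batch statistics and supply the gamma (scale) and beta (offset) values yourself, while tf.keras.layers.BatchNormalization creates and tracks them for you.)

    import tensorflow as tf

    x = tf.random.normal([32, 64])  # a batch of 32 examples with 64 features

    # Low-level tf.nn: you manage the statistics and the gamma/beta variables yourself.
    mean, variance = tf.nn.moments(x, axes=[0])
    gamma = tf.Variable(tf.ones([64]))   # scale
    beta = tf.Variable(tf.zeros([64]))   # offset
    y_low = tf.nn.batch_normalization(x, mean, variance, offset=beta,
                                      scale=gamma, variance_epsilon=1e-3)

    # High-level tf.keras: the layer creates and tracks gamma/beta for you.
    bn = tf.keras.layers.BatchNormalization()
    y_high = bn(x, training=True)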
B: We use the Sequential API for standard examples. Let's say I want to classify MNIST digits with a plain, standard network; then we can go with the Sequential API. And then, if we want to experiment with existing models, let's say a convolutional network, and if we want to give multiple inputs and extract multiple outputs, then we can move over to the functional API.
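(A minimal sketch, not the presenter's code, of the functional API idea: two inputs and two outputs wired into one model, which the Sequential API cannot express. The input shapes and names here are illustrative assumptions.)

    import tensorflow as tf
    from tensorflow.keras import layers

    # Two inputs (e.g. an image and some metadata; names are illustrative only).
    image_in = tf.keras.Input(shape=(28, 28, 1), name="image")
    meta_in = tf.keras.Input(shape=(10,), name="metadata")

    x = layers.Conv2D(16, 3, activation="relu")(image_in)
    x = layers.Flatten()(x)
    x = layers.concatenate([x, meta_in])

    # Two outputs from the same shared trunk.
    class_out = layers.Dense(10, activation="softmax", name="digit")(x)
    aux_out = layers.Dense(1, name="auxiliary")(x)

    model = tf.keras.Model(inputs=[image_in, meta_in], outputs=[class_out, aux_out])
    model.compile(optimizer="adam",
                  loss={"digit": "sparse_categorical_crossentropy", "auxiliary": "mse"})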
B: This API gives us more flexibility in how we frame the model. And then the subclassing API is advised for advanced users, like researchers who want to run exploratory experiments, creating a new neural network architecture. For all of these APIs I want to show small examples. So this is the Sequential API here. This is standard MNIST training code; we just import the data. x_train holds the images and y_train holds the labels, like the number in the image: we have images of handwritten digits, and y_train contains the corresponding labels. So here you can see the Sequential API; this is how we implement it. tf.keras.layers is the high-level module, and you can add flatten layers, dense layers, dropout layers, and you can put in any combination here. So for this one we'll use keras.Sequential.
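(A minimal sketch of the kind of Sequential MNIST code being described, assuming the standard tf.keras MNIST loader; the exact layers in the notebook shown on screen may differ.)

    import tensorflow as tf

    # x_train holds the digit images, y_train holds the labels (0-9).
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=5)
    model.evaluate(x_test, y_test)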
B: And then the subclassing API. This subclassing API gives even more flexibility, as it gives us more control over how we arrange the model: let's say you want to add layers dynamically, or you want to compute the output shapes yourself. If you want to define a new network architecture (there's a CNN, there's an RNN), or define your own architecture, then this subclassing API will be very useful for us.
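(A minimal sketch of the subclassing style mentioned here, not the presenter's code: layers are created in __init__ and the forward pass is written imperatively in call(), which is what makes dynamic behaviour possible.)

    import tensorflow as tf

    class MyModel(tf.keras.Model):
        def __init__(self):
            super().__init__()
            self.flatten = tf.keras.layers.Flatten()
            self.dense1 = tf.keras.layers.Dense(128, activation="relu")
            self.dense2 = tf.keras.layers.Dense(10, activation="softmax")

        def call(self, inputs, training=False):
            # Arbitrary Python control flow can go here (dynamic layers, custom logic, ...).
            x = self.flatten(inputs)
            x = self.dense1(x)
            return self.dense2(x)

    model = MyModel()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")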
B: And you can explore how the data and the model change over time. So this is TensorFlow Model Analysis. With this toolkit we can explore various insights, like which features have been dominant at which point of time during training, and which factors are affecting the model weights heavily, and more. And then they also introduced TensorFlow Serving, which helps us expose our models to the world efficiently.
B: So this best serves its purpose when we want our model to provide outputs to many users at a single time. This is how the architecture looks: this is the client, a user requesting an inference from a model, and inference is nothing but the output of the model. So let's say we have...
B
We
have
multiple
versions
of
a
single
model.
Let's
say
we
trained
our
model
for
the
first
time
we
deployed
it
and
then
we
train
in
the
model.
We
got
new
data
in
the
model
and
we
deployed
a
another
version
previously
without
distance
for
serving
like
it
would
be
difficult
to
maintain
two
versions
in
the
same
summer,
the
versions
of
the
same
model
in
the
same
server.
So
with
ten
surface
army,
we
deploy
a
any
number
of
versions
without
affecting
the
previous
versions
in
the
server,
and
then
the
users
can
fix
the
details
from
any
version.
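(As a sketch of what fetching results from a specific version can look like on the client side, assuming a TensorFlow Serving instance already running its REST endpoint on localhost:8501 with a model named my_model; the host, port, model name and input values are placeholders, not from the talk.)

    import json
    import requests  # client-side HTTP library, assumed available

    # Ask version 2 explicitly; dropping "/versions/2" returns the latest version.
    url = "http://localhost:8501/v1/models/my_model/versions/2:predict"
    payload = {"instances": [[0.1, 0.2, 0.3, 0.4]]}

    response = requests.post(url, data=json.dumps(payload))
    print(response.json())  # {"predictions": [...]}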
B: Whenever a client requests the output of a model from the server, it will get the latest version if it did not specify a version; the Source will try to give the latest version to the user. First, the Source looks in the file system and searches for the latest version of the model it should fetch.
B: The Source is working all the time, so whenever the Source detects a new version, a Loader is created for the model to be served and the Dynamic Manager is notified. Then the Dynamic Manager will make a decision: okay, is there enough memory to load this new version, or do I have to remove the older versions from main memory?
B: It has a set of instructions by which it will do this, and then, based on its priorities, it decides whether to accept the request or to deny it. In case the request is accepted, the model will be handed to the server as a Servable. A Servable is nothing but an object that clients see; clients use these objects to perform lookups, like, let's say, I want outputs from two different versions of a model at the same time.
B: ...and it will be very efficient. That's how TensorFlow Serving helps us to serve multiple versions of a single model on a single server. And then, after TensorFlow Serving, there is TensorFlow Extended (TFX). I found a use case where a few people applied it: TFX provides an end-to-end workflow, right from the collection of data to getting the output from the model. So what these people did is that they...
B: Or we can convert the Python models and run them in the browser as well, with the help of TensorFlow.js, and also we can load pre-trained models and retrain them in the browser itself, and we can run the model in pure JavaScript, without using any external frameworks like any Python frameworks or PHP or anything. So there are a few advantages here.
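(One way the Python-to-browser conversion mentioned above is commonly done, as a hedged sketch: the separate tensorflowjs pip package, an assumption here and not something shown in the talk, can export a trained Keras model into the web format that TensorFlow.js loads in the browser. File paths are placeholders.)

    # pip install tensorflowjs  (a separate package from tensorflow itself)
    import tensorflow as tf
    import tensorflowjs as tfjs

    model = tf.keras.models.load_model("my_keras_model.h5")  # a previously trained model
    tfjs.converters.save_keras_model(model, "web_model/")    # writes model.json + weight shards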
B: Why not just deploy the model on a server, send requests to that model and get the response back? So why use TensorFlow.js? The advantages of TensorFlow.js are: first of all, there is no setup required; you won't have to install anything. There's a CDN link for TensorFlow.js that you add to your HTML code, and then you can use all the TensorFlow Keras layers, or convolutional layers, or recurrent layers, right in the browser itself. So there is no setup, nothing is required. And the second point is privacy: let's say we have a data set and we want to train a model on it; we can do that on our own system, because we...
B: We don't need to send anything; we don't need to make a request to a server to get the results. Let's say I want to segment an image. Normally the first step would be that I send the image to a server, that server feeds my image to the model, and then the output is given back to me as a response. In this case we don't need to do that, because there is no server here; we can do it entirely in the browser.
B: And let's say we want to do some real-time video, say video segmentation. If we implemented this as a REST API or a server-side model, we would need to send each frame of the video to the model, the model would segment that frame, and then it would send a response back. So there's a huge lag in there, because we...
B: This is a nice example that Google provided when they presented TensorFlow.js. There is a Pac-Man game here, and what they did is that instead of the usual up, left, right, down key controls, they wanted to make the machine learn the controls.
B: For the up arrow, we take a specific posture ourselves and we feed those images in, and the same for the left arrow, the right arrow and the down arrow, and then we train the model. So what this training does is that we'll be giving some examples, right? Let's say we give ten different images of me raising my hand up, and we ask the machine to associate them with the right arrow, sorry, the up arrow. Then, during this training, the model learns that, and we are training the model in real time.
B: Similarly for the right arrow, left arrow and the down arrow as well. All of this is possible only because of TensorFlow.js, because this entire thing can be run in the browser itself. So when we train the model, it uses these images that we fed as examples, and then, after the training has been completed, when we...
B: ...make the gesture for the up arrow, it will cause the Pac-Man to go up, or take the respective actions. So this is a nice example of how TensorFlow.js enables us to do real-time training and real-time predictions on user input. So that's basically about TensorFlow.js. There's also TensorFlow Lite, which is specifically designed for...
B: ...it is very fast and it can detect objects as well, so there are many advantages. Let's say you want to make a simulation of C. elegans: say you make a robot which can model the movement of the C. elegans. You can train the model, deploy it on a Raspberry Pi, and then the Raspberry Pi can instruct the robot's parts as to which direction it should move.
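(A minimal sketch of the TensorFlow Lite step implied by the Raspberry Pi example, assuming TF 2.x and a hypothetical trained Keras model file; the resulting .tflite file is what would be copied onto the device.)

    import tensorflow as tf

    model = tf.keras.models.load_model("worm_movement_model.h5")  # hypothetical trained model

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional size/latency optimization
    tflite_model = converter.convert()

    with open("worm_movement_model.tflite", "wb") as f:
        f.write(tflite_model)  # this file is what gets deployed on the Raspberry Pi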
B: We can just load this. Let's say I have a batch size of four, so I want only four images when I'm training on a particular batch. tf.data.Dataset fetches those four images as a batch and feeds them to the model, and then it fetches the next four images from secondary memory into main memory. This way the data is being generated on the fly. Here is a small example that I implemented.
B: ...we read each image and resize it to a specific size. So we call dataset.map(parse_function); when the code reaches this part, the parse function is invoked, and those file names and labels undergo this process. I also used another function to pre-process those images, which I wrote in this code. So basically the main magic...
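(A minimal sketch of the tf.data pipeline being described, with placeholder file names, labels and image size; the actual parse and pre-processing functions in the presenter's code are not reproduced here.)

    import tensorflow as tf

    filenames = ["img_0.png", "img_1.png", "img_2.png", "img_3.png"]  # placeholders
    labels = [0, 1, 0, 1]

    def parse_function(filename, label):
        # Invoked lazily, per element, when the dataset is iterated.
        image = tf.io.read_file(filename)
        image = tf.io.decode_png(image, channels=3)
        image = tf.image.resize(image, [128, 128])   # resize to a specific size
        image = tf.cast(image, tf.float32) / 255.0   # simple pre-processing step
        return image, label

    dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
    dataset = dataset.map(parse_function)
    dataset = dataset.batch(4).prefetch(tf.data.experimental.AUTOTUNE)  # four images per step

    # model.fit(dataset, epochs=5)  # batches stream from disk during training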
A: Yeah, I saw Jesse and Ujwal had to leave early, so they said thanks for the great talk; they put it in the chat there. So I guess, yeah, what you're showing us is that there are a lot of different ways to do this.
B: Yeah, exactly. The best thing about TensorFlow is the ecosystem: there are multiple ways to achieve what we want to do. Like we just saw TensorFlow.js, and it lets us do it right in the browser itself. All of these tools, I believe, are geared to give efficient performance. I know they will be lagging here and there, so active development has been going on in these tools. I think it's a really great effort to put all these things together under one whole, yeah.
A: So yeah, that's interesting, because I used to work in a lab where we were doing psychophysiological measures and mapping them to controls on a computer screen. People have done this with brain-machine interfaces, and there's always been this real-time issue where you have to sort of train a model. This was before machine learning became this huge field.
A: You know, people were trying to figure out ways to train these models in real time so that you could take signals and figure out, like, the affective states or different movement controls, so you could move a cursor with your mind in different directions, but you'd have to decode brain activity to do that. So that real-time trainability is really important for a number of applications, and it looks like they are going in that direction here.
A: So it looks interesting. I don't know how good it is. I mean, there's always a performance issue with that sort of thing where, if you're doing it in real time, there's individual variation, and then there's the ability to stream the data and give feedback appropriately. But I think that's...
A: Well, thank you once again. So I guess everyone else has dropped off except Abraham. If Abraham is interested in presenting, he's welcome to do so at some point. Vinay, you know, presented something that was fairly technical, and you've seen the presentations that I've given, which are a bit broader, so any kind of presentation is welcome. So, you know, Abraham, if you're interested in contributing in that way...
A: Let me know. It doesn't have to be next week, but if we could just schedule a time, and you just share your screen and go through slides. It doesn't have to be an hour long; it can even be a fifteen-minute thing. So I just wanted to mention one more thing before we go, and that's this pre-trained models outline.
A: We have a short description, so, you know, to be able to discuss what a pre-trained model is, introduce it to people who don't know, then point out that there are models for general use and for specialized areas like languages, but not a huge selection of options specialized for biology; and then, what specific use or interest is a pre-trained model to biologists? I think we kind of talked about all those things. I haven't filled them in to this area yet, but it's something we could fill in with, you know...
A: ...data volume, which is a term I talked about in a previous lecture about data; data volume is the size of the data set you need. Yeah, things like a multivariate feature space, and then some discussion about models not equaling the phenomenon, models being this sort of heuristic, and not being customized for biological problems. We assume that pre-trained models are universal, but that may not be true. And then maybe we'll finish this with an example of modeling a biological image.
A: You know, different things like blobs versus symbols. I think Vinay brought that up: using a symbolic approach versus using a data-driven approach where you're finding blobs, or shapes that are features, so we might talk about that as well. I haven't gotten too far on this myself, but I just wanted to show you the short outline of what I was thinking about for that. So if you're interested, I can send you a link to the Google Doc and you can contribute to the Google Doc, or not.