From YouTube: DevoWorm Summer of Code weekly meeting, 6-21
Description
Meeting for Week 7. Ujjwal Singh presents the paper "ImageNet Classification with Deep Convolutional Neural Networks".
A: Okay, welcome to the meeting for today. This week, well, we had a cancellation last week because of scheduling, but I think we should be alright. Ujjwal is going to present his paper choice, and then Vinay also wants to present something, or you want to share a screen for five minutes at the beginning of the meeting here. So I should ask first: does anyone have any questions or comments about, you know, what's been going on this week? I know I talked to some of you in Slack.

B: Whoops, okay.
A: So we'll go through the board. We have a bunch of things that are done. The Python scripts are done, setting up the environment is done, preparing slides is done, the digital notebook is done, and the objective for feature analysis is done. And then we have a couple of things in progress. The one, number eight, is "set up / use Deeplearning4j," which is something I think you proposed last week, and they put the tag on that at the linked issue.
A: You don't want to have to go too deep into, like, any special types of techniques, but you want to finish that up in the next week or two and then move on to phase two. So that was my understanding of it. So you're going to implement an unsupervised approach, and then implement a supervised approach in phase two.
A: And then you'll see, you know, you'll see how it all does this week, and then you'll have some idea of what we might want to talk about, because we'll talk about the paper a little bit as we go along. You know, at the end of that presentation there will be a lot of questions.
B: Now here you can see, actually, this is the original image, this is the test image that I've sent through the model, and this is the output image that I got. Of course, it's resized, so I'll have to resize it back. You can see all the noise getting cut off and the boundary being highlighted. So that's the current state of progress for this unsupervised approach. And regarding the model, for the task that I wanted to do, you can see here that the line is not getting placed here.
B: Yeah, that's because in the original image itself it's very visible. So the enhancement that I want to do is to see if I can trace even the fainter boundaries also. So that is the task that I want to show you, given some time, by the end of this phase of the project, okay.
B: This is the code which produces the output that I just showed. Here, towards the end of the encoder, you can see, this is where the encoder is, this is where the image is coming from. On the last layer I needed to use a custom layer instead of the usual layers from Keras, so I implemented a custom layer, and this is how it works.
B: This is the trained model, and from this trained model I got the output. And as this model has some custom layers, we have to pass those as custom objects whenever I'm loading the model. But for Deeplearning4j this cannot be passed. It seems, from what I understand of their research and current support, they have no support for this type of functionality. So that is a limitation for me, but obviously there's a workaround for that purpose.
B: Yeah, my current state is that without passing this custom layer while loading the model, I cannot get the output. So that's the current limitation. But they have a beta channel for technical support, and I asked; they told me to use some Lambda layers, which will negate the need to use this thing. So I'm currently at that point. I have to try out how that goes in the coming Sunday. Maybe I will be okay.
A: I probably don't have it on this one, and I must have it on this one; I'll do it later, okay. So, yeah, 16 and 13 are linked, you know, so we'll do those later. And now we can go to the other to-do issues here. So the first one is number 17, begin work on semi-supervised approach, and you're telling me that that's going to be, like, within the next week or two that you're going to start on that.
A: Then the next one, number 15, was something that came out of our discussion last weekend about outreach, and so this might be something of interest as well, Asmit. So I went to a conference this last week, which is why we had to cancel the meeting last week, and I presented a poster on, you know, what's kind of going on in DevoWorm, with an emphasis towards some of the educational outreach things that are going on. And so I have...
A: The poster, I can share a link in the Slack channel, but basically it's an open poster. So anyone who wants to, you know, use it for any other purpose, like if they want to print it out or if they want to make, you know, changes to it. Don't make changes to the original, but you know, we can talk about...
A: ...that sort of thing. They've been doing outreach at their universities and they've been trying to get interest in the project and in Summer of Code. I know Google Summer of Code actually offers, like, you know, outreach materials, but I think there's additional material for the students at your universities. That's something that Vinay asked me about specifically, so I provided them with some material. But if you guys are interested, you know, you're welcome to; we can talk more about that, I know.
A: So let's see. When you were talking into the mic, into the ether, I was actually asking you about issue 12, implement N-cut normalization loss. So is this the thing that you're going to do, like, if you revisit phase one, or is this something that you're going to do in, like, the next two weeks or so? Yeah.
B: ...difference or some advantage that it gets for our output, so I thought of deferring that and going forward with the normal loss function. So for the next week or so I'll be working with the ordinary loss function itself, and when I come back to this again, maybe I'll have a look at it, maybe I've got something wrong or something like that. As of now I don't; I plan to leave it. Is that okay?
A: So I'll make a new issue, data augmentation strategies, just a very general thing, and I'll assign it to you and then put it in "in progress," and then that will just kind of cover a lot of the things that are going on. So when we come up with something a little bit more, you know, specific, we can add issues to it, as we usually do. So now, given that, do we have any issues we need to put on the board for next time, or for the next week or two?
A: Yeah, so again, if you can think of anything over the course of the week, you can put them on the board yourself or, you know, bring them to the next meeting. We kind of debugged a little bit of stuff, you know, during the week, and we didn't put it on the board, but that's okay; this is just a general kind of take on this. So now we're going to move to Ujjwal and Asmit. Thank you, Vinay. And we're going to go to the Digital Bacillaria board.
A: We go to the repository and then we go to the Projects tab, we go to the Digital Bacillaria link there, and here's the board. So let's go through the board from right to left, to-do, and we'll kind of get some updates on where these issues are. So in the done category we have a bunch of things here: split Bacillaria videos into frames; work out a readme file, which is kind of in progress, although that's kind of done for now, we might work on it a little bit later; manual mask creation; define training set; create data set; create digital notebook; choose a segmentation procedure; model selection. So, yes, we've gone through all these, and we've been talking. I got an update last week on what was going on, which was basically that you guys were, you know, throwing data at the model and training it. And so, let's see, we have a bunch of things that are in progress, but actually I want to hear an update from Desmond and Ujjwal, if you can.
C: Okay, I'll share my screen.
C: This week we were able to properly train it to get better output. After that, this week we also talked about, like, after we get our segmentation, how we go about taking out the features, so we prepared some notes about the features that we need to extract. So from this, for the next week, we'll be staying on working on the target masks, and we'll be trying to make... okay, okay.
A: Yeah, when we get to that stage, when we start working, like, having a dialogue, because I'd like to work with you and see kind of what your ideas are about it. Because we've been talking about that sort of independently; Richard has been working with us, and we've been talking with someone else, but you know it would be nice to have, like, everyone on the same page about it, and kind of have, like, some more, you know, open discussion about, you know, the direction of that. So, I mean, you know, we can...
A: ...sort of just kind of proposing, maybe proposing something, maybe working it out a little bit, you know, and just kind of keeping it in practice. And then, you know, once we think, "oh, that's a good idea, maybe you should step it up," we can do it. So I just wanted to make you aware of that, so that we don't, you know, disappear for two weeks and then pop up with something, and then it doesn't work well.
A: Yeah, it's good, and I think, yeah, there's, you know... because we want to be able to go back and forth on it and see which things are, you know, maybe most workable. Maybe if you come up with some, you know, mathematics that doesn't scale up, then it'll be a problem. So it's always... it's okay. So, Richard sent a note here about our meeting, okay.
A: So let's go back to the board real quick, because we're late in the meeting. So in progress we have: work on technical details of the data set; adding information about the biology of Bacillaria, which we're still working on; model evaluation schedule; and present papers, which is something we'll be doing today.
A
Well,
okay,
so
any
that's!
That's
about
it!
For
the
number
three
number
twenty-eight
would
be
sort
of
the
next
couple
weeks.
So,
okay,
that
sounds
good
I
mean
will
will
maybe
make
some
issues
as
we
go
along,
but
I
think
I
think
that's
good.
Okay!
Now,
finally,
we
get
to
wall
and
he's
presenting
a
paper
today
and
you'll
have
to
share
your
screen
to
do
this.
You
have
a
separate
connection
for
screen
sharing.
B: So I'll discuss the paper "ImageNet Classification with Deep Convolutional Neural Networks." This is one of the benchmark papers, written by the authors Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton; these are all researchers at the University of Toronto. So, before starting, here is a brief description of the authors. The first author, Alex Krizhevsky, is a researcher from the University of Toronto who went on to work at Google; at the time of the ImageNet paper, he was a student.
B
His
meat
in
there
is
a
chocolate.
They
mediate
competition.
They
have
done
really
well.
They
had
set
a
benchmark
for
two
to
three
years.
It
is
a
Microsoft
has
break
it.
20
2015,
so
we
need
description
about
Alex.
Is
that
the
description
about
yeah?
So
it's
K?
What
is
a
computer
scientist
working
in
machine
learning
and
currently
serving
as
the
chief
scientist
at
open
yeah?
He
also
student
of
University
of
Toronto?
B: Now, coming back to the paper, a brief description, like an overview. AlexNet is the name of the convolutional neural network designed by Alex Krizhevsky, and it competed in the ImageNet Large Scale Visual Recognition Challenge on September 30, 2012. The network achieved a top-5 error rate of 15.3%, more than 10.8 percentage points lower than that of the runner-up, which is a great margin. The original paper's primary result is that the depth of the model was essential for its high performance, which...
B: ...won the ImageNet contest, whose dataset contains 1.2 million images belonging to one thousand classes. This paper also notes the rise of interest in CNNs: papers on CNNs were scarce from 2000 to 2011, but after this paper there were around 5,500 papers. So it is a very big increase.
B: So they are the ones who brought attention towards convolutional neural networks. When this paper was published, AlexNet was claimed to be one of the largest networks to date, which they trained thanks to a number of key improvements, including a very efficient GPU implementation of the convolution operation, an architecture that allowed training to happen on multiple GPUs, and the demonstration that the ReLU non-linearity significantly reduces training time. They had given some concepts which are very useful to reduce the time.
B: So each neuron in a neural network computes a sum of its inputs, applies an offset, and then passes the result to a nonlinear function. Historically, the most popular nonlinear functions were sigmoid-like. This paper gave a strong demonstration that the ReLU non-linearity significantly reduces training time for deep networks. The point they emphasize is really the non-linearity function, and the reason is that for positive activations the ReLU derivative does not go toward zero. This is an important benefit, since the derivative going to zero means gradient descent will converge very slowly.
B: Conversely, with negative activations the ReLU derivative is zero, which means, like, when the weighted sum is negative, it won't learn anything. But as long as a neuron has, you know, a positive activation, then that part of the gradient will flow. It appears that it's better to have this subset of neurons learning very efficiently than having most of the neurons learning very slowly.
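The gradient behavior described above can be sketched numerically. This is a minimal NumPy illustration, my own and not from the paper or the meeting: for inputs far from zero, the sigmoid derivative shrinks toward zero, while the ReLU derivative stays at exactly 1 for any positive input.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_grad(x):
    # d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)); peaks at 0.25, decays fast.
    s = sigmoid(x)
    return s * (1.0 - s)

def relu_grad(x):
    # d/dx max(0, x): exactly 1 for positive inputs, 0 for negative ones.
    return (x > 0).astype(float)

x = np.array([-5.0, -1.0, 1.0, 5.0])
print(relu_grad(x))     # [0. 0. 1. 1.]
print(sigmoid_grad(x))  # roughly [0.0066 0.1966 0.1966 0.0066]
```

The "dead" zero-gradient region for negative inputs is the trade-off Ujjwal mentions: those neurons stop learning, but the ones with positive activations learn at full speed.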
B: So this is the thing that they have emphasized and figured out. As you can see in the figure, it shows how much faster a network with the ReLU non-linearity trains as compared to the other nonlinear functions. The magnitude of this effect depends on the network and the task, but since this paper was published, this advantage has been confirmed in many areas, like natural language processing; across deep learning, the ReLU function is widely used nowadays.
B: Now, coming to training on multiple GPUs. Local response normalization is the term that is given by them in their paper, and they have, like, worked out a very deep formula for it, we can say. They also trained on multiple GPUs, which was not so common at that time, because the network is too large to fit on a single GPU. So the authors proposed to use the fact that modern GPUs can read from and write to one another's memory directly. So in the final architecture they used two GPUs...
B: ...communicating a number of times. The way they split the convolutional network, each GPU operates more or less independently, except in certain layers where they communicate. Using two GPUs reduces the error rate by 1.7% as compared to a single-GPU net, because they are working simultaneously; the single-GPU network has half the kernels in the various convolutional layers compared to the two-GPU implementation. So, a kernel: if you imagine your data set in the form of a matrix, a kernel is like a small matrix.
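To make "a kernel is a small matrix scanned over the data" concrete, here is a plain NumPy sketch of a valid 2-D convolution (my own illustration, not code from the meeting or the paper):

```python
import numpy as np

def conv2d_valid(image, kernel, stride=1):
    """Slide a small kernel matrix over the image; at each position,
    multiply entries elementwise and sum them into one output value."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i * stride:i * stride + kh,
                          j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])  # responds to vertical intensity changes
print(conv2d_valid(image, edge_kernel).shape)  # (3, 3)
```

Each learned kernel in AlexNet is just such a small matrix (plus a depth dimension for the input channels), swept across the image.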
B: Each entry in the kernel multiplies the entry under it, then, you know, the sum of everything has an offset applied and is transmitted through the non-linearity. The problem, of course, is that kernels often end up with very similar matrices, which causes them to detect similar features, which in turn reduces the expressiveness of the network. This is what the equation sets out to solve, as the equation has shown.
B: The equation they propose: a given kernel looks at n adjacent kernels. The chosen order is totally arbitrary; if two kernels are adjacent, it doesn't mean that they are close together on the screen, it just means that the random order picked them to be neighbors. For each entry of the kernel matrix, if the adjacent kernels have a high value in the same entry, then that entry of the kernel will be...
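The inter-kernel normalization being described is the paper's local response normalization. Below is a simplified 1-D NumPy sketch of my own, over a vector of activations of N kernels at one spatial position, using the paper's hyperparameters k = 2, n = 5, alpha = 1e-4, beta = 0.75:

```python
import numpy as np

def local_response_norm(a, k=2.0, n=5, alpha=1e-4, beta=0.75):
    """Normalize each activation by the squared activations of n
    neighboring kernels (AlexNet, section 3.3): kernels that are strongly
    active at the same position inhibit their neighbors."""
    N = len(a)
    b = np.empty_like(a, dtype=float)
    for i in range(N):
        lo = max(0, i - n // 2)
        hi = min(N - 1, i + n // 2)
        denom = (k + alpha * np.sum(a[lo:hi + 1] ** 2)) ** beta
        b[i] = a[i] / denom
    return b

acts = np.array([1.0, 50.0, 1.0, 0.5, 0.2])
print(local_response_norm(acts))  # neighbors of the large activation get damped
```

The one very large activation inflates the denominator for its neighbors, which is the "inhibition" effect Ujjwal describes next.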
B: ...inhibited, because of the star entries of the existing kernels. As you can see, the stars are very faint in kernels 1, 2, 4, and 5, whereas the cross marks in kernels 1, 2, 4, and 5 are much darker than the dots, so they will be strongly inhibited, because the star has very high values. Using this technique they again reduced the error rate, by 1.4 percent.
B: So this is the image that they presented in the research paper: an illustration of the architecture of AlexNet, explicitly showing the delineation of responsibilities between the two GPUs. One GPU runs the layer parts at the top of the figure while the other runs the layer parts at the bottom, as you can see. Another improvement is that they used overlapping pooling. A pooling layer is typically applied after a convolution; here its role is to downsize the image by gathering together small patches of pixels. In the prior work...
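"Overlapping" here means the pooling window is larger than its stride (AlexNet pools over 3x3 regions with stride 2), so neighboring windows share pixels. A 1-D NumPy sketch of my own to show the difference:

```python
import numpy as np

def max_pool_1d(x, window=3, stride=2):
    """Max pooling; with window > stride the pooling regions overlap,
    as in AlexNet (3x3 windows, stride 2)."""
    out = []
    i = 0
    while i + window <= len(x):
        out.append(np.max(x[i:i + window]))
        i += stride
    return np.array(out)

x = np.array([1.0, 5.0, 2.0, 0.0, 4.0, 3.0, 0.0, 0.0])
print(max_pool_1d(x, window=3, stride=2))  # overlapping:     [5. 4. 4.]
print(max_pool_1d(x, window=2, stride=2))  # non-overlapping: [5. 2. 4. 0.]
```

Because each value can fall inside more than one window, overlapping pooling loses slightly less spatial information than the non-overlapping version.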
B: This is the thing which they applied in this paper to reduce the error rate further, by 0.3%. To understand it from the figures: if we assume an input of 224 by 224 with three color channels, then in the first layer 96 kernels are applied to this image, each of size 11 by 11 by 3, and they are scanned over the image with a stride of 4 pixels in the x and y directions.
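For reference, the spatial output size of that first layer follows the standard formula out = (in - kernel + 2*padding) / stride + 1. A tiny sketch of my own (note the well-known quirk that the paper states a 224 x 224 input, but the arithmetic with 11 x 11 kernels and stride 4 only comes out even with 227):

```python
def conv_output_size(in_size, kernel, stride, padding=0):
    """Spatial output size of a convolution: (in - k + 2p) // s + 1."""
    return (in_size - kernel + 2 * padding) // stride + 1

# AlexNet's first layer: 11x11 kernels, stride 4.
print(conv_output_size(227, 11, 4))  # 55 -> a 55 x 55 x 96 output volume
print(conv_output_size(224, 11, 4))  # 54 (doesn't divide evenly, hence 227)
```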
B: The model has tens of millions of parameters in total, which they divided between the two GPUs, roughly half on each. So this is the way they countered the fact that they are using two GPUs simultaneously: they still train a single model, and they were able to train such a large model for the first time that way.
B: So that is the scheme they proposed to ensure that these things happen. And these are the references, which you can follow. If you have any questions regarding this, you can text me on Slack. The other useful point, which I think we can take away from this paper, is in the context of our project: how it is useful to us. So, as far as I know, Vinay's project...
B: ...uses semantic segmentation, and people say this works very well there, and we can also use this technique. But the problem is, we don't have to just segment images; we have to make models and we have to do other stuff, like finding each of the features and shapes. So it's not like this is still the benchmark; on the other hand, DeepLabv3, which was proposed in 2018, is a benchmark technique.
B: So we try to get more out of our models, and we improve the models so that we can get more accurate results. So that's my take there, and one reason why we have adopted it. And if you want to reproduce this code, as I said, it may require really powerful GPUs. But you can go through the paper, and I think the code has been reproduced by the community.
A: Yeah, is there any... has anyone ever done, like, a study of, you know, what it means to add layers? Like, you know, if you have two layers or 100 layers; 100 layers is obviously more powerful than two layers, but why? You know, what is the... is there any sort of understanding of why that is, or, like, what each additional layer adds? In, say, a deterministic system, it would be like, if you added power to something, it would increase, and...
B: In the initial layers you are extracting whatever features there are; it's not even like any one layer is set to some feature. Then you proceed to the next layers, you know, extracting more, which in the end gives more. So this is a parameter, like how accurately you want your model to fit the dataset. It may be a trade-off, like, if you are...
A: That's good; thank you for that explanation. And I think the conversation here was sort of in the same vein. It's just kind of like, you know, why would you have layers, beyond just the explanation that more than three layers counts as deep. So, yeah, I guess you have to get into the feature space to really understand what the trade-offs are. So I think that was a good talk. For future talks...
A: ...that's always good, because, especially with this type of topic, or these types of topics, you know, if you can visualize it, it's a little bit easier to understand and, you know, wade through the jargon that they use in the papers, because they do use a lot of specialized language. And, you know, I mean, you can understand it, but it's easier to understand broader concepts if you have images. So I think you did a very good job, and I think that you guys... Vinay, and I think Vinay is next week, or is it Asmit?
A: I can't remember right now; it's okay. So, you guys, yeah, we have the schedule in Slack, so you guys are prepared for this. And so hopefully, if you have any questions about that or about the projects in the next week, let me know; I'll let you know on Slack or, you know, email. Even if you want to set up some face time on Google Meet, that would be okay too.
A: You know, I'm going to give you a pretty decent review, you know, and I guess the procedure is, I write it up and I share it with you, and it's largely feedback for, you know, how you've been doing and how you can improve. Yeah, I don't think there's that much you need to improve on; it's something that helps people along. So be on the lookout for that, and I think there's a deadline on your end, like you have to fill out the form, yeah. Congrats; so I think everyone's doing okay.