From YouTube: Weekly Sync 2020-07-24
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#
D: I did finish the immediate response thing and the Gitter chatbot. Both of them are passing all the CIs, and I'm waiting on your review of the distributed orchestrator to continue. Oh, and I'll be working on... no, I still have some stuff left to do on that upload-and-comment work, so I'll try to finish that before the next meeting.
A: All right, anything else?

A: Oh, are you... have you... let's see, let me see.
A: All right, great, okay: now I see what you're saying. Sorry, we're just... all right. You did exactly what I asked for, which is list off what you wanted to talk about. Okay, so Saksham, where are you... what do you... I saw you added a bunch of those models. So that's great! That's really, really cool. So, let's see: ConvNet models.
A: The only comment on... actually, well, I had a few comments on this one, but let's see... we'll deal with that in a second.
A: All right, I'm gonna say we should stay away from faces here, and I'll explain that later.
A: Basically, I have to go stand in front of an ethics review board if we do anything like that, and I've got enough stuff that I have to do. So, unfortunately... but I mean, I encourage you, if you want to do that at some point, to throw up another repo, you know, under your own GitHub, and show how you could do that. That would... I mean, that would be great, right?
A: Yeah, yeah, exactly right. So I would encourage you to do whatever you want there on your own, on a repo that's under your GitHub name. But yeah, I will have to stand in front of the review board.
A: Yeah, exactly, you could sort of, you know, demo whatever you wanted — sans Intel ethics review board and red tape. So yeah... but for the scope of GSoC, we'll want to keep everything, you know, as inanimate an object as possible, right? Okay, but good thinking, because people do like to see that stuff. Okay, so: PyTorch needs review, git single PR needs review, and then basically you're saying you're...
A: The other day when we did the meeting, we had lost one star, and since Tuesday we've gained eight stars, so I don't know what the hell is going on, but we'll see. We're about to do that release. I finally got the compliance tasks... all the compliance tasks are being double-checked today for the main repo, and then we'll see. I've still got to get the other plugin stuff done, but we're closing in there.
A: Nice. And this actually brings up something... well, we'll talk about it later. But yes — have you got anything you want to talk about this week?
A: Knock off this bullet point here. All right, so first off I wanted to cover... oh, I'll do it at the end of this. Okay, what did I want to say? Let's see.
A: Oh yeah, okay. So basically we don't really have something... I think the CI is failing on npm audit. Well, our issue space is a bit crowded, and we don't really have a good way for, you know, this type of discussion like we're doing — where we're soliciting... we want to get some feedback from each other on various things, like, you know, "hey, looking for examples related to vision or NLP."
A: So I was wondering if anybody has any ideas. I know GitHub had some kind of discussion feature, but I haven't seen where the hell that is. I thought they were supposed to be implementing some kind of Discussions thing, but I don't know where that is. So I was wondering if you guys have any... it seems like there might be some kind of platform or something, kind of like how we use Gitter...
A: ...that might be good for that type of thing. Or, well, maybe it just ends up being that we have issues that are pinned — but, you know, there are just so many issues, right, that I don't know. Maybe we just need to create issues, and then I have that "community input needed" label that we can add to them. That might be a better way, but I don't know. What do you guys think? Because I feel like we have sort of a lack of ability to...
A: ...you know, ask... there's not a great way for us to ask each other for idea feedback in general, other than sort of the Gitter chat, and that ends up being kind of hard to follow sometimes. Does anybody have any thoughts on this, or think everything is good as is, or any suggestions for ways we might want to do this?
A: Oh wow, all right. Okay... you've sold me already, all right, okay, yeah — sold. I'm gonna be looking into that. So let's see... all right. Okay, so let's just run through these. So, first of all, let's check out... and you guys have probably noticed this, but unless you're pinging me specifically, for PRs where the CI isn't passing, I'm not looking at them, because I've just been way too overloaded.
A: So basically, if you need input on a PR where the CI is not passing, make sure you ping me in Gitter until I look at it, because I'm just way overloaded right now. So, let's see... great, this looks great, sweet, all right. And then, is the tutorial up to date, or were... let's see, oh.
A: Great — done, merged, all right. And then we need to make a note: merged, we'll update the tutorial in another PR. So let's make an issue for that.
A: "Message updates to use immediate response." Okay, and I'm gonna put this one on the 3.8 release here. Since we got that in, we might as well have it, because that'll make... yeah, sweet, awesome, great, okay.
A: Done, yeah, we merged master, great. What the hell! Oh wait... oh, damn it, all right. Okay, so you're gonna have to update the changelog again. I have done "resolve conflicts" before, but then I've also hit "resolve conflicts" and it then immediately merged the results into the master branch, which is why... yeah.
A: Merge, okay. So then, "need review on distributed orchestrator" — so you do want me to review that PR, then?
A: Great, okay, yeah. I was watching that issue that you created sort of slowly tick things off here. Yeah, so I haven't added this.
D: It's sort of a drag if you can't just give us, like, YAML examples.
A: I'm kind of wondering here if we want to just take the existing commands and turn it into just `dataflow run`, and basically say: if you specify sources, then we're going to do it that way, the way we have; otherwise we're...
A: Thank you. Should we... what?

A: Here, right. So where's this... and we need to update the stamp command. That's another one.
A: Essentially, right now, the way that we do it is: if we're running just one-off things, we use the memory source, like that, right? So we need a way to say, basically, what's the context, and then what are the values for that context, right? So, let's see.
A: And then, so, source records, right: we have the inputs to add for each context, and basically our record keys here. So "world" and "user" become our contexts, and then we have our inputs for each context, and then we also say: I want to define the context as this...
A: ...you know, this definition, which is "value" in this case. So we need...
A: Yeah, I guess, yes, syntax is always tricky: how do you make something that makes sense? We could just do something like this — or context, definition, value — followed by the context keys.
A: ...inputs into it, right? So for each context... we create two contexts, right — one for "world" and one for "user" — and then we put two inputs into it. We put one input that's going to be "hello"... the value is "hello", the definition is "key". And then we do this other one, where we say: okay, the context we created — so "world" in the first case — that should be...
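The record-keys-become-contexts structure being walked through can be sketched in plain Python. This is a hypothetical illustration — the dict shapes and the helper name are made up for clarity, not the real DFFML API:

```python
# Hypothetical sketch (names are illustrative, not the real DFFML API):
# record keys become dataflow contexts, and each context gets its own
# list of inputs, where every input pairs a value with a definition.

def records_to_context_inputs(records, definition):
    """Map record keys ('world', 'user', ...) to per-context input lists."""
    context_inputs = {}
    for record_key, value in records.items():
        # The record key doubles as the context name; the value becomes
        # an input tagged with the definition it satisfies.
        context_inputs[record_key] = [{"value": value, "definition": definition}]
    return context_inputs

inputs = records_to_context_inputs(
    {"world": "hello", "user": "hi"},
    definition="key",
)
# Two contexts ("world" and "user"), one input each.
```

The point of the shape is that everything downstream is keyed by context, so adding a record adds a context without touching the others.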
A: Yeah, so this is obviously not the best example from a phrasing perspective, but you see what I mean, though, right?
A: All right, so let's see... and I think maybe we could even simplify this more. We could just say, you know, print.
A: So, I don't know, I mean... I think the other thing is that now we've got the `dataflow run` command, and it has `records` as a subcommand, and I can't remember what the hell happens with argparse when you do this — when you have subcommands but then also want to run the command itself.
A: And we have, let's see... yeah, we have "run all records" and we have "run this set of records", and then we basically have, you know, run records here and then run here. So I don't know what happens if we do this; I can't remember if this works or not. Let's see... `def run(self)`. I think that when you have subcommands under a command, it just doesn't let you run the command itself — it only lets you run subcommands — when you register with argparse.
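The argparse behavior being recalled here can be checked with a small stdlib sketch — generic code, not DFFML's actual CLI: whether the parent command still runs on its own depends on whether the subparsers are marked `required`, and by default in Python 3 they are optional, so the parent parses fine with no subcommand chosen and can dispatch on `None` itself:

```python
import argparse

# Generic sketch of a "run" command with a "records" subcommand,
# illustrating the argparse behavior discussed above (not DFFML code).
parser = argparse.ArgumentParser(prog="dataflow")
top = parser.add_subparsers(dest="subcommand")

run = top.add_parser("run")
run_sub = run.add_subparsers(dest="run_subcommand")  # optional by default
records = run_sub.add_parser("records")
records.add_argument("--sources", nargs="*", default=[])

# "dataflow run records --sources s1" selects the subcommand...
args = parser.parse_args(["run", "records", "--sources", "s1"])
assert args.run_subcommand == "records"

# ...but "dataflow run" alone still parses: the subcommand is simply
# None, so the run command can decide what to do when none was chosen.
args = parser.parse_args(["run"])
assert args.run_subcommand is None
```

So the command itself is reachable even with subcommands registered, as long as `required=True` is not set on `add_subparsers`.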
A: Yeah, see, because it really wants you to choose a subcommand if you register a subcommand there. So I'm wondering... what I'm thinking is, I think we probably need to turn, like, records... or maybe look for the presence of sources.
A: Let's see... where is sources? So, sources... and it looks like — yes, it's required; or no, it's not required, because it's got a default. So basically... oh yeah, see, and that's going to be wacky. I don't think we really want a default here in this case for the run command; we want it to be empty.
A: You know, so we'd probably just look for the run command config — we'd look for the presence of that — and then we'd look for the presence of, like... let's see, where is it... so, basically, put it all into the dataflow. We're going to need to modify all the `dataflow run` commands and basically say: okay, if there are any sources given, you're going to run, you know, all of it; otherwise, you run only... so, yeah.
A: If you see any sources, then you run the existing... basically, use the existing code, all right. If you don't see any sources given, then you use this new syntax, basically, which is like a simplification — I mean, you can reuse the existing code and create a memory source behind the scenes and do the same stuff behind the scenes, right?
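The dispatch being proposed is small enough to sketch. This is a hypothetical outline with made-up names, not the real implementation: sources present means the existing source-driven path; no sources means wrap the ad-hoc values in a memory source and reuse the same machinery:

```python
# Hypothetical sketch of the proposed dispatch (not the real DFFML code):
# sources present -> existing code path; no sources -> build an in-memory
# source behind the scenes and run the same way.

def make_memory_source(values):
    """Stand-in for creating a memory source from ad-hoc CLI values."""
    return {"type": "memory", "records": list(values)}

def dataflow_run(sources=None, values=()):
    if sources:
        # Existing behavior: run over the configured sources.
        return {"mode": "sources", "sources": list(sources)}
    # New simplified syntax: no sources given, so wrap the values in a
    # memory source and reuse the existing run path.
    return {"mode": "memory", "sources": [make_memory_source(values)]}

assert dataflow_run(sources=["db.json"])["mode"] == "sources"
assert dataflow_run(values=["hello"])["mode"] == "memory"
```

The key design point is that the "simplified" path is sugar over the existing one, so there is only one run implementation to maintain.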
A: Yeah, well, I mean, you're gonna get randomly generated contexts if you don't specify them, right? But then the key is...
A: Right, so you might have one where you do... yeah, I mean, what you could do here is... that's actually... so, running on that line of thinking: contexts — you could do `run context`, or you could do `single`... and then for `run single`, you could just do...
A: It's like, you know, I think, for `run`... and then to the dataflow... I think you can just pass the array of input values, and yeah, then you just, you know, call it good with one, right? You know, return whatever the output is.
A: Oops, yeah, so yeah, context, right... there you go. That gives you a much cleaner implementation here, and then, when you do `run context`, you can reuse a lot of the code you already have. I mean, you're going to end up reusing a lot of this code. But, all right, cool — so that gives you a path forward, then.
A: All right, great, cool. Yeah, good ideas, good plan, making this into a new command here, because yeah, it's definitely getting tedious typing `records` and all the sources and stuff.
A: Okay, so, all right: "PyTorch models need review." I'm in the process of reviewing that; I have it open.
C: Regarding this... people requested — I don't know if you've already seen it or not — we are using a loss function and an optimizer which are predefined here. So I wanted to ask: how can we make them accessible through the CLI?
A: You want to make these... yeah, okay. So, let's see, I think this is going to be another sort of case where we do something kind of like we did with the NumPy config and the type...
A: If they are documented... yeah, exactly, if they're documented with that, then we can do that pretty easily, and then we'll just need to do, like, an enumeration.
A: I feel like we just talked about this last week too... oh yeah, with regard to layering: we'll need to do an enumeration of all the possible options and then register them as entry points. The reason why we're doing this is because, from a security perspective, we only want to have a defined set of things that we're allowing people to instantiate, and a predefined set of what their parameters are going to be. Or else you can very quickly get into a space where, all of a sudden, you're giving people too much control over what code gets instantiated.
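The pattern being described can be sketched generically. The names here are made up — stand-ins for whatever enumeration of losses and optimizers ends up registered as entry points: the CLI string selects from a fixed registry, and anything outside it is rejected, so users never get to name arbitrary code:

```python
# Generic sketch of the whitelist idea (illustrative, not DFFML's API):
# enumerate the allowed options up front and resolve CLI strings against
# that fixed registry, instead of importing whatever the user typed.

ALLOWED_LOSSES = {
    # In the real thing these would be entry points pointing at, e.g.,
    # torch.nn loss classes; plain callables stand in here.
    "mse": lambda pred, target: (pred - target) ** 2,
    "l1": lambda pred, target: abs(pred - target),
}

def resolve_loss(name):
    try:
        return ALLOWED_LOSSES[name]
    except KeyError:
        raise ValueError(
            f"unknown loss {name!r}; allowed: {sorted(ALLOWED_LOSSES)}"
        )

assert resolve_loss("mse")(3, 1) == 4
# A name outside the registry is rejected rather than imported.
try:
    resolve_loss("os.system")
except ValueError:
    pass
```

With entry points, third-party plugins can extend the registry by installing a package, but the set of instantiable things is still whatever was explicitly registered.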
A: So that's why we're doing it this way. So yeah, let's definitely check out the use of... so let me just make a note here... well, let's see, where's the best place to make a note of this... yeah, right now, I don't know if it's critical to get this stuff merged. You want to get this merged before the next release here, so, I mean, we'll get it merged and you can jump back on it and try to make it customizable.
A: Likewise, okay.

A: "This will be helpful to those adding support for custom layers."

A: All right, great — so then I'll say we'll follow the pattern...

A: What did I just call that... "how to make an entry point loadable object."
A: All right, okay, and then, let's see.

A: All right... is "entry points" a word? Damn it. It is now, all right. Okay, good: "single PR needs review," okay, and the Gitter one. Okay, we just wanted... did you do the changelog? Yeah, okay, great — yeah, push that and we can merge it. Okay, so... and then, obviously, I've got a review in progress here, so I'll give you that shortly; that's what I was doing before this meeting.
A: Okay, so: "looking for good examples related to vision." So does anybody have anything where we can stay away from humans, basically? Because that brings us into all sorts of... all sorts of things. So, I mean, anybody... let's see.
A: Yeah, so, okay... but, I mean, you'd essentially want to just be creating bounding boxes here, and then we would export that to something — and then, I mean, we would worry about the problem of showing images later, right? So if you're doing object detection, you might want to do, you know, creating bounding boxes and then storing the bounding boxes as feature data, right, or as prediction data, right?
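A minimal sketch of what storing bounding boxes as feature (or prediction) data on a record could look like — the record shape here is illustrative, not the exact DFFML record format:

```python
# Illustrative record layout (not the exact DFFML schema): detected
# bounding boxes stored as feature data, so a later step can draw or
# export them without re-running detection.

def add_bounding_boxes(record, boxes):
    """Attach (x, y, width, height) boxes to a record as feature data."""
    record.setdefault("features", {})["bounding_boxes"] = [
        {"x": x, "y": y, "width": w, "height": h} for (x, y, w, h) in boxes
    ]
    return record

record = add_bounding_boxes({"key": "image0001.jpg"}, [(10, 20, 64, 48)])
assert record["features"]["bounding_boxes"][0]["width"] == 64
```

Keeping the boxes as plain feature data is what lets a separate drawing or export step consume them later, as discussed next.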
A: And then you could, you know, in your program of choice... you can write something else to take that and draw the bounding boxes, right? That could be, like... yeah, I mean, we could have another set of... we could basically have an operation that goes and draws bounding boxes, right, if you wanted to — oh, yes — and then creates another feature. You know, you could do that with the dataflow preprocessing source or something, actually, which brings up a good point: we should...
A: ...we should make it so that the preprocessing source has access to the prediction data as well. Because — well, let's see — in this case, we do a prediction, right, and then we end up with... well, let's see... yeah, I guess we're looking at model predict. I mean, you would want to just make a dataflow where you're doing the preprocessing, and you do the predict within the dataflow, and then you end up... yeah.
A: The prediction assigns the feature data, and... or, basically, the output of model predict goes into this bounding-box function, and then, you know, now one of your feature data is this image with the bounding box, or something. Or maybe, I guess, in this case we probably need the ability for the output of the dataflow to be assigned to prediction data or something, I don't know. Thoughts on that?
A: The other thing that comes to mind is an extension of the very basic example that you see in scikit-image, where they do the coins — the image with the coins. And, well, I don't know if there's going to be... there's probably not a dataset for this, but you might be able to just basically scrape images from the web. I don't know what the... well, I don't know what they...
A: There are probably dubious licensing issues there, but you could do something where you segment the image, right? Like, the thought here being: you use some of the more classical OpenCV methods to segment the image into, you know, possible objects — and, well, I guess that's just object detection. But beyond that, the idea there was, you know: what are the values of all the coins in this image, and sum it up, or something, right? Like, what are all the numbers in this image?
A: You could take maybe MNIST, and, like, take a picture of a bunch of dollar bills or something, and try to, you know... use OpenCV to grab rectangles, and then look within the rectangles for numbers, and then sum up all the numbers, and tell someone how much money they have, right. Okay, I...
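The pipeline sketched verbally here has a simple shape. The helpers below are pure stand-ins (a real version would use OpenCV contour/rectangle detection and an MNIST-style classifier — neither is implemented here):

```python
# Pipeline shape for the money-counting idea (stand-in functions only;
# a real version would use cv2.findContours / cv2.boundingRect for
# find_rectangles and an MNIST model for classify_digit).

def find_rectangles(image):
    # Stand-in: would run OpenCV rectangle detection on the image.
    return image["regions"]

def classify_digit(region):
    # Stand-in: would run the cropped region through a digit classifier.
    return region["digit"]

def total_value(image):
    """Sum the digits found in every detected rectangle."""
    return sum(classify_digit(r) for r in find_rectangles(image))

fake_image = {"regions": [{"digit": 5}, {"digit": 1}]}
assert total_value(fake_image) == 6
```

Each stage maps naturally onto a dataflow operation: detect regions, classify each region, aggregate the results.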
E: Yeah, so, I don't know... that would be cool if you can — I don't know if it's possible — because we can have a live-demo kind of thing. We can turn it on, and then it will detect a person blink — it will detect that you blinked your eye — and then we can count how many times per minute you're blinking. We can have many use cases, like when you're in your office watching the screen, whether you're blinking or not — these kinds of things.
C: Yeah, I've thought of these functions and operations, but I was thinking, maybe here in DFFML we wanted stuff that is related to machine learning.
E: Yeah, this will also be machine learning, because we'll have to first figure out the eye from the face — that will use the facial features, which you'll have to learn using a model — and then you have to detect whether the eye is getting closed or not. That is again going to be machine learning; it will be totally based on ML models only.
A: Nothing... let's see. What did you say, Yash?

F: There's this coronavirus chest X-ray dataset; that's a simple application of CNNs. There you go — yeah, definitely.
A: That would be a good thing to do. I'm thinking more about blinking, and I'm remembering that my co-worker Terry was complaining that machine-learning models sometimes don't detect when... some people just naturally have their eyes closed more, and the machine-learning models' datasets are not trained on that, and she had seen some research that wreaked havoc with that. So that may also be something that we want to stay away from. I think, pretty much in general...
A: ...let's just stay away from anything with people — as bland as that might sound — but it will help us avoid hot water, that's for sure. So I'm sorry to be sort of a bummer on that, but yeah, I think we might want to... unless you've made sure you've done a bunch of thorough examination of the dataset and made sure you've got very well-distributed demographics and everything. Otherwise, yeah, let's stay away from that.
A: So, where was there more information about this COVID dataset? Yes...
A: Okay, the other thing is — I mean, obviously Saksham's work is focused around images, mostly — so, I mean, is this something that we could still... I assume you're getting a giant matrix here, right? So can we still use the image operations? They essentially operate on large matrices, right?
A: So is this something that we can still apply those preprocessing techniques to? Because his goal is to show, you know, the use of maybe some scikit-image operations and scikit preprocessing stuff, and then feed it through some kind of machine-learning model, right? Right, Saksham?
A: I mean, I'm not sure if those are going to map... because we could just... right. People have done this before, where they basically take matrix-like data and then, you know, use... right — like, that's what you're saying here: a lot of what people have been doing with CNNs is that they map them into other fields, right? So maybe we can apply the same techniques to this data.
F: So you're saying, like, using CNNs for some other kind of data, particularly...
A: What is this? This is the same thing — damn it, all right. Okay, I should probably read what the hell it says: "requesting a collaborative effort of the AI community to fight COVID-19."
A: Send... if anybody has some stuff, maybe send it to the Gitter channel... "send info to Gitter channel, ping Saksham."
C: So the thing is that these datasets just use the CNN that I've added, and I was looking more for, like, new features that can be added to DFFML, rather than just using the CNN and being done with it.
A: Well, so, I mean, I thought your thing was gonna be that you're going to take the image-preprocessing operations, like you've done... like, let's check out your tutorial here.
A: Oh yes, okay... oh, it's just... I ran through it. So, okay, so yeah, I mean, the point here is that you're running these preprocessing operations, so we can just run these preprocessing operations on... I mean, you're hoping to find a dataset where you get good accuracy.
A: Okay, so it doesn't necessarily have to be related, I mean...
A: Yeah, I mean, what you could also do here is, you know, chain two... let's see... I mean, okay, okay, the goal. Okay: tell me your goal again.
C: So, basically, what I meant with that was that I was looking for more examples that use image preprocessing, so that I can add more operations using OpenCV and skimage, and then we can feed the preprocessed data to the model.
C: Datasets that, just using a CNN, classify with a good accuracy. So...
A: Okay, I mean: can you use some of the, like, edge-detection stuff, and then color detection, to classify... you know, I mean, is it all Lego bricks, or are there some things that aren't Lego bricks?
A: Okay, I mean... I would say you could try using some of the... it seems to me that you would want to do the same stuff that you've kind of done with this feature extraction here, right? But with Lego bricks, you're looking for how many circles are on the top, right, and how many edges are there, and then, you know, curvature for some pieces, right. And so you have the... I mean, there's the edge-detection stuff, and there's... what else is in there?
C: Yes, there are more functions like edge detection, like you just said, and the Hough transform, and HOG descriptors — and there are many such feature descriptors in OpenCV.
F: So these OpenCV and scikit-image libraries are mostly used for that stuff, yeah. They just detect the edges through some algorithms and mark them off, and that is useful to label data. And then you just run it through the models, like CNNs and stuff, and then... you ultimately don't need the algorithms anymore.
F: I guess the better idea would be to not focus so much on these operations, but rather to focus on getting better support for neural networks and their efficiencies.
A: Exactly, yeah. I think you're on the right track here, right? Because we're definitely caught up in this idea of "what OpenCV things can we use," when, you know, a large part of this is that the dataflows will allow us to chain the different neural networks together, right? That's what you're saying.
F: I'm... like, I would suggest... I don't know whether you or Saksham would be comfortable, but I will say that we shouldn't focus much on these operations for now, because we don't have many applications integrated in DFFML yet. So I'll suggest that Saksham could focus more on integrating neural networks properly, and I can...
A: Yeah, okay, so yeah, let's shift focus to that, then, because I think you're right. I think, you know, there's obviously so much you can do with these, and definitely having proper support for layer configuration is big. So I'll try to get... I guess my... let me... well, my schedule's pretty packed today. What is it... yeah, because basically we need all of these, right? This...
A: The tutorial for this is the same as the tutorial for the layers, so I'll get on that ASAP and make that my top priority right now, because... all right, and then we'll look at expanding the support there. So, let's... okay.
A: "John will create guide" — and I'll probably end up converting the TensorFlow Hub ones, or whatever... wherever the one that exists right now is, I'll probably end up doing that as part of the example, so that there's a concrete example there. So: "create guide to show how to make layers into entry point plugins." All right, so I'll make that my top priority, then. Okay, so, sweet — anything else?
A: Let's see... I'm gonna do the git single PR once we're done with this meeting, and then I'll do that. So, let's see... well, I'll do all the reviews, and then... I've got a few more meetings back to back, but this afternoon, basically, I'll get to it. So, let's see.
E: The thing here is: we have so many functions. Once we create the vectorizer, we can instantiate so many functions. So do we want to expose all of them, or do we just want to expose the most used one? Because the one that I'm using is — you can scroll down a bit — `vectorizer.fit_transform`. That is what we use to convert the text to numbers, the vectors, yeah.
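For context, `fit_transform` on a vectorizer (e.g. scikit-learn's `CountVectorizer`) learns the vocabulary and converts each document to a count vector in one call. A minimal pure-Python sketch of that behavior — illustrative only, not the scikit-learn implementation:

```python
# Minimal sketch of what a count vectorizer's fit_transform does
# (scikit-learn's CountVectorizer is the real thing; this mirrors the
# idea: learn a vocabulary, then map each text to a vector of counts).

class TinyCountVectorizer:
    def fit(self, texts):
        vocab = sorted({word for text in texts for word in text.split()})
        self.vocabulary_ = {word: i for i, word in enumerate(vocab)}
        return self

    def transform(self, texts):
        rows = []
        for text in texts:
            row = [0] * len(self.vocabulary_)
            for word in text.split():
                if word in self.vocabulary_:
                    row[self.vocabulary_[word]] += 1
            rows.append(row)
        return rows

    def fit_transform(self, texts):
        return self.fit(texts).transform(texts)

vectors = TinyCountVectorizer().fit_transform(["hello world", "hello hello"])
assert vectors == [[1, 1], [2, 0]]
```

Exposing just `fit_transform` covers the common case; `fit` and `transform` separately only matter when train and inference data are vectorized at different times.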
A: Well, I mean... you're the one who knows the most about NLP here, so that's your call.
E: Yeah, yeah, I mean, this is the most commonly used, because this is what people use it for: they just want to convert the text to numbers, yeah. So, let's see.
A: So... what methods do we have here? Yeah.
A: So, I mean, I would say, let's see... and then these have different... er, these have different methods, and you're saying the one we usually use is...
A: I mean, okay, parameters... I mean, there are a few functions here, right, but I don't see a lot of... okay, well, `transform`...
A: So maybe... but also, at the same time — whatever you do, make sure you have an example for it, right? Because people aren't going to use it unless it has an example. So if you want to do the legwork to make an example, that's great; but otherwise, you know, maintain focus on... what I mean.
E: Okay, one more thing: can you scroll up?
E: At the top, yeah. So if you see the signature here, we have this — sorry — preprocessors... yeah, `preprocessor` and `tokenizer`, right? So these are basically functions that we can write and pass. So what should we do about these? Because...
A: Okay, yeah, obviously we can't have people passing in random functions. "Convert all characters..." so, `preprocessor` and `tokenizer`. So: "preprocessor: override the preprocessing (string transformation) stage while preserving the tokenizing and n-grams generation steps. Only applies if analyzer is not callable." "tokenizer: override the string tokenization step while preserving the preprocessing and n-grams generation steps. Only applies if analyzer == 'word'."
A: All right, well... my immediate reaction to this is basically: if you want to use either of these, you need to be running your dataflow from a Python file. And, like Yash talked about last time, we have examples of doing things from the command line; it would be good to have examples of doing things from Python files, and this seems like sort of an ideal...
A: ...ideal example for, you know... if you were to write one of the dataflows in your example, you should show how to do this in Python, too, right — more than just with the `dataflow create` command and the command-line stuff. And when you do that, you can maybe include... you know, you can create an input where the value equals whatever your preprocessing function is. Does that make sense?
A: The problem is that a large part of the philosophy is that we need to separate the code — separate the interface from the implementation, right? Because this is what's going to allow us to do things like, you know, have these orchestrators that call the operations across languages, right? So, though... I mean, this works all fine and well when you're in Python, but let's see... callable.
A: We can have, like, a primitive that is "callable" or something, and then, if it's callable, you can pass in a function. That could be what we do here. And then, in that case, if it sees that the... so, for example, you have... okay, so let me just sort of pop it up and show you. So, `df/types`...
A: ...right. So here we look at, like, the spec and stuff, right, or... yeah. So this is when we create an Input object, right? We do things like spec validation, right? Well, we could do something here where we say, you know, "load callable"...
A: ...if it's a string, and we could say, you know, like, if `definition.primitive`...

A: ...whatever, right? Does this make sense? We could do this, in which case then you would just pass the path to the function that you wanted, right now.
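What's being sketched is the usual dotted-path trick — a generic stdlib sketch, not the actual DFFML code: if the definition's primitive says "callable" and a string comes in, split it on the last dot, import the module, and look up the attribute:

```python
import importlib

# Generic sketch of loading a callable from a dotted-path string when a
# definition's primitive says "callable" (illustrative, not DFFML code).

def load_callable(path):
    """Resolve e.g. 'json.dumps' to the function it names."""
    module_name, _, attr = path.rpartition(".")
    func = getattr(importlib.import_module(module_name), attr)
    if not callable(func):
        raise TypeError(f"{path} is not callable")
    return func

dumps = load_callable("json.dumps")
assert dumps([1, 2]) == "[1, 2]"
# The catch discussed next: the same mechanism resolves "os.system",
# which is exactly why arbitrary strings here are a security problem.
```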
E: Yeah, but then, from the security perspective — as you were telling Saksham — people can insert random code here, right?
A: Yes, right. So the risk here is that then somebody says `os.system`, and now they're passing, you know, whatever they want to a shell. So this is why this is discouraged.
A: Yeah, I mean... I think your other option here, right, is... let's see.
A: The thing is that... yeah, I mean, I don't think there's a way where we end up winning against, you know, the possible security implications of this. Basically, if you can, take a look at the... there's some discussion about the YAML library and their safe-load function, and that sort of tells you why I'm iffy about this. Because, basically, they said "that's the YAML spec," and everybody said, "well, this is not good," and so, yeah.
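For reference, the PyYAML situation being pointed at: `yaml.load` with the full/unsafe loaders historically honored tags like `!!python/object/apply:`, which can construct arbitrary Python objects, while `yaml.safe_load` refuses them — roughly:

```python
import yaml

# yaml.safe_load only builds plain data types...
assert yaml.safe_load("a: 1") == {"a": 1}

# ...and refuses Python-object tags, which the unsafe loaders would have
# used to instantiate arbitrary code (the classic os.system payload).
blocked = False
try:
    yaml.safe_load("!!python/object/apply:os.system ['echo pwned']")
except yaml.YAMLError:
    blocked = True
assert blocked
```

The analogy to dotted-path callables is direct: once user-supplied config can name arbitrary code, the deserializer becomes a code-execution vector.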
A: This is not quite the same thing, but we could definitely be in a very similar position quite easily. So I'll just... if you go to the PyYAML website, it'll be front and center, so you can read about that if you want to. But for now, I mean, this is exactly why we need to make the entry points...
A: ...the entry-point-loadable plugins for the config objects, right. We can do the same thing, and we can make them for, you know, whatever data type this is. But the thing is, then: is it going to be compatible across... like, when these things are distributed, right? So, for example, with Aghin's work: you'll run the main dataflow on one node, and then maybe on another workstation you're running...
A
A
...then, right? So you'd need to have, you know... you could do the entry point thing, so long as that entry point is installed and registered on the other system. Then, if you're sending this to a remote node, and the remote node is what's actually running the operation, when it gets the definition and the primitive is, like, entry point or whatever, it instantiates it over there, and it says: okay, I've got this thing installed, great. Now I can go load the function, and I'll pass that function. That would work, okay? But at this point, yeah.
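The scheme just described, resolving the same registered entry point independently on each machine rather than shipping code, could be sketched with the standard library's `importlib.metadata`. The group and plugin names below are made up for illustration:

```python
from importlib.metadata import entry_points


def load_plugin(group, name):
    """Look up a registered entry point and load the object it names.

    Nothing is serialized over the wire: any machine that has the
    providing package installed can resolve the same (group, name)
    pair to the same object on its own.
    """
    eps = entry_points()
    if hasattr(eps, "select"):      # Python 3.10+
        candidates = eps.select(group=group)
    else:                           # Python 3.8/3.9 return a dict
        candidates = eps.get(group, [])
    for ep in candidates:
        if ep.name == name:
            return ep.load()
    raise KeyError(f"no entry point {name!r} in group {group!r}")


# Hypothetical usage; this group is not actually registered, so it raises
try:
    load_plugin("example.operations", "my_operation")
except KeyError as err:
    print(err)
```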
F
E
E
I know, so, like, you may have some words that you want to remove; you don't want to tokenize them, basically. So maybe there is a company name whose name is an abbreviation, and you don't want to split it, things like this. So, I mean, you can do anything inside it. Tokenization can be anything: you can break up the text in any way you want.
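A minimal sketch of that kind of custom tokenization, using a hypothetical `keep_together` list for phrases (like an abbreviated company name) that must not be split:

```python
import re


def tokenize(text, keep_together=()):
    """Word tokenizer that refuses to split certain phrases.

    Protected phrases are matched first (longest first), so they come
    out as single tokens; everything else splits on word characters.
    """
    protected = sorted(keep_together, key=len, reverse=True)
    pattern = "|".join(re.escape(p) for p in protected)
    if pattern:
        pattern += "|"
    pattern += r"\w+"
    return re.findall(pattern, text)


print(tokenize("I work at A.B.C. Corp on NLP",
               keep_together=["A.B.C. Corp"]))
# → ['I', 'work', 'at', 'A.B.C. Corp', 'on', 'NLP']
```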
E
A
E
No, because this is something it does internally. I mean, it uses... yeah. Can we pass one operation as an argument to a different operation?
A
A
That's going to get us in trouble with the distributed orchestrator as well, I think, but let's think about that. So, once again, both things would have to be installed on the same system, and then you'd have to know whether it's an async operation or not, and, let's see, you'd basically be passing the run method.
A
If you're passing the run method... I mean, let's see, it could be done, I think.
D
A
Yeah, do you end up with the case where... so, currently, basically, every operation could be running on a different computer, right? And all of the data is just sent between everything, right? Like, you can send everything over the network between all these operations.
A
When we start passing around callables, then you either have to serialize the implementations and all the libraries they're using and everything, or you have to have them installed on the same machine, or you have to essentially proxy the input data and the output data, right? So, for example, the easiest way to do this would basically be to pass some sort of a wrapper alias thing which, when someone calls the function, says: hey, I need to call this operation, go find out...
A
...where does this operation actually exist, call this operation, and then return the result, right? And that operation may still exist on another machine, right? The problem is we're likely mixing async and synchronous code at this point, and then you end up blocking. So that's not good, because we never want to block. But, let's see... yeah, it's an internal thing.
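The "wrapper alias" idea might be sketched like this. The in-process registry dict below stands in for the real over-the-network lookup, purely for illustration:

```python
class OperationProxy:
    """Stand-in callable for an operation that may live elsewhere.

    Instead of serializing the function itself, we pass around this
    proxy; calling it resolves the operation's location and forwards
    the inputs and the result. In a real distributed orchestrator the
    lookup would go over the network rather than into a local dict.
    """

    registry = {}  # name -> callable, simulating "where it exists"

    def __init__(self, name):
        self.name = name

    def __call__(self, *args, **kwargs):
        # "go find out where this operation actually exists"
        target = self.registry[self.name]
        # "call this operation and then return the result"
        return target(*args, **kwargs)


OperationProxy.registry["shout"] = lambda s: s.upper()
proxy = OperationProxy("shout")
print(proxy("hello"))  # → HELLO
```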
A
D
A
A
Yeah, I mean, so here's what I think. Why don't you create this callable, but I think you should only support it from Python at this point. Because, basically, if you provide this, if we have some kind of definition... you know, basically we'll look at... and this is something we need to do. Where do we have it?
Where
do
we
have
it?
A
A
And we need it to be, like, you know, a string, a string, etc., right? I mean, you don't have to do this now, but eventually we should do something like this. For now, we'll just check: if you put the primitive as callable, then when we go to do the distributed orchestrator thing and you get an input that says callable, you basically just throw an error and say no, right? Like, I'm not doing callables, I'm not passing around your callables.
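That guard could be as simple as the following. `Definition` here is a minimal stand-in for the real spec object, just to show the check:

```python
class Definition:
    """Stand-in for the real definition object, with a .primitive field."""

    def __init__(self, primitive):
        self.primitive = primitive


def check_input_definition(definition):
    """Refuse callable inputs in the distributed orchestrator.

    A callable can't be sent to a remote node without shipping code,
    so, as discussed, we bail up front rather than fail mid-flow.
    """
    if definition.primitive == "callable":
        raise NotImplementedError(
            "callable inputs are not supported by the distributed "
            "orchestrator"
        )


check_input_definition(Definition("string"))  # fine, no error
try:
    check_input_definition(Definition("callable"))
except NotImplementedError as err:
    print("refused:", err)
```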
A
You can do NotImplementedError. And then, from the perspective of the command-line loader, you would also... basically, if you go to run... well, I mean, you don't necessarily know that you're loading from the command line, you're just... I mean, when you instantiate an Input...
A
A
A
A
So we'll have to add this, right? Because, basically...
A
This should be enough to catch anything that's not coming from Python, right? And then, I guess, in the orchestrator we'll probably also want to say... I mean, the orchestrator is just going to bail when it tries to serialize things. The distributed orchestrator is just not going to be able to serialize that callable into whatever protocol NATS is using, because I suspect it was JSON, right, that we're dumping and loading? Yeah, yeah.
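The failure mode being described is easy to demonstrate with the standard `json` module, which is assumed here to be the serialization used over NATS:

```python
import json

# Plain data serializes fine over a JSON transport
print(json.dumps({"value": 42}))  # → {"value": 42}

# A callable does not: json.dumps bails with a TypeError, which is
# exactly where the distributed orchestrator would fall over
try:
    json.dumps({"value": print})
except TypeError as err:
    print("cannot serialize:", err)
```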
A
So it's just going to bail. So in this case, this would be what you'd want to do here, I guess, in case someone tries to pass in, like, an entry-point-load-style path: just let them know that no, we're not doing that. But yeah, I think the answer is that we're trending towards what Yash was talking about last week, which is that we're going to have some Python examples too, which is a good thing. And I think most people will likely...
A
A
...close over the function, right. And then, you're looking for good examples related to NLP? What types of things were you tentatively thinking about?
E
Basically, anything. I want to show the operations and then the models; okay, the use of the models.
A
E
A
A
A
I meant... yeah, okay, so...
E
E
A
My immediate thought would be: you could do the question answering and combine it with the classification somehow, right? Somebody asks a question and you're going to give them... right. So maybe you have a set, right? That's the one where... what I meant to say is, you have a short paragraph or something, and then you ask the question, and it gives the answer based on that short paragraph, right?
A
Then maybe you chain that, using dataflow, with the sentiment analysis, right? I mean, this is trivial and it doesn't really mean anything, but it shows how you can chain them together, right? You do something where you say: okay, somebody asked a question about something, you figure out what the answer is, and then you figure out the sentiment of the answer. And, yeah, I guess that's basically it, right?
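A toy version of that chaining, with stub functions standing in for the actual question-answering and sentiment models. The point is only the composition of the two steps, not the NLP itself:

```python
def answer_question(context, question):
    """Pretend QA model: return the first sentence of the context
    that mentions the question's final keyword."""
    keyword = question.rstrip("?").split()[-1].lower()
    for sentence in context.split("."):
        if keyword in sentence.lower():
            return sentence.strip()
    return ""


def sentiment(text):
    """Pretend sentiment model: crude keyword polarity."""
    positive = {"great", "good", "love"}
    return "positive" if set(text.lower().split()) & positive else "negative"


# Chain the two: answer first, then score the answer's sentiment
context = "The release went great. The old build was broken."
answer = answer_question(context, "What about the release?")
print(answer, "->", sentiment(answer))
# → The release went great -> positive
```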
A
E
No, like, I'm not looking for any particular category. I'm just looking for anything that involves an operation and a model. It can be anything.
A
I thought you had an example here... oh wait, yeah, classifier. I mean, you have this example, right?
A
A
E
A
A
Spam, yeah, you could combine... yeah, I like that. Spam classification is a good one.
H
A
Okay, all right, cool. So I'll get on these reviews. Does anybody have anything else they wanted to talk about today?
A
F
F
Basically, we can just write the wrong command and pass it to the model, and see what prediction comes out of it, or the correct command.
A
G
F
A
E
Okay, if you want to do something like this, then has anyone checked out GPT-3?
A
Well, yeah, if you can figure out how to do this, that'd be awesome. I mean, I looked, and it looks like they basically said they were doing GPT-2.
A
A
They've done a number on this thing. Yeah, where is their website... yeah, here. So this...
A
A
A
Check this thing out. Basically, they say... where is it again... there's a good command-line one. So this is their...
A
They have this davinci model, and they've got some examples, right? But the thing is, I mean, this is a bit deceptive, because they've trained this thing on millions and millions of examples, right? So it doesn't actually just take this many examples; it's my understanding it takes lots. But yeah, let's see, come on...
A
Yeah, billion, yeah. Not a million: billion. Yeah, because look at this, yeah, and it comes up with the correct commands. Pretty crazy. But yeah, I'll just leave this here for you guys to check out. Let's see... but yeah, I like that. I'm not sure if it could do... yeah, if we could do typos...
A
...and see if we can correct spelling, right? I mean, I like this example. I think you might have trouble with the dataset there too, though. So, yeah, I think spam classification... well, I guess a bunch of misspelled words would actually be pretty easy to fix too, because you could basically just take a bunch of things that are spelled correctly and then jumble some of the letters in them. But yeah, all right.
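The dataset trick described here, fabricating misspellings by jumbling letters of correctly spelled words, is only a few lines of Python. The function names and the swap-adjacent-letters scheme are illustrative choices:

```python
import random


def jumble(word, rng):
    """Swap two adjacent interior letters to fabricate a misspelling."""
    if len(word) < 4:
        return word  # too short to jumble without touching the ends
    letters = list(word)
    i = rng.randrange(1, len(word) - 2)
    letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return "".join(letters)


def make_dataset(words, seed=0):
    """Build (misspelled, correct) training pairs from clean words."""
    rng = random.Random(seed)  # seeded, so the dataset is reproducible
    return [(jumble(word, rng), word) for word in words]


print(make_dataset(["classification", "orchestrator", "tokenize"]))
```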
A
A
Cool, all right. Well, thanks everyone, and I'll talk to you guys next week. Have a good weekend.