From YouTube: Weekly Sync: 2020-01-21
A: I mean, you know, it's hard to write comments. You do everything, and then it's like, okay, now I need to go comment it. It sounds great in theory, but it's hard to get around to it. So we'll just try to make sure that, as pull requests come through... I think there's a note saying that we should be documenting things, but obviously we've been a little bit feature happy lately.

A: So we've all been guilty of this. I mean, you guys have documented well, but we all could document better, right? It's always possible. So, all right, okay. So please give a description of that function area.

A: Oh, I see. Okay, yeah, that's what you guys are saying. Okay, we'll do that for her. I think everybody else would appreciate that too.

A: So basically, this is some download code that was written for the idx source, which parses the MNIST image data, the handwritten digit data set, and so we had to download that. So basically, what I did is I wrote this quick little code here that just caches it. It caches it somewhere, relative to wherever this file is. So, like:

A: If this file is in the tests directory, at the test idx.py location, then basically it's going to say, okay, what's the directory of this file, and it's going to put it in dffml source idx. Well, this is the root, so source idx tests. And then it's just going to download this URL, the first argument, and it's going to validate that the hash of the contents of the file equals this hash right here.

A: So we know that we got the right thing. I think we really just need to copy this code in, because there's a few things that need it in various different plugins. So we'll probably put this code in the util section of the main repo, and we'll have to...
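The cache-and-verify pattern described in this turn can be sketched roughly as follows. This is an illustrative reconstruction, not the project's actual helper: the function name, the SHA-256 choice, and the `cache_dir` parameter are assumptions (the code being discussed caches relative to the test file's own directory, e.g. via `pathlib.Path(__file__).parent`).

```python
import hashlib
import pathlib
import urllib.request

def cached_download(url, target, expected_sha256, cache_dir="."):
    """Download url into cache_dir/target unless a cached copy already
    exists, then verify that the SHA-256 of the file's contents matches
    expected_sha256, so we know we got the right thing."""
    path = pathlib.Path(cache_dir) / target
    if not path.exists():
        path.parent.mkdir(parents=True, exist_ok=True)
        with urllib.request.urlopen(url) as response:
            path.write_bytes(response.read())
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"hash mismatch for {path}: {digest}")
    return path
```

Because the hash check runs on every call, a corrupted or tampered cached copy raises instead of silently being reused.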
A: Cool, yeah, so we'll just put this in the main util section here, and I'll just do this right now.

B: Oops... well, okay. So this, we don't want aiohttp here. Let's see, is there anything else that needs to change here? Okay, gzip.
A: Great, okay, I'm gonna make a pull request out of it. Okay, and then we can just... oh, actually, okay, this is weird. It's weird that I can make the pull request. Maybe you should make the pull request; something tells me that it won't go well otherwise. So yeah, if you make the pull request, I'll just put this in a comment on there, and basically I'll just put the two notes. So, let's see, I'll put it in.
A: Okay, and then change gzip.
A: Maybe dffml.util.download or something... let's call it download, and then let's put these notes in. Okay, so basically, right now it uses the aiohttp library, which is, you know, the async one, but this is now going to go into the main package, because too many of the sub-packages, too many of the plugins, the core plugins, need this. Before, it was like only the one needed it.

A: There are open issues saying add this there, and then we're obviously going to use it for the idx source, and we're going to use it for the tensorflow source. So we should just put it where they all have access, so we don't keep redoing it.
B: Cool, yeah, so let's see. And this I have to change: if you see the tensorflow one, I had put the beta version, and the problem that I was having, it exploded, and it turned out I was using Python 3.6.5, and it was using the beta version of tensorflow. That's what was giving me problems. It cannot use the 2.1 tensorflow; there is some problem with the beta version. So, the later version: I got the 2.1, and then it worked.
A: Yeah, okay, when I first added the tensorflow models to this, I had a really tough time with tensorflow, because we went to needing AsyncExitStack, which got added to contextlib in Python 3.7, and that thing is tremendously helpful for a lot of the stuff that we're doing here, because of that double context entry pattern. So I just decided, all right, we're just going to go to...

A: It used to be Python 3.6, and it was like, okay, we'll just change the minimum required version to 3.7. Well, that didn't go so well, because tensorflow had an open issue back then to release a version that works with Python 3.7. So I had to wait until they released the version that worked, because before that you would have to compile it yourself and apply some weird patch.
A: So yeah, that's a pain. Yeah, let's see. Okay, so I'll review this more thoroughly later, but basically you'll just add that in, and then you'll use it in the test case, right? And you can decorate the test case just like these are decorated, where you just say cached download. And then obviously this is expanding the arguments here, so that's, you know, first argument, second argument, third argument. That's just because otherwise they'd also be really long, so I just put them there, and yeah.
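The decorator form of the cached download described here might look roughly like this. Everything below is a hedged sketch, not the project's actual API: `fetch_and_verify` stands in for the real download helper and, in this sketch, only verifies an already-cached copy.

```python
import functools
import hashlib
import pathlib

def fetch_and_verify(target, expected_sha256):
    # Assumed helper: the real code would download the file if missing;
    # here we only verify the SHA-256 of an existing cached copy.
    path = pathlib.Path(target)
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"hash mismatch: {digest}")
    return path

def cached_download(url, target, expected_sha256):
    """Decorator form: resolve the cached file once, then pass its local
    path to the wrapped test function as the first argument."""
    def wrap(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            path = fetch_and_verify(target, expected_sha256)
            return func(path, *args, **kwargs)
        return wrapper
    return wrap
```

Putting the (long) URL, filename, and hash in the decorator keeps them out of the test body, which matches the "expanding the arguments" point above.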
B: Okay, and one more thing: actually, I renamed the classes, but I want you to check them specifically, because later it should not be like you have to change everything; it should be like we can expand it. So there should be a general one. I'm not really sure if that is right, or whatever I have done.
A: Great, yeah. Anything that you've got from other places, we just need to make sure that we know where we got it from, because we need to add the license for those things. Okay, Apache 2, so that works great.

A: Great, yeah, let's see. Was there no way we could just import it from there, or...?
B: No, they said it's a different repository, and I'm not sure how to do that. So I took the file and put it in for now.

B: Yeah, there is one, but I was not...
A: Yeah, no, you're right. This is not a real package, so that's the only way to go, then. Cool, sweet. I mean, this is awesome; this is looking great, so that's very exciting. That's gonna be very exciting.

A: Let's see, is there anything else you needed from me on that right now? I guess, you know, because I'll give you the more thorough review, yeah.
B: This request for documentation.

B: Once we are done with it, it should almost be okay; we just have to set up the documentation.

A: Okay, great. And then, let's see, what was this here? Was this the load stuff, or... oh yeah, load.
B: So, like, in the data flow now, our final output is of mapping type, at least for now, yeah. So we actually want the mapping of the feature data, like the src_url and maintained, but the one that this is outputting is just the src_url one.
A: Yeah, so it just grabs the first one. That's what get_single does. This is also why we need more documentation on those: get_single just means get a single definition that matches this type. There's one somewhere that was get_multi, but I think it was in another branch that I never merged, because it was still a work in progress.
A: Not enough branches. Oh geez, let's see.
A: Group by, get_multi. All right, so in this one, obviously, we just didn't break, and we append them all to the list. So what I need to do is merge this one, or... I guess, why? I'm still confused about what exactly we're talking about here, so you're saying...
B: That, like, our current issue doesn't need this. I was just asking about it because, yeah, if someone needs it, yeah.
A: Yeah, and I've run into that too. You probably also noticed this when we were testing: you want to say get_single on the mapping type, and there's a bunch of mapping types. So you have to just sort of watch the logs and see what the outputs are. If we had this, then we would actually know what the hell they all are.

A: Yeah, the logging has been really helpful. Let's see. And that's the nice thing; I was explaining this to somebody the other day, so...
A: Yeah, I have that export code somewhere, and then...

A: Sweet, let's see. No, I forgot what I was just saying, sorry. Let's see. Yeah, so this will be helpful.
B: Let's see, I guess we just need to add the export part; then we are ready to use the CLI one, because it works from Python.

B: Resolve... oops, scroll that. Okay... well, that doesn't make sense anymore. Okay, insert or update, insert or update. Okay, nice.
A: Okay, yeah, so we just need export, and then we'll be able to successfully dump out the... what was it that it was not dumping again? Was it the features from that generator problem, or... I don't...

B: ...know, there was one thing. I think you added a way to do it. Okay, we need to figure out what's happening or something, yeah. Let's see.
B: Okay, so let's do... yeah, that's not it; it's the dev service, dev run. So can you post that command in there? Oh.
A: The generator export value. Okay, so this is the thing, yeah. This is what I added initially. So basically, okay, let's open the example, maintained, and then... let's test, right?
B: No, like, after this thing, I don't use this anymore. I was just using the make-prediction data flow; for this one to work, we need the export to read that.
B: The dataflow... I think I moved it inside. I don't know if... did you pull the latest change? If you pulled it, it's not global anymore. Okay.
A: Okay, I see. And it wasn't global because you were loading it in the sub data flow. Okay.

A: Okay, so yeah. Well, we still...
A: Yeah, that's okay, but basically I'm just trying to figure out, because I can't remember why the export didn't work, because it worked before; it worked semi-recently. Okay.

B: It works if I don't have the config for a data flow in it.
B: Yeah, it works, although I don't think it works perfectly, because when we loaded it, the model predict config should be of that config type, right? But what I'm getting is just a dictionary.

B: Okay, so I think we aren't doing anything, when we are exporting or in the from-dict, to any of the configs, yeah. Okay, let's see, so that's what's going on. I'm not sure, yeah.
A: Okay, so this has to do kind of with that unifying-the-configuration issue, because there's config parsing code... I was explaining this to somebody the other day: there's config parsing code throughout the code base. Like, a lot of the code base has to do with config parsing, because, you know, we have to be able to serialize and deserialize all these things. So, okay. So...
A: The other thing is that, yeah, they will get loaded as a dictionary, and I think that it only works one level down currently. Basically, someone just needs to spend some time on it, and it'll probably be me. I meant to do this over the weekend, but I got swamped, so I think I'll just have to do it this week or this weekend. But yeah, because that config parsing code is just very hairy.
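The "loaded as a dictionary" and "only works one level down" problem can be shown in miniature. This is an illustrative sketch using plain dataclasses, not the project's actual config code; the class names are made up. Without the recursion in `from_dict`, the nested config would come back as a plain dict rather than its config type, which is exactly the symptom described above.

```python
import dataclasses

@dataclasses.dataclass
class ModelConfig:
    directory: str

@dataclasses.dataclass
class PredictConfig:
    model: ModelConfig

def from_dict(cls, data):
    """Rebuild a dataclass from a plain dict, recursing into fields whose
    declared type is itself a dataclass so nested configs are restored to
    their proper types instead of staying dictionaries."""
    kwargs = {}
    for field in dataclasses.fields(cls):
        value = data[field.name]
        if dataclasses.is_dataclass(field.type) and isinstance(value, dict):
            value = from_dict(field.type, value)
        kwargs[field.name] = value
    return cls(**kwargs)
```

A deserializer that stops after the first level would hand back `{"directory": ...}` for `model`, so any code expecting a config object there breaks.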
A: Okay, so basically we need to look at the export stuff. So I'll just make a note of that, because I don't think we're gonna solve this right now. We'll just make a note that we need to look at the export stuff.

B: Yeah, that test is completely working.
B: Tested... I think there's a test case in it. Okay, let's see, but I haven't tested it with the database. Okay.
A: Okay, so what I would recommend here is that we'll need to... and let me make some notes in the pull request. Oh wow, I have way more windows open than I usually do. I try to keep just the browser and then the terminal, because otherwise it's just too many things to alt-tab through. Let's see.
A: dffml source MySQL, and we need to have them install it. No, there's one more; I don't know if the config YAML is used. But what else are we doing here? All right... oh, the get features, okay.
A: Feature, which really needs to get renamed at some point to operations, but that's kind of... we'll do that later. The generate features.
A: Exactly, yeah, because the way that it worked before, right, we created a new table in the database and then we imported into it. So we generated the data with the git operations, and we saved all of that generated data to that JSON file. And for everybody else who's not familiar, we'll just sort of walk through it here, so you can see.
A: So we give all the URLs to these operations, which are going to do things like, you know, they're going to pull down that git repo, and then run git log and figure out how many commits were in a given time period. This is sort of the sequence, and these are all the individual inputs and outputs linked together. And so the result of this is that it generates this massive data set where we've got, like...
A: Okay: the authors going back each quarter for 10 quarters, the number of commits, and then the diversity of authorship, which is the work metric, or the work feature. And then, of course, you know, it's mapped to the source URL, and we store all of that in this temp JSON file. And then what we did was we had this custom database integration, to illustrate how you could interact with whatever database you wanted by writing a source.
A: The thing is, now we have the MySQL database source, and so when we make a prediction with machine learning, we could just interact through that. So we've added a MySQL database abstraction to the database source. Again, an abstraction around databases, which is sort of more generic for interacting with databases than the sources were.
A: The sources are specific to just storing, like, records or repos, right? And this was to interact with the specific MySQL database that this application had, that we're supposed to be integrating with. Whereas now we can basically just take those operations and point them at the database, and using this data flow we say: okay, at the end of running the data flow, where you scrape the data and then put that scraped data through the model, update the database. So we can get rid of all of this. We don't need to have them...
A: ...write, you know, add this custom code, because I think this is confusing anyway. I mean, it's kind of confusing enough as it is; it's kind of hard to wrap your head around. So we don't need to be adding code. I think this whole example can now be done without writing any code, except for this little bit of JavaScript that you add to the front end. So now, basically, this will be one command, and it will just update the database there from the data flow.
A: Let's see, so we will just need to... okay, this is where we install tensorflow, so we just need to make sure that we install the MySQL source. Let's see. So this is the initial scraping of the data; this is its own section here, so we don't need the MySQL source until we go down here and we run... actually, we don't need it till even after; we can remove the merge.
A: Let's see. Basically, we train the model. I think we can remove this whole section here on pulling from the database, yeah. So we can remove this section, because we don't need to interact with the database. We'll basically just say, hand-wave, you know, use the insert-or-update operation, and everybody will be like: great, I love that. Because that's what you want most of the time with a database: you just insert or update, and it just works. So: remove the pulling-from-database section.
A: All of this is just to update the database, because, I mean, what we were doing here is we had two tables, and we were keeping the data consistent across the two tables, so we needed custom logic to do that. Obviously, because otherwise you would have these insane SQL queries. And so it was either that, or just sort of illustrating how you might do that in Python, even though that's not super clear anyway. So we'll just stay away from that.
A: So we can get rid of this whole section where we add an extra table and everything. We then imported that JSON file into the database, but we don't really care about that, because, quite honestly, once we've used it, once we've gathered that data set to train the model, we don't really care what...
A: ...it was anyway. We just want the prediction from the model now. So, training our model: that stuff is all good. Making a prediction: okay, so now in this one we have the initial data flow, which is going to run the scraping with this first command here, right, with this original data flow. And then we need to add... and then we need to do, okay, well, the prediction.
A: We ran the data flow initially on all of the repos, all of those URLs, and now we're running it on one of them and saying: store the results in the database, right. But we're not going to have the database anymore; it'll basically just output the results for us. And we could have it store them in, like, a JSON file, and then, when we run the predict command, we could say: okay, grab that from that JSON file.
A: Here's how you do it: you know, here's part A, here's how you do it for one of them; and now part B, here's how we're gonna combine what would be two commands into one command, by writing this data flow, right, which is the data flow that you've created in this pull request, that does the run and then updates the database. So yeah, let's change "making a prediction" to "making it..." okay. And then "modifying the legacy app" will be changed to run the one dataflow run command where we're doing the new...
A: "Making a prediction" to "making a single prediction", and then add a section. Let's see, what would this be? It'd be something like, you know, "making a prediction and saving it". Let's see, I don't know. How would we describe this? We've made another data flow, right, so...
A: Well, that was the initial one. The first data flow is where we're collecting the features, right. So now we're basically saying: this is a data flow for using the collected features from the other data flow, making a prediction on them, and saving it in the database. So we need a more concise way of saying that. Let's see, so add a section on, let's make it, "saving predictions in the database" or something.
A: And then you talk about, you know, basically, you dump the YAML file; you export the data flow to the YAML file.

A: Export the new data flow to a YAML file; include it in this section; include the diagram we made, the JS diagram.
A: Yeah, okay, so that's basically what you'd need to do in this section: you export the new data flow to the YAML file, and then you include that, and you sort of explain why we're doing what we're doing, and then you give them the picture to show them: this is what's going on, right. And you might want to illustrate... I might want to say, it would be helpful...
A: Make prediction, too. So I think we had some things in here, yeah. So it's probably helpful to add comments, like this type of stuff, between the various exported sections of the YAML file, where we can talk about, like: okay, when the output of this operation comes through; or, okay, now at this stage, for this example input, the output might be this, right. And then we can give a little...
A: ...you know, block JSON comment saying: this is the data that is coming out of this now. And that will probably help people understand what's going in and what's coming out, more than just "what are all these inputs linked together?"
A: I'm curious as to what this other person's data flow project is. I keep seeing these data flow projects around; like, people keep doing similar things, which is great, because it means some people like this stuff. But, you know, it's also interesting to see that everybody has a slightly different twist on why they might be doing what they're doing. Let's see, so I think that is sort of the gist of what we need to do here.

A: Is there anything else on that one?
A: Cool, cool. I think this has been a long one.

A: Thanks for your continued work on this one; this is definitely tricky. Let's see, and then, so yeah, saksham: was there anything else that you wanted...
B: ...to talk about? Yeah, there was one more thing. So, we wanted to make predictions a dictionary, right? So, like, when we are dumping into CSV, how do we want the columns to be?

B: Here... not here, it's in another two. Oh, okay. Okay, okay.
A: That's great. So, let's see.

A: Let's see. So yeah, I see what you're saying, but I think with the JSON sources it doesn't matter yet; but with the...
A: ...CSV headers. All right, so this is where we were adding things, right, as the prediction, because there was one prediction and one confidence, right.

A: I think what we probably want to do is basically just say... let's see, where's the features. This is the reading; okay, this is the writing. Oh.
A: Oh, damn it. Okay, yeah, that's right, yeah. So I thought we were going to get away with something, but it looks like we're not. Okay, so...
A: Yeah, in this file... yeah, this file is a mess. We had to do a lot of wacky things to get, like, the merge command to work, but that really was not necessary; we shouldn't have done that. So, let's see, what are we doing? Feature field names equals the set of the report data, so we go into every single...
B: ...field; open file for writing. And actually, why do we need predictions to be a dictionary? Like, can you give an example?
A: So the reason why we might want it to be a dictionary is if we have, like, several models: I have several different models that predict several different things, right. So say we took a CSV file and we have five columns, right, and we used the first three columns to predict the fourth column with one model, and then we use...
A: ...yeah, then we use the first two columns to predict the last column with the other model, right. In that case, we need to be able to have, right, we need multiple predictions. And it looks like what we currently do here is we basically just jump through every single...
A: So we probably need to change that back, because all we really want to do is data.prediction, target... this is all good, this is all good, this is good, but yeah. So all we're really looking to add here is to make this data.prediction a dict, right, and that way, when we call the predict method, yeah, repo.predicted, we basically can just pass self.config.predict, right, or self.parent...
A: ...predict, yeah. So this is what I meant here, by "this is self.config": we need to change all the predict methods. Oh... oh, yes, okay, so this is the problem: they pass the feature name they are predicting. Okay, this is a badly worded sentence, sorry. What I meant by this is that we need to change the content, like, we need to change the...
A: ...to not use those arguments now, because they already have it, right: they have it in their config. But sorry, that was badly worded.
A: Okay, so I think that one's going to be awesome, because now we can, like, save multiple features; we can have multiple models running on the same thing without them overwriting each other. It'll be very helpful, because yeah, now you can take different feature data and run it through different models, right. So that'll be great. Let's see, and yeah. So, is there anything else, then, on...
A: Too many tabs, too many windows. All right, so: how to figure out the CSV. Oh, and I was going to show that here. Oh no, here, okay. So basically, what we do right now is we go through, and this feature_field_names is basically a list of... it's like the columns, right. feature_field_names becomes all the features that are going to be in there; the field names are the column names. And then it should be called column names.
A: Why is it not called column names? And yeah, this file needs help, as you guys have seen. So basically, what we could do here is basically just say, like, prediction field names, right, and then we could do prediction...

B: Then what would they be like? Prediction, underscore, confidence, underscore, and values like that? Yeah.
A: Exactly, so that prediction...

A: What we need to do here: we need to do prediction underscore and confidence underscore, so the best way to do that is, like...

A: See, I might be making this more complicated than it needs to be. I'm probably making it more complicated than it needs to be. Let's just do this: I want a lambda and map, okay.
A: So, let's see: prediction field names. So yeah, itertools.chain, and I'll map here. Map, I'm good.

A: Basically, we'll just prepend "prediction" and "confidence", just like you said. So, let's see, now the trick will be when we read them; I guess we just read anything that's prepended with "prediction" or "confidence". All right. Well, I could do this now, or... I can push. Let's...
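The itertools.chain-plus-map idea being typed out here can be sketched as follows. This is an illustrative reconstruction; the helper names and the example column `maintained` are assumptions, not the project's actual code.

```python
import itertools

def prediction_field_names(feature_names):
    """Build CSV column names by prepending prediction_ and confidence_
    to each predicted feature name, so multiple models' outputs do not
    collide in one file."""
    return list(itertools.chain(
        map(lambda name: "prediction_" + name, feature_names),
        map(lambda name: "confidence_" + name, feature_names),
    ))

def is_prediction_column(column):
    # When reading the CSV back, anything carrying one of these prefixes
    # is treated as prediction data rather than a plain feature column.
    return column.startswith(("prediction_", "confidence_"))
```

Reading is then the inverse trick mentioned above: strip the known prefix to recover the feature name the prediction belongs to.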
B: I just wanted to know... well, maybe we wanted to do it.

A: Yeah, so basically, if we do this, I wonder what will happen. So, you said this test is failing right now?

B: We have this one over here, yeah, but okay. So, let's see.
B: Do we need that? Of course, yes. Do we still need this? Do we need the third line like that? Oh yeah, we do.
A: Okay, let's see, yeah, let's just check that test case one more time here. Oh yeah, that's gonna do it. I forgot that this has subtests for every single file extension. Target name, assert equal, target, new value. All right, so this type of thing works, I guess. If you have time to mess with this, it looks like it is dumping...
B: Let's see, repo...

B: Parse headers, as we... okay, so, to do: change this, so...

B: Repo, feature-specific predictions. Good, so we just need to...
A: Function arguments, there we go. Yeah, that's how we should say that. And then we need to fix...

A: Some cleanup, but mostly fixed. All right, so, cool, cool. Let's see, anything else on your front, then?
A: Awesome. Let's see, and then we had some other people who jumped on and said they were going to add some models or something; people looking at GSoC stuff, hoping to do some proposals, yeah. And then, speaking of GSoC: so yeah, I talked to Terry with the Python org, and it sounds like we're good to go for this year.
A: I need to go and post some of the project ideas. Obviously, we've got a lot of stuff in there now; yeah, we've got a lot of stuff going on in there. So I don't know exactly what all the project ideas will be, you know, the sample ideas, but, of course, we're always open to whatever. But yeah, I'm gonna go update that wiki page soon. And then, yeah, so, did anybody else have anything they want to talk about?
B: Yeah, so I am working on that MNIST data set. Yes, so you tagged me in something, like, for the dffml base.py.

B: ...week, you said there was no time, so... oh.
A: Yeah, so this is... I tried to explain it with this comment here, but I was probably also not super clear; the problem is, this is kind of hard to explain. I'll take another stab at it here. But so, what we're looking at here is basically...
A
B
A
A
Yeah
I,
don't
know
why
GitHub
does
that?
Sometimes
it's
weird
okay,
this
is
well.
This
is
not
gonna
help
me
get
to
what
I
need
all
right.
Let's
see
so
basically,
okay,
I
didn't
explain
a
little
bit
while
I'm
pulling
it
up,
but
so
the
idea
with
this
was
that
we
could
create
that
long
term.
We
would
create
some
kind
of
source
that
takes
a
data
flow
and
using
that
data
flow.
A: ...does some arbitrary pre-processing on the data as you're feeding it into the model, right. And so, for everybody else who wasn't looped in on this, what we're doing is: saksham is working on the MNIST model, the handwritten digits, and that stuff is basically giant arrays of numbers from 0 to 255, and those represent pixels from completely white to completely black, like grayscale.
A: So yeah, you load those massive arrays that contain the grayscale values, and then, once you have those in, you train the model on that. Well, a lot of people who do this show in their example that they normalize every array: you basically scale those 0-to-255 values to zero to one, as a fraction, as a float, right.
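The normalization being described is just a rescale into [0, 1]; a minimal sketch (real pipelines would typically do this with NumPy over the whole array at once):

```python
def normalize(pixels, max_value=255):
    """Scale raw grayscale byte values (0..max_value) to floats in [0, 1],
    the usual pre-processing step for MNIST image arrays before training."""
    return [value / max_value for value in pixels]
```

This is exactly the kind of operation the wrapping source below is meant to run on each feature as the data is read.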
A: So what we need is the ability to sort of do this on the fly, and the idea is that we would provide a source which, as its config, takes another source. And this is where these nested configs are helpful. So we can basically say: this source wraps some other source, and when you get data from that other source, I want you to pre-process...
A: ...it. Like, I want you to take this feature that you're going to get for each record, for each repo, and I want you to pre-process that feature by running this operation on it, right. And so we're going to specify that the feature data itself should be one of the inputs, right. So we need a couple of arguments to this thing.
A: We basically need to define this dict mapping, where we can map arbitrary feature names, strings, to some sort of structure, right, which is like a regular config structure, like we have already all over the place. And that config structure would contain things like the name of the operation that we want to run for pre-processing, and the name of the operation input that the feature data itself should go to. And then it should also contain within itself...
A: ...another mapping, to grab all the other arbitrary inputs that that operation could have, and map those to values given on the command line, like hard-coded values, basically. So if we're doing this divide operation, right, this normalize operation for the image data, we would want to pass... this is where I want to write this down.
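Written down as data, the nested config being described might look roughly like the dict below. Every key and value here is hypothetical, a sketch of the shape being discussed (a pre-processing source wrapping an idx3 source and normalizing one feature), not an actual supported config.

```python
# Hypothetical config for a "preprocess" source that wraps another source
# and runs an operation on one feature as records are read.
preprocess_config = {
    "source": {                        # the wrapped source to actually read from
        "plugin": "idx3",
        "filename": "train-images-idx3-ubyte.gz",
        "feature": "image",
    },
    "features": {                      # feature name -> how to pre-process it
        "image": {
            "operation": "normalize",      # operation to run on the data
            "feature_input": "pixels",     # input the feature data feeds into
            "inputs": {"max_value": 255},  # other inputs: hard-coded values
        },
    },
}
```

The two mappings match the two arguments described above: which operation input receives the feature data itself, and where the remaining operation inputs get their hard-coded values.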
B: I trained the model with only, like, 100 rows, yeah, and also tested it with 100 rows. Then I made the prediction, and the output is a dictionary mapping, right? So it outputs, like, all the arrays, all the 784 array elements, on the command line. Oh yeah.
A: We don't have to; we could add a setting that basically says, like, don't output to the command line, right. We could implement something like output modes: we could do dump-to-CSV; we could just display the prediction and not display the input data, like, maybe just display the index and what the prediction for that index is. But yeah, right now it's obviously just very rudimentary.
A: It's like: okay, here's your input data; here's your output data, in case you forgot what it is. It includes your input data plus the prediction, right, because that made it easy to just serialize and deserialize that JSON. But yeah, obviously, it is not the most convenient thing if you have these giant arrays. So that could be something we could do here that would be good. Let me make an issue for that.
B
A
Let's see, all right. This would be in, you know, the CLI command... specify: add ability to output in different formats, and different amounts of data. All right, because that's what we're finding here: basically, that's way too much data. We don't care... we care what the index of that input is, and then, you know, what the label, or what the prediction, is. So right now, the output of the prediction is dumping...
A
It's just standard out. We should improve this to not dump the feature data if...
A
Yeah, so you could start on this if you wanted to, here. Because, what was it... we were stuck on something here. What were we currently on with this one? It's been a while.
A
Oh yeah, great. Oh yeah, and that's why we're talking about this. Sorry, I'm still getting back into the swing of things; I've been working on other things. Let's see... so, high priority, and this will probably be medium.
B
A
All right, okay. So in this... this is basically where we changed the command, where we'll be putting the pre-processing right in front of reading the data from the source, right? And so we'd say: images equals the preprocess source type, and then the images source, the source for it, right? This config property, that is the source, is idx3, so it knows it's going to load the idx3 source, and then you specify the file name for the idx3 source, and the feature.
A
So basically we just nest this one level in, so that the source loads its own source that it's actually going to pull from. And then we say, you know: source, images, and then this is the first-level dictionary, and in that first-level dictionary, whatever that is called... this might be something like... all right, make this a...
B
A
So this might be... this would be like features, features image, and then op is array normalize. So, features, image, and then, let's...
B
see... what is a good name for...
A
This input name... the one the feature is passed into. So this would be like array. So if we look at... and then we'll define array normalize here, so that that's clear too. So, input... let's see; data, that's very nondescript... let's see... feature; let's just call it feature.
A
So the feature goes to array, and then the inputs... and then we'd have something like... so inputs would be another dictionary, and the... let's see: divisor, or denominator.
A
So this is the config... for the pre-process config.
A
And then we'd have something like source, which is of type BaseSource, right? And then we have... so that, and this is where we're loading. So: source equals idx3, with file name equals train-images-idx3-ubyte, and feature equals image, right. Does that make sense?
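Putting the nesting together, the wrapped-source config being sketched on screen might look roughly like this in Python. The class and field names here are stand-ins, not DFFML's real classes; it only shows the one-level nesting where the preprocess source holds its own inner source plus the per-feature mapping.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

# Illustrative stand-ins for the nested-source idea: a preprocessing
# source holds the source it wraps (here the idx3 one) plus the
# per-feature operation mapping. All names are hypothetical.
@dataclass
class IDX3SourceConfig:
    filename: str
    feature: str

@dataclass
class PreprocessSourceConfig:
    source: Any                              # the wrapped source's config
    features: Dict[str, dict] = field(default_factory=dict)

config = PreprocessSourceConfig(
    source=IDX3SourceConfig(filename="train-images-idx3-ubyte", feature="image"),
    features={
        "image": {
            "op": "array_normalize",
            "feature": "array",
            "inputs": {"denominator": 255},
        },
    },
)
```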
A
This maps to this, right here. Let's make that abundantly clear here.
A
This would be the, you know, pre-process op, and...
A
Something like, you know, the operation. So op would be an Operation, and then, you know, it'll load the operation there. So this... actually, this should go up.
A
Move it up here too, so it's more clear. So this is Operation, right, and then this would be feature, which would be some string, and this is the... so this is the feature, or input name... let's see: the name of the operation input the feature is passed to.
A
We keyed off image, and these are the features, right? So this is this key here, right? So here's features, and here's... this is the features. So anything after this dash here, all this stuff goes in this dict, right? Okay, so yeah.
A
So image is the key, and then the values map into this config here. And so op becomes array normalize, which will get loaded as the actual operation, and then feature becomes array, and then we need another one of these dict mapping things. This is where I just stuck with Any, right. And so this one is going to be... let's see, this will end up being denominator: 255.
A
It's kind of hard to parse, right? And it will be hard to parse literally, when we have to write the parsing code. And this one is, so...
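To make the parsing difficulty concrete, here is a rough sketch of turning flat, dash-delimited command-line tokens into the nested per-feature dict described above. The token format is invented purely for illustration; DFFML's actual CLI syntax may differ.

```python
# Sketch of the parsing problem being discussed: everything after the dash
# for a feature has to be folded into one nested dict. The token format
# here ("key=value" pairs after the feature name) is invented.
def parse_feature_args(tokens):
    """tokens like ["image", "op=array_normalize", "feature=array",
    "denominator=255"] -> nested dict keyed by the feature name."""
    feature_name, rest = tokens[0], tokens[1:]
    entry = {"inputs": {}}
    for token in rest:
        key, _, value = token.partition("=")
        if key in ("op", "feature"):
            entry[key] = value
        else:
            # everything else is a hard-coded operation input
            entry["inputs"][key] = int(value) if value.isdigit() else value
    return {feature_name: entry}
```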
A
Oops, where are we? Okay. This is array normalize, and the inputs are...
A
Let's see. Well, this calls the entry point loading stuff, so this ends up being just like how idx3 gets loaded to be an actual plugin. You just say array normalize, and it looks up those entry points for the operations, and it loads the correct one. So, you know, you'll have the right data here. So... or normalize.
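The entry-point lookup described here, sketched with a plain dict standing in for real setuptools entry points so the example is self-contained. In the real project the names would come from the package's entry_points metadata (the group name and registry contents below are assumptions).

```python
# Toy stand-in for entry-point plugin resolution: a name like "idx3" or
# "array_normalize" appearing in a config value gets looked up in a
# registry and resolved to the thing that implements it. Real code would
# consult importlib.metadata.entry_points for a group such as
# "dffml.operation" (group name assumed) instead of this dict.
REGISTRY = {
    "idx3": "loads IDX3 image files",        # stand-ins for plugin classes,
    "array_normalize": "divides arrays",     # keyed by registered name
}

def load_plugin(name):
    """Resolve a plugin by its registered name, like the CLI does."""
    try:
        return REGISTRY[name]
    except KeyError:
        raise ValueError(f"no plugin registered under {name!r}") from None
```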
A
Yes, well... so this stuff won't... so this goes into these classes. Wait, why is it not giving me my comments? Okay, oh, because that's it: they're tabs, stray tabs above. So, these classes...
B
A
Then you need... so, the trick here is, you need to modify dffml/base.py to understand what to do, or to understand how to properly load and populate these structures when they are encountered.
A
Okay, so in here, some of this configuration code has been cleaned up, but basically, this is what happens for all the documentation stuff, right? To output all the documentation, like knowing what type everything is and all of that, we had these args and config methods, and then we replaced them with this auto args-and-config stuff. So when that happened, this got consolidated a little bit too, into basically these two functions. So this one says: okay, what are the things in this field, by parsing
A
that config object, right. And this one says: okay, given that config object and some value, some data type, what do I do with it, right? How do I turn that value that was given to me on the command line into an instance of whatever class, of whatever type, that argument is supposed to be? So that might do something like, you...
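The "turn the command-line string into an instance of the declared type" step might look roughly like this. DFFML's real conversion code is more involved; this sketch only shows the shape of the problem, and the special cases chosen (bool, list) are illustrative.

```python
# Sketch of converting a CLI-supplied string into the type a config field
# declares. Booleans need special handling because bool("False") is truthy;
# lists are split on commas here purely as an example convention.
def convert_value(value: str, target_type):
    """Turn a command-line string into an instance of target_type."""
    if target_type is bool:
        return value.lower() in ("1", "true", "yes")
    if target_type is list:
        return value.split(",")
    # int, float, str, etc. can be called directly on the string
    return target_type(value)
```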
B
know... where is it doing that? The type class, I think... wait.
A
Right, weird things may happen here, because this is why that unify-args-and-config issue is open: some of this code is in fromdict, some of it's in convert_value, and it all needs to be in the same place. So...
A
To do that, yeah, at the same time, because otherwise it starts running through these different code paths and things don't work right. But you will find out when you start to do this. So what I might suggest first is that we just get this working without pre-processing.
B
A
When... you know, that is exactly it. So basically, we'll get this into master, and then we'll add the normalize-the-array support, because this stuff where we're messing with the config code could take a while, and we don't want to... you know, we want to get that source into the main code base, because the source works, right? The source works; you've...
B
Oh no, my PC is running low. So yeah, the code is working, right? So I'll add that read-mode/write-mode stuff, okay, so that the bytes can be read for the idx source files, and then we'll open the new pull request for the pre-processing stuff, yeah.
A
Yeah, so let's... yeah, let's make a note of this. So: idx3 source gets merged, with the tutorial.
B
With poor accuracy, also... like, when we train and test the files, there is the... we don't have anything else for the prediction stuff.
B
What do you mean? Like, when we train on those 60,000 arrays, yeah?
A
B
That's what's making me very confused about this thing, right. Oh yeah, so...
B
A
I'm just thinking TIFF, BMP, 16-bit, yeah. We really only want...
B
A
Okay, yeah, I mean, it would be nice to be able to say: take a PNG of some image, and then, you know, run it through here as the prediction, right? And in that case, I think what you want to do is... we talked about adding support for the CSV file, the CSV source, to go through and look at column names; like, you provide a list of column names that it should load this data from, some... from...
A
God damn it. From... basically, if it sees something like a file name, it should go and load that file name and then parse it using something, right? And I think we missed that part; I don't think the issue captures the part that we need to parse it using something, right. And so we have these config loaders right now, and they already sort of assume binary data. So what you could do is, you could implement...
A
So, let's see... do the... okay. And where'd that issue go?
A
Okay, so let's modify this, and then we'll just capture it all in here. So, let's...
A
On prediction: complete the CSV source modification.
B
A
If you look at the config loader code... where was that, actually... and...
B
A
B
Sorry, what? We've managed the config file, but if you're talking about how we use it in the setup, we haven't done that. Okay, I'm guessing you are talking about...
B
A
This is perfect. So if we have this, this will be great. So this, or... well, this is, yeah... this is dirkoff, okay. And I think, yeah, we have load_file. Okay, great. So let me copy that syntax from your newer pull request; it's not merged, and then that's basically what we'll do here.
A
Oh geez, it's going to load like 4,000 lines as if they all changed, great. Okay, so again: we recently implemented this thing where basically you can use this ConfigLoaders class, and then you call the load_file method, and it'll just look at the file extension and determine the correct config loader to load. So if you make a config loader that knows how to read a BMP file, then you can parse that BMP file into this format.
A
That's the same array format that you've already got. And so if someone basically takes, like... if someone can get a BMP file that has their number in it, right, then you could... you could...
A
Actually, you might want to just do, like, a PNG file, because no one's going to be able to create a BMP file and then make it so that it's explicitly just that many bytes. So you might want to just make one that loads PNGs and then converts, like that. Let's see... I don't know if you could... I don't know; you'd need to figure out how to make a PNG that's only grayscale, right?
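A real implementation of the loader above would use an imaging library to decode the PNG; this sketch only shows the pixel math that matters afterwards: collapsing RGB triples to single grayscale bytes (ITU-R BT.601 luma weights) so they match the one-byte-per-pixel layout of the idx3 data.

```python
# Convert RGB pixels to grayscale using the BT.601 luma weights.
# In practice the (r, g, b) tuples would come from a decoded PNG; here
# the decoding step is assumed and only the conversion is shown.
def rgb_to_grayscale(pixels):
    """pixels: list of (r, g, b) tuples -> list of 0-255 grayscale ints."""
    return [round(0.299 * r + 0.587 * g + 0.114 * b) for r, g, b in pixels]
```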
A
B
See... let's see.
B
Yeah, you can complete the sentence; then I'll... okay, talk about that.
B
A
You need a picture of a handwritten digit in some file format, and then you need... let's see... and then you need to write a config loader, which... so, see configloader/yaml. Oh...