From YouTube: Weekly Sync 2020-05-08
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.phj5mf2gwqev
A: Finally, we got those in. Good job figuring that out. And what else... I don't know if we really had much going on since the other day. Oh, the links got changed, that was good; I finally fixed those. I think we might have already gone over this, but we've got the model plugins importing dynamically, and then we talked about scikit and how that's a pain in the ass, and then Vowpal Wabbit... sweet. Oh, and we got the new file search tutorial merged. That was great, nice job on that.
A: So thanks to them. And then, everyone else, do you kind of want to just roll through? I'll sort of give you guys... I don't know if I've really introduced myself much before, but we can just roll down the list here. I'll start, then: I'm John, I work at Intel, and I'm on the security team. Actually, I'm not on a machine-learning-focused team.
A: You guys may have seen the other projects that I've been told Terry, my coworker, has. We do some work like that, where we create tools and make security easier to do for project teams. And then we've also got some people who work on this thing called the Trusted Platform Module: we have a software stack that goes along with this hardware thing called the TPM, and that basically does all your cryptography for you, off the main CPU, so it can't get hit by side channels and stuff. So, yeah.
F: It's just some basic stuff, which is very early, kind of laying the groundwork; we just closed our projects. I'm part of some clubs, we're making an IT club, and I'm interested in computer vision, so I work on CV-based projects, and I'm also looking into integrating deep learning with hardware: quantizing deep learning networks. There's this other project which I usually work on... but hopefully these datasets.
I: Hi, it's me. So, I am an undergrad student in computer science from India. I only found out about the project when I was searching for GSoC, and then I started working from January 1st, I guess, in the new year. And yes, it was a good experience, and I have been learning a lot of things. I am primarily focused on machine learning and a bit of robotics, especially drones. So that's it.
J: Thank you, John. Hello, everyone. I'm a master's student in computer science. My background is engineering; I mostly used machine learning for applications in engineering, but I have never been a computer science major, a scientist, or a developer. So I was looking for a good open source project, and I found DFFML, and I'm so happy that I found a very good group, a very supportive group, and I hope to learn a lot of things from you. Thank you.
K: Hello, everyone, I'm Jim. I am a second-year information technology undergrad. In first year I tried my hand at web development, but it was very boring, so I shifted to data science and machine learning, and now I am trying my hand at computer vision. So I hope I make the best out of these three to four months of GSoC. Thank you, John.
G: So, hello everyone. Yeah, so I'm from Mumbai, India, and I'm a third-year undergraduate student, so I have got like one year more to go. My primary interests are probably in data science, because I actually started doing some data science courses from my first year, and I actually started contributing to open source previously, at Hacktoberfest.
D: Hi everyone, I'm from Chennai, Tamil Nadu, India. I got introduced to GSoC last year while I was a student at NIT Trichy; now I'm working as a technology associate at Morgan Stanley. GSoC has been a great journey for me, it was a wonderful experience, and you guys are doing an amazing job.
E: Hi guys, yes, I'm a second-year undergrad in Delhi, India, and I contributed to DFFML last year for the machine learning models, and it has been a really great experience. It was actually the starting point where I entered open source and began working on projects, and John has been such a great and patient mentor throughout Google Summer of Code, and I hope you guys learn a lot. Thanks. Yes, thank you.
A: It's like, "just please merge my pull requests", and then there's just a whole range of things, and then there's other people. Like, the kernel community is very strict and sort of snarky sometimes; they can be a little... I don't know if you guys have ever read some of the Linux kernel mailing list stuff, but yeah, they can get into a bit heated exchanges with each other.
A: You know, "add this feature to this thing", and then you go and you try, and you see, even if you don't succeed, you get a feel for what the process is like in this community, because then it just gets easier as you go through things. You see bugs and stuff, or you see documentation issues, and you just get familiar with it.
A: It takes a little bit; at least for me, it took a little bit to get comfortable with, like, "oh, should I go and post this change? They probably know what they're doing more than me; it's their project." But sometimes, you know, it's like, "oh, I didn't see that, thanks for fixing that." And so you get the whole range of responses, and a submission process, and everything. Open source is fun.
A: I love that there's just people all over the world writing software that's free. I think it's great. All right, well, let's get down to business! I just wanted to make sure everybody knows each other, because it's been really nice to have this. I love this community that we have, and I think you guys are all really great, and you work hard, and we all work hard, and we're making cool stuff.
A: I think it's especially nice now that we're all sort of remote, to have this. It's fun to have this community. So thank you for joining me on this wild ride. All right, so we talked about merged stuff. The new file search tutorial merged; let's just view that for a second, because it looks sweet. Oh god, that's zoomed in. And then: tutorials, sources, and we've got the simple source for new files, new file types. Sweet, yeah.
A: So this goes over how to write... so Sutanu went through and wrote a source that handles INI files, .ini files. You probably know those from various places, anything in /etc; lots of those config files from various daemons are written in INI format. So basically he goes through and shows us how to write the file source, because this is a common thing that we've run into before, with like the IDX source and stuff, where we're like, "oh, okay, we got a new data type."
A: How do we write a source for it? It's a common thing that comes up, so it's nice that we have this tutorial now. So yeah, this is great, nice job with that. And then we've got, of course, the Vowpal Wabbit models, and that'll be great, because that's the other thing I wanted to do. I think that this right now uses the Python API, right? Am I correct in that, not the command line? It doesn't call out to the command-line version.
A: So yeah, basically what we'll be able to do is take this and then run the command-line client, because it takes the same format, right? Yeah, okay, sweet. So we've talked about asyncio and the event loop and stuff before. Oh, and that brings me to another thing; let me just pull up YouTube. We've talked about asyncio and the event loop, and you know how everything in there runs in the asyncio event loop. Let's see, and I'm gonna grab this.
A: Plus, I want to share this playlist. Okay, so I put together this playlist, and I was thinking I could put a few playlists together on the YouTube channel about how to learn, or, you know, some learning resources for various things that might be helpful in conjunction with DFFML, or just machine learning in general. So if you guys have links to things, let me know and we'll add them to that.
A: So yeah, this is just a little video; I think it says "advanced asyncio", but I feel it actually does a pretty good job of sort of giving you bite-sized pieces. And so basically, what happens is that since we're running within this event loop, it's all within one thread; we don't have any code right now that calls out and creates other processes or threads.
A: So, whenever you see "async def", that's a coroutine function. That means the function might be doing some kind of asynchronous I/O operation; it might be reading from or writing to some kind of network socket, and at some point, at whatever point that is, it's going to pause execution there, and after it finishes the write or read, it's going to go back into this loop.
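The pause-and-resume behavior described here can be sketched in a few lines. The coroutine names below are made up for illustration, not taken from DFFML:

```python
import asyncio

# Hypothetical names for illustration: neither fetch_record nor main
# exists in DFFML. This only demonstrates how an `async def` coroutine
# yields control back to the event loop at each await point.
order = []

async def fetch_record(name, delay):
    order.append(f"start {name}")
    # asyncio.sleep stands in for a network read/write; this coroutine
    # pauses here and the loop runs other coroutines in the meantime.
    await asyncio.sleep(delay)
    order.append(f"end {name}")

async def main():
    # Both coroutines run within the single event loop thread,
    # interleaving at their await points.
    await asyncio.gather(fetch_record("a", 0.02), fetch_record("b", 0.01))

asyncio.run(main())
```

Note that "b" finishes before "a" even though it started second, because "a" is paused at its await while "b"'s shorter sleep completes.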
A: If I only have one thread, and one of those coroutines went and pulled a bunch of data from the database and then ran a machine learning model, like a TensorFlow model, right now, for example, it's going to lock up that thread, so you're not going to be able to go and grab other sockets and do other I/O operations while that thing is happening. And so what's interesting... well, we have ways; the nice thing about...
A: ...what we've done is that everything is serializable, so we can take everything and serialize it into a config structure, which means that it's very easy to pass that config structure into a new process, and then we can run a model within another process. It's something we'll have to do eventually. And when you use the subprocess module, for example, that's obviously calling out and creating a new process, and you are returned these StreamReader and StreamWriter objects, which are basically pipes.
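A minimal sketch of the subprocess behavior being described, using only the standard library; the child command here is just a stand-in:

```python
import asyncio
import sys

# Sketch: asyncio's subprocess API creates a new process and hands
# back StreamReader/StreamWriter objects wrapping its pipes.
async def run_child():
    proc = await asyncio.create_subprocess_exec(
        sys.executable, "-c", "print(input().upper())",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    # proc.stdin is a StreamWriter, proc.stdout is a StreamReader
    proc.stdin.write(b"hello\n")
    await proc.stdin.drain()
    proc.stdin.close()
    line = await proc.stdout.readline()
    await proc.wait()
    return line.decode().strip()

result = asyncio.run(run_child())
```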
A: We stream the data in from, you know, the database or wherever, and then within a separate process we'll do the machine learning, so that we're not locking up the main thread that we're using, and then we stream the data back in. And so that's sort of the ideal flow that we'll get to eventually. So basically, what we need to do is set up some stuff for that, like, for example, with the model predict operation.
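As an interim sketch of keeping the loop responsive, here the blocking call is handed to an executor. All names are assumptions for illustration (slow_predict stands in for something like a TensorFlow model's predict), and this uses a thread pool, whereas the longer-term plan described above is a separate process fed a serialized config:

```python
import asyncio
import time

# Hypothetical stand-in for a blocking model predict call. Run
# directly inside a coroutine it would stall the whole event loop;
# handed to an executor it runs off the loop's thread.
def slow_predict(record):
    time.sleep(0.05)  # stands in for CPU-heavy model inference
    return {"prediction": record["value"] * 2}

async def predict_without_blocking(record):
    loop = asyncio.get_running_loop()
    # None selects the default thread pool executor
    return await loop.run_in_executor(None, slow_predict, record)

result = asyncio.run(predict_without_blocking({"value": 21}))
```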
A: Yeah, so I'm gonna give you a few more comments on this, and then we'll take it from there. But this looks great; I'm very excited about this. I was just going on, and then we'll all run through this so that everybody can see, once we merge it, because this is very cool. Basically, what we're doing... yeah, yeah.
A: And it'll be good to get everybody's feedback on that. But basically, what's been done here is, you know, so we have these data flows, and a data flow might include using a machine learning model for prediction or something. But this is a basic example of something that we've had to do all the time with the documentation, which is: take a little video recording and turn it into a GIF so that we can put it on the documentation website. And so this is basically showing, okay...
A: That's what this is right here. It's just this little wrapper function that says: I'm gonna run ffmpeg, and I'm gonna run it with these arguments, right? And so we write this file, and then, well, we want to make this accessible. And so what we're doing is we're standing up somewhere... we make a git repo for it, commit it, and we push it to the git repo, and then, on maybe a server that we have, we go and we deploy this.
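A rough sketch of what such a wrapper might look like; the function names and the exact ffmpeg arguments here are illustrative assumptions, not the actual operation from the pull request:

```python
import subprocess

# Hypothetical wrapper around ffmpeg for video-to-GIF conversion.
def ffmpeg_to_gif_cmd(input_path, output_path, fps=10, width=480):
    # Build the ffmpeg invocation; the scale filter's -1 keeps the
    # aspect ratio while fixing the width.
    return [
        "ffmpeg", "-i", input_path,
        "-vf", f"fps={fps},scale={width}:-1",
        output_path,
    ]

def convert(input_path, output_path):
    # check=True raises CalledProcessError if ffmpeg exits non-zero
    subprocess.run(ffmpeg_to_gif_cmd(input_path, output_path), check=True)

cmd = ffmpeg_to_gif_cmd("demo.mp4", "demo.gif")
```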
A: We deploy an HTTP... we use the HTTP API to have another data flow that responds to a webhook. And a webhook is basically this giant blob of data that GitHub will send to an arbitrary URL. You can put it in the settings of your repository, and then you can specify which events; like, if it gets a push event to the master branch of the repo, it can send this webhook over, and the webhook will say, "hey, something happened," and then notify this service.
A: So basically, now we have this other data flow, which sits behind an HTTP API, and whenever you push to your GitHub repo, GitHub sends this webhook to this webhook service, which we're running, like, in another terminal. And the webhook service will say, "oh, let me pull down the latest version of that repository, rebuild the container that it's running in, and redeploy the container." So what you end up with is: basically, you make changes to these operations in these files...
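The webhook service's decision logic can be sketched like this. The payload fields match GitHub's push event shape, but the function names and the injected redeploy action are hypothetical, for illustration only:

```python
import json

# Hypothetical sketch: GitHub POSTs a JSON blob for each subscribed
# event; on a push to master we would pull, rebuild the container,
# and redeploy it.
def should_redeploy(event, payload):
    return event == "push" and payload.get("ref") == "refs/heads/master"

def handle_webhook(event, body, redeploy):
    payload = json.loads(body)
    if should_redeploy(event, payload):
        # In the real service this pulls the repo and rebuilds and
        # redeploys the container; here the action is injected.
        redeploy(payload["repository"]["clone_url"])
        return "redeployed"
    return "ignored"

deployed = []
status = handle_webhook(
    "push",
    json.dumps({"ref": "refs/heads/master",
                "repository": {"clone_url": "https://example.com/repo.git"}}),
    deployed.append,
)
```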
A: It should have "git repo spec", yeah, okay. So, when you assign... it used to be that when you assigned something to spec, it would show up like this, so you could see what the parameters of this dict are, and that's basically what the spec is. And now we're actually enforcing the conversion. And the thing is, now that we're enforcing the conversion, that spec is wrong; we shouldn't have the spec in there. And so we removed the spec from the code; we just didn't regenerate the YAML and remove it from there. So that's what's happening there.
A: So this is why we need to do some work to automate the testing of the tutorials, and I'm not sure exactly what that will all entail. But that needs to happen at some point, because, you know, it's a lot of work to revalidate the tutorials, and we really need to make sure that they get validated on every run of the CI.
A: But let's see... okay, so: issues with the decoration usage example, okay. And then scikit... so you've got the test case here. How is this going?
A: So what this says is, basically, just like you did in the test case above, we're going to take each feature name and map it to whatever the output of this function, this operation, was. So if the "years" feature went into this operation and it was a value of two, then it would get multiplied by ten, and so it would be 20. So we would end up with years: 20 in the output here.
A: So if you have anything that has the same input definition, anything that has the same output definition, it'll tell the one operation that has an input matching the output of the other one that it needs to be connected, right? And so, with this in particular, the output operations sort of... their input is the spec object, right? So that's why we say, you know, .op.inputs, spec, because we're gonna create this new input that's going to be used in the associated definition of the output operation.
A: So this has to do with this flow dictionary here. We have a YAML output now, so basically, the way that the flow works is: for each operation, you go through its inputs and you say, where is that input allowed to come from? And that's how you define the connections between them: you go through each operation that you want in your network, and you say where...
A: Okay, what are the inputs for this operation, and where is each input allowed to come from? And so in this example, let's see, we've got the accept-user-input operation, and then we're going to do a literal eval on it, we're going to create a dictionary, and then we're gonna feed that dictionary through model predict, and then we're gonna print the output. So the first thing that's going to happen is input...
A
Definitely
definition,
and
so
what
we're
saying
here
is
for
literal
eval
input.
The
inputs
to
this
function
are
there's
only
one
and
it's
called
stir
to
eval
and
you're
allowed
to
get
it
from.
This
is
an
array
of
places
that
you're
allowed
to
get
it
from
and
the
index
the
only
index
in
this
array
is
a
mapping
of
get
user
input
to
input
data
saying
that
this
operation
we're
going
to
get
it
from
this
output
in
this
operation
and
then
we
go
through,
and
we
do
that
for
everything
else
so
like
when
we're
creating
this.
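The per-input origin lists being walked through here can be sketched as a plain dictionary. All of the operation and input names below are illustrative stand-ins, not the exact identifiers in the repo:

```python
# Hypothetical sketch of a dataflow's "flow" structure: for each
# operation, each input name maps to an array of allowed origins.
flow = {
    "literal_eval": {
        # the single input may only come from get_user_input's
        # input_data output
        "str_to_eval": [{"get_user_input": "input_data"}],
    },
    "model_predict": {
        "features": [{"create_feature_map": "mapping"}],
    },
}

# Reading one connection back out of the flow: the first allowed
# origin of literal_eval's only input.
origin = flow["literal_eval"]["str_to_eval"][0]
```

Walking every operation's inputs this way is what defines the edges of the network.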
A: We're then going to take that and make it the key in... or, we're going to make it the value in that dictionary. So we use this create-mapping, create feature map, operation, which is the same as dffml.mapping.create, and we say, okay, well, what are the inputs, right? So, well, the key is allowed to come from the seed, so seed.years, and so we say seed.years.
A: Right, yeah. Let me know how it goes, and if you get really stuck, then, you know, just give me a ping and I'll come take a look at it. I've got a very full plate right now, otherwise I would go take a look at it, but it would be good for you to, you know, understand that code.
A: So, the substitution... so basically, yeah, I don't know; this probably isn't the clearest way to do this. I should probably change this, because basically the reason we're putting them in this file is just to sort of have a nice clean list of them, but we should probably just separate it with backslashes and do it all here, because what this does is the substitution.
A: Now, where the hell is it? Well, it's way the hell down here. Okay, so you register the operations, and this is the same way it works for models and for everything, because everything's a plugin, and this is basically how the plugin system works. So, under this entry points section of the setup.py, under dffml.operation, we just list out all the operations that this package provides, and we provide...
A: You know, we basically say: this thing will map to... and then you give the Python path, and then, with a colon, the thing in the file. So you do the, you know, module .py path, and then the function name within that .py file. And so that's how it knows, when you say, you know, "git clone repo", what it's doing is it's looking at...
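The registration format being described can be sketched like this; the section name follows what the speaker says, but the package, module, and operation names are made-up examples:

```python
# Hypothetical sketch of the entry_points section that would go in a
# plugin's setup.py. The key before "=" is the name the plugin is
# looked up by; the value is "python.module.path:object_in_that_file".
entry_points = {
    "dffml.operation": [
        "clone_git_repo = my_operations.git:clone_git_repo",
    ],
}

def parse_entry_point(line):
    # Mirrors how a loader splits an entry point declaration into the
    # public name, the module path, and the attribute within it.
    name, target = (part.strip() for part in line.split("="))
    module_path, attr = target.split(":")
    return name, module_path, attr

parsed = parse_entry_point(entry_points["dffml.operation"][0])
```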
A: Basically, from the end of this file back up to about here: these two for loops. They could be put in a function, and then we could just sort of call the function and say, you know, "populate my namespace". Let's see, actually, okay, what do they do? Yeah, they do setattr, sys.modules, name. So sys.modules[name] is basically, you know, whatever this file is.
A: So this would be: if the file is test_doctest, then its parent is tests, and that parent is tensorflow hub, slash, and then I believe it is just model/tensorflow hub, right? So you're basically gonna say, look in there. You won't have a "skip", I don't think, because skip is really only useful for scikit, I feel like, if I remember correctly. And then you'll say, you know, tensorflow hub is the package name. And so if we do... now, this is our function call, or like...
A: Basically, our file here... this is pretty much almost the entire contents of this test_doctest file: there's going to be the import, "from dffml import" the make-doctests helper, yeah, "mk doctest" is probably better, and then you'll just, you know, run this. This is basically the whole file here, and it then populates the global namespace of this file, this test_doctest file.
A: Basically, what we're doing is we pass this as the module, and then this becomes "module", and then it goes through: it creates test cases by reading all the files in this directory that we passed as the root, and then it adds the test cases to the global namespace of this file, so that when unittest comes through, it sees these test cases and it runs them. And so basically, what we need to do is, if we refactor this out, then yeah, we can add...
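The "generate test cases and install them into the module's namespace" trick can be sketched like this. The helper name and the example discovery are hypothetical; the real helper would read example files from a directory, and the real file would pass its own globals():

```python
import unittest

# Hypothetical sketch of the helper being described: generate a
# TestCase per discovered example and install it into the calling
# module's global namespace so unittest discovery finds and runs it.
def populate_doctest_cases(namespace, examples):
    for name, func in examples.items():
        case = type(
            f"TestDocstring{name.title()}",
            (unittest.TestCase,),
            # default arg binds func per iteration
            {"test_example": lambda self, func=func: func()},
        )
        # equivalent in effect to setattr(sys.modules[__name__], ...)
        namespace[case.__name__] = case

examples = {"usage": lambda: None}
namespace = {}  # a real test file would pass globals() here
populate_doctest_cases(namespace, examples)
```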
A: ...basically a file just like this to each plugin that we have, and the plugins will now get the benefit of having their doctests run too. Because right now, like you've seen, we have to do the clunky way of creating the .sh files and the .py files and then running them; we have, like, a separate test harness that we're basically copy-pasting around.
A: But then, this is... it's like, we could... so basically, we could do this. I wouldn't concern yourself too much with it. So you could do this, but it's still going to be sort of an exercise in... we need to figure out what is the best way to do this, because right now we have the way that we do this.
A: It doesn't... okay, there we go. Yeah, it does: it does "directory with CSV files". So that's, I guess, a problem with this: maybe that would end up in the wrapper, and then we have two different places. The thing is... so this is basically the problem that we're faced with right now; it's the same sort of problem as with the tutorials. There's not a good way... like, there's a good way to test Python code, right?
A
That
is
in
examples,
is
with
the
doc
test
stuff
that
we
have
the
built-in
doc
test
stuff
and
then
the
wrappers
around
it,
but
there's
not
a
good
way
to
test
the
conjunction
with
the
command
line
commands.
So
you
know
like
okay,
well
run
this.
You
know
run
this
command
on
the
command
line
to
cat
the
dataset
CSV
to
create
that
file,
and
then
on
this
other
command.
A: Sorry, the code block things. And it says: okay, if it's console, run the console command, like, probably in a container or something, right, and we just go through and run everything and we check that the output's correct, right? But that involves writing this whole Sphinx plugin, probably, so that's kind of a mess. So where we're at right now is: we've got this.
A
What
I'm
trying
to
say
is
in
this
particular
case
the
amount
of
IND
code
that
you're
going
to
end
up,
adding
to
get
the
example
working
is
basically
like
net
the
same
because
you
still
have
to
write
the
wrapper
code
to
say
you
still
have
to
do
this
thing
where
you
say
directory
with
CSV
files,
you're
just
going
to
do
it
in
a
different
file
up
in
this
file
now
and
so
like
to
some
extent,
that's
not
cleaner,
but
some
extent
yeah.
To
some
extent
it
may
just
not
be
cleaner.
A
That's
that's
one
place
where,
if
we
had
this
abstracted
doc
test
library
within
util
testing,
then
now
we
just
you
know
pop
those
three
lines.
Those
three
lines
here
in
to
you
know
test
slash
test
doc,
test
under
feature
get
or
something
and
all
of
a
sudden.
All
of
the
get
features
are
getting
there
they're
there
examples
tested
right,
and
so
that's
sorry
that
was
sort
of
a
long.
It
was
a
very
long-winded
example,
but
what
I
was
trying
to
say
is
that
that
6:19,
like
has
value
long
term
in
other
places.
A
This
isn't
actually
particularly
one
of
them
like
we're
gonna,
do
it
and
you
could
do
it
either
way,
so
you
may
just
want
to
stick
with
the
existing
way
that
we
were
doing
things,
because
that
might
be
cleaner
implementation
for
this.
For
this
case,
since
you've
already
got,
you
can
copy
that
scaffold
of
like
create
the
SH
files
or
create
the
create
use,
the
SH
files
to
create
the
CSV
files
and
then
run
the
test,
and
you
can
put
it
all
in
one
file,
so
that
might
be
the
way
to
go
there
but
long
term.
A: Cool, yeah, I think that'll be good. Yeah, that'll be sweet. Yeah, we've got some really nice doctest stuff going. So thanks for... yeah, thank you. Thank you for contributing so many of those doctests; that's really beefing it up. And I go on to the documentation website every so often... well, okay, pretty much every day, multiple times a day. I just go look around and I'm like, okay, does it look right?
A: You may notice that, because I noticed where you wrote all those doctests, but I don't know if all of them had been in... we didn't have every single page, as in, we didn't have every single file under the API docs. And so I went through and scripted the API doc generation, and so now everything shows up. So it's good that we have everything now, so you can actually see all the stuff that's being doctested, and you've written doctests for some, yeah. I noticed that where I need...
A: ...source, or source, source, source... yeah, it doesn't look the prettiest, but it's all there. At least... there's some beautification that could be done, but at least we have all the examples now; we're sure that we're getting them all in the documentation site. Sweet. All right, anything else?
A: What would actually be a good thing to do is to add a check for this... oops... but yeah, this file... I like what they've done with this format, but I also feel like it ends up sort of as a mess. But these are my personal... I don't know if you guys... you guys have probably noticed, but everything with me is like: it has to look...
A: It has to look clean, yeah. I'm very obsessed; everything needs to look clean, because, you know, clean implementations... it's all got to be right. Yeah, I'm a neat freak when it comes to the code. But with my desk, yeah, not so much. My desk is a mess, but, you know, what's in the desk is neat. And so, all right...
A: What we want to do here is... basically, I just realized, actually, how completely nonsensical this list of issues is. It definitely requires more explanation, I'm sorry. Okay, so we have this sub-spec... let me just go... so, we did this thing where we're auto-creating the definitions, but only for primitive data types, right? So we also have this thing called a sub-spec, and this might be a good time to see if this is more clearly formatted in the API reference.
A
This
is
what
that's
completely
unhelpful
okay,
so
why
isn't
picking
these
up
whatever?
Okay?
So,
basically,
within
a
definition,
we've
got
the
primitive
right,
and
so
what
we'll
have
here
is
we
will
have
the
primitive.
So
currently
we
have
that
list
of
primitive
types
that
we
support
right.
But
for
this
one,
what
we're
going
to
do
is
we
say
if
we
see
an
object,
that's
a
spec,
so
in
a
spec
is
any
kind
of
name
tupple
or
data
class.
A
Okay,
so
we're
gonna,
we
look
at
it.
We
see
that
it's
a
that,
it's
a
name
tuple
or
data
class,
and
then
we
say:
okay
create
a
new
definition.
The
primitive
is
going
to
be
mapping
because
we're
mapping
it's
a
key/value
pair,
mappings,
right
or
I
think
it
might
be
map.
All
of
this
stuff
needs
to
be
validated
and
standardized
at
some
point,
so
we
need
to
go
through
and
we'll
assign
this
to
the
spec
object
of
the
new
definition.
A: When we create it, we'll make the primitive "map", and then we'll say... actually, I think that's it, yeah. That's all we're gonna do: basically, if you see that, then create the definition appropriately and just assign the annotation as the spec. Now, the next thing is... the issue's original title is basically "do it for sub-spec", but this makes sense to do first, because if we haven't done spec, it's harder to do sub-spec. But sub-spec is basically the same thing...
A
Only
a
sub
spec
is,
and
that's
sort
of
what
this
is
trying
to
show
is
this
code
here
is
not
tested,
and
if
we
implement
this
stuff,
then
it'll
get
tested
there
we'll
need
to
implement
test
to
know
get
tested,
but
the
sub
spec
is
the
same
thing
as
the
spec,
but
it's
basically
okay,
I
have
a
list
of
objects
that
are
going
to
be
of
this
type
right.
So
if
I
say
that
this
definition,
I
set
sub,
spec
and
I
set
spec
to
some
object
and
then
I
set
sub
spec.
A
So
type
name
couple:
it's
got,
you
know
a
fieldname
string
and
age
int,
and
so
now
what
we
want
to
do
is
we
want
to
say:
okay.
Well,
we've
got
this
definition.
A
Spec
is
so.
This
is
like
what
you
would
be
programmatically
doing
if
you
saw
my
data
in
the
argument
of,
or
let's
just
say
so
like
this
is
what
the
operation
would
look
like
so
death
process,
my
data
and
it's
going
to
say
data,
my
data
right,
so
this
is
you're
gonna,
see
it
in
the
annotation
when
you
have
it
here,.
A: Right, yeah. So this one and this one would probably be good next targets for you, and then the other one is to create... if you see the result... so, if you've got a function here... where'd you go... yeah. So if you've got this function, and you see it's got a return type annotation, then you can basically say: okay, if I see an annotation on the function, and there's only, like, one value, it's not a tuple or something, then, well, I...
A
Guess
as
long
as
the
value,
the
return
type
is
not
a
dict,
you
can
go
through.
You
can
just
say
result
so
and
actually
you
could
even
do
it.
You
could
even
do
it
where,
if
you
see
a
if
you
see
something
that's
a
name
tuple
or
a
data
class,
then
you
extrapolate
that
the
typing
information
from
that
you
could
even
do
that
too.
If
you
wanted
to
get
fancy,
which
actually
would
be
really
great,
so
basically
sort
of
like
what
we
did
here,
but
the
reverse
right.
A
If
you
see
one
of
these
like,
if
you
see
a
name
topple
like
my
data
and
the
annotation,
then
go
through
and
instead
of
like
with
this
one,
it's
just,
we
just
have
one
type.
So,
okay,
we're
just
gonna,
say,
result
process
my
data,
you
know,
output
and
then
the
only
output,
it's
going
to
be
a
result,
and
then
you
just
make
a
new
definition
for
it
like
we
were
doing.
A
But
if
you
see
a
name
tupple
here,
you
could
go
and
for
every
key
and
then
the
value
and
the
keys
of
name
temple
like
name
and
age.
You
can
go
create
a
definition
with
the
appropriate,
primitive
there
and
you
could
even
you
know,
recurse
into
that
and
say:
okay,
if
I
see
it,
if
I
see
one
of
these
you'd
only
recurse
one
level
down
or
you
only
go
one
level
down,
though
and
you'd
say
okay,
this
you
know.
A
If
I
have
my
data
as
my
output
here
and
I
see,
name
and
name
is
actually
a
name
couple.
Okay,
then
I'm
gonna
go
through
and
I'm
gonna
create
a
definition
where
this
is
actually
a
sub
spec
or
you
not
not
a
sub
spec
you're
gonna
go
through
and
say
this
is
the
spec,
for
that
name
definition
is
whatever
the
name
couple
would
be.
That
is
in
place
of
string
in
this
situation.
A: Just, like, give me a little synopsis of what's changed, or where everything is at, because I've just been super swamped lately, and that really helps me review your pull requests faster. So, cool. All right, thanks, everyone. I'll post this video, because I remembered to record it this time, and I'll talk to you guys on Tuesday. Have a great weekend. Yeah.