From YouTube: Weekly Sync 2021-02-16
Description
Meeting Minutes: https://docs.google.com/document/d/16u9Tev3O0CcUDe2nfikHmrO3Xnd4ASJ45myFgQLpvzM/edit#heading=h.clt7mtjpsukc
B: My screen — damn, thank you, you always help me with that. Oh my gosh, wow, all right, okay, geez! I need a persistent reminder that I'm not sharing my screen in the meeting. Okay, all right! So this is the question. Damn it, I didn't... there we go, now I should be sharing.
B: Okay, so this is our weekly sync meeting, and now we're sharing and we're recording, and we have this question from Alex here, who says: "Can anyone help me figure out if something is possible? I want to view the attributes of a trained scikit model, for example the coefficient attribute of the linear regression model." I'll pop open the docs here so you can see that. Which is this: he couldn't figure out a way, given how he'd trained the model.
B: The scikit model's properties are always a good thing to use when you're trying to figure that stuff out; however, it doesn't seem to return a fitted model. Is there any way of doing this? All right, let's just throw together a quick example.
B: Okay, so — quick updates first, actually.
B: Just because there are a few things that have changed. Let's get this — so you guys saw this list of stuff that I put in here. This stuff changed over the weekend. Nothing really major; the only major thing that's happened is that we finished that transition. We had somebody — I can't remember the guy's name, and he hasn't shown up in our meeting yet, but he has been helping us out. He did the initial move from setup.py to requirements.txt, and then he helped us move again from requirements.txt to setup.cfg. So we've completed that setup.cfg move, and now, when you do service dev create, it creates a package with a setup.cfg file — so there's a whole thing there.
B: Let me see, where's that issue? I think we've talked about this before, but this is sort of interesting from a packaging perspective. A lot of what we do is packaging, and a lot of what anyone does with Python is packaging — you can package your stuff, so it's good to know about. There's an issue, we'll find it in a minute. We also updated a bunch of libraries: TensorFlow, spaCy, auto-sklearn. spaCy had an API-breaking change because they released a new major version number.
B: There were some edits there. Let's see — oh, we got retry, so I had this patch sitting in this other branch to add retry to operations, and you guys know how that goes: the CI test always fails because of that stupid npm audit endpoint. And — oh, what's up?

B: No, it works on the other computer. Great, all right, fantastic, okay, anyways — it's not important. There's an issue, we'll find it. The reason I started saying this is because you see all this "delete model transformers" stuff.
B: Because we have to test all the models in CI, transformers had an issue with the updating of the APIs: when we upgraded TensorFlow from 2.3 to 2.4, transformers needed to get upgraded too — that's more NLP stuff, if I remember correctly — but we didn't have time to do that, because we want to try to get the release out, and there are no tutorials depending on it.
B: So basically we split it out into its own repo. We will bring it back into the main core tree once we get it updated, but obviously we can't have stuff that doesn't work in there, and we needed to update TensorFlow.
B: So anyways, that's been moved, and you can check the commit message for where it's been moved to. Let's see — you can see it if we do git log and then search for "model transformers". Oops.
B: And you'll see that we've moved it over here — I've moved it to that org. All right, okay. So let's just dive into this question, because I think this is a good thing we can cover real quick here.
B: Just to recap, the question is: how do we get the properties out of the scikit model — specifically the coefficient? And the answer is in the scikit model code — model/scikit, scikit_base.
B: Linear — good. I don't know how much — okay, it's because of the context. That's — yeah, okay, that makes sense, because we're using the context pattern here, and I don't think we need to use the context pattern. Let's see — yeah, so you'd have to do it that way.
B: So I guess — well, we aren't exporting this stuff, so this isn't really — all right. My thought process here — I'm not explaining my thought process. Okay, so, all right: we have this high-level API, and we have the double context entry pattern, right? So this is a model context that actually has this attribute, clf, and so we are exposing, via the high-level API, the ability to pass something that doesn't have a context.
B: We don't need to do that — async with model as model context — and now we've got an open model, right? So: load the model from disk, if it
B
Exists
and
so
because
between
those
calls
we're
not
we're
we're
saving
the
model,
basically
between
these
train
load,
high
level
ones
so,
and
so
then,
in
here
we
would
be
able
to
access
the
context,
because
now
we've
loaded
it
to
disk.
So
it
gets
serialized
here
and
it
stays
unloaded
and
then
it
now
it's
loaded
against.
B: Noise — oh, that may be a good thing to think about; we should not be demoing that. All right, so there's the coefficient — great, so we can send this back to the guy.
B: All right, great. Everybody see what we did there?
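What was just demonstrated can be sketched roughly as follows. This is a hypothetical stand-in, not the project's real API: the Model, ModelContext, and clf names are assumptions, with FittedEstimator standing in for the fitted scikit-learn estimator whose coef_ attribute Alex wanted.

```python
import asyncio

class FittedEstimator:
    """Stand-in for a fitted scikit-learn LinearRegression."""
    def __init__(self, coef):
        self.coef_ = coef

class ModelContext:
    def __init__(self, parent):
        self.parent = parent
        self.clf = None

    async def __aenter__(self):
        # "Load the model from disk, if it exists" would happen here.
        self.clf = FittedEstimator(coef=[2.0, 0.5])
        return self

    async def __aexit__(self, *exc):
        # The model would be serialized back to disk here.
        pass

class Model:
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        pass

    def __call__(self):
        return ModelContext(parent=self)

async def main():
    # Double context entry: open the model, then open a context on it,
    # and read the coefficient off the fitted estimator inside.
    async with Model() as model:
        async with model() as ctx:
            return ctx.clf.coef_

print(asyncio.run(main()))  # → [2.0, 0.5]
```

The point is that the coefficient lives on the context's clf attribute, so the caller has to enter the context to reach it.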
B: Does anybody have — why do we have that double context pattern? Okay, well, it's because we want — and remember how we talked about that location thing earlier, right, as one of the projects? Making sure that we preserve that double context entry pattern everywhere is mainly for consistency throughout the code base. Thank you. And that will enable —
B: It will enable us to do better — so we have this project, let me just explain that, because this is a good example of why. GitHub loaded? So: this possible GSoC project related to supporting zip and tar archives with dataflows and models. Basically we have this directory property on everything right now, which we used to sort of auto-generate —
B: — you know, what that default directory name might be. We can change all of that to location, and then we can write some abstraction around it: okay, location — what if the location passed is not a directory? What if it's a zip file? Now we can sort of auto-extract that and make it usable to the model, and in this way we provide something more portable.
B: You can sort of just tar or zip up your model directories and then send them between machines — or you don't even have to do that: you just put .zip and now your model is going to be stored in a zip file, and it's easier to transport.
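The "location might be a zip file" idea can be sketched with the standard library. The resolve_location helper name and its behavior are assumptions for illustration, not the project's actual abstraction:

```python
import tempfile
import zipfile
from pathlib import Path

def resolve_location(location: str) -> Path:
    """If location ends in .zip, extract it to a temp dir and use that;
    otherwise treat it as a plain directory."""
    path = Path(location)
    if path.suffix == ".zip":
        tmpdir = Path(tempfile.mkdtemp())
        with zipfile.ZipFile(path) as archive:
            archive.extractall(tmpdir)
        return tmpdir
    return path

# Round trip: zip up a "model directory", then resolve it back.
src = Path(tempfile.mkdtemp())
(src / "weights.txt").write_text("coef: 2.0")
archive_path = Path(tempfile.mkdtemp()) / "model.zip"
with zipfile.ZipFile(archive_path, "w") as archive:
    for item in src.iterdir():
        archive.write(item, item.name)

restored = resolve_location(str(archive_path))
print((restored / "weights.txt").read_text())  # → coef: 2.0
```

The rest of the code keeps working against a directory either way, which is the portability win being described.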
B: So another thing you can do — okay, so now we have this double context entry pattern that's asynchronous, right, and we go and make that change to location. Now say we want to support loading a model not only from an archive, but from an archive that's stored over the network somewhere — for example, on an HTTP server, right?
B: So now we'd use an async IO library for this, and we'd load the model asynchronously through the __aenter__ method, and then, once it's done loading, the rest of the code executes and you run the model. And if you don't have that double context entry pattern, you don't really have a good place to do that saving and loading. So right now —
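The "__aenter__ is the natural place for the load" point can be sketched like this. fetch_archive is a stand-in for an HTTP download (no real client library is used here), and RemoteModel is a hypothetical name:

```python
import asyncio

async def fetch_archive(url: str) -> bytes:
    """Stand-in for an asynchronous HTTP download of a model archive."""
    await asyncio.sleep(0)  # pretend network latency
    return b"serialized-model-bytes"

class RemoteModel:
    def __init__(self, url):
        self.url = url
        self.blob = None

    async def __aenter__(self):
        # The (possibly remote) load is awaited on entry.
        self.blob = await fetch_archive(self.url)
        return self

    async def __aexit__(self, *exc):
        # Saving back (upload / write to disk) would be awaited here.
        self.blob = None

async def main():
    async with RemoteModel("http://example.com/model.zip") as model:
        # Inside the context the model bytes are available to run with.
        return len(model.blob)

print(asyncio.run(main()))
```

Without an async context entry there is no clean hook where that awaitable load and save can live, which is the argument being made for keeping the pattern everywhere.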
B: It's not really getting used much right now, but it's mainly for consistency, so we can keep all of it as a code style and code pattern type of thing. And the reason for that is that if you take things to a certain point, then — like this feature we're talking about here — it was sort of a premature optimization at the time, for models. And I guess, not really — it wasn't really an optimization —
B: — so much as: we know that this is a pattern that, if we follow it, will allow us to not have to refactor things in large ways later. And if we make everything follow that pattern, then we can assume that we're kind of safe for all of our classes within the structure. Is that a good answer? Yeah? Cool, all right.
B: Okay, so then let's just put it out there: because of the double context entry pattern, the high-level functions — train, accuracy, predict — save and load the model every time one of them is called. This ensures users never forget to save the model.
B: So I believe we can actually take this model that's open as a context and pass it directly to train. Let's find out, though — I can't remember if this works or not. Okay, yeah, so we need this; we should remember to support that, because this should have worked. I think there was something — I can't remember what it was.
B: There was something else that tries to support this kind of thing. But so — now we've opened the context, so we're going to keep it open. The idea here is that we'd be using the same high-level functions, only now we, as the user — as the caller — are saying: okay, I know that I'm going to keep this thing open, so I don't need to save it to disk every time. So I can —
B: — I can sort of get better performance out of that, if I wanted to, by opening the context myself and then passing it to the high-level functions. I thought we'd support that, but we must not, so we'll go fix that. Okay: so it's because of the double context entry pattern — this ensures users don't forget to save the model.
B: From the parent — which is what we usually reference: within the context, the main object is called the parent. So we need to open a new issue to track passing the context to the high-level functions, which would support the caller opening the context and then just using the context. This would allow the user to dictate —
B: Oops. And the double context entry pattern notes, I believe, are located under CONTRIBUTING. So — oh yeah, I guess we should probably just call it that. Okay, yeah — and this is kind of light on explanation, but maybe we'll link to this explanation right here. So:
B: TODO: add link to this meeting's video explanation.
B: All right, okay, so — all right, great! We need an issue here. If somebody could open an issue, that would be —
B: — great. All right, all right. Now let's see, what else do we have? I think that's that on that one. So — let's see — Shaw, what did you want to talk about?
E: Today I've been writing tests for the dataframe source, and yeah — so I want to know which tutorial I should follow for lingard. All right, cool.
B: Okay, we might be able to get those into this release then; I will check that out.

C: And for HDF5, I am a bit confused about how to map — how to store the data and then how to retrieve it, something like that. So I wanted to discuss the mapping for storing the data in HDF5, yeah.
B: Okay. Anything else?
D: Just one thing — I have seen you commenting on multiple pull requests that there's an issue with Black, like the formatting and stuff. Oh —
B: Yeah, that's a good idea actually, and I think, yeah, we can implement a pre-commit hook. A pre-commit hook would be good to advise people on within the contributing documentation.
D: All right — I have already implemented it for my library, so I'll implement it right away. Just for Black — just Black, or do we need pre-commit hooks for something else?
B: Yeah, that one's broken right now, but yeah, whitespace would be a good one. If we can add this to the pre-commit hooks, that would be good. Right, great, thanks. And that whitespace one — the correct command to run, I think, is in an open issue; I think we have an open issue to track changing it. The main reason why I think we should hold off on that — and I think I noted this — is that the accuracy work is a massive —
B: — bunch of massive changes, and having him deal with those whitespace changes as well would be annoying, because that's going to create a lot of merge conflicts here and there. So we're going to wait on fixing those whitespace changes — and on fixing the whitespace checker — until the accuracy stuff is merged, because that way we don't create all those conflicts, which would just be annoying.
B: I was gonna — yeah, I mean, no, I wasn't gonna say pre-commit, but it's pretty much the same line of stuff. So, internally, we've got this project, and one of the things this project does is: there are YAML files, and each YAML document has a UUID associated with it. UUIDs are basically random, uniquely identifiable —
B: — identifiers. So basically what you'll do is write the YAML files and then submit a pull request, and then they have a GitHub Action bot that goes through and edits it — you know how you can enable the "allow edits from maintainers" button? So with —
B: — that checked, the bot can go through and add UUIDs to your PR. So what I was thinking is: we could have a bot that goes through, and if your Black check fails, we rebase — we run Black on every commit — and then we force-push. That way we wouldn't have to deal with people figuring out the commit hook; it would just work. That's obviously slightly more complicated.
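The UUID-bot workflow just described boils down to: walk the parsed documents and assign a random UUID to any that lack one. A minimal sketch, where plain dicts stand in for parsed YAML documents and the "uid" key name is an assumption:

```python
import uuid

def assign_uids(documents):
    """Give every document without a uid a freshly generated UUID4."""
    for doc in documents:
        if "uid" not in doc:
            doc["uid"] = str(uuid.uuid4())
    return documents

docs = [{"name": "first"}, {"name": "second", "uid": "existing"}]
assign_uids(docs)
print([doc["uid"] for doc in docs])
```

A bot like the one described would run this over the files in the PR branch and push the result back.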
B: I don't know if we have the right permissions for that, though, because I've been seeing — basically, I've tried to automate multiple things with GitHub Actions, but the GitHub API restricts things. For example, I was trying to automate the download of the logs for the dependency pinning stuff over the weekend, and the Actions API endpoints —
B: — now I need admin rights all of a sudden to download the logs. So I think there are weird things with some of the APIs, where things that you could usually do with your regular user permissions you can't do via the API, for whatever reason. And I'm thinking this might be one of those situations, so a pre-commit hook might be better here.
B: Unfortunately. So if you're going to do one, probably do the pre-commit hook, and we can sort of keep this on the back burner as a maybe — or, if it's easy enough — because the thing is, you'll —
B: — try to do it, and then it'll either work or it won't, and I think it might not. So I think it's more of a pipe dream here, and more something to use on other projects — I just wanted to tell you about this technique, so if you have other projects you can use it. I think we probably can't use this technique here because of the specific org permissions that we're dealing with. So anyways — just sort of a method —
B: — I wanted to mention. All right, anything else from your side? Yes? Okay. So then let's see — Saksham, how's it going with you?
B: I really want to see us get the config file support in the next release. Where did it go? I think I already tagged a few things. Okay, yeah — and you guys know this one, so just to recap: basically, we have the config loaders, so we can use them. We were thinking about something similar to the curl syntax, where you'd say, like, @model.yaml — and then, you know how we'd have these —
B: So you could just have a series of YAML files or JSON files or whatever, and we'd pick those up using the config loader and sort of make them command line arguments. I was doing stuff the other day and thought, this would just be really helpful — and then I realized, okay, I can copy-paste, it's not that big of a deal. But still, it's nice, because then obviously they're tracked in git and everything. Okay, all right.
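The curl-style @file idea can be sketched with the standard library: argparse can expand @file arguments into the options stored in that file. This only illustrates the syntax being discussed — the project's config loaders would parse YAML or JSON rather than the one-token-per-line text argparse expects, and the flag names here are made up:

```python
import argparse
import pathlib
import tempfile

parser = argparse.ArgumentParser(fromfile_prefix_chars="@")
parser.add_argument("-model", default=None)
parser.add_argument("-epochs", type=int, default=1)

# Write the arguments to a file, one token per line.
config = pathlib.Path(tempfile.mkdtemp()) / "model.txt"
config.write_text("-model\nlinreg\n-epochs\n5\n")

# "@model.txt" on the command line expands to the file's contents.
args = parser.parse_args([f"@{config}"])
print(args.model, args.epochs)  # → linreg 5
```

Checking the config file into git then gives exactly the "tracked in git and everything" benefit mentioned above.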
E: Yeah, this one — both the examples for adding a source normally use a file, a test file, to store the sample data, and sort of implement it as the source.
E: The thing with the pandas dataframe is, I'm not sure if we should create a file, store the data in that, and then sort of import it for the tests. So is there any way we could just go through that directly, without creating a file?
B: Can you — I'll tab over — cool, okay. So, test... okay, so this is sort of like a naive test for any source. We just do —
B: Yeah, so we do a couple of things: we do save, we do load. Okay — oh yeah, and we refactored this one. Okay, I see — this one is not using that, because we have this other one: that class that you can mix in to do, like, polymorphism — yeah, polymorphism — and then you can have test case plus source test, and it runs some tests for you.
B: But this is — this is an example of writing just a test using the high-level save and load APIs, and I'm only calling that out because I think some of you have seen that other version of writing a source test. So I think you can pretty much just instantiate the source with the dataframe and you're done.
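The suggestion above — hand the sample data straight to the source instead of round-tripping through a temp file — might look like this. DataFrameSource here is a toy stand-in, not the project's real class:

```python
import unittest

class DataFrameSource:
    """Toy source that serves records straight from in-memory data."""
    def __init__(self, dataframe):
        self.dataframe = dataframe

    def records(self):
        return list(self.dataframe)

class TestDataFrameSource(unittest.TestCase):
    def test_records(self):
        # No file on disk needed: the sample rows go straight in.
        rows = [{"f1": 1}, {"f1": 2}]
        source = DataFrameSource(rows)
        self.assertEqual(source.records(), rows)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestDataFrameSource)
result = unittest.TextTestRunner().run(suite)
print(result.wasSuccessful())  # → True
```

That keeps the test convention (a TestCase with save/load-style assertions) while skipping the temp file the other source tests use.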
E: Yeah — actually, I wanted to follow the convention as far as possible, so I just wanted to confirm that. So, yeah.
B: All right, cool. All right, great! So let's see, what have we got next — Nitesh, do you want to show us what's going on with HDF5?
C: Is it visible now? Right, yep. Yeah, so there are two types of data we can store in HDF5: the first one is just simple tabular data, and the second one is images, when we're going to store a list of images. So in the case of a table, this group is like a directory, "features" is another directory, and f1 and f2 are directories under it.
C: This contains a list — a numpy-array kind of data type — to represent the features, and this one is for the prediction part. And f1, f2: after combining these two features, we're going to get the table part.
So this is for the table structure; and for images, we have another group directory, and then "image", which stores image1, image2, where image1 is a whole numpy array — 2D or 3D,
C: maybe, depending on the image. So at the time of retrieving the data, I was thinking about how we can deal with the images part versus the f1/f2 part — because in the tabular form, we just need to extract f1 and f2 and then combine these two things to make one table, but in the case of images, a whole image is just —
B: Okay, yeah — so I have one concern about that: we need to be able to load multiple types. What if we had multiple groups and we were loading across groups? Is it possible to have some image data in one group in a file and then some tabular data in another — say we had a record that has tabular data and image data stored in the same HDF5 file. How does that work?
C: Right now, in the config, the attribute for the feature is just a single string. So to extract the data from different groups, we need to make it a list, so that the user can insert the list of groups to retrieve the features from. And then, yeah, we can retrieve the data from different groups. Okay.
B: So, with that attribute being a list: is there any way we can just say, hey, grab all of the data — combine across all of what would be records, no matter what; find all the features and combine them all into each record? Can you do auto-discovery, or do you have to — shouldn't there be some way to auto-discover what all the features are?
E: Okay — I have worked with this file format once before.
C: We can find how many groups are in a particular group by just the keys — we just need to retrieve the keys. Say, in a group directory we want to find how many subgroups there are: we just need to find the keys present in the group, and it automatically gives "features" and "prediction", something like that. And then we have to iterate over the features to extract the data.
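The auto-discovery idea follows from the fact that HDF5 groups behave like mappings (h5py's Group supports .keys() and indexing), so discovering every dataset is a recursive walk over the keys. Nested dicts stand in for groups here, and leaf lists stand in for datasets, so the sketch stays self-contained:

```python
def discover(group, prefix=""):
    """Recursively map dataset paths to their values in a group tree."""
    found = {}
    for key in group.keys():
        path = f"{prefix}/{key}"
        value = group[key]
        if isinstance(value, dict):   # subgroup: recurse into it
            found.update(discover(value, path))
        else:                         # dataset: record its full path
            found[path] = value
    return found

hdf5_like = {
    "group": {
        "features": {"f1": [1, 2, 3], "f2": [4, 5, 6]},
        "image": {"image1": [[0, 1]], "image2": [[1, 0]]},
    }
}
print(sorted(discover(hdf5_like)))
```

Against a real file, the same walk would check `isinstance(value, h5py.Group)` instead of `dict`.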
B: And with the way people typically create these files — the data sets that are typically stored in them — is it safe to assume there's going to be sort of a one-to-one mapping across all of the different types of features and subgroups? Is it safe to assume that, say, for index three in f1 there will be an image? Because obviously, in this particular example — it's just an example — there's no image three, right?
B: So say, for example — take this; you were just drawing a diagram, but if we looked at this as if it were an actual file — we see image1 at index 0 of image, image2 at index 1 of image, and then within f1 we have index 0, index 1, index 2 and index 3: four items.
B: So if we were to do auto-discovery across features, and we said, okay, let's recursively go down through the groups and subgroups and identify all of the groups and subgroups that exist — we'd enumerate f1, f2 and image, correct? Yeah, okay. So, having enumerated f1, f2 and image, if we were then to go through and say, okay, let's start creating records from this file —
B: In that case we'd go through and pull index 0 of f1, index 0 of f2, index 0 of image, and create a record out of that, and we'd do the same thing for index 1 across the three of them. Now, say we look at this diagram and we go to pull index 2 out of f1, f2 and image — and there's no index 2 in image. Is that ever going to be a case, or will that pretty much never happen?
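The record-assembly walk just described can be sketched as follows: pull index i from every discovered feature, tolerating a feature (like "image") that is shorter than the others. Plain lists stand in for HDF5 datasets:

```python
def make_records(features):
    """Build one record per index from per-feature columns."""
    length = max(len(column) for column in features.values())
    records = []
    for i in range(length):
        record = {
            name: column[i]
            for name, column in features.items()
            if i < len(column)   # skip features with no value at index i
        }
        records.append(record)
    return records

columns = {
    "f1": [10, 11, 12, 13],
    "f2": [20, 21, 22, 23],
    "image": [[[0, 1]], [[1, 0]]],   # only two images: no index 2 or 3
}
records = make_records(columns)
print(records[0])  # has f1, f2 and image
print(records[2])  # has f1 and f2 only — the case being asked about
```

Whether silently dropping the missing feature (as here) or raising is the right behavior is exactly the question being raised about real-world files.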
B: Yeah, yeah. Well — also, I think we need to make sure this works with the in-the-wild, quote-unquote, formats, right? And from what I know about this, it seems reasonable that this would be what we find.
So let's try to go implement this and then we'll evaluate, because I think this is a case that we can slim down from if we find that it's not what we wanted as originally thought — whereas we'd end up building up if we went with our specific grab-these-features list. So if we auto-discover everything, we can always pare down from there, but I think this would be a good start — a good place for us to start from before we go and analyze some data sets. All right, cool. Anything else on this one?
B: So: let's discover all groups and subgroups — wherever there might be data — and make each into a feature. Okay. And then the other thing is, we're thinking about predictions here, and the predictions thing is interesting, especially as we move forward. Right now, the way we store predictions is with the prediction and then the confidence, and I think this might need to change. We've thought about this —
B: — a lot, and I just want to bring it up again right now. Because when we have the prediction and the confidence — okay, it's one thing when we're going into a source that exists and we're loading feature data; then it's like, okay, now I need to go save those predictions back to the source. Okay, well, how do I do that? I think the most obvious approach, in this case, would be: you might create a group —
B: — named with that same name and put the data in there. But then it's like, okay, where do you put the confidence? And the main reason for having the confidence is that you can then identify that a certain feature is a prediction and not just a feature. So it's something that we came up with.
B: So I don't know — this is something that I wanted to bring up again because I'd been thinking about it recently, and I want to make sure everybody else is also thinking about it, so we can figure out what the best solution here is. We need to combine our brain power, because we need a way to store —
B: I've gone over it many times, and I feel like we need to move to essentially just storing it under the name, and then somehow storing the confidence separately. And I think the structure of the record object might want to change, so that we store the predicted value in features — and then, to understand whether it's a predicted value or not, we query whether it has a confidence associated with it. And that would allow us to do —
F: John, you're not sharing your screen right now. — Yeah, thank you.
B: So here's the COVID data example, and this is an example of how we might combine multiple models together. We implement this Prophet model; when we train the Prophet model, we can use it for predictions.
B: Prophet is like a forecasting thing. So, all right: we grab the training data, we grab the test data, we load it all in, and — wait, oh yeah, because we needed to modify it and mess with it a little bit, so: group it by county. Let's see, where's what I'm trying to show... so, train, predict.
B: Okay, so this is the awkwardness that's caused by having predictions in a separate thing. Predict the number of cases for each county: first we predict the number of cases, so we're doing two things here. You can effectively think of it as two models: we're creating a model to predict deaths given cases, because we figure that's a linear relationship —
B: — probably, so let's just do a simple linear regression model for simplicity's sake. So we have a cases-to-deaths model, and then we have a model for predicting the number of cases in a given county, given the date — and that's the Prophet one, because Prophet operates on two things: the date and the number of cases to predict.
B: We do that per county, because that's the only way you can do it with Prophet: you have to create a model per county. That's why I say you can think of it as essentially two models, but we really have one of these for each county. So we train the per-county model to predict cases, and then we go through and we make predictions, and we figure out, okay —
B: — what will be the number of cases for that county for this date range? And now we have the situation where, okay, we need to get the predicted cases — sort of access that prediction, grab the value, don't grab the confidence, and feed it through to the next model to do a prediction. So this is a little bit awkward, and what might be better, for example —
B: So here's the per-county thing — this is that key/record thing — and otherwise we end up with... yeah, what do we do? We get features, predictions — okay, actually, maybe this ends up being kind of easy? Maybe. Okay!
B: Maybe this wasn't the best example, but where this comes in a little more is: if you run a dataflow and you get the output of the dataflow as a prediction, then you have to go in and combine the features dictionary for that record with the predictions, and then you have to extract the key. I guess maybe it's not that big of a deal — I guess what I was trying to show here is that we could just pass —
B: — one object to the other. And now I'm realizing we can basically just do — let's see: for each record, record features, cases, actual cases, evaluated cases. So, let's see.
B: And what I'm thinking is — yeah, exactly right, in a series, right — and I think, as you said, this is called stacking. I can never remember this stupid phrase, but you may have also heard it described as complex features: basically, the feature going into one model is the output of a previous model. So it's not something that's a known truth; it's something that we came up with based on another model, so now you're having varying degrees of confidence as you propagate through.
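The stacking idea just described can be sketched with two toy functions standing in for the per-county forecaster and the cases-to-deaths regression (the coefficients are made up for illustration):

```python
def predict_cases(date_index):
    """Stand-in forecaster: pretend cases grow linearly with the date."""
    return 100 + 10 * date_index

def predict_deaths(cases):
    """Stand-in linear regression from cases to deaths."""
    return 0.02 * cases

# Chain them: the predicted cases (not a ground truth) feed the
# second model, so uncertainty compounds as it propagates through.
predicted_cases = predict_cases(date_index=3)
predicted_deaths = predict_deaths(predicted_cases)
print(predicted_cases, predicted_deaths)  # → 130 2.6
```

The second prediction inherits the error of the first, which is why tracking "this value was predicted, with this confidence" matters for the record structure.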
B: Yeah, yeah, we could totally write a dataflow for it. This is more of, like, an ergonomics-of-using-the-code thing: how do we make it as clean as possible? And I think, mainly, it's that reaching into predictions and grabbing the value is maybe not as nice a way to do this —
B: — as, let's see: for features, we could do record features, record predictions — and if we did this, it combines the two dictionaries. Let's see: cases, predicted cases — so, if "cases" is in features, use actual cases; otherwise we set — yeah. So this says: set the cases feature to the predicted cases, and then we grab actual cases just to do the reporting down here. This would create a dictionary where we say, okay, record features — basically: take all the features from the record and combine them all with all the predicted features. But you can't really do this if you have to reach in and grab the value out of there.
B: — which would give you the confidence. I've been thinking about this a lot, and I also want you guys to think about it, because I think this is one of the main changes that we need to do before the beta. Because — I don't know, I have a feeling — "does anybody particularly like it the way it is?" is, I guess, where I'm going with this.
E: No, I think what you did just now — the prediction and confidence being different — I —
B: Okay, great — so I think this is confirming my theory here, which is why I bring it up, and I think we've talked about it before. The other thing which I think could be interesting here is having features include any predictions — this is where it gets a little more confusing. So basically, if a feature exists — if we know that a ground truth exists for a feature — we return that ground truth, in this case actual cases, right?
B
If we happen to have a prediction, then we include the prediction, but only if we don't have a ground truth, right? Does that make sense? And then that way, if you were doing this chaining thing, right, you say: okay, give me the features, right, and the features include the predictions from the last one, unless there was a ground truth, in which case we use the ground truth. Right now, this...
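The ground-truth-wins rule described above can be sketched in a few lines; this is only an illustration of the merge semantics, with made-up names, not the project's implementation:

```python
# Hypothetical sketch: features include predictions, but a known ground
# truth always wins over a predicted value, so chained dataflows see the
# best available value for each feature.
def features_with_predictions(ground_truth, predictions):
    result = dict(predictions)   # start with predicted values
    result.update(ground_truth)  # ground truth overrides any prediction
    return result

# "cases" has a ground truth, so its prediction is ignored;
# predicted-only features ("deaths") still flow through for chaining.
print(features_with_predictions({"cases": 42}, {"cases": 40, "deaths": 3}))
# -> {'cases': 42, 'deaths': 3}
```
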
B
Ambiguity
right,
which
may
not
be
good
because
you
may
you
know
you
may
think
you're
using
the
predicted
value
when
really
using
the
ground
truth
or
it.
May
I
don't
know,
maybe
that's
that's
sort
of
your
nominee
friendly.
I.
C
B
What
we
can
do
first
is
we'll
do
this
change
and
we'll
go
from
there
and
decide
if
you
know,
as
as
we
as
we
write
more
examples
and
do
more
stuff,
we'll
we'll
figure
it
out,
but
I
wanted
to
raise
that
all
right.
Okay.
So,
let's
move
on.
That's
a,
I
think,
an
important
change,
though,
so
I
wanted
to
make
sure
we
cover
it
all
right
and
then
did
we
have
anything
else
so
succumb
so
you're
working
on
the
duracom
config
loader
stuff,
how's
that
going.
E
Yeah,
it's
it's
almost.
I
think
I'm
almost
there,
but
just
I
had
no
doubt
here.
C
B
A
B
E
Yeah, yeah, this guy, right. Yes, yes. So here we switched the minus one from the except block to the try block. Oh yeah, I remember that, yeah. Yeah, so it was giving me an error, because of creating the model class again from the exported config of the model.
B
So
if
you
have
like
source
json
and
you
just
did
source,
then
it
would
go
and
it
would
look
for
source,
but
the
thing
is
it
doesn't.
Do
it
can't
do
that
for
the
upper
level
keys?
I
can't
remember
exactly,
I
think,
maybe
we
just
need
to
we.
B
Probably
we
probably
can
implement
it,
but,
for
example,
if
you
had
like
source
json
or
source
df
source
json,
that
was
that
that
would
let
you
access
the
so
if
you
had
dash
source
dash,
df
and
then
dash
source
right,
so
the
data
flow
source
source
that
it's
pre-processing
and
then
you
said
json,
because
you
specified
that
you're
using
a
json
source
and
then
you
you,
it
would
be
able
to
look
in
there
use
this.
Tri-Accept
block
is
what
lets
it
grab
dash
df-source,
json
or
df-source,
but
it
gets
confused.
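The fallback being described, where the loader tries the most specific dashed key first and falls back to less specific ones via a try-except, might look roughly like this. The key names and the `lookup` helper are invented for illustration; they are not the project's real CLI handling.

```python
# Hypothetical sketch of the key-lookup fallback: try the most specific
# dashed key first (df-source-json), then fall back to less specific ones
# (df-source, then source). This is where confusion creeps in when the
# upper-level keys don't follow the same pattern.
def lookup(args, *keys):
    for key in keys:
        try:
            return args[key]
        except KeyError:
            continue  # fall back to the next, less specific key
    raise KeyError(keys)

args = {"df-source": "records.db"}
print(lookup(args, "df-source-json", "df-source", "source"))
```
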
B
E
So the error comes in here. So this is None, right? These two are null, so it tries to get the config of this when it goes in traverse config get.
E
Right
here,
it's
nice
to
get
the
config
it
treats
the
nun
as
a
everywhere
as
a
dictionary.
It
tries
to
get
the
config
for
this.
I
wonder
why
that
is.
B
E
A
B
I think the thing is that this ties into that stupid config file stuff we were talking about earlier, and just the fact that... we need... well, we did a lot of work to do the unified config; we still aren't done unifying the config stuff. There's just... I think we need to take a harder look at the whole config thing, because it still is disjoint in many ways. Okay, and I think this has to do with when we tried to go implement shared config, okay.
B
So
when
we
tried
to
go
implement
shared
config,
we
found
that
we
needed
to
like
one
of
the
things
that
was
missing
with
that
is
sort
of
recursively
instantiating
objects,
starting
at
the
bottom
so
like.
If
you
looked
at
everything
within
it
within
it,
within
an
object
as
a
if
you
looked
at
everything
within
an
object,
you
know
a
dictionary
and
you
went
down
to
the
very
leaf
nodes
right.
You
need.
We
need
to
go
down
to
the
leaf
nodes.
So,
okay,
sorry,
let.
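The leaf-first instantiation idea can be sketched as follows: recurse to the leaves of a nested config dictionary, then construct objects on the way back up, so each parent is built from already-instantiated children. The `registry`, the `"plugin"` key, and every name below are hypothetical, purely to illustrate the shape of the recursion.

```python
# Hypothetical sketch of bottom-up instantiation from a nested config dict:
# children (leaf nodes) are built first, then used to construct parents.
def instantiate(node, registry):
    if isinstance(node, dict):
        # Recurse into children first, so leaves are instantiated before
        # the object that contains them.
        built = {key: instantiate(value, registry) for key, value in node.items()}
        # If this dict names a plugin, replace it with a constructed object.
        if "plugin" in built:
            return registry[built.pop("plugin")](**built)
        return built
    return node  # leaf value: return as-is

# Toy "registry" mapping plugin names to constructors.
registry = {"memory_source": lambda **config: ("MemorySource", config)}
config = {"source": {"plugin": "memory_source", "keys": ["cases"]}}
print(instantiate(config, registry))
```
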
B
The
top-
this
is
what
I
think
should
happen,
and
I
think
if
we
do
this,
we
will
have
solved
our
config
issues
and
it
will
basically,
I
think,
if
we
do
this,
we
will
have
solve
most
of
our
issues
with
config,
with
shared
config,
this
type
of
thing
that
you're
finding
and
set
us
up
to
do
the
from
config
file,
stuff
and
and
so,
and
I
think
it's
sort
of
taking
taking
this-
is
sort
of
going
back
to
the
drawing
board
and
say
what
really
needs
to
happen
when
we
have
configs
right
instead
of
sort
of
what
we've
done,
which
is
okay,
clobber
things
together
until
it
works,
and
now
it
doesn't
work
in
some
cases.
B
So
so
I
think
now
we
know
more
about
how
config
works
and
and
the
way
it
works
is
really
we
load
any
relevant
files
right,
for
example,
der
conf,
or
you
know
the
command
line.
You
know
if
we
were
to
do
these
files
on
the
command
line
to
essentially
replace
command
line
arguments
you
load
all
the
relevant
files
that
might
have
config
options
in
them
create
a
massive
dictionary
right.
Now
you
go
down
and
you
recurse
into
the
dictionary
into
all
the
leaf
notes,
and
you
start
saying
you
know,
is
config
dict
right.
B
All
the
way
up
and
now
you
have
all
your
instantiated
objects
right
and
that
that
essentially
I
mean
that
that
also
provides
a
validation
right
instead
of
doing
these
things,
you
know
loading
on
command
now,
you've
loaded,
you
know
at
the
beginning,
and
so
now
we've
done
some
validation
there,
which
is
great.
So
I
think
that
is
sort
of
the
overall
solution
that
that
will
get
us
to
where
we
need
to
go,
and
then,
when
we
look
at.
B
Not
happening
right
now
we
have
this
totally
disjoint
thing
that
we've
clobbered
together
together
and
refactored
many
times,
and
things
are
happening
in
different
places.
If
you
were
to
go
through,
if
you
wanted
to
take
a
stab
at
this,
I
think
you're
going.
I
think
if
you
implement
that-
and
you
may
just
want
to
split
this
out
into
like
a
separate
you-
you
may
want
to
split
this
out
in
into
like,
like
a
sit
like
start,
a
test
file
with
unit
tests.
This
is
how
I
start
things.
Basically,
you
write
a
test
file.
B
It
provides
a
pretty
graphic
development
model,
so
I
would
advise,
maybe
maybe
doing
something
like
that.
I
think
I
don't
know
it
depends.
If
you
want
to
go
that
route,
I
guess.
Let's
see
part,
I
mean
this
is
all
we're
a
bit
down
a
rabbit
hole
because
we
were
all
we
were
supposed
to
be
doing.
B
Yeah
yeah,
let's,
let's,
let's,
let's,
let's
take
a
step.
Let's
I
think,
if
you're
willing,
then
this
is
something
that
really
needs
to
happen
so
and
you've
got
the
most
experience
out
of
the
config
stuff
than
anybody.
So
I
think
this
your
you
would
be
the
you
know
a
good
person
to
to
go.
Do
this
so
and
obviously
you
know
we'll
work
closely
together.
So
all
right!
Okay!
So
let's,
let's
just
make
that
the
plan.
I
know
this
kind
of
throws
a
wrench
in
your.
B
You
know
your
your
image,
stuff,
yeah
it'll,
completely!
Stop
that
yeah
it'll!
Stop
that
work
right!
Is
there
any
way
that
we
can?
Let's
see?
So
what
was
the
yeah?
The
main
thing
here
is
that
we're
loading
the
model.
E
Okay, so the next thing that... the error-causing thing is: if I do this, then everything's working. But, okay, so now I will open the predict.sh. So here: this is the predict; this is the dataflow run records all command.
E
Here
we
are
not
adding
the
records
that
are
being
loaded
from
the
directory
to
this
as
an
input
set
to
the
seed.
E
So
if
we
are
not
adding
that,
then
we
are
not
we,
we
won't
be
able
to
read
that
from
the
seed,
speed
image
right
right.
B
All
right,
sorry,
I
I
have
to
go
now
because
yeah
I
got.
I
got
a
meeting
scheduled
last
minute
in
this
time
slot,
but
I
think
you
and
I
need
to
meet
one-on-one
anyway.
So
let's
actually
take
this
offline.
Okay
and.
B
About
this,
because
I
think
this
is
pretty
in-depth
yeah
yeah
yeah.