From YouTube: Weekly Sync 2022-01-18
Description
Meeting Minutes: https://docs.google.com/document/d/1vKYEPtqKiwsFwhVKPmPub5ebMqN9HteBcbdFAuTXalM/edit#heading=h.fiwvqq7dzxgx
A: Okay, so we'll go around and we'll do introductions, and let me just pop open the meeting minutes.
A: There we go, all right. So we'll do a little process overview of how things work. It's about time we have this recorded. Okay. This is a very long document at this point.
A: All right. January 18th, 2022. Okay, so, introductions. What we usually do is we go through and we talk about the agenda. So we'll talk about process, introductions, and then I'll fill out the rest of the agenda items. So basically, we run the meeting; it's about an hour. The meeting is an hour, sometimes more during the height of GSoC season.
A: You know, working in a setting that's not so asynchronous, a lot can get lost in communication. So, all right! Let's go around and... oh, actually, we'll finish out the process first. So basically: one hour, and then what we do is, you know, the agenda.
A: We go around, we get everybody's agenda: you know, what would you like to talk about today? And then we prioritize based on time.
A: So if somebody has to leave within, say, 30 minutes, we'll do their items first. If something is going to take potentially two hours, we do that last; longer tasks go last. That way, it's sort of a hop-on, hop-off situation: you don't have to come right at the beginning of the meeting.

A: You might not get your item on the agenda then; it's sort of first come, first served in that respect. And then, yeah, so you can drop. Basically, if we've covered your agenda items, feel free to drop, or feel free to stick around.

A: Because some of these things can go for a long time, as Hashim knows. We can end up on this call for several hours if we're really into debugging. So, okay, all right. Let's do introductions. Banesh, did you want to go first?
B: I'm a junior CS undergrad at a public university in India, and I'm kind of new to DFFML. I was just browsing through a couple of Python Software Foundation organizations and I found this one very interesting. I haven't really gone much through most of the docs, but from what I understood, it's kind of related to pipelining, you know, extracting pipeline information.
A: All right, great. So let me... actually, I realize I'm not sharing my screen here. So that's the other thing: this meeting is a constant reminder for me to share my screen. This happens all the time; someone is forgetting to share their screen, and usually it's me. So, all right. Okay, you guys can see the meeting minutes now, right?
A: So, oh, the recording. I'm recording using OBS Studio, so let's see, it's over here. So yeah, we record and then I post on YouTube. Oh, and that's a good one for the notes: after recording, post the recording on YouTube.
A: ...You know, data set generation and model usage. It's very much sort of the whole general machine learning workflow, right, and we have people from all sorts of different areas who have, you know, come in and built up this project. Hashim has been a big part of that, and we'll hopefully have some other people from past GSoC years, past contributors.

A: We have some people who have participated as part of GSoC and some people who have, you know, just been on here hanging out, and we all sort of get to know each other and have fun working together on various machine learning problems, and various aspects of that. There's also a lot of non-machine-learning work to be done too.

A: You know, just based on the fact that we have this sort of generic pipeline thing. But yeah, we have fun, we hang out, and we work on interesting stuff. So, okay, who wants to go next?
C: So I was looking at where I can start contributing. I'm also pretty new to this; up till now I was mostly looking at web-based organizations and things like those, so this is the first time I'm trying to look into more of a...
C: Yes, the small amount of work I've done has been sort of front end and back end, both.
A: Okay, cool. With Django, or what kind of stuff have you worked with?
C: Oh, I did one project in Django. It was a small project for some sort of college work, where we had to do a final project. And on the front-end side, I mostly worked with either React or Svelte.
A: All right. And I don't know how to spell or pronounce that other framework that you said... Svelte. I know I've seen it, but I haven't yet tried to say it. You know, it's funny: when you're off on the internet reading things, you have no idea how they're pronounced until somebody else says it. How do you spell that again, though?
B: Oh, like, generally, I kind of started my programming in the C language, and then I transitioned to C++, and then finally I ended up in Python as my major language, kind of thing.
B: For projects, I also worked with Django, and currently I'm working on a full-stack application. I also have some experience related to machine learning projects, and recently I've been working on a recommendation system. It's not really...
A: Okay, cool. And the reason I ask, and I want to hear everybody's experience, is because we have various issues. This thing, it's...
A: ...a beast, right. And so we have stuff all over the place. There's an HTTP side of things, like an HTTP service, that's the back end, and then there's also, you know, the beginnings of a client app that's written in React right now. And then of course we have the models, and we have the general pipelines and the operations. There was some work done to actually do like a distributed setting.
A: If you communicate, via Gitter or via this meeting, what kind of stuff you're interested in and what kind of stuff you're currently working on, there is likely a place for that somewhere in this project, either as an example use case, or, you know, maybe we have some work for you. For example, with the back-end dev stuff, that HTTP API needs to be refactored. So there's work all over the place, in different areas. So whatever you're most interested in, calling that type of stuff out helps us prioritize issues. So, okay, cool. So that was Banesh, and I would...
B: And his name is Abhijit.
A: Okay, great. Did I get that right? Okay, okay, all right. And then, who wants to go next?
D: So I'll go next. My name is Manus, I'm from India, and I'm a junior in undergrad. I came across your organization through the PSF, and I was just curious about your mission statement. You guys mentioned you heavily rely on data flows, which are basically directed graphs. This caught my eye; I just wanted to know more about it, and I thought...
A: Okay, so there's a lot of things in this space now. A few years ago, there was not a lot going on here, as far as workflows and data flows. I mean, there's been a lot of data flow programming happening, you know, since forever, but as far as data flow plus ML stuff, it's kind of blown up recently. So there's...
A: Yeah, we do, we probably need to update that, yeah. So, you know, the focus, and I'll pull up the... the focus is abstracting the whole machine learning workflow, and the way that that is done is using data flows, to a large extent. Or, that is the goal. So right now, things are very all over the place.
A: There's many ways to do things, right. And we probably need to drop that, because that's not what our docs reflect. Where is a good... so, just to show: what is a data flow? This is a data flow. It's a directed graph, right. Your purple things are, you know, basically little functions that run, and then your pink things are your inputs here. And so, effectively...
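The idea can be sketched in a few lines of Python. This is a toy illustration only, not DFFML's actual DataFlow API: the dictionary of functions stands in for the operation nodes, and the seed dictionary plays the role of the input nodes.

```python
# Toy sketch of a dataflow: the "purple" nodes are operations (little
# functions that run) and the "pink" nodes are the inputs that seed them.
# This is NOT DFFML's real API, just an illustration of the concept.
operations = {
    "shouted": lambda values: values["greeting"].upper(),
    "length": lambda values: len(values["shouted"]),
}

def run_dataflow(operations, seed):
    values = dict(seed)
    # Run each operation once its inputs exist (a single ordered pass is
    # enough here because "length" comes after "shouted").
    for name, op in operations.items():
        values[name] = op(values)
    return values

result = run_dataflow(operations, {"greeting": "hello"})
print(result)  # {'greeting': 'hello', 'shouted': 'HELLO', 'length': 5}
```

Each operation only declares what it consumes and produces, which is what lets new inputs be dropped into the graph and everything downstream be regenerated.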
A: If you've heard about machine learning before, you may have heard that feature engineering is like 80% of the work. So the goal behind data flows is, you know, to make feature engineering easy. Make it really easy to do. Don't focus on a data set; focus on generating a data set. And if you focus on generating a data set, you can build new data sets.

A: You can easily grab new inputs, you know, for inference, because you can generate all the rest of the data you need for your model. So that is not really the sole focus, but it is a large part of it. There's really sort of three things that we focus on, which are: data set generation, data set storage, and then usage with models. And the models are the most mature part of the project.
A: There's been a lot of work done to implement models as plugins; we have this plugin architecture.

A: And basically, a lot of the work that has been done is to go and look at, you know, maybe popular model architectures and wrap those as pre-trained, or wrap the APIs of various machine learning frameworks, to allow people to train their own models easily, with a sort of unified API. So that's sort of more of the focus here. But yeah, anything else? What kind of stuff have you done? What are you interested in?
D: Most of my work has been on data analytics, using Tableau. I also have some experience in NLP. In a hackathon I worked on a product to summarize text, GitHub comments basically, and, you know, auto-assign buckets to them, to automate the process.
A: So was it clustering or classification? Did you have pre-assigned buckets that you put them into, or did it sort of try to figure out on its own where they fit?
D: So I started with C++ myself in high school, and I kind of transitioned into Python when I entered college. Most of my interests lie in machine learning, and I'm also currently looking into reinforcement learning. I have spent quite...
E: Okay, hi guys, this is Hashim. I'm from Pakistan, I'm an undergrad student and I'm about to graduate, actually. I first started contributing to DFFML like two years ago, and I've been contributing on and off. Yeah, I'm mostly interested in machine learning and deep learning, and most of my work is also related to that. Yeah, that's it.
A: Great, thanks, Hashim. And Hashim also created a lot of these tutorials, which have YouTube videos attached. So these are really great. Let's see, are they... or maybe they're not in the latest release yet? Darn, that's right, we need to do a release. So, okay. Where are they? They're under examples, notebooks, right at the front. Okay, so: created notebooks and tutorials.
A: All right, all right, okay. So let's go through and look at, you know, what are we going to do today. Does anybody have... let's see. Hashim, you might have something. So, is there anything, are there open issues that people want to talk about today, anything in progress?
F: Oh yeah, no issues. Hello, everyone. I am also from India, and I am interested in deep learning. I've spent some time looking over how to train models and stuff in TensorFlow.
A: All right, great. And you've been on with us here for a few months now, right? Just off and on.
A: Yep, okay, cool. All right, so, okay, let's go around, and let's see. So I know there may be those of us who have been around and working on things. Do you guys have anything that you're currently working on? And then we'll go to sort of prospective projects, and perhaps issues that people might want to tackle.
E: All right, so, let's see... we don't need to discuss that right now. I just wanted to know if there's something to catch up on.
A: Yeah, so I think... things have been slow. So basically we have this whole... the main thing that's been going on right now, as far as I'm concerned, is... yeah, and I've been saying this now for a long time, but we need to...
A: We need to split out this whole thing. So basically, like I was saying, there's plugins, and there's a lot of different plugins. How many plugins do we have? I think we have like 20-some, 22 or 23, at this point. So the architecture of the code, or of the repo, is such that if you go... this is the main package, dffml. This is probably worth covering right now. So, basically: code structure. dffml is the main package, and then everything else, you'll hear this...
A: This you'll hear referred to as the main package, and then everything else you'll hear referred to as a plugin, in the docs and when we're talking about things. So if you go into the main package, you'll see a structure that relatively mirrors the top level. So this would be the root of the repo, because there's no path here, and now we're in the main package: we're in the dffml directory, which is in the root of the repo.
A: So within the dffml package, you'll see things like accuracy, db, df, model, operation, service, source, and then tuner. I think you've done some work on the tuners, right? That'll be fun. So most of these are plugins; actually, I think pretty much all of these are plugins. Some of them are defunct; we need to remove port, I believe, and feature. So, these plugins... we'll go into the source one, because this has got a pretty healthy list here. These are all data sources.
A: So basically, if you're looking at the documentation, and you're looking at the About page here: data set generation, machine learning, data set storage. These correspond to data set storage, right. And so basically, you can store to a CSV file; it could be a database; it could be a data flow, you could do pre-processing with a data flow; it might be a directory of files. And these are like the NumPy-specific ones, or the MNIST ones.
A: If you guys have seen the MNIST data set, there are the IDX3 and IDX1 file formats. And then, you know, JSON, storing things in memory, grabbing things from a specific operation, and then there are some other helpers in here. Actually, there's an abstraction in here to help us write pre-canned data sets as well. So, for example, the Iris one. Once again, the idea here is: we don't really store any data.
A: We always generate the data or download the data, and this helps us have reproducible setups. So, for example, this is the Iris training data set source. So we have this cache directory, and essentially we download the data set, we store it in the cache directory, and then we do a little find-and-replace on the headers to make them more compatible with our CSV source.
A: So there's a lot of things that, if you haven't done a lot of Python before, you may not have seen in the code base. Decorators are probably something that you have seen. Async is probably something that a lot of people may not have seen, and then using things like await, which goes with async, and using things like yield, or generators, may be things that people have not seen. But there are examples; basically, the code base is your guide.
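For anyone who hasn't met those constructs, here is a minimal standalone demonstration of all three (a decorator, a generator using yield, and an async coroutine using await); nothing here is DFFML-specific.

```python
import asyncio

def logged(func):
    # A decorator: wraps a function to add behavior around every call
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@logged
def double(x):
    return x * 2

def count_up_to(n):
    # A generator: yield hands back one value at a time, lazily
    for i in range(n):
        yield i

async def fetch(value):
    # A coroutine: await suspends here until the awaited task finishes
    await asyncio.sleep(0)
    return value

print(double(3))                    # prints "calling double" then 6
print(list(count_up_to(3)))         # [0, 1, 2]
print(asyncio.run(fetch("hello")))  # hello
```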
A: You may have heard people say "use the source, Luke," if you like Star Wars. So basically, there's a lot of stuff in here. git grep is your friend. Let's see, what is on this terminal window? Can I show this? Yes, okay. git grep is your friend, and you can find examples of most things... wow, okay, it doesn't want to go over. You can find examples of most things by looking through the source code.
A: What is something that we might want to know about? So, cached_download, because this is kind of a confusing one. So if I wanted to know more about cached_download, there's a documentation page for it, and it explains what to do, but I might also want to understand more by looking at usages. So in that case, I would do, you know...
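The pattern behind cached_download can be sketched roughly like this. This is an assumed simplification for illustration, not DFFML's actual implementation or signature (the real helper lives in dffml/util/net.py): skip the download when a file with the right checksum is already in the cache.

```python
import hashlib
import pathlib
import urllib.request

def cached_download(url, target, expected_sha384):
    # Cache-then-download: if the target file already exists with the
    # expected checksum, reuse it and skip the network entirely.
    target = pathlib.Path(target)
    if (
        target.exists()
        and hashlib.sha384(target.read_bytes()).hexdigest() == expected_sha384
    ):
        return target
    target.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(url) as response:
        target.write_bytes(response.read())
    return target
```

Checksumming the cached copy is what makes setups reproducible: a corrupted or stale file is re-downloaded rather than silently reused.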
A: git grep with -C gives me context of five lines on either side, and, you know, I can look through the code for all the usages of cached_download, and I can see how the arguments vary. And here's the definition of the function itself. So these are all helpful things that help you navigate the code base.
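As a runnable illustration of that git grep workflow (built in a throwaway repository so the commands work anywhere; in practice you would run the grep from your dffml checkout):

```shell
# Set up a throwaway repo standing in for a real checkout
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf 'def cached_download(url, target, checksum):\n    ...\n' > net.py
git add net.py

# -C 5 prints five lines of context on either side of each match;
# -n adds line numbers so you can jump straight to the definition
git grep -n -C 5 cached_download
```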
A: The other thing that is your friend is git log. So we just looked at cached_download and we saw that it's defined in dffml/util/net.py, so we might want to go find out more about that, and this git log -p is a very useful command. And I will paste these in the chat. So what was the other one I did?
A: git grep -C5 cached_download. This is... cached_download, see, I can't even remember how to spell it, and I don't know how many times I've used this. Okay, all right. So these are your friends, and this -p is very helpful. So, essentially... Sahil, hopefully, will join us for a different meeting time; I know this meeting time is tough for him.
A: So if you want to figure out why something is the way it is, you can go to the log here, with -p. So git log tells you many things about what happened, but this isn't always the most helpful thing. Sometimes you're looking for something specific; you're like, I want to know what's going on with this cached_download function, why am I not using it correctly? Well, maybe the arguments changed, right? So you might grep for it.
A: You might find the definition in net.py, and then you might say, okay, well, what happened there? Like, what happened recently? Oh well, maybe it was related to this fixed issue of progress being logged only on first download. And I might come in here, and I might see that, oh, this validate protocol function changed. Maybe there was a bug in when we changed it, you know, or how it changed.
A: So this git log -p is very helpful, and then you can identify the commit. And often what you might do is, you know, identify the changes, look for the file that you care about, identify the commit that changed it, and then perhaps do a git log -p with the commit, and identify other files which changed as a result of this commit as well.
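That workflow can be sketched with a throwaway repository (two commits to a stand-in net.py; against a real checkout you would point the commands at dffml/util/net.py instead):

```shell
# Throwaway repo with two commits touching the same file
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
printf 'def cached_download(url):\n    ...\n' > net.py
git add net.py
git -c user.name=demo -c user.email=demo@example.com commit -qm 'add cached_download'
printf 'def cached_download(url, checksum):\n    ...\n' > net.py
git -c user.name=demo -c user.email=demo@example.com commit -qam 'add checksum argument'

# History of one file, with the diff (-p = patch) for each commit
git log -p -- net.py

# Once you have spotted the interesting commit, show everything it touched
git log -p -1 HEAD
```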
A: So maybe in this one... this one's not a good example, because only dffml/util/net.py changed, but you might see ones where... let's see, here's Kubernetes stuff. No, that one doesn't have multiple examples. Okay, subprocess run... oh, this is here. There's recent changes to... should I... the Java dependency check. Okay, so this thing changed recently, and it changed...
A: Where did it change? Okay, here: stream output logs. So where did we go? We use this run_command, right. So this run_command was implemented somewhere, and then it was used. So if I look back at this commit here, I would see: okay, so here's code using run_command, and shortly before it, I've added this run_command helper. So you can...
A: These are some tools you can use to find more context on the code, if you're confused about why things are the way they are. Okay. So let's get back... okay, so, PRs pending, yeah? Okay. So why did we go into all that? Well, we were diving through the code base and I got off on a tangent there. So there's the sources, which map to the data set storage abstraction, and then there's...
A: The simple linear regression model, or the logistic regression model, which exist within the main package, and then most everything else is implemented based off of different libraries, like TensorFlow, or, you know, PyTorch, or scikit-learn. So now we're in the root of the repo, and if we go into model, we would see: here's our machine learning libraries that we know, right.
A: These are mostly major libraries here. So if we clicked into scikit... scikit's actually a pretty messy one. Not messy in that it's messy; it's pretty clean, but it's not going to be the most obvious example. XGBoost, this is a pretty obvious example. So let's look at the regressor. So basically, this is the way things are structured: there are basically config objects and classes, and the config object holds everything that you need to know about a...
A: This is like the static definition of an object, and the reason why we do this is so that we can serialize things and save things to disk. So we don't mix and match: if you put something in a config object, you can basically assume that you can re-instantiate whatever the main object is, so the XGBRegressor model, just from the config.
A: So if I made a JSON file that had all this stuff in it, and then I loaded it and I passed it to this thing, I would have the same object, effectively. Not actually the same instance, but, you know, it will do the same thing; I can count on it to do the same thing. So you may have some state within the model that happens after it's loaded, but effectively, all the configuration happens via this config object, and each object within DFFML follows this pattern.
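A minimal sketch of that round-trip, using a hypothetical stand-in rather than the real XGBoost regressor config (whose fields differ); the point is that the config is the full static definition, so an equivalent object can be rebuilt from nothing but its serialized form.

```python
import dataclasses
import json

# Hypothetical DFFML-style config object (not the real class): all the
# static configuration lives here, so it can be serialized wholesale.
@dataclasses.dataclass
class RegressorConfig:
    location: str
    learning_rate: float = 0.05
    n_estimators: int = 100

class RegressorModel:
    def __init__(self, config: RegressorConfig):
        self.config = config

# Serialize the config (e.g. to a JSON file on disk) ...
config = RegressorConfig(location="model_dir")
serialized = json.dumps(dataclasses.asdict(config))

# ... and later rebuild an equivalent model from nothing but that JSON
restored = RegressorModel(RegressorConfig(**json.loads(serialized)))
assert restored.config == config  # same configuration, new instance
```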
A: So, let's see, the other thing that is important to know... I think, basically, the rest of the code base follows the same pattern. So basically there's the models, there's the sources, so there's a MySQL source; right now I think I have a source that I'm going to add. And then, you know, here's the HTTP service, which is what I was talking about. This is the back-end service that you can use as a microservice type thing.
A: If you wanted to deploy, you know, if you wanted to use all of the same command-line or Python APIs, but over HTTP, you could do it this way. And this needs to be changed. Okay, so I think we covered generally the code structure; we covered, if you find something in the code structure, how do you find other examples of it; and we covered how do you see the history of the usages and of the code.
A: Let's see, the other thing we need to... we covered the config object, which is a major concept.

A: Then the last thing that we need to cover, probably, is this double context entry pattern, and then testing, and that sort of should give everybody a pretty good overview. Hashim, do you have anything else that you think would be good? And then, sort of, docs?
E: I don't know if everybody's gone through the contributing section.
A: That's important, yeah. So remind me if I don't say that when we get to docs. So, along with the config... you'll see this is the SLR model, and these properties correspond to the properties from the config object, the SLR model config.
A: So you can pass properties directly to the object, or you can pass them to a config, and then you can pass the config to an object. And once again, you know, this is all for serialization, because we define things... so we want to be able to save things to disk. We also want to be able to... these data flows...
A: You want to be able to run them anywhere, which means that, you know, I might want to run them on a different machine, and that was kind of that Kubernetes stuff that I was just showing. And from that perspective, you're going to need to be able to transmit all of your configuration information in a serializable fashion over a network connection, which means it has to be in these, you know, serializable config objects. So, after you've instantiated your object...
A: The general pattern of usage is this double context entry, which is covered under the tutorials section. Now, the general users of DFFML may not see the double context entry pattern, and that's because we have these high-level functions, which are an interface to many of the things that you can do. So the high-level function train, you know, trains a model, takes a data set; accuracy, you know, assesses the model's accuracy, which... okay, this is the old version of the docs.
A: Let me go to the master branch; everybody here will likely work off the master branch. It assesses the model's accuracy, and then prediction, you know, makes predictions, given the data set that we want predictions on. But you don't see a double context entry here. So that's because these high-level functions hide that double context...
A: ...entry. And what is within predict and score and train looks like this, which is: first you instantiate... like, we've instantiated the model, we'd pass it to predict, score, or train, and then those functions will enter the context of those objects. And the reason why we do this is because there's a lot of things that follow this pattern, for example, loading a model.
A: So if I have a model that's saved on the disk, the first thing that I'm going to do, you know, to use it, is: I have to load any saved state, if it already existed. And to do that, I enter the context of the main objects. So there's parent, and then context. So this would be the parent object, the model, and this memory source is a parent object, or just the main object, and then you enter that context.
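A minimal sketch of the double context entry pattern, using toy classes rather than DFFML's real Model and ModelContext (whose APIs are richer): the parent's async context manages the long-lived state (loading on entry, flushing on exit), and calling the parent yields the context object you actually operate on. A high-level helper like the train function described above just hides both entries.

```python
import asyncio

class ToyModel:
    # Parent object: its async context manages long-lived state
    async def __aenter__(self):
        self.state = "loaded from disk"  # load any saved state on entry
        return self

    async def __aexit__(self, *exc):
        self.state = "flushed to disk"   # flush / free resources on exit

    def __call__(self):
        # Calling the parent creates the context you actually work with
        return ToyModelContext(self)

class ToyModelContext:
    def __init__(self, parent):
        self.parent = parent

    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc):
        pass

    async def train(self, records):
        return f"trained on {len(records)} records"

async def train(model, records):
    # High-level helper: hides the double context entry from the caller
    async with model as m:
        async with m() as mctx:
            return await mctx.train(records)

print(asyncio.run(train(ToyModel(), ["r1", "r2"])))  # trained on 2 records
```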
A: So basically, you load the model from the disk, if it exists on disk, in the enter, and then, you know, for memory source... this is just a source backed out of memory, so we're defining these objects right here, but if I had like a JSON source, I'd be loading it from the disk there. Within this... what does it look like? Let me just show you guys. So this corresponds to...
A: There was a good example of this two seconds ago. It was model, XGBoost, XGB regressor. So this initial entry corresponds to these... Anytime you see async with, that's just an __aenter__ and an __aexit__, right. So the enter happens, effectively, when you do the async with statement, and the exit when we're done with this block here, before we enter this inner block.
A: So, for example, if I have temporary files that my model creates, or if I have, yeah, any sort of temporary resources, this ensures that if you create them, you free them, because Python will do that for us with the with block. So first, what happens is we end up in this...
A: ...enter of the parent object, and then the parent object... we can call these objects, and that creates an instance of a context, and an instance of a context is actually sort of like the object in memory that we would actually use for things. So I may do several things on my object in memory before I save it back to disk, and so this is an optimization around...
A: ...you know, the fact that I don't want to save and load these things all the time. So basically I can just keep them alive; I can keep them open for as long as I want.
A: I can do as many, you know, trainings or whatever as I want, and then I can flush it to disk. But sometimes I may want to use the high-level API and just have that done for me, so that will save and flush to disk each call. But sometimes I know for sure that I'm going to have a specific async with sequence of events, and I'm not going to need to have that flush to disk done for me. So I'm just going to do this...
A: ...double context entry. Okay, we'll talk about docs. Okay, so this is the main documentation site, and there's also the master branch docs. So the people who consume the project, they care about the master branch, or they care about, you know, the release docs. And this right here says what version you're working on, in the top left-hand corner. The people who are working on the project...
A: We care about these master branch docs, and that's because, you know, what we see here in master is what we see here in the documentation. And so when we submit pull requests, and we want to, you know, change the way the code works, or change the way the docs work, that shows up here on this page. And right now we have an open issue for the fact that these are all blown up here; all these headers are one level higher than they should be.
A: So, navigating around, you'll find that the tutorials generally tell you... we talked about the plugins, right? The tutorials generally tell you how to implement a plugin. So if I wanted to implement a new kind of model, I would go to the models tutorials, and, you know, first it walks me generally through: how do I use a model? So first off, you probably want to look at the quickstart.
A: You know, that's the general thing, but this model tutorial, for many of you, is going to be what you're most interested in, because a lot of people, I know, are very interested in doing models. So this will take you through, generally, you know, how do we use a model.
A: We have both a Python API and a command-line client API, and so this talks about the Python API, and then there's another example here which walks you through the command line. And then we get into... well, okay, so there are several stages, and the model one is the most fleshed out here at this point.
A: But you can do this for anything. So everything is defined using this plugin system, which is this entry-point-based system. Entry points are a Python construct; they're a way to register plugins in Python packages. And so, you know, if you're curious about that, you could go leverage the entry point system in a different project if you wanted to, or you could, you know, use some of the stuff that DFFML does. Obviously we have a lot of work built up to make those more effective.
A: So, you know, you can load things by name. So what you'll notice is that in the setup.py file for the main project, or, since we're looking at models, we would be looking at the setup config file...
A
A
This one is not up to date; okay, that's fine. Usually this may be in an entry_points.txt file, it may be in setup.cfg, it may be in setup.py. We have models from four years back now, I think, so not all of them are in the same place, but the right place for them to be would be entry_points.txt. You'll see that under these entry points is where we register plugins, and we say, for dffml.model, which corresponds to our source tree.
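The registration being walked through would look roughly like this in entry_points.txt (or the equivalent setup.cfg or setup.py section). The group and plugin names follow the walkthrough; the exact module paths and class names are paraphrased and may differ in the current source tree:

```ini
[dffml.model]
tfdnnc = dffml_model_tensorflow.dnnc:DNNClassifierModel
tfdnnr = dffml_model_tensorflow.dnnr:DNNRegressionModel
```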
A
Where we went from dffml at the root of the repo, to the main package, and then into model. So here's the generic definition of a model, and we're saying that the models we're going to register here correspond to that generic definition of a model, and they are tfdnnc and tfdnnr.
A
So, the deep neural network classifier and the deep neural network regressor. Then you'll see this entry-point-style path, which is a Python path: the same as a regular Unix path or Windows path that you'd see, but instead of a backslash or a forward slash you have a dot. So model.tensorflow, and it's relative to wherever this file is, so model.tensorflow is this directory.
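The dotted, colon-separated path just described can be resolved by hand with the standard library. This is a sketch; the `resolve` helper is made up here, and `json:dumps` stands in for something like a model's `package.module:Class` value:

```python
import importlib

def resolve(path: str):
    """Resolve an entry-point-style 'module.path:attribute' string."""
    # Dots separate modules (like slashes in a filesystem path);
    # the colon separates the module from the object inside it.
    module_path, _, attr = path.partition(":")
    module = importlib.import_module(module_path)
    return getattr(module, attr)

# Stand-in value; a model's value would look more like
# "dffml_model_tensorflow.dnnc:DNNClassifierModel".
dumps = resolve("json:dumps")
print(dumps([1, 2]))   # prints the list serialized as JSON
```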
A
So we see this path, which says dffml_model_tensorflow, so we go in there; it says .dnnc, so we go in here; and then it has a colon and says DNNClassifierModel, which is actually down here. So this is the object, and then you'll notice that when you look at the object here, you'll have this entrypoint decorator on top of it, which helps us map back, so that we have.
A
We have like a two-way map here, because the Python files don't know about the setup files, but the setup files do know about the Python files. So this one says, you know, dnnc, and then.
A
Oh, I forgot to actually make a new tab. And then the other one will say, okay, yes, I am dnnc, by putting this little @entrypoint on itself, right here, and then there's your entry-point-style path. Okay, so this is how we leverage the plugin system.
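A minimal sketch of the decorator idea just described: the class stamps its own entry point name on itself, which gives the two-way mapping. The attribute name and everything here is illustrative, not DFFML's actual implementation, which stores more metadata:

```python
# Illustrative sketch: a decorator that records the entry point name
# on the class, so the class knows the name it is registered under.
def entrypoint(name):
    def decorator(cls):
        cls.ENTRY_POINT_NAME = name   # hypothetical attribute name
        return cls
    return decorator

@entrypoint("tfdnnc")
class DNNClassifierModel:
    """Stand-in for the real model class."""

print(DNNClassifierModel.ENTRY_POINT_NAME)   # prints tfdnnc
```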
A
You can load any of these things dynamically, or you can just instantiate and import them as you normally would any Python object, with the regular import machinery, and after that you use them. And then there are tutorials on how to write each type of plugin, and those will take you through: okay, what does a model do? What kind of class do I need? What kind of methods do I need?
A
Also, the model one is a little bit farther along than the other ones, in that it covers how you actually package this. It'll take you through it; there's a tool that creates packages for you, which creates that setup config, and it will generate a DFFML package which you can then publish on PyPI. That's part of this whole thing that we've been working on.
A
We want to have this second- and third-party plugin ecosystem, where people can publish their own plugins, and then we can build this community library. Like I said, we have 22 or 23 plugins right now, but they're all sort of constrained to this source tree; they're all stuck in this git repo. So we're trying right now to split them out and make it so that everybody can put them wherever they want.
A
You could host them under your own GitHub repo, or we can host them under the dffml org, and that way there will be many plugins from all over the place.
A
So that's generally how the plugins work, where you find the documentation on how to write each plugin, and how you can follow these to write new plugins. And then we have the example usage, which is: okay, well, how do you use all this stuff? So you're likely going to want to go take a look at the examples.
A
First, to understand the kinds of things that have been done, that people are doing, that previous students have done and written stuff about. I would start with all the stuff that Hashim has done; he has these videos, and there's a YouTube playlist, and he's done a great job of setting all these up. So reach out to us on the Gitter channel.
A
If you have questions as you're going through this stuff, and as you're familiarizing yourself with the project; if this is something that you find you really want to work on, and you want to help build out cool use cases and plugins.
A
You know, we want to make it easy to build and deploy models and train new models. That's what we're doing, and this is how we're doing it and how we're structuring it. And then there's the Gitter channel, where we talk a lot; we've thought about switching to a different chat, but right now we're sort of just stuck over there.
A
E
A
Okay, the contributing section, of course. Okay, well, that's a good thing to end on here. The contributing section largely documents how we work together, because there are people from all over the world here. It covers things like testing; speaking of testing, I need to update the GSoC stuff and the testing documentation.
A
It covers a lot of what we covered here, and there's probably some of what we covered here that needs to be added to it. So if anybody feels like they want to get some contribution experience, going through the recording and adding some of the stuff we talked about, about codebase layout, and notes, would be awesome. And there's stuff about how we actually publish the releases.
A
This is the process that we go through to publish a release. There's stuff about documentation testing; these are all the documentation pages. We did an analysis of this at some point last year, I think. We want to make sure that everything is tested, because there are so many of us working on this project and everybody is sort of an expert in their own thing. That's why we went around and asked, well, what are you interested in?
A
Everybody will write their stuff, and then we need to make sure that there are lots of tests, because as we go forward, the rest of us are not experts in specific use cases. So we need to make sure that we're all writing a lot of tests that help us.
A
Those tests help us work together effectively, because if you don't have a test for the thing that you added to the documentation, then I don't know if my change breaks your tutorial. We want to make sure everybody's stuff stays on this website, stays getting used, and stays out here for others to see. So go through this contributing documentation. There are a lot of headings here, but not a ton of content.
A
Right, and you'll see, this is one of the things that happens all the time, so I'll call this out explicitly: look at this "who's working on what" section, because it tells you the guidelines. Essentially: don't ask if you can work on something, just do it, unless it falls under the rest of this documentation here, which basically says that if somebody else said they started work on it, you should let them work on it.
A
So there are other people in here, like Hashim. Once we can get our permissions ironed out, Hashim will be one of our maintainers, and we'll have other maintainers, but right now those people don't have the ability to go subscribe to all the notifications.
A
Okay, we're probably getting kicked off Google Meet pretty soon here, but does anybody have any final questions for the day? Then we'll move to the next, or I'll move to my next meeting, and everybody can.
E
So, I know it can be intimidating starting out in a new project, but just focus on reading the documentation; it's your friend right there, and the tutorials make it very easy now, especially the videos. And if you have any issues you can always reach out; everybody's helping each other in this community, and all the meetings are available on YouTube as well. That's always helped me, you know, get back to issues that I had problems understanding, and all that, yeah.
A
E
One last thing: there are good first issues that you can hunt for, you know, if you're starting out.
A
Yes, and I think most of the ones that are labeled good first issue have enough information. But there are a lot of issues in there, and what usually happens is that it's us in this meeting jotting stuff down as we go; you saw me writing meeting minutes a little bit. I'm often the one presenting and taking meeting minutes and then creating issues; sometimes other people create issues, but a lot of it is just sort of jotting stuff down as we go.
A
So there's not a lot of information in a lot of the issues. If you see something that sounds mildly interesting but there's not enough information in there, ping us on Gitter, because we'll figure out how to get more information onto that issue to give you more context. Well, cool, thanks everyone. Oh, and the last thing is, since we have a few people who are new here, I would really, really love it if we could put any feedback on anything.
A
So, basically, if something is even just the slightest bit unclear, if something confuses you, please post it in this discussions thread, because the main thing that we need to figure out is how to make sure that it's easy for people to understand.
A
You know, what the project does, and how they can use it and contribute to it. So any feedback, even if it's just a random thought, even if it's just the tiniest little bit of information, kind of like some of the issues, just throw it in this thread.
A
Because that way we can ask you later for more information, or we can talk about it in the meeting; but if we don't have it anywhere, then we don't know how to improve. And we're all about working together to improve; we're just having fun here and trying to make some cool code. So, sweet, cool. Well, thanks everyone, and we might have another meeting later this week.
A
I know that Sahil has some stuff, and there was somebody else who said they were working on something, a CNN model, recently. So we might have another meeting this week, or we might just do next week. I need to reschedule this time because I think we'll probably go earlier. Oh, and we forgot to talk about the logo. Okay, we did a lot of intro today. All right.
A
I think, Hashim, you're the only one who's not; everybody else is in the IST time zone, right?
E
A
Cool, all right, sounds good. So I'll see you guys at, you know, 7:30, some of you guys; this time is 6:30 p.m. your time, Hashim, and 6 a.m. my time. It's always fun working together on these from all over. So, all right, thanks everyone, nice to meet you.