From YouTube: DevoWorm #13: Graph NNs/Representations, Axolotl embryo modeling, Multiple Viewpoint Reconstruction
Description
Google Summer of Code update, Poisson's ratio for cellular microfilaments, the cutting-edge of Graph Neural Networks, demo of Axolotl embryo modeling and multiple viewpoint reconstruction for reconstructing 3-D images. Attendees: Susan Crawford-Young, Karan Lohaan, Harikrishna Pillai, Gopinath Balaruguman, Harini Kumar, Richard Gordon, and Bradly Alicea
B
How am I? Well, I've been fighting with the computer program, and Dick really wants to know about some authors. I sent you — I did send you a list.
A
But right, yeah, I don't know. I had no guarantee if anyone will respond. That's what I'm trying to find out.
A
Yeah, I know, it's — they don't, yeah.
B
I have to have some way of — well, I don't know. I just sent you the people I noticed from the APS physics meeting.
A
Hi, he's good, that's good. So I know that Karan is wanting to present on some things that he's been doing. He's been doing some things with the axolotl, and I don't know when he's going to show up — maybe in a little bit — but he wants to present on that. It looks really good. He showed me some of his proposal, and it's pretty decent.
B
Yeah, anyways, I'll get it done next week. I'll try to put it together and put some seeds in the new microscope. I think it'll be more stable than the other one, just easier to work with, so yeah.
A
Yeah, okay, well, welcome to the meeting. I guess people will be coming in momentarily, hopefully. Alright, I'll start in on some of the things I have. Did you have anything, Susan, that you wanted to talk about or present?
B
Okay, yeah, all right. I'm just going to say microtubules are basically incompressible. Although I know they can be compressed, I'm just going to say that they're not. That's what Poisson's ratio is: when you pull on something, how thin does it become? Because if you pull on a glob of slime or something, it thins when you pull on it, and I'm going to assume that with microtubules that doesn't happen.
B
Not very often, anyway — they're kind of a more rigid structure. But they've discovered that microfilaments change their ratio due to frequency. So if it's at 10 hertz, they say their Poisson's ratio is 0.66, which doesn't make any sense to me, because the limit is supposed to be 0.5.
B
So I'm not sure where that lies, but the standard for materials is a Poisson's ratio of 0.33. That's at 10 hertz, but at 0 hertz it's 0.5! So it's like — I don't know what to say.
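[Editor's note: the Poisson's-ratio idea being discussed can be sketched in a few lines of Python. This is a toy illustration with made-up strain values, not anything computed in the meeting.]

```python
# Poisson's ratio: nu = -(lateral strain) / (axial strain).
# Pulling on a soft incompressible blob thins it (nu near 0.5);
# a rigid rod-like structure barely thins at all (nu near 0).

def poisson_ratio(axial_strain, lateral_strain):
    """Return nu = -eps_lateral / eps_axial."""
    return -lateral_strain / axial_strain

# Nearly incompressible material: stretch 1% axially, contract 0.5% laterally.
nu_soft = poisson_ratio(0.01, -0.005)    # -> 0.5, the incompressible limit

# Stiff, rod-like structure (the microtubule assumption): almost no thinning.
nu_stiff = poisson_ratio(0.01, -0.0005)  # -> 0.05
```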
A
How are you? Hello, hi, good, welcome to the meeting. So why don't we get started on some things? First of all, I know some of you are here for Google Summer of Code. Those proposals are due on the 19th, and so that's coming up sooner than you think. There are two projects: we have the microsphere project with axolotl embryos, and then we have the graph neural networks project.
A
I guess some of you have asked me about data for that. I've sent you some links to what we call the DevoZoo, which is a site where we have different types of data.
A
So
that's
if
you
want
to
know
more
about
the
deadlines
in
the
format
go
to
that,
and
then
we
also
have
you
know.
If
you
want
to
send
me
a
rough
draft
of
your
proposal,
then
you
know
by
all
means:
please
do
I'll
review
it
and
I
have
in
in
the
pinned
content
on
the
channel
on
the
diva
worm
channel.
I
have
like
sort
of
a
drawing
of
what
it
should
look
like.
A
So you schedule out 10 weeks of coding time and maybe two or three weeks of community time. I think at the beginning there's a two- or three-week period of community activity, and then there's this 10-week period of coding. Keep in mind that that's about half time, so that's about 20 hours a week. So when you're saying you're going to do something in a week, that's 20 hours, and that's it.
A
I think you can extend it out past 10 weeks — there's some flexibility in the scheduling. But I don't know; we'd have to talk about that, because I'm not sure. I think it can go out to like 20 weeks, but you would have to distribute your work accordingly. I think INCF has decided on some rules for this.
A
I wasn't a part of that decision-making process, but it's basically the 10-week period, and then you can extend it if you want to take it slower. But that's for this project; it's half time. Some of the projects not in our group are different than that, but that's what we're going to go with.
A
So Richard Gordon asked what data we have for axolotl embryos, and I think it's the initial data set that Susan had. I don't think we have anything newer right now.
B
Yeah, yes, there's quite a bit of it. Like I said, I'm going to get more data this week. I will work on it and make sure I get through the images of a seed of some sort.
A
Something, okay.
B
All the images will be of a lower megapixel value than the ones from the flipping microscope, because the cameras are only like 1.3 megapixels on my little microscopes. So they'll be smaller, so you can email them around.
A
Okay, oh, that, yeah. I guess that would be — they're not as large, the files, so it should be good.
B
Yes, the ball microscope data is not of an egg because, like I said, we've been having issues with those salamanders not laying eggs all winter. I'm not sure what's up with that, but the water was bad this summer here because it was so dry, so the salamanders were not as well as they should have been. All of mine are kind of wet.
A
Okay, let me share my screen here. So one of the things I've been doing is going through some of these things with graph neural networks, and I don't know who's interested in that project here, but I've been collecting some resources on graph neural networks and some of the things people have been doing on this.
A
It's a pretty exciting area. It's very fast moving, so it's kind of hard to wrap your head around what's going on in the area. So I've got four things here and then like three preprints I'll show. This one talks about how higher-order GNNs suffer from the k-WL and network complexity.
A
So this is where you have — it's actually n**k complexity, but this is their notation for a network and its complexity, and it's a problem in graphs. It's just that you have a lot of nodes and a lot of connections.
A
This shows the power of the proposed algorithms and neural architectures. So they use something called SpeqNet, and they plug it into this LWL, and it gets processed out to this WL. So this is — let's see if I can find the paper here. So this is 13913, this one here.
A
So this is SpeqNet: sparsity-aware permutation-equivariant graph networks. This has to do with message passing and message-passing graph neural networks. They have clear limitations in approximating permutation-equivariant functions over graphs or general relational data. More expressive higher-order graph neural networks do not scale to large graphs.
A
They either operate on k-order tensors or consider all k-node subgraphs — which means they basically consider a lot of different nodes and a lot of different connections — implying an exponential dependence on k in memory requirements, and they do not adapt to the sparsity of the graph. So they have this dependence on the number of connections.
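[Editor's note: to see why that exponential dependence on k matters, here is a toy Python sketch — our own illustrative numbers, not the paper's. A method that tracks all k-tuples of an n-node graph stores on the order of n**k states, regardless of how sparse the graph is.]

```python
# Counting k-tuples of an n-node graph: the memory a dense k-order
# method must touch grows as n**k, independent of the edge count.

def num_k_tuples(n, k):
    """Number of ordered k-tuples over n nodes."""
    return n ** k

small = num_k_tuples(100, 2)  # 10,000 tuples for a 100-node graph at k=2
large = num_k_tuples(100, 4)  # 100,000,000 tuples for the SAME graph at k=4
```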
A
So I've talked about hairball networks. Hairball networks are very dense with respect to k, and there are a lot of connections. So it's considering all subgraphs with all these k nodes, and it's just doing a lot of things at once, and that's going to tax the memory requirements.
A
So this doesn't adapt to the sparsity of the graph. By introducing new heuristics for the graph isomorphism problem — and this is a problem in graph theory that they talk about in the paper — they devise a class of universal permutation-equivariant graph networks which, unlike previous architectures, offer a fine-grained control between expressivity and scalability, and adapt to the sparsity of the graph.
A
So they kind of go through in this paper how people are using it. In recent years, numerous approaches have been proposed for machine learning with graphs — most notably approaches based on graph kernels (there are some citations here) and graph neural networks. Graph kernels based on the one-dimensional Weisfeiler-Leman algorithm (1-WL), a simple heuristic for the graph isomorphism problem, and the corresponding GNNs have recently advanced the state of the art in supervised node- and graph-level learning.
A
However, the 1-WL operates via simple neighborhood aggregation, and the purely local nature of the related approaches misses important patterns in the given data. So they use this k-WL algorithm as a basis, and they're treating it as sort of a problem that they want to improve upon.
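[Editor's note: as a rough illustration of what 1-WL neighborhood aggregation does, here is a minimal color-refinement sketch — our own simplified toy code, not from the paper.]

```python
# 1-WL color refinement: repeatedly relabel each node by the pair
# (its own color, the multiset of its neighbors' colors).

def wl_refine(adj, rounds=3):
    """adj: dict node -> list of neighbors. Returns a color per node."""
    colors = {v: 0 for v in adj}  # start with a uniform coloring
    for _ in range(rounds):
        # signature = (own color, sorted multiset of neighbor colors)
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # compress signatures back down to small integer colors
        palette = {s: i for i, s in enumerate(sorted(set(signatures.values())))}
        colors = {v: palette[signatures[v]] for v in adj}
    return colors

# On a path a-b-c, the endpoints end up one color, the middle node another.
path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
result = wl_refine(path)
```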
A
A more powerful algorithm for graph isomorphism testing is the k-dimensional Weisfeiler-Leman algorithm.
A
This algorithm captures more global, higher-order patterns by iteratively computing a coloring, or labeling, for k-tuples. So they do this coloring. There's a problem in graph theory called the n-color problem — or with multiple colors — the idea being that you have different nodes that are different colors, and you want to keep them separate. So it's a problem of sorting the network and making sure that the colors are different in different places, and this takes a lot.
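[Editor's note: the coloring idea mentioned here can be illustrated with a simple greedy sketch — hypothetical code of ours, and greedy coloring is only a heuristic, not an optimal algorithm.]

```python
# Greedy graph coloring: give each node the smallest color
# not already used by any of its neighbors.

def greedy_coloring(adj):
    colors = {}
    for v in sorted(adj):
        taken = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in taken)
    return colors

# A triangle needs three colors: every node touches every other node.
triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
coloring = greedy_coloring(triangle)

# Check the defining property: every edge joins differently colored nodes.
ok = all(coloring[u] != coloring[v] for u in triangle for v in triangle[u])
```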
A
This is highly computationally intensive, and they have more survey here. However, since the algorithm considers all n**k k-tuples in an n-node graph, it does not scale to large real-world graphs. So these computational methods sometimes don't scale very well, and neural architectures that possess the same power in terms of separating non-isomorphic graphs suffer from the same drawbacks.
A
Their memory requirement is lower-bounded by n**k for an n-node graph, and they have to resort to dense matrix multiplications. But there are ways around that, and they describe it here, and then they describe the present work. So they work on this graph isomorphism problem, they denote it this way, they work this through, and then they introduce this method called SpeqNets, which actually tackles this problem.
A
We are motivated by a simple observation: GNNs cannot easily capture certain topological characteristics of a graph, such as its number of cycles. So the GNNs that we usually think about — the conventional GNNs — have a problem capturing some of these higher-order, graph-theoretic properties, and this is true of the first paper as well.
A
There are some performance metrics here for these different techniques. To capture topological characteristics of a graph, they use persistent homology, a method from topological data analysis. I think we've talked about topological data analysis before; it's basically taking things that have a shape, or have some geometry, and using techniques to decompose those shapes and to measure them. So you create a measurement scheme on those shapes and you measure their properties — their shape, their size, their deformation — and so this is something they use.
A
It uses a lot of tools from higher math, like persistent homology. At its core, TOGL learns multiple filtrations, or orderings, of a graph, and so this is what they do here. They have these different filtrations, which are just ways that you can order the graph, and here's an overview of how that looks in practice.
A
They replaced existing node features with random features, thus preventing information leakage. So they can actually look at things that have an actual topology. Instead of looking at, say, a graph that's just built from something you put into the language of pairwise connections, you're actually measuring maybe something like the surface of an embryo, where you have points that you're using as nodes, and then you're making connections, but those connections are maybe curved in some way.
A
So that's a topologically relevant network, and this approach is actually tailor-made for that. And last but not least, they also show that TOGL improves a graph neural network's expressivity, resulting in an architecture strictly more expressive than 1-WL, which we talked about in the last item. And this is something by a bunch of people in topology, so that's great. Code is available here: this is the Borgwardt lab; their TOGL (topological graph neural networks) package is on GitHub, and then that link is —
A
I have all this in the Slack, so you can search for the links and everything there. I just grabbed this out of the Slack so we could go over it in the meeting. So they have a GitHub repository, and the paper is one of these two — this one, good. So this is Topological Graph Neural Networks, and it shows the same things here.
A
So it kind of shows you: you have the node attributes here, you have a node map, you have k views of graph G, you have these filtrations, which are the different orderings, you have the persistence diagrams, and then you have aggregation, and then finally output. So all these things are different operations on the data: you figure out what the nodes are, you make a node map.
A
But you can link it directly to objects, and then their shape and their topology. The last two here: this is another paper. Graph neural networks are still the first choice for graph data, but — let's see, how do you pronounce this — Graphormers?
A
I'm not going to get into exactly the technical details, but that's what they call it; if you're this deep into neural networks and things like that, you know what that is. Plus spatial encoding — so you're encoding things in space — plus centrality encoding, plus edge encoding. So basically, what you're doing is taking information about the spatial location of objects, or nodes, and encoding it, and then you're encoding two properties of the graph. You're encoding the centrality of the graph, which means you're finding the central point; there are different statistics.
A
And so there are different measures you can use to calculate that. Then you have edge encoding: the edges have different properties, and you encode those. So you're encoding spatial information — I wouldn't say topological information, but definitely information about the structure of the graph — and then the edge encoding, where the edges go and where they come from. Then that's added to a pure transformer, which is then this Graphormer. So you're basically taking a graph, putting it into this machine learning algorithm, taking different attributes of the graph, and you're able to get this answer, this sort of thing. So: no graph transformer works on large graphs, nor on a large data set of graphs; scaling the transformer architecture to huge numbers of nodes, or to large graphs —
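[Editor's note: a hedged sketch of the two encodings just described — a centrality encoding (here simply node degree) and a spatial encoding (shortest-path distance between node pairs, via BFS). This is our own toy illustration, not the Graphormer paper's code; the real model learns embeddings for these quantities.]

```python
from collections import deque

# Centrality encoding: how "central" each node is (degree, the simplest proxy).
def degree_centrality(adj):
    return {v: len(adj[v]) for v in adj}

# Spatial encoding: shortest-path hop distance from a source node (BFS).
def shortest_paths(adj, source):
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

# A path a-b-c-d: b and c are the "central" nodes.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
centrality = degree_centrality(adj)
spatial = shortest_paths(adj, "a")  # pairwise distances feed the attention bias
```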
A
This is still an open question. So these Graphormers work for moderate-size graphs; for very large graphs, they don't necessarily work very well. I think for any developmental application these would work fine. But I don't know if they have a GitHub repository — let me check the paper. So this is Benchmarking Graphormer on Large-Scale Molecular Modeling Datasets, so they're actually using the Graphormer on molecular data sets.
A
This is the architecture here, and they're adapting it to 3D molecular dynamics simulation — so things like protein folding. With these simple modifications, Graphormer could attain better results on large-scale molecular modeling data sets than the vanilla one, and the performance gain can be consistently obtained on 2D and 3D molecular graph modeling tasks.
A
So this is like the message-passing ones we talked about in the first one — that's the classic sort of technique, where you have messages moving from one node to another. This is a little bit different than that. And so this kind of goes through some of the stuff on — this is a quantum chemistry data set they used.
A
This was used in the KDD Cup 2021, which is a machine learning competition. In the meanwhile, it greatly outperforms the competitors in the recent Open Catalyst Challenge, which is a competition track in the NeurIPS 2021 workshop. So this is something that they had at NeurIPS 2021 — the Open Catalyst Challenge. It aims to model the catalyst-adsorbate reaction system with advanced AI models. So this is their link here.
A
This is something I think out of Microsoft Research — it's at the end of the abstract here. And so they go through some of the ways that they've used this technique, and some data sets, a lot of molecular data sets. So it's a little bit removed from what we're doing, but they do go through some of the benchmarks that they're using here. And then, finally, this is something called BrainGB, a benchmark for brain network analysis with graph neural networks.
A
So again, this is a little bit of a ways from what we're doing, but they do these brain networks, where they extract networks from brain imaging data. These are complex networks that are based on different locations in the brain — they could be cells, or they could be voxels, which are imaging properties — and they try to make these connections using a covariance matrix or some other type of approach. And so they're using graph neural networks to sort of solve some of these networks, in a way.
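[Editor's note: the brain-network construction just described can be sketched as follows — our own toy code and toy signals, not BrainGB's. Treat each region's signal as a time series, compute pairwise covariance, and keep an edge wherever covariance exceeds a threshold.]

```python
# Plain population covariance between two equal-length signals.
def covariance(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

# Toy "regions": r2 tracks r1 closely; r3 is unrelated.
signals = {
    "r1": [1.0, 2.0, 3.0, 4.0],
    "r2": [2.0, 4.0, 6.0, 8.0],
    "r3": [5.0, 1.0, 5.0, 1.0],
}

# Threshold the covariance matrix into an edge set (undirected: a < b).
threshold = 1.0
edges = {
    (a, b)
    for a in signals for b in signals
    if a < b and covariance(signals[a], signals[b]) > threshold
}
# Only the correlated pair survives as an edge.
```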
A
So I wanted to go over that because I know we've talked about graph neural networks, and I wanted to give some examples of things in the literature. It's kind of a tough read — I know there's a lot of innovation and a lot of tough reading in there. But hopefully, maybe someone will apply and they'll have simplified it a bit, or maybe come up with a really nice example of how to do this in developmental biology. But in any case, I think Karan had a thing that he wanted to share with us — we were talking about it earlier.
B
Just briefly, to Dick: Bradley and I are trying to find some authors for you, you know, for the cells papers.
G
B
Okay, well, certainly. I've already introduced the one author who I thought was great, two weeks ago. For her — well, she has a whole group, so it could be any one of the people in her group that do this. All right, yeah, just put her down.
G
And, okay.
E
Yeah, another problem that I was facing, when I was trying to implement current state-of-the-art 3D techniques, was that this thing has a static blue background, because, you know, the images and the pixel intensity stay similar.
D
The outline — because I was facing some issues with —
E
After we get the outlines of the embryo, we have to find out two or three things. One of them is the axis of rotation and the degree.
E
You take a picture, so the displacement of these points can be generalized to find out how much, or what exactly, is the angle of our rotation.
E
From the displacement we see of the corresponding points within the two images — based on that, we'll be able to get our angle of rotation.
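[Editor's note: the angle-from-displacement idea can be sketched as follows — a hypothetical simplification of ours, not the project's actual code. Given corresponding points before and after the egg rotates, average the per-point angle change (here, 2-D points rotating about the origin).]

```python
import math

def rotation_angle(before, after):
    """Average per-point angle change between two matched 2-D point sets."""
    deltas = [
        math.atan2(y2, x2) - math.atan2(y1, x1)
        for (x1, y1), (x2, y2) in zip(before, after)
    ]
    return sum(deltas) / len(deltas)

# Simulate a known 30-degree rotation and recover it from the displacements.
theta = math.radians(30)
before = [(1.0, 0.0), (0.0, 1.0)]
after = [
    (x * math.cos(theta) - y * math.sin(theta),
     x * math.sin(theta) + y * math.cos(theta))
    for x, y in before
]
estimated = math.degrees(rotation_angle(before, after))  # -> 30.0
```

(Real embryo images would need matched feature points and a 3-D axis; this only shows the in-plane case.)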
E
So what we have for working with our 3D model is, I think, most —
E
— per model. So we have like eight model outlines per axolotl.
E
— model. So, eight samples of embryo images; then we get eight outlines from them, and then we have to map those outlines along the axis of rotation, and we can again —
E
And then, based on that, we can keep on going, if —
B
It's hard to tell the angles with the flipping microscope, because of how the embryo orients itself relative to the angles of the pictures, and that probably varies from egg to egg slightly.
B
Yeah, what you've done —
B
I'm happy to see it. But I do have the newer microscope, so I can get you a proper angle of rotation. I've got a top and a bottom one, and then I've got the ones that are coming in from the sides and the bottom, like that, from all sides. So there are 10 positions.
A
Looks like Richard had a question, or a comment. He said: look at the literature on imaging rotating asteroids.
G
You know, there was this project that I was going through that had, like, lots of satellite imagery data.
B
And I just think you've done a really great job tackling this so far.
B
That should help you.
A
Thank you very much for attending, yeah. But you mentioned in your talk the 2.5 dimensions, and so, you know, that's interesting.
G
In the time between light hitting the front of the embryo and the sketch —
G
Well, you can find a good way to get it where you take through-focus images and produce a sharp image of the whole surface.
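[Editor's note: the through-focus idea ("focus stacking") can be sketched as: for each pixel, keep the value from whichever image in the stack is locally sharpest. The 1-D "images", the sharpness measure, and all names below are our own drastic simplification, not any actual focus-stacking implementation.]

```python
# Local sharpness at position i: contrast between the two neighbors.
def sharpness(img, i):
    lo, hi = max(i - 1, 0), min(i + 1, len(img) - 1)
    return abs(img[hi] - img[lo])

# For each position, copy the pixel from the sharpest image in the stack.
def focus_stack(stack):
    width = len(stack[0])
    return [
        max(stack, key=lambda img: sharpness(img, i))[i]
        for i in range(width)
    ]

blurry_left  = [5, 5, 5, 0, 9, 0]  # sharp detail on the right half
blurry_right = [9, 0, 9, 4, 4, 4]  # sharp detail on the left half
merged = focus_stack([blurry_left, blurry_right])
```

(Real focus stacking works on 2-D images with a windowed Laplacian or gradient measure, but the per-pixel "pick the sharpest slice" logic is the same.)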
B
Oh, okay. I would sure like to know the name of that.
A
Well, that's very good, Karan — thank you! Yeah! Okay, let's see, we're at the top of the hour, so I'll probably go through maybe one more thing before we go, if we have anything to mention before that, or —
G
Mary Ann Tiffany is an interesting lady. She's now retired, and she used to be one of the best takers of SEMs — scanning electron micrographs — of diatoms.
E
You know, people have trained neural nets on different classes of objects — like, they have a class object of an airplane. This has anything like 13 class objects, so they have existing models, existing spatial information, for the 3D supervision data. For 2.5D, I think they'll probably have depth maps, and the image itself for 2D supervision.
E
And, based on that, it tries to, you know —
E
— a 3D model of the, of the —
E
You know, the self-driving car aspect, because they have to generate a model that takes very low latency, that can generate the 3D, you know, heat map of —
E
This was pretty prominent, I think, in the 2000s, as a very accurate way of getting 3D data and generating a model based on that. So they have a very, very big data set of images, and they're trying to generate the point cloud here from all —
E
Is
so
it's
it's
very,
it
depends.
E
You
know
they
engage
the
shutter
speed.
That
is
there
the
because
the
aperture.
A
Well, that's — yeah, that's great, yeah! Let's keep moving on that. That's pretty good! I'm glad to see that.