From YouTube: GSoC Project Update (GNNs), July 7
Description
Attendees: Jiahang Li, Bradly Alicea, and Wataru Kawakami
A
Yeah, so, well, right. Our topic today, and it is just a tentative name, is Topology-Aware Temporal Graph Network for Cell Tracking. Yeah. It is our research idea, and maybe we will change this name in the future. But the key points are topology, graph neural networks, and cell tracking. That means we want to combine topological data analysis and graph neural networks to solve some issues regarding cell tracking.
A
Such as frame-to-frame methods and global tracking methods. The first one, the frame-to-frame methods, segment and connect cells across successive frames based on their visual similarity or the Euclidean distance, and the global tracking methods combine combinatorial algorithms with graph modeling. That means they model cells as nodes and connections between cells across successive frames as edges, and there are some different methods.
A
I've introduced the existing works from these points: the non-deep-learning methods, which are the classical methods, and, just like what I said, we have the frame-to-frame methods and the global tracking methods. We also have some deep learning methods, and there is one type of deep learning method which is again frame-to-frame.
A
That means people use convolutional neural networks to segment cells and use recurrent neural networks, or some other classical methods, to track the time series of cell development, respectively. These are some backgrounds of automated cell tracking. Probably I have limited knowledge of automated cell tracking, so if you have any comments, please add them.
A
Yes, you can just say it in our meeting. And next I want to briefly introduce our motivations, okay, our motivations for why we combine topology and graph neural networks to track cells. These are our motivations.
A
For example, the existing deep learning works only predict frame-to-frame cell associations rather than providing global tracking solutions like what I introduced before, and the global tracking solutions have been provided by some classical methods instead of deep learning methods. As for graph neural networks, they have a better capability to capture graph dependencies via message passing between nodes. And there is another motivation, which has not been demonstrated by this paper: the topological aspects of a graph.
A
I think that if we only consider the images, I mean the original formats of the data, we can hardly obtain the topological information from those networks.
A
So, given these graphs, or given these simplicial complexes, we have the capability to view this data from a topological perspective. And the second motivation is about a question: why introduce topological data analysis (TDA) into our analysis? Because from a lot of works we know that topology and geometry are so important for biological networks, and existing works...
A
Existing works lack consideration of the topological effects of biological data, especially those cell trees. And actually, in our idea, I would like to introduce the proposed Mapper into our analysis, and I will expand on that. So before diving into our idea, I would like to briefly introduce the idea of Mapper. It is a great work that extracts the topological information of a point cloud, such as this one, which is actually a hand.
A
Given
a
difficult
hand
on
the
member
to
utilize
the
function
or
lens
or
lens
function
to
map
this
points
cloud
into
a
one-dimensional
or
high
dimensional
layer,
and
you
will
see
that
this
will
assume
that
this
function
map
is
going
out
to
a
one-dimensional
data
and
or
we
can
say,
organization
mapping
and
we
will
partition
this
one
dimensional
impedance
into
multiple
intervals,
negative
covers
and
discovers
overlap
overlapping,
and
we
will
obtain
the
pre
image.
A
Oh okay, yeah. And then we will obtain the pre-images of this one-dimensional embedding, of these one-dimensional covers, back in the point cloud, so we will have multiple overlapping clusters of this point cloud. We will regard each cluster as a node, and if two clusters overlap, we will connect an edge between these two clusters. The benefit of Mapper is that it can help us obtain a high-level representation of a point cloud. Just like, if we are...
A
Hi, could you hear me now? Yes, yes, yeah. There were some problems with my network, sorry. So yes, the benefit of Mapper is that we can obtain some topological information, or, I can say, a high-level representation of a point cloud. If we were just directly looking at this point cloud, it would be a little bit difficult for us to find that it is actually a hand.
A
But if we look at such a Mapper graph of this point cloud, we find that it is very similar to a hand, yeah. So that is the point of Mapper, which is to extract high-level topological information from a point cloud. Actually, in TDA people like to use persistent homology to deal with such simplicial complexes or other things, but I will not introduce that.
A
I will not use persistent homology here, because it seems that you can use persistent homology to extract the topological information of the whole point cloud, but what we consider, what we focus on at the moment, is each point in the cloud instead of the overall cloud. I will explain this in detail later, yeah.
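The Mapper procedure just described (a lens maps the point cloud to one dimension, the range is split into overlapping intervals, each pre-image is clustered, each cluster becomes a node, and two overlapping clusters get an edge) can be sketched in a few lines. This is a generic toy illustration, not code from the project; the crude single-linkage `_split` helper and all parameter values are my own assumptions.

```python
import numpy as np
from itertools import combinations

def mapper_graph(points, lens, n_intervals=5, overlap=0.25):
    """Toy 1-D Mapper: lens -> overlapping intervals -> clusters -> nerve graph.

    `lens` maps the (n, d) point cloud to one scalar per point. Clustering
    inside each pre-image is a crude single-linkage split, just to keep the
    sketch dependency-free.
    """
    vals = lens(points)
    lo, hi = vals.min(), vals.max()
    length = (hi - lo) / n_intervals
    step = length * (1 - overlap)
    clusters = []  # each cluster is a set of point indices (one Mapper node)
    start = lo
    while start < hi:
        idx = np.where((vals >= start) & (vals <= start + length))[0]
        if len(idx):
            clusters.extend(_split(points, idx))
        start += step
    # nerve: connect two nodes if their point sets overlap
    edges = [(i, j) for i, j in combinations(range(len(clusters)), 2)
             if clusters[i] & clusters[j]]
    return clusters, edges

def _split(points, idx, thresh=1.0):
    """Greedy single-linkage clustering of one pre-image."""
    remaining = list(idx)
    out = []
    while remaining:
        seed = [remaining.pop(0)]
        changed = True
        while changed:
            changed = False
            for p in remaining[:]:
                if min(np.linalg.norm(points[p] - points[q]) for q in seed) < thresh:
                    seed.append(p); remaining.remove(p); changed = True
        out.append(set(seed))
    return out
```

On evenly spaced points along a line, this yields a path graph, so the Mapper output recovers the topology of a line, the same way the hand-shaped point cloud in the slides collapses to a hand-shaped graph.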
A
Okay, yeah. So I'd like to introduce the overall structure of our research idea. The given input is a video, a microscopy video consisting of a number of images, and the first part is to segment these images and extract the centroids, or positions, of the cell nuclei from them.
A
Yes, and this is stage one of the pipeline, and stage two is to build temporal graphs based on this, I mean the data, the output of the segmentation.
A
These two are both sets of nodes, these are the nodes there, and there will be temporal connections between them. And then, in the third stage of the pipeline, we will apply a graph neural network, or, we can say, a topology-aware graph neural network, to these temporal graphs, and we will obtain a series of embeddings like this.
A
Yeah, yes, and step four will be our task, such as link prediction based on these embeddings, yeah. Actually, I think there are a lot of tasks related to our problem, such as multi-object tracking, and I'm not sure whether they use link prediction to solve their problems.
A
Yes, because link prediction is a commonly used task for graph neural networks, but probably in other fields, such as object tracking, they use some other tasks. So in the future I would like to dive into other papers to see how they solve such problems. Yeah.
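The four stages just discussed (segment and extract nuclei, build a graph per frame, embed with a topology-aware GNN, then a task such as link prediction between frames) could be wired together roughly as below. Every name and signature here is a hypothetical sketch of the described pipeline, not the project's implementation; the kNN rule for stage two is also an assumption.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class FrameGraph:
    positions: np.ndarray   # (n_cells, 3) nucleus centroids for one frame
    edges: list             # (i, j) index pairs within the frame

def build_frame_graph(positions, k=3):
    """Stage 2: connect each nucleus to its k nearest neighbours in the frame."""
    edges = []
    for i in range(len(positions)):
        d = np.linalg.norm(positions - positions[i], axis=1)
        for j in np.argsort(d)[1:k + 1]:   # skip index 0, the node itself
            edges.append((i, int(j)))
    return FrameGraph(positions, edges)

def pipeline(video_frames, segment, embed, predict_links):
    """Stages 1-4 as plain function composition.

    segment       : image -> (n_cells, 3) centroid array     (stage 1)
    embed         : list[FrameGraph] -> list[np.ndarray]     (stage 3)
    predict_links : pair of consecutive embeddings -> links  (stage 4)
    """
    graphs = [build_frame_graph(segment(f)) for f in video_frames]  # stages 1-2
    embeddings = embed(graphs)                                      # stage 3
    return [predict_links(a, b) for a, b in zip(embeddings, embeddings[1:])]
```

The plug-in functions keep the stages swappable, which matches the discussion later in the meeting about possibly replacing individual stages with the cited paper's components.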
A
Yeah, so actually this topology-aware temporal graph network, or, we can say, TTGN, will be employed on each frame, and each frame will be represented as a graph. And actually we will find that this pipeline is composed of two parts.
A
The first part is this TTGN, which will be employed on each frame, and the second part is how to do message passing between successive frames. Actually, people like to use recurrent neural networks to pass information from one frame to the next frame. In the next slide I would like to describe the details of this TTGN instead of the RNN, because for now it is very simple to design such an RNN. Just look at this cell.
A
We
assume
that
we
have
a
td
generator
on
this
frame
and
with
this
ttr
we
will
have
a
high
level
embedding
of
this
cell,
and
this
is
the
the
first
import
of
the
iron
and
the
second
impulse
is
the
hidden
embedding
of
this
air
and
given
these
two
inputs
to
have
a
higher
level
representation
of
this
node,
which
is
actually
the
output
of
rn,
yes
in
source,
we
can
describe
an
iron
like
this,
but
probably,
and
future
works.
Maybe
we
would,
I
mean,
would
change
the
structure
of
this
iron?
Yes,
you,
yes,.
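As described, the recurrence takes two inputs per cell, the TTGN embedding from the current frame and the hidden embedding carried over from the previous frame, and outputs an updated representation. Below is a minimal NumPy GRU-style sketch; the GRU gating, shapes, and initialization are my assumptions, since the talk only commits to "some RNN" whose structure may change.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CellGRUCell:
    """Per-cell recurrence: h_t = GRU(ttgn_embedding_t, h_{t-1})."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # one weight matrix per gate, each acting on [input, hidden]
        self.Wz, self.Wr, self.Wh = (rng.normal(0, 0.1, (dim, 2 * dim))
                                     for _ in range(3))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)             # update gate
        r = sigmoid(self.Wr @ xh)             # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde      # new hidden state

def track_cell(frame_embeddings, cell):
    """Run the recurrence over one cell's per-frame TTGN embeddings."""
    h = np.zeros(frame_embeddings[0].shape[1])
    for emb in frame_embeddings:              # one (n_cells, dim) array per frame
        h = cell.step(emb[0], h)              # row 0: the tracked cell
    return h
```

The hidden state stays bounded because each step is a gated convex combination of the previous state and a tanh candidate, which is one reason GRU-style cells are a common default for this kind of temporal propagation.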
B
Some are going to be different, and the way that they're connected is through these message passes, which I assume means that you're passing information about, like, the change in the graph structure or the embedding structure. So, like, if, say for example, a cell divides between one frame and the next frame, then that information would be passed. Or would it? How would that work, or would it just predict things? Would it predict the...
A
Yeah, okay, yeah. So actually, at the moment I would assume that, just for example, let's look at this cell. Yes, yeah. Because we will assume that, if we think about the correspondence between two successive frames, then we think that this one cell will divide into two cells, but we will not say two cells...
A
I
mean
turn
into
one
cell
right
right,
so
so
this
will
happen
because
there
are
some
weird
behaviors
of
these
sales
buds.
But
at
the
moment
we
will
not
consider
it
to
just
consider
the
common
things.
The
the
common
case
yeah
that
so
so,
if
we
to
think
of
to
think
about
this
cell,
that
means
the
source
of
destination
from
the
last
frame
will
be
only
one
cell
right
right.
A
So
we
can
think
we
can
think
that
there
is.
There
is
a
recurrent
neural
networks
and
there
is
ironing
and
this
rna
will
take
two
inputs
and
the
first
input
is
the
june
on
the
outputs
of
general
on
this
frame,
and
we
know
that
each
node
has
its
its
high
level.
Embedding
output
argument,
so
that
is
known
as
a
high-level
embedding,
but
this
this
is
the
first
impulse
of
the
argument
and
the
second
impulse
is.
A
So in this way, this RNN will probably pass the message from this frame to the next frame.
A
Yeah, yeah. So actually the RNN is to propagate information between frames, and the GNN is to propagate information within a single frame. Yeah, okay, yeah. So, any questions, any more questions on this slide?
B
I'm just taking some notes, yeah. I don't have any questions; it looks good. I mean, that's the framework. So now, you know, I'll just mention that some of the images that you'll work with, you know, sometimes it's not perfect information in terms of, like, cell division and segmenting cells. I mean, you can actually, you can probably get a good sense of how many cells there are.
A
Uh-huh, do you mean the, I'm sorry, do you mean in the body? Do you mean this one, when...
B
In the segmentation process you get, like, you know, you have to play with the, and this is maybe just for step one, you have to play with the different properties of the image so that the images are segmented properly. Because sometimes, if the image is not quite right, you get, you know, saturation or something, and the cells aren't segmented properly.
B
If you're looking for, like, nuclei of cells, which is probably the best case here, it probably won't be too bad, because the nuclei are pretty easily identifiable, and so you'll just have, like, a map of these dots, and then you can take that and make an embedding. But just to let you know: be sure to pay attention to that step one, to make sure that the images are segmented properly and that you don't have a lot of noise in the image, noise that then, when you segment it and you get the information, means you're identifying cells that aren't there. Because that can...
B
Yeah, it's usually good. I mean, it's usually pretty good, but sometimes it can be, you know... Just because of the way that microscopy data is collected, you get a lot of what they call autofluorescence, which is noise in images. In some images that's not so bad; in other images it can be worse. So that might carry over, so yeah. I would just...
B
I
would
just
maybe
have
like
a
step
like
a
step,
one
and
a
half
where
you
like
kind
of
look
at
the
images
and
then
make
sure
that
the
segmentations
make
sense.
Like
you
know,
there
are
no
obvious,
like
errors
in
in
the
segmentation.
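The suggested "step one and a half" could start as small as flagging frames whose detected nucleus count jumps implausibly relative to the previous frame, since cell counts in an embryo change gradually. This heuristic is my own suggestion, not part of the project:

```python
def flag_suspect_frames(counts, max_jump=3):
    """Return frame indices whose nucleus count changes by more than
    `max_jump` versus the previous frame: a likely segmentation error
    (noise detected as cells, or cells missed) worth eyeballing."""
    return [i for i in range(1, len(counts))
            if abs(counts[i] - counts[i - 1]) > max_jump]
```

A flagged frame is not necessarily wrong, but it is exactly the kind of frame where autofluorescence or saturation tends to show up, so reviewing only those frames keeps the manual check cheap.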
A
Yeah, I got your point, yes, yeah. Actually, you remember this one, sorry, this paper, the EWV paper? Actually, this paper has done some work like segmenting those images to obtain the positions of the cell nuclei, and I think maybe we can refer to how they implement such a step.
B
Yeah, there's probably, yeah, there's an automated way to check, probably. Because, you know, I mean, what the deep learning is doing is basically taking an image, taking out, like, markers that you define, and then giving you some numbers, or some other sort of image that just gives you those points. But if there are points that it misidentifies, then you have a problem. So you have to make sure there's a check, you have a checking step, making sure that you don't have any glaring errors.
B
Otherwise your different frames are going to be inconsistent. You won't be able to, you know, you're going to message-pass something that isn't there, and if you put the annotations on it or something later, it's not going to make any sense. So, but I mean, you know, that's something that we've just had problems with in years past, where, you know, we don't want to... Especially when you do, like, a large batch of data, you know, you sometimes get a little bit of an issue with some of the images.
B
Okay, and then, yeah, I like this. I like the structure of this, where you're message-passing to different frames. In years past we've tried to work with, like, the temporal aspect, and we've kind of focused on the different frames as discrete things. So, like, you know, that temporal flow is the thing; it's really kind of tricky to get right. So that's, like, you know, message passing is a good...
B
Well, you know, I don't know, you haven't implemented it yet, so we'll see how well it works. But I think I understand what you're doing: you're just basically taking information from one, you know, one frame, applying it to another frame; you're passing a message about what was and then what's in the next frame; and then you keep doing that across the different frames, and that gives you, yeah.
A
Yeah, actually in that paper, in the EWV paper, yeah, that paper, what they do is actually this: they did not model those images like this graph. They just model the whole cell trajectories as one directed graph, and they connect nodes between successive frames.
A
If
the
nodes,
the
nodes,
have
mother
and
daughter
relationship,
for
example,
and
if
in
describing
in
such
an
example,
though
that's
the
paper
that
ewz
wcv
paper
will
not
connect
nodes
but
notes,
connect.
Nodes
between
I
mean
will
not
put
the
edges
in
in
the
frames
such
such
as
these
edges.
A
They
will
only
connect
nodes
between
two
frames
with
such
a
temporal
connections,
and
this
temporal
connection
in
their
work
is
the
directed
edges
in
a
directed
graph
neural
network,
and
they
would
just
propagate
information
like
what
graphene
natural
would
do
and
yeah
yeah.
A
They
would
check
the
euclidean
positions
between
those
nodes,
for
example,
if
in
this
frame,
these
two
notes
are
very
close
to
the
note
in
this
frame,
or
we
can
say
we
can
put
these
notes,
and
we
just
drag
this
note
in
this
frame,
and
then
we
will
check
their
ingredient
distance
between
these
three
nodes
and
if
they
are
very
close,
they
will
connect
edges
between
these
nodes.
Yes,
it's
like
what
can
we
do?
Can
graph
k,
neuron
k,
nearest
neighborhood
graph
would
do
yeah.
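The distance rule described here, connecting a node to the spatially nearest nodes of the previous frame, is essentially a k-nearest-neighbor step. A generic sketch of building such candidate tracking edges (not the cited paper's actual code; the value of `k` and the purely Euclidean criterion are assumptions):

```python
import numpy as np

def temporal_candidate_edges(prev_pos, next_pos, k=2):
    """For each cell in the next frame, link it to the k nearest cells in the
    previous frame by Euclidean distance. Returns (prev_idx, next_idx) pairs:
    candidate tracking edges for the model to score later."""
    edges = []
    for j, p in enumerate(next_pos):
        dists = np.linalg.norm(prev_pos - p, axis=1)
        for i in np.argsort(dists)[:k]:
            edges.append((int(i), j))
    return edges
```

The resulting pairs are only candidates; whether a pair is a true same-cell or mother-daughter link is exactly what the downstream link-prediction step is supposed to decide.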
B
Well, yeah, that's good. I mean, just, so if, like, you have a cell that divides and it results in two cells, those cells, if they're near one another, get connected. What if they divide and go to the opposite ends, or opposite sides, of the embryo? I mean, because that's what happens in C. elegans: you get, like, a division, and one goes left and one goes right.
A
I'm sorry, do you mean, do you mean one daughter goes out to the front and the other goes out to the...
A
Yeah, I think it is a question, yes. Because actually it is related to how we would model such a graph in a single frame, right, if we used kNN to build such a graph. Maybe we would not expect, I mean... But actually my point is that, I think, look at this edge, like these two edges: these two come from the same...
A
I
don't
know
another
stair,
but
this
this
doesn't
mean
that
this
tools
come
come
from
the
same
marginal
yeah
like
yes,
so
think
this
this
ages.
This
you
see
this
simple
in
the
same
frame
just
help
us
to
propagate
to
propagate
information,
propagates
messages
from
the
graph.
It
doesn't
mean
that
when
these
connect,
who
knows
then
two
notes
come
from
the
same
model
sayer,
it
doesn't
mean
that
right.
B
Yeah, yeah, it doesn't have to be, so yeah, I guess that makes sense. You could just connect it, like, using a kNN-type method and then just annotate the embeddings later and say these are related, these are daughter cells, these aren't daughter cells, and, you know, we can make, we can talk about, like, we can make inferences from the embeddings later. I guess it's just a matter of, like, getting a quick embedding based on a nearest-neighbor criterion, because we can use other criteria; like, we can use any criterion.
A
Yeah, yeah, actually, because when we build such a graph in a single frame we are considering its distance, but it can be some other distance instead of the Euclidean distance. I mean, if we are regarding such a frame as a manifold, or some other geometry, maybe we will use some other kind of distance. I mean, yeah, but, yes, yeah, actually I think it is not our most important point here.
A
Yes, because our point is, I mean, how to model such a graph in a single frame. Yes, we can discuss it, but it's not our most important point. One point is that we build such a graph for each frame because we want to propagate information within each frame, and, more importantly (I will explain in the next slide), this message passing in the single frame will help us to perceive the topological information of this network in a single frame.
A
Yes, actually there are a lot of choices, but the most important reason we build such a graph for each frame is that we want to use the topological information of this frame, of these points at this time. Yes, and we think, yeah, I'll explain in detail in the next slide. Yeah, alright, okay, yes. And actually, I think you asked me these questions.
A
I
think
maybe
I
I
didn't
explain
clearly
our
motivation,
there's
one
point
I
didn't
explain
is
that
just
look
at
this
slide
is
that
actually
we'll
know
that
graph
new
network
on
a
point
cloud
can
help
us
to
propagate
message.
That
means
each
node
each
node
in
each
point
in
that
point
cloud
can
perceive,
can
obtain
the
information
of
its
located.
A
That
is
its
neighborhood,
but
why
we
use
mapper
is
that
we
know
that
a
shadow
or
a
shadow
or
a
german
with
small
number
of
layers
is,
is
always
the
best
best
structure
of
this
kind
of
german,
such
as
a
two
or
three
layer
dragon
is,
is
better
than
a
five
or
six
layer,
and
some
research
works
has
has
showed
that,
and
if
the
number
of
layers
of
this
drain
is
very
is
very
small.
A
Is
then,
as
for
each
points
in
that
point
cloud,
it
would
only
perceive
information
of
this,
this
small
locality,
this
small
range
or
this
neighborhood-
yes,
but
this
small
neighborhood.
The
information
of
this
neighborhood
will
not
help
the
node
in
this
in
in
this
range
to
perceive
the
topological
information
of
this
locality.
Yes-
and
I
think
the
more
important
is
that,
if
a
note,
a
points
in
here
can
perceive
the
topological
information
in
this
range.
A
Yes,
like
like,
like
you,
can
perceive
the
position
or
some
kind
of
means
of
positions
in
in
this
point
cloud
or
in
this
topology.
Maybe
it
will
help
you
to
I.
I
don't
know
how
to
explain
it.
Clearly,
maybe
you
helped
to
track
sales
like,
like
I
mean
I
mean
just
think
of
it
as
embryo,
as
some
some
kind
of
embryos
and
embryo
will
say
the
sales
here,
the
sales
in
a
certain
position
of
this
topology
will
develop.
It
will
develop
into
some
certain
positions
like
in
the
in
the
next
frame.
B
Yeah, I think, yeah, I think so. It's like in development, you do have, in actual development, something called positional information, which is where the cell is responding to its location in relation to other cells, or where it is in the embryo, and that might be the same principle here. Where you have, you know, locally, it knows what's going on locally, but not necessarily globally, and that's, but if you piece it together, you can get something.
B
Like this, a hand: you have this point cloud of different points; the points, when you respond, or they have, like, they describe their neighbors, maybe. But then, if you go to the global scale, you know, you need to have... All the cells are kind of doing their own thing, and then you end up with this hand, or this structure here on the right. I mean, I'm just, yeah, I understand kind of what you're saying about, like, the process of analyzing this.
B
Yeah, that's, oh, okay, yeah. Is Mapper a common technique for TDA? I'm not familiar with it so much.
A
Oh okay, okay, yeah, yeah, yes. And yes, just like you said, the point is that it's like a coarse graining: yes, you obtain an alternative, high-level representation of this point cloud.
A
Yes, I mean, you can think that if we use Mapper to model the point cloud as this kind of network, then, say we have points here and we will have points in here, actually these points will be located in here, and it's like a cluster, and then you are doing message passing on this network.
A
Yeah, so, okay, okay, yeah. So, I mean, my point is: let's look at the points here. We can think of Mapper as some kind of clustering, or some kind of coarse graining, and Mapper has cast the nodes, the points here, into a single node here, right. And if we do message passing on such a network...
A
Yep. Because we know that sometimes the graph neural network model has to be shallow; they can only have a small number of layers, such as two or three layers. Yeah, I'm not sure how much of this I have introduced, but, I mean, the point is that, because we know the graph neural network model is always shallow, that means they can only have a small number of layers, such as two or three layers.
A
Then such a node can hardly perceive the topological information of this locality, because in this range a point doesn't know its position, or the topological information of it, right. Well, I mean, we are more considering, as for this node, such a range of topological information, right, yeah. So, I mean, say we would like to know which finger the point is located in.
A
So
if
we
can
perceive
the
information
of
this
range
at
these
three
fingers,
then
we
will
note
these
points
located
in
this.
This
finger,
the
the
smallest
finger,
but
if,
if
we
can
model
a
point
called
as
this
network-
and
we
just
use
a
shadow
graphic
natural
model
on
this
network,
then
by
the
message
passing
this
node
will
perceive
information
of
this
locality,
then
it
will
help
us
to
perceive
the
topological
information
of
this
locality
right
yeah.
A
Yeah, that's great, yeah, yeah. So that is why I didn't use persistent homology or persistence diagrams here, because, in my understanding, I think persistent homology would help us to...
A
But, I mean, in other words, if we are only considering a subgraph, such a small range, and we use persistent homology or persistence diagrams on this small group, this subgraph, then we can only obtain a high-level representation of this subgraph. It is also okay; maybe we try it in the future. But if we only think of it just like what the topological graph neural network works have done... I remember that paper; that work was focusing on graph classification, and graph classification is focused on the whole system...
A
Instead of a small locality. But here, what we focus on is the small locality, or a certain point, yeah. But why didn't I consider persistent homology here? Yes, just like what I said: if we only consider a part of this point cloud and we use persistent homology or persistence diagrams on that part, then we can obtain the high-level topological representation of this subgraph, and then it would also help the model, I mean, to perceive the topological information of this locality.
A
Wait a moment, let me, okay. So, could you see my screen now? Yes, yeah, yeah. So let's make it short, because I think it is very late for Wataru, yeah, yeah. So this is the last slide, which is the TTGN, and the TTGN will be employed on each frame.
A
I mean, the graph of each frame. And, like what I said, these temporal connections will be dealt with, will be operated on, by the recurrent neural network.
A
So, as for the graph of each single frame, we will consider the TTGN, which is the topology-aware temporal graph network, and this is a graph neural network which combines an ordinary graph neural network and a topological Mapper. The first part is the GNN, which just simply propagates information on this graph to obtain an ordinary embedding of this graph, which is this one. And if you cannot hear me clearly, or something else, just type in Slack, so I would know, yeah. Oh.
A
Yeah,
okay,
that's
cool
yeah,
and
and
as
for
this
graph,
we
know
we
would
extract
the
3d
position
of
syneukai
or
centuries
or
something
else,
and
we
would
call
this
these
positions
as
topological
information
or
the
ordinary
political
ignition,
such
as
three-dimensional
coordinates
as
this
matrix
and
we
have
such
length
function.
A
This
function
will
map
the
positions
matrix
into
one
dimensional,
one-dimensional,
embedding
and
this
lens
function.
I
remember
this.
One
paper
called
deep
graph
mapper
and
I
have
noise
introduced
here,
and
I
will
share
this
paper
in
slack
and
this
deep
graph
mapper.
This
paper
also
motivates
also
motivates
our
design
of
this
ttj,
and
in
that
paper
they
use
graphene
network
here
to
map
the
embedding
such
embedding
into
the
one-dimensional.
Embedding
such
a
matrix
into
one-dimensional
embedding
and
we
think
the
graphical
natural
can
help
out
it.
A
So, to show the topological information from a certain perspective, we will have such a one-dimensional embedding, and we extract the overlapping intervals, such as this cover of the embedding, and we obtain its pre-image, and we will do some pooling, some simple pooling, such as obtaining the mean value of each interval, or each cluster.
A
Then we have a pooled graph, yes, and it is very similar to that hand point cloud in the previous slides, yes. And you might remember that we had some point cloud and we used Mapper to obtain a high-level network; it is very similar to the process here. We have an original point cloud, or original network, and we use Mapper to obtain a pooled graph like this. And actually, in the prac...
A
In
practical
case,
this
graph
will
have
a
lot
of
points
or
lots
of
notes,
and
this
graph
will
have
a
small
number
of
notes,
and
then
we
will
use
another
graphic
network
or
I
would
I
would
recommend
using
graphing
natural
here,
but
we
may
have
some
other
choices
to
make
each
cell
easier
here
to
leverage
its
high
level
topological
position
or
topological
information
in
their
certain
frame.
A
That
means
a
node
here
will
perceive
the
topological
information
of
its
locality,
then,
after
this
graph
natural,
this
small,
very
small
graph
in
the
natural
world,
because
this
graph
is
small
and
probably
its
dimension
of
ins.
Its
feature
is
also
small,
so
this
graph
natural
will
small
will
be
small
and
will
have
another
embedding
and
we'll
find
that
this
embedding
is
actually
a
4.
A
The
note
here
that
knows
the
notes
in
this
this
graph
instead
of
this
graph
instead
of
the
original
graph.
So
we
will
map
all
combination
the
embedding
here
and
in
this
embedding
we
can
combine
these
two
embedding
to
form
the
outputs
in
batting.
Yes,
maybe
it
is
concatenation-
or
maybe
it's
just
some
to
just
sum
to
embedding
yeah.
A
Yes,
we
can,
we
can
try,
we
can
do
some
experiment
to
show
which
is
better
and
after
that
we
will
have
an
output
embedding,
and
we
find
that
in
this
opening
padding
we
have
the
information
of
original
postcards
original
network
and
we
also
have
information
of
the
network
of
the
mapper
after
the
map.
Processing
and
the
information
of
this
network
will
help
the
nodes
in
the
original
network
to
receive
the
topological
information
in
its
locality
in
a
larger
range
of
locality
yeah.
So
this
is
the
design
of
ttj.
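Reading the TTGN description back: an ordinary GNN embeds the per-frame graph; a lens over the 3D nucleus positions gives a one-dimensional embedding; overlapping intervals are mean-pooled into a small Mapper graph; a second, tiny GNN runs on that pooled graph; and each cell's output embedding combines the two (here by concatenation). The sketch below is my own untrained, NumPy-only approximation of that flow; every design choice not stated in the talk (the mean-aggregation layer, the first-coordinate lens, the interval counts) is an assumption.

```python
import numpy as np

def gnn_layer(adj, feats):
    """One untrained mean-aggregation GNN layer (stand-in for a real GNN)."""
    deg = adj.sum(1, keepdims=True) + 1.0
    return (adj @ feats + feats) / deg           # average self + neighbours

def ttgn_embed(adj, positions, n_intervals=4, overlap=0.5):
    """TTGN sketch: ordinary embedding + Mapper-pooled embedding, concatenated."""
    h = gnn_layer(adj, positions)                # part 1: ordinary GNN embedding
    lens = positions[:, 0]                       # toy lens: first coordinate
    lo, hi = lens.min(), lens.max()
    length = (hi - lo) / n_intervals
    step = length * (1 - overlap)
    pooled, member = [], []                      # pooled features, memberships
    start = lo
    while start <= hi:
        idx = np.where((lens >= start) & (lens <= start + length))[0]
        if len(idx):
            pooled.append(h[idx].mean(0))        # mean pooling per interval
            member.append(idx)
        start += step
    pooled = np.stack(pooled)
    # part 2: a tiny GNN on the pooled graph (edge = overlapping intervals)
    m = len(pooled)
    padj = np.zeros((m, m))
    for i in range(m):
        for j in range(i + 1, m):
            if np.intersect1d(member[i], member[j]).size:
                padj[i, j] = padj[j, i] = 1.0
    pooled = gnn_layer(padj, pooled)
    # broadcast each node's pooled-cluster embedding back, then concatenate
    back = np.zeros_like(h)
    for c, idx in enumerate(member):
        back[idx] = pooled[c]                    # later clusters overwrite overlaps
    return np.concatenate([h, back], axis=1)     # output embedding per cell
```

Swapping the final `np.concatenate` for an elementwise sum gives the other combination mentioned; which works better would be an experiment, as noted in the meeting.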
B
That's good. I think I don't have any real questions right now; thanks for presenting on that. And I think it's, you know, once you get, like, some examples running, you know, maybe you can do a small-scale run-through with, like, the actual data, just to show that it works, and then scale it up, you know, like, when you start to get your different stages accomplished and then you put it into this. You know, I wouldn't, like, do the whole thing at once.
B
I'd start very small and then show how that works, step by step, and then you can scale up and validate it. And, I mean, you know, the validation, I don't think it's going to be... I mean, you're probably just going to use the data that you have, but, I don't know, well, you know, it might not be a bad idea to actually have, like, a validation step where you're kind of making sure that it's working, you know, it's matching what you expect in terms of the output.
A
Yeah, yeah, actually, because we're not quite sure this structure will work, but we will need some outputs for the GSoC projects. So, basically, I mean, if this structure doesn't work for the GSoC projects... Because you will find that our design of this structure is aligned with what we designed for GSoC, what we designed for DevoGraph. But if the output of this structure is not very good, maybe we will consider, we'll consider using the structure from this EWV paper.
A
Yes,
yeah.
Yes,
yes,
because
it's
because
there's
a
few
words
regarding
how
to
grow,
how
to
improve
graph
neural
networks
on
these
developmental
analysis,
and
we
can
not
sure
it.
It's
not
like
the
convolution
on
your
networks.
We
have
such
as
the
rest
nets.
We
have
a
lot
of
pre-trend
network
and
a
lot
of
large
data
sets
such
as
the
imaginext
has
guaranteed
that
this
this
convolutional
network
we
work,
can
work
on
lots
of
large
data
sets,
but
actually,
as
for
the
graphing
network,
we
have
we
have
no
such
a
classical.
A
I
mean
such
a
classic
natural
structure
which
can
really
work
on
various
types
of
data
sets.
So
all
we
can
do
is
just
to
try
this
network
yeah
right,
yeah,
yeah
yeah,
so
yes,
so,
and
actually,
as
for
the
structure,
there
are
still
actually
I'm
actually
we've
we've
implemented
some
drafts
of
the
step,
one
and
step
two
and
I'm
still
implementing
the
step
three.
Okay.
Now
and
and
probably
because
we
want
to
try
some
real
that
I
said
some
data
set.
A
B
C
A
Yeah, but actually I've contacted the authors of that ELV paper, and they have done some graph neural network work on these videos. I remember they said they will release a new version of the paper and their source code on GitHub this week, or maybe next week. I think if they release their source code, it will help you build the pipeline for step one. What do you think?
B
So I guess I just want to make sure that you're dividing the work up, and it looks like you are. Now, for GSoC, you have to submit some work, and of course, again, you're kind of working together, so it's important to make sure that you each have your own work defined.
B
So you can write it up, because when you do your final evaluation, which is going to be later in the summer or in the fall, you're going to have to write up what you did and submit something to GSoC. I don't know yet how we'll divide that up, but we'll figure out how to divide up the parts for that. Just to make you aware of that.
B
The other thing is, I hope that as you're working, you take notes. We have different methods for taking notes, and you're welcome to use your own method, but we usually use different tools, like something called Obsidian, which is a nice tool for taking notes and organizing them. We also use other things like README files, which you can also use to make sure that you're keeping track of everything. It looks like you've got slides.
B
At least Jiahang has slides that he's kind of going through and putting things together in. I just want to make sure that we have that, so that when we go back at the end of the project, or next year or something, we can go back and look and see what was done, because this is very complicated work, and we have to have that.
C
Speaking of dividing our work, yeah.
C
I'm thinking of trying some GNN transformers. Actually, I'm interested in transformers, so I'm searching for some works on GNN transformers. I found some of them applied to pedestrian tracking or some other tasks, so I'll try to find some. Yeah.
B
Yeah, and I think it's good that you're dividing up the parts, so you have an infrastructure to go forward on. I don't know what the GNN part will look like; I mean, we have some good candidates and we'll see what works well. That's the other thing, too: we have these different candidate approaches.
B
We can try different things and compare them, I think, because it's so open; we don't really know what will work well. Having a couple of different things that are working will help us understand which ones are better and which ones are worse.
B
So I don't want to put too much pressure on you to go implement everything, but maybe we could have two different approaches, as rival approaches, and then see which ones work and which ones don't, because we don't really know.
B
I mean, this looks like it'll work, but sometimes it doesn't; sometimes it's easier said than done. So just keep that in mind. Yeah.
A
Yeah, okay. I think we will try. Oh, actually, as for what you said about the graph transformer: I've just assumed that we put some GNN on a single frame, and it can be any kind of ordinary GNN, such as GCN, GAT, or GraphSAGE. And yes, it could be the graph transformer. I'm not sure, did you mean the Graph Transformer Networks paper, GTN?
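As a rough, dependency-free illustration of what "any kind of ordinary GNN on a single frame" means, here is one mean-aggregation message-passing step in the spirit of GraphSAGE; the node ids, features, and adjacency are made up, and a real implementation would use learned weight matrices in a GNN library rather than plain averaging.

```python
# Minimal sketch of one message-passing round on a single frame's cell
# graph; all data here is invented for illustration.

def mean_aggregate(features, adjacency):
    """One step: each node's new feature is the mean of its own feature
    and its neighbours' features (GraphSAGE-style mean aggregation,
    without the learned linear transform)."""
    new_features = {}
    for node, feat in features.items():
        pooled = [feat] + [features[n] for n in adjacency.get(node, [])]
        dim = len(feat)
        new_features[node] = tuple(
            sum(vec[i] for vec in pooled) / len(pooled) for i in range(dim)
        )
    return new_features

# Toy graph: three cells whose features could be centroid coordinates.
feats = {"a": (0.0, 0.0), "b": (2.0, 0.0), "c": (0.0, 2.0)}
adj = {"a": ["b", "c"], "b": ["a"], "c": ["a"]}
out = mean_aggregate(feats, adj)
```

GCN, GAT, and GraphSAGE all follow this aggregate-from-neighbours pattern; they differ mainly in how neighbour messages are weighted and transformed.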
A
Yes, okay, okay. I remember that paper, but I remember it was designed for heterogeneous graphs, right? Okay, yeah, I will check this, because my bachelor's dissertation is about heterogeneous graph neural networks, so I've checked this paper before. Sorry.
C
I'm still a beginner in GNNs, so I don't yet understand the different types of GNNs.
A
A
If we want to do something with this structure, we have to compare different structures in our model, because we haven't decided which structure is better, which structure is best for our setting. So we need to compare them. The graph transformer is a good choice, but actually I remember that GTN is designed for heterogeneous graphs instead of our graph, because it seems that our graph will be a homogeneous graph. I think you can search for another work, although I don't remember its name.
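To make the homogeneous-versus-heterogeneous distinction concrete, here is a minimal data-structure contrast; the node names and edge types are invented for this sketch. A homogeneous graph has a single node type and edge type, while the heterogeneous graphs that GTN targets key nodes and edges by type.

```python
# Homogeneous: one node type, one edge type, a single edge list.
homogeneous = {
    "nodes": ["cell_1", "cell_2", "cell_3"],
    "edges": [("cell_1", "cell_2"), ("cell_2", "cell_3")],
}

# Heterogeneous: nodes and edges are grouped by type, which is the
# structure heterogeneous-graph models like GTN expect.
heterogeneous = {
    "nodes": {
        "cell": ["cell_1", "cell_2"],
        "lineage": ["AB", "P1"],
    },
    "edges": {
        ("cell", "divides_into", "cell"): [("cell_1", "cell_2")],
        ("cell", "member_of", "lineage"): [("cell_1", "AB")],
    },
}

# A homogeneous model sees one relation; a heterogeneous model
# iterates over the distinct edge types.
n_edge_types_homogeneous = 1
n_edge_types_heterogeneous = len(heterogeneous["edges"])
```

A per-frame cell graph with one kind of node (cells) and one kind of edge (spatial adjacency) is the homogeneous case, which is why a heterogeneous-graph method may not be the right fit here.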
A
You can just search for something like "transformer for graphs" or "transformer on graphs". It has a similar name, but it is actually totally different from GTN. And last year there was a competition on the Open Graph Benchmark; this organization held a competition to compare which graph neural networks perform best, and I remember the first-ranked entry was not any kind of graph neural network.
A
A
Yes, it's very weird, because that transformer does not use any type of message passing, or, it is not accurate to say that; I mean, it is not similar to any kind of existing graph neural network. I think you can try to search for "transformer on graphs" instead of "graph transformer", because the graph transformer, or we can say GTN, is actually designed specifically for heterogeneous graphs.
A
Okay, yeah. So, oh, by the way, Bradly, you just mean that we need to make some notes or some logs, right? To log our procedure? Yeah.
B
B
Your points, yeah. And you know, it could be in the form of a digital notebook, too; we've often done that in GSoC projects. It could be a Colab notebook, and you can save text in there as well as code. That's actually a place where you can put everything in order, so it has a logical flow to it; you can see the steps, link the code to it, and all that.
A
Okay, okay, I understand. Yes, okay. And actually, I think, although we cannot be sure that this idea will work, it seems that this work is aligned with the GSoC project's design.
A
So if it really works, besides submitting some outputs to GSoC, we can also submit some papers to journals and conferences.
B
B
A
A
Yes, and I think from this week or next week we will try to write some code and do some experiments. Yes, I think so, yeah.
B
Okay, yeah, that's very good. Definitely. And don't just go all over the place; if you do an experiment, do a very simple comparison. You know, you might try two different models, or you might try the same model with two or three different settings.
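The "same model, a few settings" comparison could be organized as a small grid, sketched below; `train_and_score` is a placeholder stub, and the model and configuration names are assumptions, not the project's actual setup.

```python
# Hypothetical experiment grid for head-to-head comparison of a few
# candidate models and settings; everything here is illustrative.
import itertools

def train_and_score(model_name, hidden_dim, lr):
    """Placeholder for a real training run; returns a fake score that is
    deterministic within one process so the loop is runnable."""
    return hash((model_name, hidden_dim, lr)) % 100 / 100.0

grid = {
    "model_name": ["gcn", "graphsage"],
    "hidden_dim": [32, 64],
    "lr": [1e-3, 1e-2],
}

results = []
for model, dim, lr in itertools.product(*grid.values()):
    score = train_and_score(model, dim, lr)
    results.append(((model, dim, lr), score))

# Rank the candidate settings and keep the best one.
best_config, best_score = max(results, key=lambda item: item[1])
```

Keeping the grid this small (two models, two or three settings each) matches the advice above: it stays tractable while still showing which candidate is better.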
B
You know, parameter values or configurations, just to show what maybe works better. Think about what the best set of things is that you can put head-to-head, and then you can test them out that way. Otherwise you're just going to get bogged down thinking about every little thing. The best way to do it is to have a couple of candidates that you're going to go ahead and test out, and see which ones are better and which ones are easier to implement, and then we can,
B
we can evaluate it when you get the results, yeah.
A
A
I'm okay, yeah, okay. So, are we done today?