From YouTube: HTM Hackers' Hangout - Sep 6, 2019
Description
Let’s talk about:
- Hex grids
- 2D Object Recognition Project
- Elephants!
- Brains@Bay Meetup
- nupic.torch contribution pipeline (WIP)
Discussion at https://discourse.numenta.org/t/htm-hackers-hangout-sep-6-2019/6553
Start recording — welcome to the HTM Hackers' Hangout. It's nine o'clock, Friday, September sixth, and we've got four or five people connected right now — great. So this is our first time on Zoom; thanks for trying it out. Anybody is free to chime in at any time if they have a comment. I'm going to go through an agenda and talk about some points, and then we'll have, you know, some conversations on the topics. But I am sort of stealing the screen right now to go over the agenda.
Real quick — so, right, share... I want to share... there we go. Again, bear with me, because this is the first time. I think I can just share my whole desktop, right? It's not working! Hold on just a moment: Google Chrome... no, it's going to be easier if I do the whole thing. Okay, there we go. You should be able to see this Hackers' Hangout — that's the starting point. Yeah, all right!
So here's my agenda. We're going to talk about the discussions we've been having recently about hex grids, which have been super interesting — that's Mark Brown, who's online. If you can't join in, I understand, Mark, but we'll certainly continue this conversation on the forums. I'll give an update on the 2D object recognition stuff, which I haven't had much time to work on lately. The elephants comment — I want to explain, I want to open for conversation, this topic about hierarchy a little bit, and we'll just leave it at that.
Let's see... so the interface is blinking at me. It just says "ok" — still figuring this out. So that was a chat message; I see how it's indicated — got it. And then I also want to mention the Brains@Bay meetup that we're doing — I'll explain a little bit about it — and what I've been doing on nupic.torch to create a proper open source contribution pipeline: automation, continuous integration, etc.
Alright, so that's what I'm going to talk about. Hello, everybody — since this is a meeting, maybe we could... if you want to say a word about why you're interested or what you're working on or anything, I would be happy for all of you to do that. If you have a chance, there's something in the Zoom UI that lets you raise your hand — I don't remember where it is, but there's some way to raise your hand, so I'm going to be looking for that.
So he's interested — but I'll go over the latest thinking about that and point to Mark's hex grids forum thread, which is in the link that I sent. So look, I'm going to do that: I'm going to switch to another window and go to a whiteboard. Okay — you guys can hear me okay? Let me adjust my mic a little. Alright.
So here we have a diagram of a little neuron. I'm going to try to sort of describe this issue, and hopefully you're somewhat familiar with the HTM neuron. These neurons have axons — this is what I got from, this is what I talked with Jeff about for a while: a neuron has an axon that comes out.
This is Jeff's understanding right now: the axon bifurcates — it splits in two — and then half of it, or one path, leads outside of the cortex; it goes out of the cortex, and the other one travels within the cortex, going somewhere else. So the idea that we have about this in HTM is — okay, really long distances. So we're not within the same sort of area of the cortex, but it will reach another point and create these tufts — places where it will connect to dendrites in another area of cortex, right?
Okay, this thing is a segment, and it has, you know, synapses. And then the idea is, when you create another segment, the axon continues to grow, and it'll go out some other unknown distance — I don't know, and I don't want to assume anything at this point, but some unknown distance — and basically does it again, creating more synapses with other dendrites in this other area. So we're modeling all of this happening.
Basically, this is temporal memory, where you have distal segments that have synapses, and we're representing the synapses from the distal point of view. That's the one thing that's confusing about this: if you look at our theory and you read our pseudocode, we're really representing these values from the view of the dendrites, not the axons, so you have to think backwards a little — hope that makes sense. Okay, so this is HTM theory, right. So let's forget this for a while: there's one axon and it goes out.
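That "think backwards from the dendrite" point can be made concrete with a toy sketch. This is a minimal, hypothetical illustration — not NuPIC's actual classes or API — showing segments stored on the receiving cell, keyed by the presynaptic source, so the axon's growth is only implied:

```python
# Toy sketch of distal segments stored dendrite-side (hypothetical names,
# not NuPIC's actual API). Each segment records the presynaptic cells it
# receives from -- the axon side of the connection is implicit.

class Segment:
    def __init__(self, presynaptic_cells):
        # Synapses are keyed by the *source* cell, as seen from the dendrite.
        self.synapses = {cell: 0.5 for cell in presynaptic_cells}  # permanences

class Cell:
    def __init__(self):
        self.segments = []

    def grow_segment(self, presynaptic_cells):
        self.segments.append(Segment(presynaptic_cells))

    def is_predicted(self, active_cells, threshold=2):
        # A cell becomes predictive if any one distal segment has enough
        # synapses onto currently active cells.
        return any(
            sum(1 for src in seg.synapses if src in active_cells) >= threshold
            for seg in self.segments
        )

cell = Cell()
cell.grow_segment([1, 2, 3])      # one segment, three presynaptic sources
print(cell.is_predicted({1, 2}))  # two active sources meet the threshold
```

Nothing here models the axon explicitly — which is exactly the backwards-feeling part of the pseudocode being described.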
So what Mark is saying — and I'll let you correct me if I'm wrong — but what Mark is saying, and what Calvin is saying, and what these other documents that they're showing are saying, is that this goes off in many different directions, sort of somewhat omnidirectionally. Not just that it goes out of the cortex — it also stays in the cortex, and there's some type of cloud pattern, some type of spherical arbor of axons. Okay.
A
And
that
means
that
when
these
guys
create
Tufts
out
here,
because
they're
going
to
do
the
same
thing,
they
little
saps
a
whole
bunch
they'll
create
these
Tufts
and
then
these
are
long
distances
again.
This
is
not
close,
so
I
want
to.
You
know,
put
a
little
thing
here
on
each
one
of
these
to
show
you
that
these
go
a
long
way
relative
to
you
know
the
neurons,
the
neuron
cell,
so
these
guys
are
all
going
to
sit
up
and
connect
to
some
group
of
neurons
somewhere
else.
It's so hard for me to explain this, because I don't quite understand all of it yet, but the best vision I have of this in my head is that there's an unstable grid — or many unstable grids — that are sort of partially, constantly forming, and depending on input from below, or activity in the connections, different grids emerge in different ways in this sort of configuration.
A
So
if
you
want
I'm
still
one
thing
that
I
do
want
to
show
when
is
just
wasn't
different
now
I
do
this.
This
is,
let's
show
this
okay,
so
this
is
a
paper
that
really
got
me
thinking
and
it
was
linked
from
in
big
kings
and
marks
in
your
original
post.
You
link
to
a
paper
that
was
really
complicated.
You
said
this
is
a
complex
paper.
This
was
one
of
the
main
papers
at
reference
when
it
was
talking
about
these
axonal
projections.
So
I
read
through
some
of
this
paper.
A
I
didn't
understand
all
the
methods
and
stuff
because
I'm
not
neuroscientist
but
I,
think
I
get
the
gist
of
it,
and
so
the
gist
of
it
is
the
chapter
yeah.
Okay,
so
mark
saying,
should
be
far
more
branches.
The
clouds
don't
like
the
approximate
will
so
these.
So
are
you
talking
about
this?
These
clouds
mark,
because
this
this
is
I,
don't
feel
like
this
is
the
same.
This
is
short.
These
are
short
connections.
This
is
like
a
little
local
cloud.
These
are
Kevon
he's
not
the
long
ones
these
are.
A
So
if
you
look
at
this,
here's
the
scale-
and
so
this
is
two
hundred
micrometers
here.
So
we're
only
talking
like
from
the
soma
400
micrometers
500
600,
1000
Maxim-
is
that
a
millimeter
one
millimeter
is
a
thousand
micrometers
a
millimeter
yeah.
A
So
aren't
we
with
with
these
longer
connections
or
are
we
talking
about
longer
ranges
than
than
that?
Or
is
that
what
common
saying
these
are
those
things,
because
this
doesn't
jive
I,
don't
think
with?
If
you
look
at
this
one,
this
is
a
millimeter
right.
So
so,
when
I
look
at
this
graphic,
it
looks
like
the
that
cloud
is
in
the
middle
here
and
then
the
connection
of
the
abdominal
Tufts
that
they're
finding
are
in
this
pattern
around
it,
because
this
is
like
two
millimeters
away.
Okay, before we go on to another topic — I'm happy to hear if you have anything to say, Mark, if you can, but otherwise, you know, we'll continue the discussion on the forum, because I think this is something... "Going to see the length histograms in the paper" — okay, so I'll look through those; I still have more to read in this paper. So let me go back to what Jeff said.
A
Like
I
said:
I've
been
talking
to
him
about
this,
so
though,
so
he's
saying,
excuse
me
the
way
he
understands
it,
which
was
the
segment
and
then
descent
and
a
tuft
and
a
segment
of
the
tough
and
there's
only
one
of
those.
He
said
his
comments
were
from
the
warmth.
So
that's
the
way
he
understands
it
to
happen
in
v1
and
and
also
he
doesn't.
A
He
never
saw
that
those
longer
distance
connections
are
omnidirectional
said
that
there
are
short-range
connections
that
are
very
directional
so
but
I
did,
but
he
didn't
give
any
measurements
to
those
and
then,
in
the
last
thing
that
he
said
the
sized
again,
because
I
pointed
him
to
this
paper
and
I
and
I
said
that
it
looks
like
because
if
you
look
at
some
of
those
graphics
trivia,
look
at
some
of
these
graphics.
I
forgot
to
show.
A
Where's,
the
good
ones
these
these
are
the
ones
I
posted
on
the
forum.
This
is
pretty
explicit
about
what
they're
trying
to
show
what's
going
on
here
this
graphic,
and
there
is
another
one
down
here
this
one.
These
are
the
ones
I
put
on
the
forum.
This
is
the
one
I
showed
Jeff
to-
and
this
is
this
is
showing
external
projections
of
two
three-and
four-and
earths
are
in
five
and
in
six,
and
this
is
all
prefrontal
cortex
all
right.
What was always said, since I've been working there, is that if someone can invalidate anything about the theory we create, that's a great thing, because it opens up a bunch of doors: you realize you're doing something wrong, and then you have to look at alternatives, because usually that disproof comes with an alternative, or a reason for it, that you can investigate. So I would be happy if this did.
I don't actually have anything to say about those — I linked to it. Actually, there's a PR — I should say something. There's a PR from a guy whose name I can't pronounce. So here's the project — the 2D object recognition project — and he has a PR on it. I don't know where the link is, but there's a pull request here: he added htm.core to this, which is awesome.
A
He
did
exactly
what
I
asked
somebody
to
do
is
that
we
someone
should
try
and
hook
up
an
HTM
to
this
basically,
and
he
did
it
so
super
happy
that
he
did
that
and
then
now
he
has
a
and
I
merged
that
PR,
adding
HTM
court,
and
now
he
has
another
PR
or
another
branch
that
adds
actually
does
some
of
the
cop
and
the
layer
instructions.
You
know
and
ties
them
together,
which
is
again
something
I
was
asking
people
to
do
so.
For me, if you're thinking about vision, the idea is that all of the columns are doing object recognition — that it's not feature detection or feature extraction at the lower levels, combinations of those features at the mid levels, and combinations of combinations at the higher levels. That's not what the hierarchy is doing as far as object composition and construction. Each one of the cortical columns — not each one of the minicolumns — wherever it's at in the hierarchy, is doing the same thing.
A
It's
doing
object,
recognition
in
the
same
way,
so
so
you
can't
think
of
them
as
different
parts
of
a
process
or
a
construction
of
something
you
have
to
think
of
it
as
doing
exactly
like
their
composition
happens
in
the
unit,
it
has
to
happen
in
the
unit
and
has
to
be
completely
like
agnostic
about
whether
it's
getting
parts
and
pieces
or
direct
sensory
input.
You
know
wherever
it's
out,
so
that's
challenging
anyway.
A
One
of
the
things
that
Jeff
told
me
this
is
again
a
story
from
Jeffy
about
how
to
think
about
the
hierarchy
in
this
way
or
the
lack
of
hierarchy.
In
this
way,
we're
doing
on
the
recognition
is
to
imagine
he
use
D,
just
a
letter
like
at
a
but
I,
like
the
putting
more
something
more
tangible
service
of
the
elephant
and
looking
through
straw,
that's
like
what
v1
sees
that's
the
field
of
view
that
it
sees
it's
about
like
looking
through
a
straw.
A
So
it
has
a
very
high,
detailed
viewpoint
of
a
small
field
of
view
in
your
vision.
That's
what
do
you
want
gets
if
you
go
up
the
hierarchy,
the
field
of
view
gets
broader
and
the
amount
of
detail
gets
lower
and
I.
Go
that
that's
true,
as
you
go
up
the
hierarchies
also
in
like
somatic
sensory
areas
and
I
assume
auditory
sensory
areas,
but
the
higher
levels
get
direct
sensory
input.
That's
one
of
the
things
that
the
old
sort
classic
heart.
You
cannot
explain
why
denier
levels
get
direct,
some
certain
limit,
different
things.
A
Just
you
know,
compositional
thing
going
on
at
each
level
of
hierarchy.
So
what
was
I
saying?
Oh
yeah,
so
so
the
the
analogy
of
looking
at
through
a
straw
at
an
elephant
on
the
horizon
and
realizing
that
there's
one
cortical
column
that
has
that
I
mean
there's
a
lot
of
them.
They
have
overlapping
abuse,
probably
space,
but
one
cortical
column
has
a
field
of
view
of
about
looking
through
a
straw
like
that.
So
it
had
and
you've
been
recognized
an
elephant
and
it
was
a
curious
try.
You
can
recognize
thousands
of
things.
A
You
know
it
just
has
to
be
far
enough
away
that
and
the
steel
down
small
enough
that
you
can
see
most
of
the
object
and
put
together.
So
if
you
go
if
you're
thinking
about
higher
up
in
the
hierarchy-
and
this
was
the
idea
of
singing
the
elephant
in
the
distance
in
the
elephant
very
close
away
or
very
close
to
you
and
a
distance,
your
higher
levels
of
the
hierarchy
would
not
be
able
to
recognize
an
elephant
because
it
doesn't
have
enough
detail
to
see
the
appendages
hopping.
A
It
just
gets
worth
a
great
ler,
so
it's
not
gonna
help
when
identifying
object
that
distance
the
higher
levels,
because
that
they
don't
have
the
resolution,
so
only
the
lower
levels
are
going
to
be
able
to
do
love
different
issues
with
them,
and
you
can
you
can
do
it.
You
can
look
very
far
away
and
you
can
identify
something
very
particular
without
any
input,
except
if
you
want,
you
know
you
test
it
out
yourself,
just
look
through
a
straw
and
all
the
things
around
you
and
it
takes
scanning
a
while,
but
I
know.
A
I
could
tell
that's
a
leaf
anyway.
I
thought
that
was
an
enlightening
way
to
think
about
it,
and
and
in
one
of
the
forum
conversations
someone
said
someone
said
it's
almost
like
it:
might've
been
Casey
like
there,
you
can
think
of
the
different
regions
of
the
hierarchies
of
being
different
senses
and
I.
Think
that's
an
interesting
way
to
think
of
it.
The
hierarchy
seems
to
be
less
and
less
important.
The
more
the
more
we
look
at
sort
of
feels
like
to
me.
There's
very,
very
few
connections
that
are
hierarchical.
A
Most
of
them
are
lateral
anyway,
but
all
of
them
work
together
and
it's
more
about
breaking
how
they
break
up
the
sensory
field
of
view,
perhaps
than
it
is
about
how
a
lower-level
in
forms
are
available.
I
think
they
all
inform
each
other.
Each
each
layer
Paul's
very
clear
that
HTM
must
include
cooling
in
time
space.
A
Absolutely
it
has
to
yeah
and
that's
that's
what
and
we
can
I
think
we've
been
cool
over
this
is
it
seem
like
it
would
be
more
useful
to
be
able
to
pool
sort
of
over
a
broad
combination
of
different
scales
and
details.
You
know
when
it
is
just
to
do
them
at
different
hierarchical
levels.
Usually
it
has
some
sort
of
feature
extraction,
timer
space
right.
A
Excuse
me
all
right,
so
that
was
about
the
elephants,
open,
open
to
comments
and
suggestions
about
this.
If
anybody
wants
to
have
any
thanks
mark
for
your
chat,
if
you
guys
don't
see
it
marks
chatting
over
here,
you
can
open
up
the
chat
somehow
on
zoom
I'm,
not
sure
if
the
chat
is
included
in
the
video
recording,
probably
not
I
mean
whatever
might
adventure
that
all
right.
The
next
topic
we're
going
to
go
to
I,
don't
see
any
hands
raised.
So
the
next
topic
we're
going
to
go
to
is
the
brains
at
bay
Meetup.
So I've been recording these — I won't show it to you; the link is on the forum — but we've had two meetings. The first meeting had like 10 people, the second one had like 35, so it grew fast, which is great, but we ran out of space in the venue. So I'm looking for alternative venues, and I'm looking around the area fairly close to the Caltrain station.
A
If
anybody
wants
to
help
out
with
that,
let
me
know
Matt
at
momenta
org,
so
it's
gone
well,
the
first
one
the
topic
was
continuous
learning,
and
so
the
idea
is
there's
a
lot
of
paper
review.
So
usually
there's
a
there's,
a
few
papers
that
are
being
investigated
and
then
someone
deep
dives
reads
the
paper,
and
these
are
like
machine
learning
papers
for
the
most
part,
because
this
were
sort
of
targeting
the
machine
learning
audience
and
saying.
A
How
can
we
apply
brain
inspired
ideas
to
machine
learning,
though
the
thing
you'll
be
missing
in
dementia,
so
since
we're
sort
of
hosting
it
and
our
one
of
our
employees
is,
is
running
the
thing
we
always
present
one
of
our
ideas,
but
we
let
anybody
else
come
in
and
present
a
paper.
That's
related
like
comment
at
sparsity
I'm
do
create,
and
that
sort
of
thing-
and
it's
there's
reads
so
that's
still
happening-
we're
trying
to
find
another
venue,
the
second
one,
the
one
that
has
so
many
people
was
about
sparsity.
A
So
there's
a
lot
of
interesting
stuff
there,
and
then
we
had
rain
your
morphix
there
to
talk
about
the
new
chip.
I.
Don't
know
that
there's
a
name
for
it,
but
it's
a
very
well
suited
for
sparse
matrix
operations
at
very
well-suited
for
neural
emulation
and
HTM.
It's
very,
very
interesting
job
in
the
next
meeting.
I,
don't
know
what
it's
gonna
be
about,
but
we're
trying
to
find
a
venue,
I,
think
I,
think
there's
a
plan.
I
just
don't
know
what
it
is
all
right,
but
one
last
topic:
gonna
open
it
up
for
discussion.
So, if you don't know nupic.torch: nupic.torch is our PyTorch implementation of, essentially, what we've learned in HTM — biologically inspired applications to deep networks in torch. We're doing this in torch now; excuse me, we're going to also put one together in TensorFlow. So what I'm doing, just in good faith, being a good open source community manager: I've got a PR putting together the proper contributing documents, so people know how they should contribute.
A
So
we're
no
longer
really
maintaining
Dubnyk
anymore,
so
this
is
going
to
be
pretty
much
or
at
this
point,
our
main
open
source
offering.
This
is
currently
new
contortion.
What
will
be
nupic
tensorflow,
so
I'm
sort
of
giving
it
the
nice
open
source
treatment
and
make
sure
we've
got
a
pipeline
for
deployment
and
for
polar
best,
valid
and
all
that
stuff?
Honestly,
Lewis
has
done
a
lot
of
this
work
for
me
already,
and
he
continues
to
do
a
lot
of
my
work
in
this
area
because
he's
simply
better
at
it.
So thanks, Lewis, if you're watching. Anyway, keep an eye on nupic.torch — we're going to be working on visuals and such, and this is a library that we'll be using heavily over the next few months to put together some interesting visuals about how sparsity applied to neural networks — and other ideas that we're investigating — can improve their performance in different ways. All right, so that's the last bit.
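To give a rough flavor of the sparsity idea mentioned here — this is a minimal plain-Python sketch of a k-winners activation, not nupic.torch's actual module (the real one is a differentiable layer with boosting, which this omits): only the k largest units in a layer stay active and everything else is zeroed, which is the kind of enforced activation sparsity being applied to deep networks.

```python
# Minimal k-winners sketch (illustrative only; nupic.torch implements this
# as a trainable module with duty-cycle boosting, omitted here).

def k_winners(activations, k):
    """Keep the k largest activations, zero out everything else."""
    if k <= 0:
        return [0.0] * len(activations)
    # The k-th largest value becomes the cutoff.
    threshold = sorted(activations, reverse=True)[k - 1]
    return [a if a >= threshold else 0.0 for a in activations]

dense = [0.1, 0.9, 0.3, 0.7, 0.2]
print(k_winners(dense, 2))  # only the 0.9 and 0.7 units stay active
```

Note that ties at the threshold can let more than k units through; the real implementations resolve this per-batch, but for conveying the idea this is enough.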
A
Okay,
well,
if
you're
watching
this
on
youtube,
please
take
a
moment
to
like
the
video
and
subscribe
to
our
channel.
We
do
these
hackers
hangouts
pretty
much
once
a
month,
sometimes
I
miss
them.
We're
trying
out
zoom
this
time
that
it
will
end
up
on
YouTube,
no
matter
what
live
streaming
system
I
use
so
with
that
I
think
that's
pretty
much
the
state
of
the
HT
matter
space
right
now,
thanks
Matt,
no
microphone
today,
so
just
learn
days
and
work
well,
great!
Jesus!
Thanks!