From YouTube: Rust CV meeting 2021 09 29 - Photometric stereo on the web with Rust and Wasm - Matthieu Pizenberg
Okay, so my name, in French, is Matthieu Pizenberg, or in English, Matthew Pizenberg is fine too, and I'm going to talk about photometric stereo and how we can port a photometric stereo algorithm written in Rust to the web. Most of this work has been funded by a French agency, the ANR, I'm working at Normandy University, and some of the images are from the Bayeux Museum, so those are their logos here.

Quickly, I'm going to present three things: first, a quick tour of 3D reconstruction (we've already had a bit of background with the previous presentations), then a little bit of what photometric stereo is, and finally, how we code this in Rust and how we port it to the web.
So, 3D reconstruction from images. Like we saw in the previous presentation, what most people think of when we say "3D reconstruction from images" are things that look like this, which basically aim at reconstructing something in the real world (usually buildings, cities, things like that) from a sequence of images. But you have to know that it's not at all the only thing we can do, or the only kind of reconstruction we can get.

This one is maybe a little less intuitive, because we are not focusing on the geometric aspects but on the photometric aspect of the information: what the light, what the intensity of the light in the image, can give us as information.
A little bit of context on photometric stereo. Interpreting images to get an idea of the shape they represent is something our brain is able to do, but we have to keep in mind that an image can be interpreted in different ways.

So I've put here an image representing some kind of gray-level pixels, and the idea is: if you had to say what the 3D shape behind this image is, what would you think it is? Actually, if we consider that an image is the result of capturing and projecting information, we can have multiple interpretations of this image.
A
Typically,
there
are
three
main
factors
for
the
formation
of
this
image.
There
is
the
geometry
which
can
be,
which
corresponds
to
the
explanation
of
the
sculpture.
Here
there
is
the
the
pigments,
the
the
colors
of
the
thing
we
are
observing,
which
can
be
the
interpretation
of
the
painter
here,
and
there
is
how
objects
are
lighted
so
depending
on
which
light
to
project
to
an
object
on
an
object
it
may
be.
A
It
may
appear
differently
on
your
sensor,
so
we
have
to
know
that
if
we
only
know
if
we
don't
know
any
of
this
information,
we
don't
have
enough
prior
to
know
exactly
which
of
these
is
the
reason
why
we're
observing
an
image.
So
when
we
are
trying
to
rebuilding
an
information
from
a
2d
image,
we
have
to
take
into
account
those
three
information.
A
Knowing
this,
there
is
a
field
of
3d
reconstruction
that
is
one
of
the
early
fielder
that
is
using
the
light
information
which
is
called
shape
from
shading,
and
the
idea
is
that
you
can
estimate
the
appearance
of
a
surface
that
doesn't
have
any
shiny,
composites
so
surface.
That
is
called
lambertian.
A
A
And
if
you
have
more,
if
you
have
more
curves,
you
can
even
have
a
situation
where
the
object
is
casting
self
shadows,
where
part
of
an
object
is
not
lighted
anymore
by
source.
And so, if you use this model to try to interpret how light appears in your image, you get to Lambert's law, which says that the intensity of an image at coordinates (u, v) depends on the albedo, for which we will use the Greek letter rho (I don't know how to pronounce it in English or in French), times the dot product between the direction of the light and the surface normal.
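Written out, the model just described is (the clamp to zero is my addition here; it encodes the self-shadow case mentioned above, where a surface point receives no light at all):

```latex
I(u, v) = \rho(u, v)\,\max\!\left(0,\; \mathbf{l} \cdot \mathbf{n}(u, v)\right)
```

where $I(u, v)$ is the image intensity at pixel $(u, v)$, $\rho$ is the albedo, $\mathbf{l}$ is the (unit) light direction, and $\mathbf{n}(u, v)$ is the unit surface normal.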
But if, instead of one image, you look at multiple images taken from the same position, that's where things get interesting. In this slide you can see that I have put a tiny shiny spot on the left, then at the center, and then on the right of our rusty mascot, and the idea is that, if you are looking at the same pixels under different lighting, the only thing that is changing between the images is the lighting.
The albedo of each pixel is information associated with the physical material of the object you're observing, and the normals are not changing between the different images either. So, if you are in a setup where you have calibrated your lighting, meaning you know where the light is coming from in each image, you are in a situation where the only things you don't know are the albedo and the orientation of the surface, and if you have enough images, at least three, you can reconstruct those: you can get those two pieces of information back.
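To make the "at least three images" point concrete, here is a minimal per-pixel sketch: with three known, non-coplanar light directions, Lambert's law gives a 3×3 linear system whose unknown is the vector ρ·n, so its norm is the albedo and its direction is the normal. (The talk's code uses nalgebra; this sketch solves the system by hand with Cramer's rule to stay dependency-free, and all names here are my own, not the project's API.)

```rust
// Photometric stereo at a single pixel: three images of the same scene
// under three known light directions give the linear system
//     L * (rho * n) = i
// where each row of L is a light direction and i holds the observed
// pixel intensities.  Solving for m = rho * n and normalizing recovers
// both the albedo rho = |m| and the unit normal n = m / |m|.

type Vec3 = [f64; 3];

fn det3(m: &[Vec3; 3]) -> f64 {
    m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
        - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
        + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0])
}

/// `lights`: one calibrated light direction per row.
/// `intensities`: the pixel value observed under each light.
fn solve_pixel(lights: &[Vec3; 3], intensities: &Vec3) -> Option<(f64, Vec3)> {
    let d = det3(lights);
    if d.abs() < 1e-12 {
        return None; // light directions are coplanar: system is singular
    }
    // Cramer's rule: replace column k of the light matrix by the
    // intensity vector to get component k of m = rho * n.
    let mut m = [0.0f64; 3];
    for k in 0..3 {
        let mut a = *lights;
        for r in 0..3 {
            a[r][k] = intensities[r];
        }
        m[k] = det3(&a) / d;
    }
    let rho = (m[0] * m[0] + m[1] * m[1] + m[2] * m[2]).sqrt();
    if rho == 0.0 {
        return None; // pixel is dark under every light
    }
    Some((rho, [m[0] / rho, m[1] / rho, m[2] / rho]))
}
```

Running this over every pixel yields exactly the albedo map and normal map the demo later displays.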
So if you photograph something from far away, you will get a resolution of one geometry estimate per pixel, and it's the same if you photograph something from a very close position. So it depends on where you can place your sensor and how far it can be from the object you are trying to reconstruct.
Here it's a different setting: it's a capture of the Bayeux Tapestry, if some of you are familiar with this piece of art. It's an embroidery that is a thousand years old and is exhibited at the Bayeux Museum in France, and it basically retraces how Guillaume le Conquérant (William the Conqueror), a Norman, invaded England to become the first Norman king of England.
So yeah, it's a passionate story. The thing is, in this project we are interested in doing a 3D reconstruction of this embroidery, and we have been using photometric stereo for that. As you can see, there are a few different images here where you can see the embroidery in the middle of the image and at the corners.
And that looks like this. So that's photometric stereo, and now: how are we doing this in Rust, and how do we port it to the web?
This is a demo of a web application that is using everything I just presented. Here it's running on localhost, but you can really run it yourself; I will show the link again at the end of the presentation.
So here we are loading a few images. While I'm doing that, what it is actually doing is using the image crate from the Rust community, and it's decoding every image that I am passing to the interface.
It's taking a bit of time because these are rather big images, something like 26-megapixel images, but it's running in the browser. Then I'm dropping in the lights configuration, which contains the information about where the light is located for each image.
So the algorithm has started and is trying to build the reconstruction. Now it has finished, and it has made the normal maps available. Here you can see that we have retrieved the directions of the normals for each pixel, and once you have that, you can do some kind of integration to retrieve the actual surface, the depth of the surface.
So, how is the code organized for this? What I've found to work well is to be very organized with where you put your code. Basically, I've been splitting my code into four different directories. The first one contains the core parts of the algorithm, as a Rust library.
What does the library code look like? It's what you could expect: I'm using nalgebra for most of the computations, and basically what you have here is some kind of configuration where I set the different parameters of the algorithm, like how many iterations, what the different convergence thresholds are, what the estimated mean depth of the image is at the beginning, and what the directions of the lights are (the light calibration). Then the interface of the photometric stereo function is simple: it just takes a config.
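As a rough sketch of what such a config-driven interface could look like (all names here are illustrative guesses, not the project's actual API):

```rust
// A configuration struct gathering the parameters mentioned above:
// iteration count, convergence threshold, initial mean depth, and the
// calibrated light directions.

pub struct Config {
    /// Maximum number of solver iterations.
    pub max_iterations: usize,
    /// Stop early once the update falls below this threshold.
    pub convergence_threshold: f64,
    /// Estimated mean depth of the scene, used to initialize the solver.
    pub initial_mean_depth: f64,
    /// Calibrated light direction, one per input image.
    pub lights: Vec<[f64; 3]>,
}

/// Placeholder result type: one unit normal per pixel.
pub struct NormalMap(pub Vec<[f64; 3]>);

/// The whole interface boils down to: config and images in, normals out.
pub fn photometric_stereo(_config: &Config, _images: &[Vec<u8>]) -> NormalMap {
    // ... core algorithm elided in this sketch ...
    NormalMap(Vec::new())
}
```

Keeping everything behind one `Config` is what makes the same core callable from both a CLI binary and the wasm front end later in the talk.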
A
I'm
setting
all
the
default
features
to
false,
because
actually,
I'm
not
performing
any
any
actual
encoding
or
decoding
or
stuff
like
that,
I'm
just
using
the
the
bare
types
of
the
image
create
so
keeping
this
minimal.
Also,
I
have
wisen
wasn't
by
engine
here
which
is
used
for
generating
interrupt
with
with
webassembly,
but
I
only
want
that
to
be
active
when
I'm
actually
compiling
the
the
wasn't
binary
and
not
when
I'm
compelling,
for
example,
the
cli
binary
so
keep
it
optional
as
well
and
same
for
certain.
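One way to keep wasm-bindgen optional, as described, is to gate it behind a cargo feature (the feature name "wasm" is my own choice; it assumes `wasm = ["dep:wasm-bindgen"]` in the `[features]` section, so the CLI build never compiles wasm-bindgen at all):

```rust
// The `use` and the attribute only exist when the crate is compiled
// with the "wasm" feature; a plain CLI build sees an ordinary function.

#[cfg(feature = "wasm")]
use wasm_bindgen::prelude::*;

#[cfg_attr(feature = "wasm", wasm_bindgen)]
pub fn normal_map_len(width: usize, height: usize) -> usize {
    width * height
}
```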
A
Then,
for
the
web
assembly
configuration,
you
can
have
the
information
about
how
to
set
up
everything
in
the
webassembly
guide.
But
basically,
as
a
quick
reminder,
you
have
a
config.tamil
with
you,
where
you
set
up
that
crate
type
is
a
dynamic
library
and
harley.
I
don't
remember
what
it's
about.
Then.
You
have
a
dependency
on
your
code,
library,
you
add
the
features
that
you
need
for
everything
to
work.
A
You
optionally,
add
also
features
on
all
the
dependencies
that
you
need,
for
example,
in
the
image
I
added
jpeg
and
png
features,
because
I
want
to
be
able
to
decode
the
jpeg
images
and
be
careful
not
adding
the
jpeg
multi-threaded
one,
because
it
will
crash
and
wasn't
so
and
yeah.
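Pulling those points together, the wasm crate's Cargo.toml could look roughly like this (crate names, paths, and version numbers are illustrative, not the project's actual manifest):

```toml
[lib]
# "cdylib" is what wasm-bindgen needs; "rlib" was the extra entry
# mentioned above.
crate-type = ["cdylib", "rlib"]

[dependencies]
wasm-bindgen = "0.2"

# Hypothetical path to the core algorithm library.
[dependencies.photometric-stereo-core]
path = "../core"
features = ["wasm"]

[dependencies.image]
version = "0.23"
default-features = false
# Only the decoders we need; deliberately NOT the multi-threaded jpeg
# decoder, which crashes under wasm.
features = ["jpeg", "png"]
```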
A
Once
you've
done
that,
you
need
a
way
to
communicate
between
your
rest,
webassembly
library,
some
javascript
codes,
which
is
used
for
your
front-end
and
what
I
recommend
is
usually
to
go
for
a
web
worker.
A
It's
basically
an
emulation
of
a
thread,
but
the
main
advantage
of
doing
that
is
that
you
are
not
taking
control
of
the
main
thread
why
the
rest
code
is
running
the
web
assembly.
Your
code
is
running
because,
usually
as
computer
vision
like
aficionados,
we
are
trying
to
run
code
that
might
run
for
a
long
time,
and
so,
if
you're
doing
this
in
the
main
thread
you
are.
A
The
interface
is
not
responsive
anymore,
and
you
don't
want
that
because
you
will
think
that
it
has
crashed.
The
browser
will
think
that
it
has
scratch.
And
after
a
few
seconds,
it
will
tell
you
that
you
need
to
close
the
the
close
to
the
webpage
or
things
like
that.
But
actually
you
just
have
to
wait
a
little
bit.
So
that's
why
I
recommend
going
from
for
a
worker
setup
for
the
res
web
assembly,
and
that
will
be
probably
another
talk
for
another
meetup.
But when you're doing that, you have to know that you will eventually want an async version of your algorithm. That is because you will want to be able to stop the algorithm in some way, which is otherwise not possible for WebAssembly running in a worker. That's the main difference from running things in the CLI, where you can just Ctrl-C and it will stop everything.
A
If
you
want
to
stop
the
algorithm,
the
only
thing
you
can
do,
if
it's
not
set
up
in
a
good
way,
is
to
close
the
web
page,
and
then
you
lose
everything
like
all
the
setup.
You
did
like
loading,
the
images
and
everything
so
yeah.
A
There
is
a
little
bit
of
yeah
a
lot
of
stuff
about
how
to
set
up
web
assembly
codes.
That
is
taking
a
lot
of
time,
but
yeah.
We
can
give
that
for
another
moment
and
yeah.
So
that's
just
another
view
of
what
looks
like
from
the
worker
javascript
module.
A
You
have
to
know
that
the
worker
javascript
module,
that
is
loading
the
web
assembly
code,
cannot
be
used
directly
in
a
worker,
as
is
because
worker
do
not.
Web
workers
do
not
support
es
modules
and
the
way
your
webassembly
bundle
will
be
will
be
compiled
is
as
an
es
module.
So
if
you
want
to
be
able
to
use
your
webassembly
bundle,
you
will
need
another
step
with
the
tools
with
something
like
espl.
A
That
will
take
your
bundle
and
I'll
put
everything
in
a
non-module
file
that
will
gather
everything
you
need
in
one
file
so
yeah
when
you
once
you've
done
that
you
have
you,
you
make
a
variable,
initialization
function
that
will
initialize
the
data
that
you
need
and
then
the
main
idea
is
that
you
listen
to
messages
and
once
the
message
arrives
asking
to
run
the
algorithm,
then
you
just
run
it
and
that's
yeah.
That's
pretty
much!