From YouTube: wasmCloud Working Group - Machine Learning 03/31/22
Description
wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, and that boasts a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
https://wasmcloud.com
A
Welcome to the wasmCloud machine learning subgroup for Thursday, March 31st, 2022. Christoph, I think that you put some notes together for next steps in the wasmCloud machine learning channel. Let me get those pulled up here, or, if you want to share your screen, maybe we could talk through what we think some of our next steps might be with the framework. By the way, incredible job on getting everything released. It looks like people were putting in a lot of work on getting some more realistic examples for TensorFlow and the ONNX stuff working, and that is just awesome work across the board. But I think our goal for the day is to put our heads together and figure out what the demo should be.
A
Yeah, absolutely, great. So in the code that you put out, you've got four different models working: two are around object detection, one is your toy, and then there was another one that you did that I don't remember. But you've laid out a sort of set of next steps here: maybe doing a low-latency, streaming kind of demo; getting it up and working for the TPU on the Coral dev board, which I think is an interesting idea; maybe the Intel Neural Compute Stick; and then possibly doing a face detection demo.

I think what we should probably try to do is come up with an idea that gives us a fast follow, a quick milestone. A quick win would maybe be something where we can take a real picture and get a real result out of it, and then we look at making sure that the demo has a roadmap that we can build to. So we start with maybe a local demo of face detection that gives you the coordinates from a picture, and then we move to maybe a little bit more structure, and then we make it distributed, and then we do a demo of building Nest, you know, as a service. Maybe some thoughts around the table for what people think about that?
C
So one of the things I like about image recognition is there's a lot of models to choose from. One thing I don't know about face detection, because I haven't worked with it specifically, is: what's the accuracy of an untrained model?
C
In other words, if you take an off-the-shelf model and you wanted to use it in your home Nest doorbell, don't you need to train it on the faces of your family to get good accuracy, or does the off-the-shelf model work with faces it hasn't been trained on? That's something I just don't know.
D
Yeah, I think the first step sounds reasonable: start with images, and then we move from there. And I would suggest maybe taking a look at things like MediaPipe. You know, they have kind of an environment for you to chain together data processing and post-processing, all together with the image classification, so that might be useful for chaining those sort of scriptable steps together in the cloud. Just an idea.
A
Okay, so what would the vision look like here, bringing MediaPipe in? Would this be wasmCloud driving a MediaPipe client on an edge device and then sending it to the cloud, or what does that look like?
D
Yeah, so one possible scenario could be that you have multiple service providers, and then you have some sort of scripting environment that basically chains them together, with an easy-to-use user interface, either some GUI or something like a scripting environment that lets you chain them together. I think that's how those media pipeline software packages do their thing, so it could be something like that.
B
Coming to looking out for an already-trained model which does face detection: I think it's a good idea, as you said, Liam, at the very beginning, to try it out with images, so not with a stream, not with a bunch, but with single images, and that can also be tested very quickly in a unit test. You can, you know, test the provider. I mean, we do that all the time with MobileNet and SqueezeNet, and that's cool, and once that works...
A
Okay, so what about when we think about managing the results, though? I feel like we have a little bit of work to do here to make the results usable. What would we suggest for face detection? I guess we need to understand what kind of tensors the algorithms return. Do they return a set of coordinates in the image and say: here's the box around the face?
C
Well, like I said, I hadn't used it yet, but I assume that it gives you a closeness match to the face you're displaying. It either gives you the closest match in the database, or it compares two faces and gives you a closeness match, and so, if it was going to unlock your front door, then you'd have to decide it needs to be above some confidence threshold. Is it something like that?
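The threshold check just described is commonly implemented by comparing face embeddings for closeness. A minimal sketch; the cosine measure, the embedding vectors, and the 0.8 cutoff are illustrative assumptions, not anything decided in the meeting:

```python
import math

def cosine_similarity(a, b):
    # Closeness score in [-1, 1] between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def unlock_door(probe, enrolled, threshold=0.8):
    # Unlock only if the probe face is close enough to some enrolled face
    return any(cosine_similarity(probe, face) >= threshold for face in enrolled)
```

Whether the model returns the closest database match or a pairwise comparison, the decision reduces to a score checked against a threshold, exactly as guessed here.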
E
Oh, okay, I did not have that same thought. My experience with object detection is bounding-box based, like what Liam was talking about, and the question I had before was: okay, even if we get bounding boxes put around people's faces in an image in our demo, I felt like our demo was: hey, let's capture the images in which someone's face appears, right? Because, you know, maybe it's a robber or something, and so we actually...
E
The output that we want is not really the bounding box. We just want to know: is there a face in the image? So we sort of want a list of the objects detected in the image, and I think there might be a post-processing step there to take the returned tensor and, well, at least for image classification...
E
What you'd get is this giant tensor, and it would have the probability of each keyword being in the image, right? And so maybe you'd have, you know, dog and plate and face, and if face is over 80 percent, then, you know, you want to send that one back, or something like that. But that's for image classification.
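That post-processing step can be sketched in a few lines; the label set, the scores, and the 80 percent cutoff below are placeholders, not values from any of the models discussed:

```python
def labels_above_threshold(probabilities, labels, threshold=0.8):
    # Reduce the raw per-keyword probability tensor to the labels
    # the model is confident about
    return [label for label, p in zip(labels, probabilities) if p > threshold]

# A tiny stand-in for the "giant tensor" of per-keyword scores
labels = ["dog", "plate", "face"]
scores = [0.05, 0.10, 0.92]
confident = labels_above_threshold(scores, labels)
```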
D
So, Andrew, you're talking about multiple inferencing steps, right? The first one is the object identification, and then, with the image with the face in it, you do a second round of machine learning, which is, you know, looking at the closeness of different faces and identifying a unique identity. Yeah.
A
We could load candidate matches in as part of a demo, right? Like, I'll put my face up; you guys can, you know, put me on GitHub, I don't care. So I like this, because I want to make sure, Christoph, that we're meeting the needs of your goals, because I think that you have some demos that you want to give as well, and Minkyu and Andrew and Johnny.
A
I want to make sure that we're meeting the needs of your goals, which I believe is that we do a lightweight detection on the edge. Where we're going is: we'll do MobileNet on some edge device, whether it's a Google Coral TPU or an NVIDIA Jetson, and we'll use an ImageNet model there to detect the images with faces, and then we'll use wasmCloud to send the faces back to a different server in the cloud that's running on an Intel...
B
It sounds very good. Maybe I didn't understand it fully; my thoughts still stick with what Andrew said, and maybe I can add to what Andrew said. Okay, but in general it sounds very good. So, when I was experimenting, I think it was not face detection but people detection, but never mind. I mean, as I said, I took some model from the internet and I didn't train it, and it worked. The point is to make it interesting.
B
Yes, right, I mean, there were bounding boxes coming out. You have to post-process them, in the sense that you have to scale them to the image size you want to display. And the point is you want to display the image and have an overlay with the bounding boxes, so that people in the demo see that image with the bounding-box overlays, such that they say: yes, these are the boxes which match the people, right, or the faces.
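The scaling step is plain arithmetic; a minimal sketch, assuming (purely as an illustration) that the model returns boxes normalized to [0, 1] as (x_min, y_min, x_max, y_max):

```python
def scale_box(box, display_width, display_height):
    # Map a normalized (x_min, y_min, x_max, y_max) box onto display pixels
    x_min, y_min, x_max, y_max = box
    return (round(x_min * display_width), round(y_min * display_height),
            round(x_max * display_width), round(y_max * display_height))

# A detection covering the center quarter of a 640x480 display
pixel_box = scale_box((0.25, 0.25, 0.75, 0.75), 640, 480)
```

Note that some models emit boxes in (y, x) order or in input-image pixels instead, so the real post-processing has to match whatever the chosen model returns.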
A
We don't need to know how to do it just yet. I think the important thing we want to achieve today is that we're generally aligned on the mountaintop that we're going towards, and then it sounds like we have some pretty clear milestones that we could work in. So the first milestone might be to do this pre- and post-processing, so that we have something to display to the user, like: hey, we're printing out, you know, to the screen, or putting it in a database.
F
So, I didn't want to speak up earlier, just in case there was some conversation that I missed and I was just ignorant of something, but from what was just described, do you think that the appreciation factor is there? For example, how will users be able to see that, you know, certain computation happened on the edge and certain computation happened in the cloud, and really appreciate the performance gains that are coming from, you know, just kind of the infrastructure?
A
Well,
I
think
the
you
know,
all
of
the
the
corals
and
the
jetsons
are
all
full
functional
computers
that
run
linux.
So,
depending
on
how
fancy
you
know,
we
want
to
make
this
it's
either
printing
to
screen-
or
maybe
you
know
I
don't
know
I
mean.
Maybe
we
need
to
think
through
that
a
little
bit,
there's
probably
a
bunch
of
pre-canned
demos
that
we
can
use.
A
You know, or adapt the infrastructure from them, to give you the sort of presentation there for the edge, right. But I think we could have a way to show it on the edge, or to report it back to something that's displayed on the edge. But I agree that probably half the work here is in the user experience of telling the story. Yeah, I agree. Yeah, so...
D
Another comment: you know, face detection might be too difficult a task for this first step. The first step might be just motion detection, you know, detecting whether it's an animal or it's a person, something like that. So that would be an easier first step, I think.
A
Okay, so for milestone one, then, why don't we set a goal of maybe testing a couple of these things on an edge device and seeing if the direction we're headed in is plausible. So we'll take MobileNet, or one of the other image classification algorithms, and we'll go ahead and, you know, have it just echo out to the screen what's in there, just a print line or something. Does that feel like it would work, Steve, Christoph? And that way we can agree.
A
You know, we can maybe do... thank you, Christoph, thumbs up, appreciate that. We'll use that as the next milestone for this project, and then we can kind of dial in where we're going from there. So that feels like a pretty clear direction to me.
A
Does anyone else have anything they want to bring up or chat about, or is there other detail that we should align on here, as far as what help we need and how we're going to do that? Steve, I feel like you and Christoph have been working really well over Slack, I guess, so I don't know that we need to micromanage you guys or anything.
C
Great, yeah, and Christoph's been doing a lot of amazing work, for sure.
C
So we're going to do an image classification, and, you know, we could come up with a nice UI, maybe, for getting the picture in.
C
And
then
it
will
say:
is
this
an
elephant
or
a
screwdriver
and
just
and
display
the
results
on
the
with
the
with
the
probability
percentage
on
the
on
the
screen.
E
Can I make a quick comment? I am less interested in what type of inference we do. Yeah, everything everyone suggested is great, but from a benchmarking perspective it would be great if we could gather data on the time it takes to do whatever inference we do down at the edge, right, like have that time...
E
Have the time that's taken to do the inference up in the cloud, and then, you know: what is the time?
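One lightweight way to gather those numbers is to wrap each inference call in a wall-clock timer. A sketch; `run_inference` here is a hypothetical stand-in for whatever the edge or cloud provider actually invokes:

```python
import time

def timed(fn, *args):
    # Run fn(*args) and return (result, elapsed_seconds)
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

# Hypothetical stand-in for the real edge/cloud inference call
def run_inference(image_bytes):
    return {"face": 0.92}

result, seconds = timed(run_inference, b"raw-image-bytes")
```

Recording the same measurement on the edge device and on the cloud server yields the two latency columns being asked for, and comparing against end-to-end wall time isolates the transfer cost between them.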
A
Let me show you an example of a similar demo here and see if this wouldn't resonate, and let me come up with maybe a story, Andrew. Hold on, let me flip over to this. This is a demo that is going to be presented at KubeCon, so I'm just kind of leading the horse a little bit, but it's not really my story to tell, because this was all written by someone else. But I've got a decent slide here, hold on.
A
So
there
was
an
organization
that
did
a
wasm
cloud
demo
with
us
that
was
doing
image,
background
removal
and
andrew.
Let
me
see
if
I
can
play
this
out
so
when
they
implemented
wasmcloud.
One
of
the
things
they
realized
they
could
do
was
push
the
logic
down
to
the
browser
and
what
was
powerful
about
that?
Was
it
saved
them?
The
400
milliseconds
of
transferring
the
image
up
and
back
it
was
so
it
was
the
same
algorithm.
So
it
was
good.
A
It
was
faster
when
it
was
pushed
to
the
edge,
it
was
only
40
milliseconds
and
it
was
cheaper
because
they
were
no
longer
paying
for
the
compute,
because
it
was
happening
on
the
edge
so
for
capturing
the
sort
of
metrics
here
around
like
image
size,
processing
power
on
the
edge.
What's
the
sort
of
like
curve
and
boundary
to
say
it's
worth
doing
this
on
the
edge
versus
pushing
versus
pushing
the
data
back.
That
could
be
an
interesting
line
of
research.
A
To
sort
of
study.
Is
that
what.
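The curve just described reduces to a break-even inequality: staying on the edge wins whenever the (often slower) edge compute still beats cloud compute plus the round trip. A sketch using the 400 ms transfer and 40 ms edge figures from the story; the 10 ms cloud compute time is a made-up placeholder:

```python
def edge_wins(edge_ms, cloud_ms, transfer_ms):
    # True when edge compute beats cloud compute plus round-trip transfer
    return edge_ms < cloud_ms + transfer_ms

# ~40 ms on the edge vs. a ~400 ms round trip to the cloud;
# the cloud compute time here is an assumed value
worth_it = edge_wins(edge_ms=40, cloud_ms=10, transfer_ms=400)
```

Sweeping `edge_ms` against image size would trace out exactly the boundary curve being proposed as a line of research.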
A
Matching across, like, maybe 100 faces, you know: is this a person we know? It makes sense to do the face detection on the edge, but it doesn't make sense to do the face matching on the edge, and the sort of large-scale analytics is what we need the big silicon for. That's the kind of theory. Okay.
F
Yep, yeah, I think, just to add to that: obviously telling a story is good, but what I saw is, you know, the flexibility, whatever your needs are. If you need to do more in the cloud, you can, you know, whatever you're optimizing for.
D
Leon, so that case two: is it something you guys want to build, or already built?
A
Let's
say
that
again,
thank
you.
I'm
not
sure
that
I
understood
what
you're
doing.
A
So that's a powerful one. Oh...
A
It's
not
machine
learning,
it's
just
a
background
removal
from
an
image,
but
but
we
could
do
the
same
sort
of
archetype
with
machine
learning
right.
You
know,
with
the
machine
learning
model
sort
of
decide.
Are
we
gonna
run
this
capability
in
the
cloud?
Are
we
gonna
run
it
on
the
edge
or
is
it?
A
Okay
and
I
think
steve
introduced,
we
were
having
a
conversation
the
other
day.
An
idea
here
that
can
play
into
this
story
of
you
know
when
to
choose
to
process
things
on
the
edge
would
be.
What
did
we
come
up
with
confidentiality,
respecting
machine
learning,
algorithms,
because
there
would
be
sometimes
let's
say
that
if
I
were
doing
cancer
detection-
and
I
want
to
take
a
picture
of
my
body
to
look
for
moles
that
you
know
like
I
wouldn't
necessarily
want.
A
I
mean
I'm
fine
with
my
face
going
in
github,
I
don't
know
if
I
want
my
body
picks
going
in
github.
Like
you
know,
I'd
just
be
embarrassed.
I
mean
like
for
everybody
else,
not
for
me
so
maybe
like
that
would
be
a
good
case
where
you
would
want
that
done
on
the
edge.
Maybe
we
come
up
with
a
better
story
than
my
example.
I'm
glad
we
got
this
on
video
recording
now,
that's.
B
So I've got a use case, maybe, to add to what all of you said. At my work, we've got data processing to do, on kind of logs from cars, and we do lots of processing, and time matters. And so one thing we could do is send everything up to AWS and the like, but probably, I mean, maybe we do not want to do that. So, the same curve question.
A
Data on, you know, machine learning on the edge versus pulling it back: limited and deliberate autonomy; regulatory, like, you know, GDPR, or data privacy; data locality; privacy in general; the data's on the edge; maybe it's too much data. So I think we've got a good story here, and I'm already thinking ahead to the real release, which is the shared white paper that we are all going to get to set down on our boss's desk and be like: man, did I do a great job this quarter or what?
A
Yeah, Intel, yes. All right, good. All right, okay, I think we're all headed in the same direction here. So for the next check-in we're going to start with an image classification algorithm that's either echoing out to the screen or something along those lines, and that's our next milestone, and then from there we'll look at what the next milestone should be after that. Okay.
A
Steve, you're real good. Christoph, yep, doing awesome, buddy; I really love all the stuff you're working on, it's amazing. Everyone else on the call: Jordan, we didn't hear from you today.
D
Just one more question, so, Leon: could you send me a link to your background remover demo, or whatever? You know, give me a link; I will take a look at it. Yeah.