Description
wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, and it boasts a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
https://wasmcloud.com
A
B
Awesome. Can you... I'm trying to share?
Oh, got it. Okay, so this describes the model that I've been working on with Christoph Brewing at BMW.
B
There are three actors in this picture: the circles, and a capability provider that runs a machine learning inference engine, and there's also a bundle server that serves the models. The capability provider can work with ONNX and TensorFlow models, and there are some demo models in the repository that are useful for just testing. The API, the horizontal arrow between the inference API actor and the capability provider, is based on the wasi-nn interface developed by Intel and a number of other collaborators.
B
So it basically sends a tensor, which is a multi-dimensional array, over to the inference engine, which runs a pre-trained model for image processing. We need to do pre-processing before we can deliver that tensor. The pre-processing takes a JPEG or a PNG image, converts it into the right color space, and scales it to the right dimensions, and that's done by an actor-to-actor call.
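The pre-processing described here (scale to the model's input dimensions, normalize into a tensor) could be sketched roughly as follows. This is an illustrative assumption, not the actual actor code: the 224x224 default, the function name, and the flat-list tensor layout are all hypothetical, and a real actor would first decode the JPEG/PNG and convert the color space.

```python
def preprocess(pixels, src_w, src_h, dst_w=224, dst_h=224):
    """Resize RGB pixels (nearest neighbor) and normalize to [0, 1].

    `pixels` is a flat, row-major list of (r, g, b) byte tuples.
    Returns a flat list of floats, one channel value at a time.
    """
    tensor = []
    for y in range(dst_h):
        for x in range(dst_w):
            sx = x * src_w // dst_w  # nearest source column
            sy = y * src_h // dst_h  # nearest source row
            r, g, b = pixels[sy * src_w + sx]
            tensor.extend((r / 255.0, g / 255.0, b / 255.0))
    return tensor
```

The nearest-neighbor scaling keeps the sketch dependency-free; a real pipeline would typically use a proper image library for decoding and resampling.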
That's the arrow going down and to the left from the inference API actor. Then it sends the tensor to the provider, and when it gets the response it does some post-processing. For the image recognition model, the model compares the image against known images and tries to recognize what it is a picture of, and the post-processing turns that into English.
B
So we can read the picture names. Now I'm going to switch to a terminal window. Right now it has an HTTP interface, so I'm just going to run curl. First I'll show you the images.
B
So we have an apple, a bird, a bunch of images there. I'll send the apple over and it compares it against the images. It was a green apple, so it's decided that's a Granny Smith apple with a probability of 95 percent, or maybe it's a fig with a probability of 1.5 percent. We can do some others: it recognizes the hot dog picture with a probability of 99 percent, or, with less than one percent, it might be a french loaf or a spaghetti squash.
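The curl responses described here suggest the provider returns a ranked list of labels with probabilities. A sketch of parsing such a response might look like this; both the JSON field names and the structure are assumptions for illustration, not the demo's actual API:

```python
import json

def top_matches(body, n=3):
    """Rank a hypothetical JSON inference response by probability.

    Assumes a body like {"results": [{"label": ..., "probability": ...}]};
    the real demo's response format may differ.
    """
    results = json.loads(body)["results"]
    ranked = sorted(results, key=lambda r: r["probability"], reverse=True)
    return [(r["label"], r["probability"]) for r in ranked[:n]]
```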
B
So just so you can see what picture it is, that's what it looks like. That is the machine learning provider demo. It's currently checked into Christoph's personal directory; it'll probably get put into an examples folder soon, but it's definitely ready to play with, and you can add some models. One of the reasons we chose to run the inference in a capability provider is so that you can host it on a machine that's got a beefy CPU or GPUs.
B
A
Steve, this is great. So what you're demoing today, are these TensorFlow or ONNX models, just out of curiosity?
B
I think this is ONNX. The model, this image recognition model, is actually a MobileNet, which is a well-known model that's often used on small, constrained devices.
B
Images that it compares against, so you can't get too obscure. But, Liam, the answer to that question is that it could be either ONNX or TensorFlow; those are different libraries for loading the model. In this case, the model that does the recognition is MobileNet.
A
Okay. And then one question I have is: when we are using TensorFlow, are we just using the standard TensorFlow kit on the other side of the capability provider? The reason I ask is that when we were talking to Intel, they said they've already included all the optimizations; even for their CPUs it's already baked in.
B
Yes. As the capability provider, it's compiled natively against the TensorFlow libraries. So if you install the Intel extensions for TensorFlow, then you would get those extensions as well, and those Intel extensions take advantage of CPU instructions in Intel processors that accelerate machine learning models.
B
There's a link to that in the README for this repository. There's also a segue to the Python capability provider, which was the next thing I was going to talk about.
A
Any questions or comments? There are a few things mentioned here in the chat. Justin, did you want to chime in on this with your thoughts, or anyone else on the call?
C
Nothing on my mind; it's just a premature optimization, obviously. He's demoing that it can return multiple results at various probabilities; I'm just saying you could improve that by dropping anything below a certain margin of error.
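The suggestion here, dropping low-probability results, amounts to a simple filter over the (label, probability) pairs; the data shape in this sketch is an assumption:

```python
def filter_predictions(preds, min_prob=0.01):
    """Keep only predictions at or above a probability threshold."""
    return [(label, p) for label, p in preds if p >= min_prob]
```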
B
C
A
Okay, I'll take a look at that. Well, ultimately I think this is just opening yet another door of exploration, and it gives you a ton of power and capability for building distributed machine learning: anything where there's collaborative machine learning, or even machine learning in a MapReduce model, where you're using wasmCloud to be the connective tissue between you and the greater forest. Well, that's great, Steve. That's awesome!
B
It takes a few shortcuts that are less secure and probably less stable, but it was a quick way to connect wasmCloud actors to machine learning code. What it does is let you send a message from an actor to the provider, and this is actually not specific to machine learning: you can invoke any Python code. Right here on the screen, this is from the README in the repository, which is now in the wasmCloud examples repository.
B
If we want to call this factorial function, we can pass a parameter; here we're just passing a number, and we're getting back an integer response from this function. And this is the Python implementation that runs on the capability provider.
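The factorial example might look something like this on the provider side; this is a sketch of the kind of function the provider invokes, not the repository's exact code:

```python
def factorial(n: int) -> int:
    """Iterative factorial; the provider calls this by name with the
    integer argument decoded from the actor's message."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```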
B
So we can pass all kinds of objects: primitive types, lists, or hash maps. They get converted to Python objects and passed as the argument to the function, and then the return value gets converted back to a Rust object. This will also work after we add support for other actor languages, so feel free to play around with it.
B
But feel free to play with it, and let us know what you think.
D
Steve, I have just a clarification, slash question, mostly clarification. So this isn't a way to write capability providers in Python; it's a way that you can have actors call Python functions. Is that right?
B
Well, you could absolutely write a capability provider in Python with this. The one difference is that it doesn't have a set interface API like our other providers do; the API is that you pass the function name and the argument, so it's kind of a generic way to invoke the provider. But you could put any code you want there, and because it has a pre-installed Python environment, it means you can install TensorFlow, or the Intel extensions for TensorFlow, or any other Python library, and run it. So, yeah.
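The "function name plus argument" API described here amounts to a generic dispatch table. This sketch, with assumed JSON encoding and an assumed registration decorator, shows the idea rather than the provider's actual implementation:

```python
import json

# Hypothetical registry mapping function names to Python callables.
REGISTRY = {}

def export(fn):
    """Register a function so it can be invoked by name."""
    REGISTRY[fn.__name__] = fn
    return fn

@export
def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def invoke(name, payload):
    """Decode the argument, call the named function, encode the result.

    A real provider would receive `payload` over the lattice and convert
    between actor message types and Python objects; JSON stands in here.
    """
    arg = json.loads(payload)
    return json.dumps(REGISTRY[name](arg))
```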
B
You could use it to take even existing Python code and have wasmCloud actors call it. But again, there are caveats around portability and security.
A
Interesting. I think this is another neat example, Steve; thank you so much for all the hard work on this. And I think the connection here to machine learning, with all the focus on, you know, R and what people do with it, is interesting as well. Well, does anyone have any questions on this before we move on?
A
All right, well, thank you again, Steve, and we'll watch for the demos to be checked in. Let's make sure we drop a couple of links into chat on that. And I kind of want to open the floor: anybody else have any demos or anything they wanted to share today?
A
Okay, I'll do a quick community callout. I know a lot of people were able to attend the last Wheel of Tech event from our great friends at Red Badger, and I'd call out that they've got a new We Love Tech event on the way featuring NATS. This is next week, Wednesday, April the 20th, at their HQ and virtual. I'll drop the link into Slack and we'll, of course, tweet it again for anyone interested.
A
We obviously love not only tech but NATS, and we're huge fans. I think Derek and the team are going to be talking about JetStream and some of the other exciting new things they have, and of course all of these capabilities and features are available in wasmCloud as well; it's one of the many reasons we elected to partner with NATS on the development of wasmCloud. Any other community callouts that people wanted to make today?
A
Great. Any open issues or development opportunities we wanted to discuss?