From YouTube: wasmCloud Working Group - Machine Learning 03/17/22
Description
wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, and that boasts a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
https://wasmcloud.com
A: Welcome to the wasmCloud machine learning Thursday; I guess we're going to start today. Steve has a proposal on iterating the API. Steve, would you mind sharing your screen and maybe giving us a walkthrough of what your proposal is all about, and what it is that you'd like to solve for?
B: Okay. This is in a Google doc that I just shared — it's also shared in the chat for the Zoom call. It's a proposal for a small change to the wasi-nn proposal, and specifically what it does is add a couple of tag bytes to describe the kind of data being sent in a tensor.
B: The current structure is defined there in the current wasi-nn spec, and the data is always an array of bytes. This is sort of pseudocode — it's rust-like, but not exactly rust — but it's an array of bytes, and the idea is that the sender and receiver have previously agreed on what that means. So, for example, if it's a three-by-three-by-three array of f32s in row-major order, then every four bytes of that array is to be interpreted as an f32.
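As a rough illustration of that status quo, here is a minimal Rust sketch. The `Tensor` struct and helper names are illustrative, not taken from the actual wasi-nn spec: the body is a bare byte array, and decoding it as f32s is only valid because both sides agreed on that layout out of band.

```rust
// Illustrative stand-in for the current wasi-nn-style tensor: the body is
// raw bytes whose meaning (element type, byte order, layout) is agreed on
// out of band between sender and receiver.
struct Tensor {
    dimensions: Vec<u32>, // e.g. [3, 3, 3]
    data: Vec<u8>,        // raw bytes; interpretation agreed upon beforehand
}

/// Decode the byte array as little-endian f32s -- valid only because the
/// sender and receiver previously agreed on exactly that layout.
fn decode_f32s(t: &Tensor) -> Vec<f32> {
    t.data
        .chunks_exact(4)
        .map(|b| f32::from_le_bytes([b[0], b[1], b[2], b[3]]))
        .collect()
}

fn main() {
    // A hypothetical sender packs [1.0, 2.5] as little-endian bytes.
    let mut data = Vec::new();
    data.extend_from_slice(&1.0f32.to_le_bytes());
    data.extend_from_slice(&2.5f32.to_le_bytes());
    let t = Tensor { dimensions: vec![1, 2], data };
    assert_eq!(decode_f32s(&t), vec![1.0, 2.5]);
}
```

If either side deviates from the unstated agreement (say, big-endian bytes), this decode silently produces garbage — which is exactly the error-proneness the proposal is about.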
B: It also requires the sender to be able to format the data in the right format, and if it's a low-powered edge device with constrained memory, it might not be easy to reformat or repack the data in a different way. So what this proposal does is add just a couple of fields that describe how that data is packed — the data is still sent as an array of bytes.
B: Although there's a potential way to expand this proposal where, if it's not stored that way in the sender, it could become an array of arrays — but I didn't include that in this proposal.
B: In this document I bring up a couple of reasons why the current approach could be error-prone.
B: And Andrew brought up that WebAssembly has standardized on little-endian byte ordering, which, for messages that go in and out of WebAssembly — if the data is known to be little-endian, that reduces the variability somewhat. But there are some realistic scenarios where the WebAssembly is just an intermediate node: it might be receiving data from somewhere else, and the WebAssembly is the last stop before the data is sent to the provider.
B: So the proposed change is to add a flags field, and, instead of there being just one type, for the types to be an array. The flags field has two bits defined.
B: One is whether it's row-major or column-major order; the other is whether it's little-endian or big-endian. And then the types field — I might have gone a little overboard in turning types into a bit field.
B: But basically the idea is that the types array has one value for each dimension of the tensor. So, for example, if you had three dimensions of f32 and one dimension of u8, then you could have four bytes in that array, where three of them would be hex 82 and one of them would be hex zero — and that was it.
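A minimal sketch of what the proposed additions could look like. The bit positions for the two flags are assumptions, and the 0x82/0x00 tag bytes simply reuse the values from the example just given; none of these names or values are normative.

```rust
// Sketch of the proposed tensor additions discussed here: a flags byte
// with two defined bits, plus one tag byte per tensor dimension.
// Bit positions and tag values are illustrative, not from any spec.
const FLAG_COLUMN_MAJOR: u8 = 0b0000_0001; // unset => row-major
const FLAG_BIG_ENDIAN: u8 = 0b0000_0010;   // unset => little-endian

const TYPE_U8: u8 = 0x00;  // tag byte from the meeting's example
const TYPE_F32: u8 = 0x82; // tag byte from the meeting's example

struct TensorV2 {
    dimensions: Vec<u32>,
    flags: u8,
    types: Vec<u8>, // one tag per dimension
    data: Vec<u8>,  // still a bare byte array
}

fn main() {
    // "Three dimensions of f32 and one dimension of u8": four tag bytes.
    let t = TensorV2 {
        dimensions: vec![3, 3, 3, 5],
        flags: 0, // row-major, little-endian
        types: vec![TYPE_F32, TYPE_F32, TYPE_F32, TYPE_U8],
        data: Vec::new(),
    };
    assert_eq!(t.types.len(), t.dimensions.len());
    assert_eq!(t.flags & (FLAG_BIG_ENDIAN | FLAG_COLUMN_MAJOR), 0);
}
```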
B: Instead of defining these values, I considered using the same tag values that are in the CBOR standard, but I didn't want to tie the two things together, since this is an API proposal and that's a serialization proposal. So — happy to hear feedback on this. I have a couple of other thoughts on ways I might modify it, but I'd love to hear from others; I know there are people on the call who have thoughts on this.
C: Yeah, I can jump in first. So, Steve, I would say this is great work. I especially like this array type — that definitely is needed; I'd even go as far as calling it a bug in the original API definition, since the feature is definitely required for, like, a non-CNN type of machine learning model. So yeah, I think we would need to modify wasi-nn at a future point to accommodate this change.
C: That is, the type needs to be a multi-value, array type rather than a single value. As opposed to that, the other thing you are proposing — the big/little-endian proposal — I'm a little bit concerned about. I think, when you are doing machine learning, you basically need to do two stages of work, and the first stage is data preparation.
C: So basically, whenever somebody is training a machine learning model, by default they pick a specific format for the data representation, or for how you normalize the data, and anybody that tries to use this model for inference will have to adhere to that specific format. That's basically the task in data preparation, so it's a really complicated issue.
C: It's more than big/little-endian translation, so I'm a little concerned that we are combining two different problem domains into a single one and consolidating them into a single API, because you're only catching part of the data preparation in this API.
C: For the case where your edge device may not have the processing power to do the job adequately, you might need to have a separate provider with an API for data preparation. So it's not limited to translation between big- and little-endian: it might be image encoding and decoding, normalization, denormalization — whatever you need to do.
B: My thinking was that for something like image preparation, yeah, there are certain other features — like, you might expect the image to be compressed to a certain fixed width and height, for example, or cropped in a certain way, or using a certain color space — and if you were making an API that transferred images, you might want to include those things as part of the API, like the dimensions of the image. But this API is not specifying any kind of higher-level objects; it's specifying just numbers.
B: So I thought it might make sense to describe the format of the numbers being sent. I don't think this imposes — at least the way I was thinking about it — a burden on the sender to do the data preparation. There's still kind of an implicit part of the API that requires the sender and receiver to agree on who's doing the data preparation.
B: If the sender — say, an edge device — is not able to convert little-endian to big-endian, then with the extra flag bits it will at least have sent the description of the data in this API, so that the receiver could do that last bit of data prep. So in that case it imposes a reformatting burden on the inference engine, which is a separate concern from the data formatting. I think that's what you're saying.
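The receiver-side fix-up described here could look roughly like this. The flag constant is an assumption carried over from the proposal discussion, not part of wasi-nn today: the constrained edge sender ships bytes as-is, and the inference side byte-swaps only if the flag says so.

```rust
// Sketch of a receiver that honors an endianness flag instead of relying
// on an out-of-band agreement. The flag value is illustrative.
const FLAG_BIG_ENDIAN: u8 = 0b0000_0010;

/// Decode f32s according to the tensor's declared byte order, so a
/// constrained sender never has to repack its data.
fn decode_f32s(flags: u8, data: &[u8]) -> Vec<f32> {
    data.chunks_exact(4)
        .map(|b| {
            let bytes = [b[0], b[1], b[2], b[3]];
            if flags & FLAG_BIG_ENDIAN != 0 {
                f32::from_be_bytes(bytes)
            } else {
                f32::from_le_bytes(bytes)
            }
        })
        .collect()
}

fn main() {
    // The same value arrives correctly regardless of how it was packed.
    assert_eq!(decode_f32s(0, &1.5f32.to_le_bytes()), vec![1.5]);
    assert_eq!(decode_f32s(FLAG_BIG_ENDIAN, &1.5f32.to_be_bytes()), vec![1.5]);
}
```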
C: And in addition to that, the server implementation doesn't have any knowledge of how the data is formatted.
C: That is, you could load different models — you could load a big-endian model, or you could load a little-endian model next time — and that knowledge is usually embedded in the model description.
C: Yeah, of course, that's one way of handling it. But my point is that this is just a small aspect of a data preparation problem, which —
B: I don't, right now — except that, as it turns out, I have seen some serialization standards, and CBOR has standardized on big-endian. But no, it was just a conceptual omission in the way I was looking at it.
C: I see. Yeah — we actually looked into expanding wasi-nn for ease-of-use purposes, exactly the kind of thing you're characterizing here. For example, we thought about setting up a helper library to take a sender's JPEG image and convert it to the appropriate tensor format; that's the kind of approach we thought about. So there could be an SDK library to help with this, to make it easy to use. In the end we said it's probably not mature enough to incorporate into the API specification, but you can still put it in that SDK, for ease of use.
C: That's another way out — basically to have supporting utility functions that people can use, but not in the API definition.
E: Hey Steve — I mentioned this earlier up in the thread, and I'm interested to hear what you think about it now. The proposal you suggested does implicitly require the receiver of the tensor to be able to transform the data, and so it would make all wasi-nn implementations bigger, right? They'd have to handle the translation of data.
E: If, like Ming Chu said, we wanted to do that through some separate mechanism — I don't know, maybe a separate provider and/or a separate actor in wasmCloud, or whatever — it seems to me that the level of metadata you're trying to add here would be important for such a translation function. And I mentioned previously: what if we added a way to the wasi-nn APIs to introspect the models, so that you could say, well, I have a model I've instantiated, or whatever —
C: Big/little-endian, different image encoding formats, and even normalization: do you want to normalize to the range negative one to positive one, or do you want to normalize to zero to one? Those are two very common schemes that people use for machine learning, and they may use different ones, so you need to do the appropriate translation of the raw data.
B: Yeah, I see. In fact, we could provide a library for either side — either in wasm, or on the server side — that does these arbitrary translations. It could take the spec of the current input tensor and the desired format that needs to go into the predict engine, and do the right transformation. Absolutely, that could be a library, so that every wasi-nn implementer wouldn't have to implement it. Yeah.
B: I think that would be a good idea to have. I don't think it eliminates the need to have this data somehow sent along, though, because that transformation library still needs to know what kind of transformation to do.
E: Can I comment on that real quick? I think we're sort of mixing two ideas here. The tensor has a shape, okay? The shape is an array of dimensions.
E: You've identified something that was missing in the shape of the tensor, which is that each one of those dimensions could have a separate type. That actually is pretty critical to the high-level view of a tensor. The tensor actually doesn't understand, in this API, what its format is — it's just a bag of bytes.
E: So all that the tensor really knows is those dimensions, and I think, number one, we should add the types to the dimensions like you suggested here. But when it comes to the actual serialization of the bytes — we've talked about colors and color schemes, and yeah, even big/little-endian and stuff like that — I think that is the type of stuff that might not need to live on the tensor object.
E: Because it's a separate concern, it could be handled either separately in a library, or by these other wasi-nn functions, or whatever. Does that make sense — the distinction I'm trying to make?
B: Yes, it does. And when you say color, you're referring to images? Yeah. So, to me, the reason why little-endian/big-endian is separate from color is that this isn't an image API — it's just an API for sending integers. Yeah.
C: And people say: oh well, I do normalization differently — you need to offset by 0.5 so that it can go from negative one to positive one, or something like that. So that's, like, a floodgate of parameters that people were asking to have added to the API.
B: Yeah, I understand. So you think little-endian/big-endian steps just a little bit into that floodgate, and that we just shouldn't go there at all? Right, yeah.
E: I mean, yeah — on the little-endian/big-endian debate, I would say: this is a WebAssembly API, so it's all really going to be little-endian, okay? So I wouldn't even add that flag — I'm sure, like, shrug. But the other flag, the row-major/column-major — that I would consider adding. But it opens the door for all these other things: yeah, hey, I want to do RGB; now I want to do BGR; no, I want to normalize this way.
E: That's actually a big problem in OpenVINO right now, I think — there's, like, a ton of combinations of flags, and I was sort of hoping we could avoid that.
C: Yeah — people say you spend, like, 70% of the time doing data preparation and then 30% doing the actual inferencing, and that's actually pretty accurate. The data transformation is a big, big problem to me, with all its complexity. So I think that's kind of the concern — that we might be opening the big floodgate here.
A: Great. Well, Christoph just joined — and Christoph, my apologies: parts of the United States have a time change, and so I think our meeting moved up an hour in your time zone. Our apologies that that wasn't clear. So, do we want to maybe catch Christoph up on the discussion so far, and on how you guys are thinking about this particular recommendation?
B: Okay. So I know Christoph has read this. The feedback from Ming Chu and Andrew was that there are a lot of properties of the tensor that are really specific to the model, and that a lot of those are maybe better off going into the domain of data preparation — that includes things about images, like dimensions and color space, and might even include whether the data is centered around zero or not, and little-endian/big-endian probably falls into that bucket too. They had already received a bunch of suggestions for modifying the wasi-nn proposal to add other features — like, is the color space RGB or BGR, etc. — and by adding little-endian/big-endian it sort of begins going down that path; it's probably way out of scope to incorporate the considerations for all the different kinds of models, so it's probably better not to go down that path — also because WebAssembly has standardized on little-endian. And then the second part:
There was a lot of — it sounded like a lot of — positive feedback on having value types, especially for non-CNN kinds of models where the dimensions of the array are not all the same type. So there was favorable feedback on that, and then it was kind of mildly favorable on the row-major and column-major bits.
B: So thank you for that; I'm glad. One of the things that I stumbled into a little bit after writing this was the realization that it probably makes sense to add a value type boolean and a value type string, which would be easy to add to this schema here.
E: Even the number of characters in the string could be variable size — so, yeah, if you create a tensor of strings, you have to have some way of encoding the length of each string in the value type. Does that make sense?

B: Yes, yep, absolutely.
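One conceivable encoding for that variable-length case — purely a sketch, since the proposal does not define one — is a u32 length prefix before each string's UTF-8 bytes:

```rust
// Hypothetical packing for a string-typed dimension: each element is a
// little-endian u32 length followed by that many UTF-8 bytes. Not part
// of any spec; just one way the length could travel with the data.
fn encode_strings(items: &[&str]) -> Vec<u8> {
    let mut out = Vec::new();
    for s in items {
        out.extend_from_slice(&(s.len() as u32).to_le_bytes());
        out.extend_from_slice(s.as_bytes());
    }
    out
}

fn decode_strings(mut data: &[u8]) -> Vec<String> {
    let mut out = Vec::new();
    while data.len() >= 4 {
        let len = u32::from_le_bytes([data[0], data[1], data[2], data[3]]) as usize;
        out.push(String::from_utf8(data[4..4 + len].to_vec()).unwrap());
        data = &data[4 + len..];
    }
    out
}

fn main() {
    let packed = encode_strings(&["yes", "absolutely"]);
    assert_eq!(decode_strings(&packed), vec!["yes", "absolutely"]);
}
```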
E: So, what I'd like to understand — you mentioned, I think, pandas or NumPy or one of those things earlier, Steve, in the chat window, and I was wanting to understand the use case: how do you plan to use these changes? Because I think we want to make this super easy to use, right?
E: I wanted to understand a little bit more where that's coming from, because there may be other ways to solve the problem. So could you discuss that a little bit — what code, what language, what applications are we looking at?
B: Sure. Some of the examples I've looked at, in both Python and Rust, use something equivalent to ndarray, which has something called ArrayBase as the internal data structure it uses to store arrays. There are a bunch of different ways it can store things, but it's an array of multiple dimensions and various data types. The tract ONNX library uses it internally.
B: So that seems to be a common thread — the concept of ndarray — and it's available in both Rust and Python. For example, if you take the ImageNet examples: they take the JPEG image, use an image library to squish it down to the right dimensions, and create the tensor. So I want to make it really easy for these source languages — the code collecting the data and invoking this API — to take an ndarray, where the data probably already is, and convert it into a tensor; and then for the receiver to get the tensor from this API and pull it almost immediately, or with a trivial transformation, into whatever that ML library needs.
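A std-only sketch of that sender-side path, with a nested `Vec` standing in for an ndarray `ArrayBase` (with the real `ndarray` crate one would reach for `as_slice()` to avoid the copy wherever the layout already matches):

```rust
// Flatten multi-dimensional data row-major into a (dims, bytes) pair, the
// shape a wasi-nn-style tensor wants. A nested Vec stands in for an
// ndarray here so the example stays dependency-free.
fn to_tensor_bytes(rows: &[Vec<f32>]) -> (Vec<u32>, Vec<u8>) {
    let dims = vec![rows.len() as u32, rows[0].len() as u32];
    let mut data = Vec::new();
    for row in rows {
        for v in row {
            data.extend_from_slice(&v.to_le_bytes()); // row-major, little-endian
        }
    }
    (dims, data)
}

fn main() {
    let (dims, data) = to_tensor_bytes(&[vec![1.0, 2.0], vec![3.0, 4.0]]);
    assert_eq!(dims, vec![2, 2]);
    assert_eq!(data.len(), 16); // 4 f32s x 4 bytes each
    // The first element round-trips back to 1.0.
    assert_eq!(f32::from_le_bytes([data[0], data[1], data[2], data[3]]), 1.0);
}
```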
E: Cool — I think we're completely on the same page there. Yeah, ndarray was one of the — like, yeah: if we could do zero copies from ndarray to tensors…
E: So is there a way we can sort of test this out? I mean, does anyone have any ndarray-using code out there that we could test — like, how much transformation is necessary?
B: Well, at least my approach for that is to start with some of the examples — I know there's the Intel examples repo on GitHub, and there's also a ton of TensorFlow examples on GitHub. So I was going to take some of those, and part of the design of the library would be to really play around with them, to try to get to zero copy.
E: Yeah — hey, maybe we shouldn't dig too deep right now, but let's continue discussing this, because I'm all about this. Awesome.
A: Well — and I want to make sure we give Christoph time. I know, Christoph, you have been putting in yeoman's work getting the wasmCloud ML framework going.
A: Great. So, Steve, do you maybe want to talk a little bit more about the examples? Or maybe, Andrew, if we could hear from you — and Ming Chu — what would be your ideal example? We've kind of talked back and forth about what we're going to demo on the Intel side, and I would love to get aligned on that and understand: is there a particular model you want to see running?
E: We're not talking now about this good proposal — we're talking about the demo? Okay, yeah. Well, I think there are two things I would say — and maybe, Ming Chu, you have more — but I think the key piece in the demo is the ability to move the computation, the inference, from the edge to the cloud, quote-unquote, based on some machine criteria.
E: I am fully confident that's possible — but only because you guys have assured me of that. So that's one key piece of this: the ability to move the code. And then the other piece is — I think Christoph's doing good work with the ONNX stuff, but it might be more interesting for us to use a highly optimized inference engine like TensorFlow for the inference engine.
C: Yeah, I would add a little addition. I think the moving of the workload should be flexible enough that in some cases you might be moving a workload from the cloud to your edge — in that case the edge could be your home computer, for example, right? Yeah.
C: So yeah — as long as, you know… I think Andrew has basically defined this generic cloud-and-edge code movement, and that's the kind of capability we'll be very interested in.
B: So here's one thing that wasmCloud can do today: let's say your edge device has the image — maybe that edge device is a cell phone, or some Coral board, or something. If you have a laptop on your LAN running the inference engine, then it would route to that. And if that laptop on the LAN went away, it would route to a cloud engine.
B: And then, if the laptop reappeared on the network, it would route the workload back to the local LAN. So that's prioritizing by network, without specific knowledge about whether there are GPUs, or about CPU capability or memory — but that's one kind of thing we can do today. We think we can do some other kinds of routing based on other characteristics. But is what I described significant enough that you'd want to see it in a demo?
C: Yeah, yeah. I think from a user perspective they get the benefit of — like, maybe if you connect to the cloud service you have to pay for it, and if you have a service running on your local PC, then it's free, right? So that's a user benefit.
C: Another aspect is that you might be running your image motion detection on your camera, which is low resolution, so it has a lot of false alarms, and you could migrate to a more powerful, full-blown machine image-recognition server on your local network — in that case you get better predictions. So those are two aspects of potential customer benefit.
A: Okay, I think we have enough detail that Steve and I can run with a story and come up with a story to tell. What we're going to do is check this into the wasmCloud demos — into the examples repo — as a starting spot, but it should demonstrate — I think we can demonstrate a couple of ideas. I thought a cool blog post would be exactly what you just described:
A: Ming Chu — building Nest on a budget, you know. It could start with a lo-fi model that just pulls out faces and then sends the face pictures up to a more powerful server that does something else, right? Because you get all the distributed network for free with wasmCloud. And that would be pretty simple — like, pretty simple, air quotes:
A: I've been working on — we've been working on this project for three years, but it's easy now that all the other stuff works, and, of course, now that Christoph and everybody have been working on the machine learning part. Other than all that hard work: easy. But Christoph, I think you're ready to do your demo, if we want to transition over to you — yeah, sure, we can — oh, super, great, all right. Let me give you sharing.
D: So that is the current repository, yeah. It's a complete example, so if you want to replay it, there's also some documentation on how to build it and what you have to take care of. It's just a bit outdated — we changed the scripts that start the application a bit, but I'll update it tomorrow, so no problem. And you control it via that deploy directory; any variable you would have to change is hopefully in the envs environment file, and that should be it. Yeah, let's just watch it. We currently have no high-flying model — it's just identity input/output, so it gives you back the same response you give it, but it shows the whole thing. Let's just start it, and then let's have a look at that.
D: So I'm now in the deploy directory, and if you do not know what controls you have, just launch the run script without any argument. You should start the bindle server first, and there are also scripts pre-loading that server with the bindle that's necessary — of course, because the capability provider will fetch it, or will fetch at least one. I have started it already, so the application has started. What do we have? We have NATS running in a container; we have a registry; the wasmCloud host runs as well,
D: though not in a container. The bindle server also was started before — not running in a container, but it starts up below. I'm in the folder where the logs of that wasmCloud host reside — let's see what's inside; it should be one line, and from time to time I look at what's in there, so I can see if it starts up properly. It takes a while.
D: Meanwhile, we should see the washboard coming up — let's reload the page. You're probably aware of this, but on the top left you see the inference API actor already; on the top right you see the capability providers, both of them already up and running. What's missing is the HTTP server — it takes a bit longer. On the lower left side you see the two links: we've got one link from the HTTP server to the inference API actor, and that is the lower link, so the contract ID is wasmcloud:httpserver.
D: The upper link connects the inference API — the same actor — with the other provider, the wasmcloud:example:mlinference capability provider. So now we see all the technical stakeholders up and running. We switch back, maybe, to make sure everything works fine — usually I print out the logs; you can play with them — but it looks fine, and then, yeah, you can launch requests via curl.
D: So that's a POST going to the port that was configured somewhere in the scripts for the HTTP server, and then we want a query against the model called "challenger", and then there's one more parameter — the index. All the other parameters come with the metadata and will be loaded from the bindle server itself.
D: So that's the data you want the prediction on, and the curl was started with dash v — that's for verbose — and it's always important to get back a 200, which we do here. And then you see the result. Maybe it's a bit verbose, but it's not bad for a beginning. What's important: the error field is false, and then you see the output tensor.
D: If you look very closely, you see that the output is indeed identical to the input — we can make sure of that, because that's the dummy model we always use for trying things out: identity input/output. And that's basically it. We can have a look at the logs once again — they reflect what we saw — and yeah, that's it. Going back to the washboard: that's the whole application.
A: Nice work, Christoph. I think it's phenomenal how you put all this together and really stayed at it, and I think a lot of people are going to want to use this. Steve, do you think you're on a path to get a demo together that people can cut and paste, you know, next week sometime — maybe one of the ONNX models, or ImageNet, or something like that?
A: Right, sweet. Well, all right, I think we're all aligned here. I think the next step is to take the basic example and turn it into one that really lets people see things work end to end, and then we can package this up and pull together a demo. Christoph, I'd love to — I would love to show this off in the regular Wednesday meeting, if we can. I don't know if you can ever make that; if not, maybe we can get together
A: and have you record the example that you — and that Steve's going to help pull together — in the next week, and then we can put it on YouTube and on Twitter and stuff like that. I think a lot of people are really — well, I know a lot of people are watching this, because they keep asking me when it's going to be ready.
A: All right. Well, I think we went through our three different topics today. Steve and Andrew and Christoph and Ming Chu — it sounds like you got the API proposal discussed, and what you think you want to pull forward and what you want to leave behind. Is that right?
B: Yeah, I'll make an iteration on that and send it out for feedback.
A: What wasn't clear to me was — Bailey had talked about maybe some of the extra things that you had discussed, maybe pulling in but decided against. Did you think that pulling those together into a library was a good idea, or not? I didn't quite track that.
F: Okay, yeah — I'd have to try it out a little bit more. My main thing is that we probably want to start talking more about how some of these things fit into the component model, something I feel like we haven't discussed a lot recently, since it was so experimental, but I want us to start kind of fleshing that out.
A: Okay, so that sounds like more discussion is needed on that piece. At least, I think the priorities become: just update the API; then the big example — the tensor example here — and then work towards demos and stuff like that; then the Intel example, if that's a separate step, would be third; and then we can come back and pull up on the ML tools — the ML helper API functions and stuff like that. Does that feel like the right approach on this one?
A: Okay, all right. Well, on our side, Steve's got this one as a priority, so, Steve, I know you'll follow up with everybody as needed. Any other comments or things people want to bring up today on this one?
A: All right — silence is acceptance. I'll stop recording, and we can hang out for just a quick minute; if not, have a great week.