From YouTube: wasmCloud Working Group - Machine Learning 02/03/22
Description
wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, and that boasts a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
A
Okay, and of course, you didn't have permission to share, Christoph, but you do now.
B
Yo, can you still hear me? Because it's so quiet.

A
We've got you. Good.

B
Okay, yeah. So it has been two weeks; let's see what I have been doing. Maybe we can have a look at the status quo. I've been working on the capability provider.
B
That was supposed to be only one component of what we want to do. Let's see the others. So the capability provider, which is supposed to do the inference, is here on the top right, and then we have all the others. But I only got the inference engine code done, so to say, and, in brief, it is still untested.
B
But so, what do we have now? We have a contract. It changed slightly, so it's not as wasi-nn-like as it has been before. It is called predict now. So, just one signal... well, the method: the one single method is called predict now. Then, what does it do? On linkage with any actor, that capability provider collects parameters; we shall have a look at these maybe in a minute or so. These parameters come in tuples.
B
It's always the nickname of a model and the bindle URL, so a URL to a bindle path. We can have a look at examples later. Then it downloads the invoice and the parcels. So what's behind that? The parcels: there are two of them. One parcel is the model, and the other parcel is its metadata, and that's also the reason why the interface changed.
B
So with wasi-nn you have a one-to-one relationship, and you think that the user of the engine has to pass all the parameters and directly expects the answer. In our system it's now a bit different, because we said we want to load the model and some other parameters, the metadata, from a bindle server, and this is done also.
B
Then the interface, as I said, changed. The good news is that the functionality as it was implemented by the Intel guys, and, I think, Radu Matei, is still very close to the very original version; it is just not part of the interface now. But we can have a look at the code later. And then, what we have... yeah, I mean, it's implemented so that everything shall work together. We can have a look at the running example. It throws an error; the error is meaningful, but...
B
...I still have to get the testing done, but that's what we have. So, in brief: the capability provider, which is supposed to be implemented... I think it's feature-complete, but it's totally untested. Just yesterday I launched some quick-and-dirty tests. And if we have a look at the overall system, the other technical stakeholders are missing; I mean, we have the inference engine, i.e. the capability provider, and the bindle server. That's the status quo.
A
Okay, thank you so much, Christoph, for the update. How do we help here? Is there some place where we can plug in to collaborate and assist? Because I kind of feel... my perception is, and I don't know if there are maybe discussions that happen on Slack that I'm not up to speed on, which is perfectly probable... are there other ways that we can plug in to help get this developed?
B
One thing I recognized is that I don't have a clear view of how to debug. I had a brief look, and I saw that someone, Matt, I think, was asking questions like this, or in that direction: how to debug. Because yesterday, late, I wanted to plug in my debugger, and it didn't work. Otherwise, you can play with it. You could have a look at the overall architecture.
B
I mean, we can have a look together at how it looks and feels: some aspects like, for example, the invoice and the parcels and the handling of the metadata, such that it goes in the right direction. You could have a look at that; design reviews. I mean, what I will do next is test it, obviously: get that capability provider running in the form of tests, and then build the other technical stakeholders, like we said, the RPC actor, the interface actor, and then the whole system, so that we have an example application.
A
On the debugging thing: what we've tried to do... I've dropped it into the link now, but it's right off wasmcloud.dev. We did sort of take a bunch of the feedback that we collected and wrote up a few pages, Christoph, to help document some of the errors and debugging procedures for the host, the actors and the providers. That's the way the manual is organized right now, but I'm sure that there are still gaps.
A
So if you can help me identify where the gaps are, I will try to collaborate and prioritize making sure that we can get some documentation out there, because that's a good exercise. And, Steve, would it make sense for Christoph to maybe take you through a design review of the architecture? We can kind of talk through how things are set up right now. Does that make sense?
C
Yeah, absolutely, and I can follow up with you to see if there are any debugging tricks that might be helpful.
B
Yeah, cool; then I also get to know the wasmCloud crowd better. That would be super. So, high level: should we have a look at the code at a high level right now? That's maybe interesting if we record it for others, so that others have the big picture, and then with Steve, offline or whenever, we can have a deeper, more detailed look, or a different view on it. So then, let's go.
B
Maybe we start with the interface somehow, and with how the invoice is structured, because that is what is supposed to be on the bindle server. Okay, give me a second.
B
So, that's what I did: I configured kind of an identity model; that's the easiest thing that you could use and test. The overall structure is such that, as I said, usually we would expect two artifacts. One is the model itself, and this is this one, this parcel.
B
So everything is in parcels: the model is a parcel, and the metadata is a parcel. And now the code of the capability provider assumes that each parcel is assigned one group; there's one for the model, there's one for the metadata. It doesn't have to be; I mean, you do not have to define groups. But the thing is that the capability provider has to be able to differentiate between these two, because it parses them differently, of course. Let's have a look at the metadata in a minute; I just wrote it in JSON.
B
I saw that we have other capability providers in the examples where JSON was parsed, and it's nice for humans to read. So it has to be parsed differently than the byte stream of the model. You could differentiate the two by their MIME type, or media type, but I think that's not very advantageous. So currently it is assumed that there are these two groups, and each of these parcels, these artifacts, is assigned one of the groups. And maybe, for completeness, let's have a look at where the JSON is for the metadata.
B
So this is the content of one parcel. We have a model name, which is not parsed, I think; it's just for convenience, because I thought it would be nice for users to have a name, anything you can think of, when you open that file. And then we have everything that you also have to provide if you use wasi-nn, for example: the graph encoding, that's the format, so to say; the target, CPU, GPU or TPU; and then the tensor types.
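Going by the fields mentioned here, such a metadata parcel could look roughly like the following. All field names are illustrative guesses based on this discussion, not the actual schema:

```json
{
  "model_name": "identity",
  "graph_encoding": "onnx",
  "execution_target": "cpu",
  "tensor_type": "f32"
}
```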
C
Yeah, this is good. Christoph, are you going to get to the interface?
B
The interface... at the capability provider? Yeah, right: the Smithy interface. Okay, yeah. We haven't been there; it's somewhere here.
B
The input takes the model nickname, so the model, and the tensor, and an index. The index, I think, was a wish, or maybe a proposal, from Andrew Brown, who said that models could have multiple inputs, at least in inference engines like OpenVINO.
B
Yeah, we have a result status, so it's just a boolean for whether there's an error or not, an error message and, of course, the output, so the byte stream that you get back. Does that kind of answer your question, Steve, or do you have something specific in mind?
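As a sketch, the request and result shapes Christoph walks through might map to Rust structures along these lines. All names here are illustrative guesses at what the Smithy code generation produces, not the actual generated types:

```rust
/// Hypothetical mirror of the Smithy input shape: a model nickname,
/// the raw tensor bytes, and an index for models with multiple inputs.
#[derive(Debug, Clone)]
pub struct InferenceRequest {
    pub model: String,
    pub tensor: Vec<u8>,
    pub index: u32,
}

/// Hypothetical mirror of the result: a success flag, an optional error
/// message, and the output byte stream you get back.
#[derive(Debug, Clone)]
pub struct InferenceOutput {
    pub success: bool,
    pub error: Option<String>,
    pub tensor: Vec<u8>,
}

/// Stand-in predict() for the identity model: the output equals the input.
pub fn predict_identity(req: &InferenceRequest) -> InferenceOutput {
    InferenceOutput {
        success: true,
        error: None,
        tensor: req.tensor.clone(),
    }
}

fn main() {
    let req = InferenceRequest {
        model: "identity".to_string(),
        tensor: vec![1, 2, 3, 4],
        index: 0,
    };
    let out = predict_identity(&req);
    println!("success={}, {} bytes back", out.success, out.tensor.len());
}
```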
B
Yeah, okay. Apparently it's exactly such that this is the model, or the model name, you provide when you do a request, and what it has to match are the parameters which are passed when the actor links, you know, when the put_link method is called. So if we look at the example: you always have to provide a test config here, and I used it; currently it looks like this.
C
Okay, so it looked like, on your to-do list... I think you had an implementation of the capability provider, but not the actors yet. Is that right?

B
Yep.

C
So there are some test tools that we have that can help.
C
You can invoke those interface APIs on a provider without having an actor yet, and that might be one way that it could be tested, sort of an alternate approach to debugging, but it might be useful. I can show you some of those, or, for anybody else who's listening later: a few of the capability providers in the wasmCloud capability-providers repository have a test directory, and examples that might be useful to look at would be the kv-redis or sqldb providers.
B
I'm aware of these; I tested with them when I did the first iteration, when it really was wasi-nn-like. This is the one and only test which exists right now. I mean, it throws an error, but that's fine. Let's replay it, or so. So, what's the precondition? I understand that on the lower left I have a JetStream, right; that's one precondition. On the lower right...
B
...you have the bindle server, which is the local one currently, and on the right, just for clarity, there's what the invoice looks like. If I test now, at a certain point it complains, and I saw that an hour ago; that's natural, because here it complains about the encoding. That's not readable, because the color code is not good, but it says, you know: encoding... can only load ONNX models. And the reason is that it gets an encoding of zero, but ONNX is encoded with an enum value equal to one.
B
Yeah, yeah, but... I prepared it here; let's look from here. What you see is that the parcels are in fact downloaded from that server. You see that highlighted here, these long lines with the hashes.
B
These are the two parcels, and it gets a 200 back, and it also says "flushed", so many bytes. So it downloads them, gets them in, and then that error message comes out of the stuff I... I mean, I did not copy, but I templated, from wasi-nn. Overall, I try to be very verbose with the error messages, so they should be very specific. And it is; I just currently do not understand why it gets a zero when we should have been passing a one.
B
I think there's a parsing error, because it should match one. And then, if we have a look in the tract... it's still the tract inference engine... here's where the error message comes from: if the encoding is not ONNX, then it throws that error, and it throws the error we have seen. "Oh, I do not understand why the encoding is wrong." So, that's what I mean: very specific error messages.
B
Oh, what else can we say? We wanted to have a look at the code, so maybe...

D
Can I make a comment real quick, Christoph?

B
Yep.

D
Sort of rewinding a bit back to the interface that you're exposing: can you look at the inference request?
B
Yeah, so in the Smithy... you want to see the Smithy, rather? Or... sure.
D
This is fine, okay. All right: line 47, the index. You retained that because I said, you know, models could have multiple inputs, right?
D
Okay: if you pass a tensor, and I hope that tensor structure has the precision, the dimensions of the tensor, and then the actual data... if it has sort of all those things, I think you can do it. That's my sense, but don't quote me later if I'm actually wrong.
D
Buffer, size, data, okay. What about the... well, I guess, Christoph, you're sort of baking the precision of the model into the bundle metadata, right?

B
Yes, absolutely.

D
Okay.
C
Yeah, actually, the tensor data is supposed to be an array of bytes. You can actually make that a blob type on line 61. You don't actually have to call it TensorData; you could just call it blob.

B
Oh, really? I mean... okay.

C
And the code generator will make that be a vector of u8.
D
So, for example, if you were going to send an image: let's say the images are 300 by 300, so the dimensions, and we're going to do RGB or whatever for color space, so you'd have 300 by 300 by 3. And we need to pass that data on to... I'm pretty sure we need to pass that data on to TensorFlow, and if we don't, it's still...
C
Yeah, so it would be however many dimensions are in the tensor; that would be the length of the array.
B
Yeah, that was the idea. So here, for example, "out" is, I think, just for convenience. That is to tell you what you get back, just for the human, I would say, so that you have an idea. But what's really important is what is passed and what is used directly in the engine; as you said, it has to know.
B
Yep, yes, and sorry, it's redundant, absolutely. Why is it there, then? I think it was easier for me; I don't remember why. But at a certain point I thought about you, Andrew, because in the chat, in the machine-learning channel, I think... didn't you vote to put the dimensions into the interface? Right. But then, when coding, I didn't take the time to do it. I thought it would be easier if it's...
C
I had another thought about the tensor data that I didn't realize until you showed the metadata. If that's a blob and you're going to deserialize it based on the data type, like f32, then I think that f32 type should be part of the interface, because if you've got an f64 or a u32 for the data type, you're going to deserialize it a different way. It sort of feels like that should be part of the tensor structure that you pass over, to tell the receiver how to deserialize it.
C
Yeah, so you're going to pass data as a byte array, an array of u8, but is that really an array of float32s for your model?
D
This is a good point, Steve, and if you guys do this right, you will avoid a lot of pain that I've been through.
B
So for me, these are two things, but maybe, Andrew, can you correct me? I would always deserialize as u8, and then I would say to that engine: here's your byte array, but you should interpret it as whatever. Maybe that's wrong.
D
Yeah, I think that is the case. Most of these engines, at least, especially OpenVINO, just need a slice of memory, so it's fine, right, and they'll interpret internally how the precision... But I think the important thing is on the user side. On the user side, when they're writing their code... who is the user here?
D
Actually, maybe this doesn't matter for you guys, but I had issues where, you know, I construct a tensor of f32s and then I had to get it to an array of u8s, and maybe I didn't do the endianness right, or maybe, you know... So if you can eliminate those issues for the user, that's probably going to be a plus. wasi-nn is a low-level API, so the user has to worry about it.
C
Well, I guess it depends on how the data was written. If the data was written by ONNX into a byte array, then it's going to end up being the correct endianness, because I assume ONNX will have standardized on that in their protocol. So you don't have to... One thing I was worried about is whether your Rust code is going to deserialize it; it sounds like you don't do that at all.
C
You just take the blob, which is the vector, and you're going to pass it right to ONNX, so you don't have to do any deserialization there. But whoever's creating that array will have to make sure that the endianness is correct, and that both sides also agree on the data type, whether it's f32 or f64.
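The f32-to-u8 step the group keeps circling can be sketched like this. It assumes little-endian byte order, which is only an assumption; agreeing on it explicitly is exactly the endianness concern raised above:

```rust
/// Pack a slice of f32s into the little-endian byte buffer an engine expects.
fn f32s_to_le_bytes(values: &[f32]) -> Vec<u8> {
    values.iter().flat_map(|v| v.to_le_bytes()).collect()
}

/// Reinterpret a little-endian byte buffer as f32s (e.g. an output tensor).
fn le_bytes_to_f32s(bytes: &[u8]) -> Vec<f32> {
    bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    let tensor = vec![1.0_f32, -2.5, 3.25];
    let bytes = f32s_to_le_bytes(&tensor);
    println!("{} floats -> {} bytes", tensor.len(), bytes.len());
}
```

Putting the conversion in one place like this is the kind of boilerplate the proposed typed-vector interface would push down into the generated serializers.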
B
My thought was: at least me, as a programmer, when I shall use something... So maybe at least with standard models, or usual models, or at least with tract-onnx, I would not have to care. Maybe on that f32... I think... I don't know. I mean, it's that format, so then I would have to pass it anyway; it's like boilerplate. But maybe not in the general case, as you say. You know what I mean; now it's a bit hidden, but it's still there.
D
Here's my opinion on this; I don't know what you think, Steve, but I think it's fine. You're already hiding some of the stuff in the metadata; hiding the tensor type as well might be okay. It is okay, right. The key problem is: how does the user get their array of f32s to the correct array of u8s?
C
I think it's no pain to change that, so yeah, I agree with that. I think if you're going to have an actor generate this byte array, then we would probably modify this interface. But if it's being generated by some other tool, especially if it's an ONNX tool, then it doesn't need to be here, because as far as the actor and the provider are concerned, it's just a pass-through byte array.
B
Yeah, it reminds me of something, if you do not mind that I have a look. You know, this is what we are looking for, right? Right, yeah. It's in here, I mean... yeah, that's it.
C
Well, if we specified the interface as having a Vec of f32s, then the problem would go away, because the serializers would get it right.
D
Right, but are you going to be able to vary that Vec of f32 to a Vec of f16, to a Vec of whatever, based on the model? Because different models have different precisions and different...
C
You'd have to do kind of the equivalent of a union in Smithy. It would be a bunch of optional fields, or perhaps something like an enum, like a Rust enum, where you have either a vector of f32s or a vector of f64s.
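On the Rust side, the union-like shape Steve sketches could look roughly like this. Name and variants are illustrative; the actual Smithy modeling would differ:

```rust
/// One possible shape for the proposed union: the interface carries typed
/// vectors, so the serializer, not the user, handles the byte layout.
enum TensorData {
    F32(Vec<f32>),
    F64(Vec<f64>),
}

impl TensorData {
    /// Flatten to the little-endian byte buffer the engine ultimately wants.
    fn to_le_bytes(&self) -> Vec<u8> {
        match self {
            TensorData::F32(v) => v.iter().flat_map(|x| x.to_le_bytes()).collect(),
            TensorData::F64(v) => v.iter().flat_map(|x| x.to_le_bytes()).collect(),
        }
    }
}

fn main() {
    let t = TensorData::F32(vec![1.0, 2.0]);
    println!("{} bytes", t.to_le_bytes().len());
}
```

With a shape like this, the receiver learns the element type from the variant itself, which is the point raised above about telling the receiver how to deserialize.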
C
But maybe... I know our meeting has limited time, and I don't know if this is what we want to spend our time on. We could come back to this point offline.
B
So then, where have we been? Maybe in the overall picture?
B
Sure, yeah, maybe we can. And besides, I have a question. So we always started at the main, and in the main, the only functionality we have here: it checks whether it has, in one environment variable, something to link to, or to connect to, the bindle server. And then, just for fun, I put something inside, so I expected to see these log messages, but I didn't see them.
C
Inside the main function, but before you call provider_main?

B
Sorry, yeah, I didn't see it.

C
Okay, probably the same thing for the check for the bindle URL.
C
Yeah, you won't see those because, even though it compiles to an executable program, it's started as a subprocess of the host, not by Visual Studio. So I don't know if Visual Studio will be able to find the process.
B
I see, okay. Maybe I will check if that is really reached; maybe you're right. So if we want to have the big picture of what happens: at a certain point, an actor is linked to that capability provider, and then, I understand, what is called is that put_link method, because the provider we implement implements two traits, and that's the first one, ProviderHandler. So the first thing which is called, besides the main, is put_link. So what's happening here?
B
From this link definition we get what is written in, sorry, the test configuration, at least in our example here, right? So we get values, and in those values, as we said, it's always tuples of the names and the paths, and so we collect them.
B
I think in most cases it's a good idea; maybe in some it's just not. But anyway, lazy loading is not implemented for now. So then, in the put_link, we fill our state, and maybe, to follow the overall data flow, it's interesting to see that state. I have to look at that myself. It's not much; it's only these two lines here, so 46 and 47.
B
You can have a multitude of actors linking to that capability provider, and for each you have an actor ID, which is stored in the string. And then you have what I called the model zoo, because each actor can have a multitude of these models, linked together with a respective bindle URL. And then here's the tract engine; under the hood, that engine has the same kind of state as the wasi-nn interface, as I said in the very beginning. So, behind the tract engine... now let's have a look at that.
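As a sketch, the state just described, actors mapped to their model zoos, might look like this. Type and field names, and the bindle URL, are guesses from the walkthrough for illustration, not the actual code:

```rust
use std::collections::HashMap;

/// What the provider might keep per model: where its bindle lives plus the
/// metadata seen earlier. (The real context also holds the loaded graph
/// and the graph execution context.)
#[derive(Debug, Clone)]
struct ModelContext {
    bindle_url: String,
    graph_encoding: String,
}

/// All models one linked actor can use, keyed by model nickname.
type ModelZoo = HashMap<String, ModelContext>;

/// On put_link, collect the (nickname, bindle URL) tuples from the link
/// definition values into a zoo for that actor.
fn fill_zoo(link_values: &HashMap<String, String>) -> ModelZoo {
    link_values
        .iter()
        .map(|(name, url)| {
            let ctx = ModelContext {
                bindle_url: url.clone(),
                graph_encoding: "onnx".to_string(),
            };
            (name.clone(), ctx)
        })
        .collect()
}

fn main() {
    let mut values = HashMap::new();
    values.insert(
        "identity".to_string(),
        "bindle:example/identity/0.1.0".to_string(),
    );
    // Actor public key (placeholder) -> that actor's models.
    let mut state: HashMap<String, ModelZoo> = HashMap::new();
    state.insert("actor-public-key".to_string(), fill_zoo(&values));
    println!("{} actor(s) linked", state.len());
}
```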
B
There are the five methods, which you would recognize if you are familiar with wasi-nn, and maybe it's even easier if we have a look here. So this is a trait called InferenceEngine; it has these five methods. I think even the arguments are very similar; what's different is the result type. And then the tract engine implements that. That's a point where I'm not very happy with the design.
B
I didn't know how to do it better, so that trait is implemented by the tract engine. But from a design perspective, I think here we shouldn't have a tract engine but an inference engine, so the general part, because later, when we want to have an implementation for TensorFlow or for OpenVINO and so on, maybe you have a different engine, but they still implement the same trait. But anyway, that's part of the state, and so let's have a look, maybe, or come back to the actors.
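A minimal sketch of the five-method, wasi-nn-style trait described here, with a toy identity engine standing in for tract. The signatures are simplified assumptions; the real trait's argument and result types are richer:

```rust
/// The five wasi-nn-style operations (simplified, illustrative signatures).
trait InferenceEngine {
    fn load(&mut self, model: &[u8]) -> Result<u32, String>; // -> graph handle
    fn init_execution_context(&mut self, graph: u32) -> Result<u32, String>; // -> ctx handle
    fn set_input(&mut self, ctx: u32, index: u32, tensor: &[u8]) -> Result<(), String>;
    fn compute(&mut self, ctx: u32) -> Result<(), String>;
    fn get_output(&mut self, ctx: u32, index: u32) -> Result<Vec<u8>, String>;
}

/// Toy engine that just echoes its input, standing in for tract.
#[derive(Default)]
struct IdentityEngine {
    input: Vec<u8>,
    output: Vec<u8>,
}

impl InferenceEngine for IdentityEngine {
    fn load(&mut self, _model: &[u8]) -> Result<u32, String> {
        Ok(0)
    }
    fn init_execution_context(&mut self, _graph: u32) -> Result<u32, String> {
        Ok(0)
    }
    fn set_input(&mut self, _ctx: u32, _index: u32, tensor: &[u8]) -> Result<(), String> {
        self.input = tensor.to_vec();
        Ok(())
    }
    fn compute(&mut self, _ctx: u32) -> Result<(), String> {
        self.output = self.input.clone(); // identity model: output == input
        Ok(())
    }
    fn get_output(&mut self, _ctx: u32, _index: u32) -> Result<Vec<u8>, String> {
        Ok(self.output.clone())
    }
}

/// The call order; in the provider, load/init happen at link time and a
/// predict request then runs the last three steps. Chained here for brevity.
fn predict(engine: &mut dyn InferenceEngine, model: &[u8], tensor: &[u8]) -> Result<Vec<u8>, String> {
    let graph = engine.load(model)?;
    let ctx = engine.init_execution_context(graph)?;
    engine.set_input(ctx, 0, tensor)?;
    engine.compute(ctx)?;
    engine.get_output(ctx, 0)
}

fn main() {
    let mut engine = IdentityEngine::default();
    let out = predict(&mut engine, b"model-bytes", &[1, 2, 3]).unwrap();
    println!("{:?}", out);
}
```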
B
So you have a multitude of actors, and each actor can host a multitude of models. That's why... maybe we have a look in the model zoo. The model zoo consists of a model name and a context, and the context is directly here. So we have that bindle path, where we download everything from the parcels, and then everything we saw and discussed: graph encoding, target, tensor type, and, for people familiar with wasi-nn, the graph execution context.
B
So there are two main parameters, maybe, you want to work with, and if you have configured the whole system and you want to call it, it's the graph execution context that you really need. Yeah, that's almost it: on linkage, you collect the parameters.
B
So then you do the set_input, that's one method for me and wasi-nn, you do the compute, that's the real inference here, and then you get the output, and then you pass that output back.
B
So it's the predict that matters at the end, but before that, you have to initialize your state, and that's it. I mean, that's the data flow.
C
That's really neat. You had said you weren't sure about having that trait for the tract engine. If you needed to expand that to support OpenVINO or TensorFlow or something, you could make that an enum, and all of the variants of the enum could implement the same interface. So yeah, then, in the...
C
Let's see... I don't remember which file it was, but in the place where you had the provider instance data with the hashmap, you could have an Arc of Box of that inner interface.
C
So that engine could be an Arc of Box of your trait, or else it could be an Arc of the enum; either way. Anyway, that's something else. The design you have could work fine; we could make it slightly generalized to handle different engines.
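The two storage options Steve mentions, a shared trait object versus an enum over concrete engines, can be sketched side by side like this (a toy trait stands in for the real InferenceEngine; all names are illustrative):

```rust
use std::sync::Arc;

/// Minimal stand-in for the engine trait.
trait Engine: Send + Sync {
    fn name(&self) -> &'static str;
}

struct TractEngine;

impl Engine for TractEngine {
    fn name(&self) -> &'static str {
        "tract"
    }
}

/// Option 2: an enum whose variants wrap concrete engines; every variant
/// answers the same interface via match.
enum AnyEngine {
    Tract(TractEngine),
}

impl AnyEngine {
    fn name(&self) -> &'static str {
        match self {
            AnyEngine::Tract(e) => e.name(),
        }
    }
}

fn main() {
    // Option 1: a shared trait object, so provider state is not tied to
    // one concrete engine (roughly the Arc<Box<dyn ...>> idea).
    let shared: Arc<dyn Engine> = Arc::new(TractEngine);
    let by_enum = AnyEngine::Tract(TractEngine);
    println!("{} / {}", shared.name(), by_enum.name());
}
```

The trait object keeps the provider open to engines defined elsewhere, while the enum keeps dispatch explicit and avoids boxing; either fits the design discussed here.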
C
And then, yeah, I definitely want to follow up about the vector of f32s. I think we want to do that in the interface.
C
So, we have eight minutes left in the hour. How do people feel about this? Do you want to keep talking about this, or do you want to...?
A
My perception is that the sort of detailed review that we're doing together here is perfect, and maybe... I don't know, Christoph, if you think it's helpful, if Steve and Andrew can find time to continue this, or if we've gone through this enough, or if we can do it asynchronously. Andrew, I think your experience having done this a few times, the comments that I heard you making in the beginning...
A
You know, your lessons learned in wasi-nn are priceless, like literally priceless, here. And then, Steve, the organization and the expertise you're bringing around the wasmCloud way of doing these things, I think, is also super helpful. So I thought this meeting becoming a working session has been super productive and helpful. Christoph, what are your thoughts? What do you think?
B
I was a bit disappointed about the whole sprint, because originally I thought today I would bring the whole system, you know. But I think it was very constructive, yes, and I think it's a good idea to take that detailed review further. I mean, what I can propose: there are some points we definitely want to see in...
C
Yeah, certainly. And as a side comment: I love your idea of the model zoo. When I first saw the word "zoo" in these files, I thought: oh no, I hope he's not using ZooKeeper.
A
I like the sort of feeling that it's a zoo already, because we've got a whole host of animals in this one. Okay, that's wonderful. Well, Andrew, what do you think is a good way? Would you mind, you know, investing more time on feedback here? Maybe the three-way, the triceratops, if you will; it's not pair programming, but I think it's super effective. Would you mind hopping on and doing some more of it?
D
Sure. My impression is that Christoph is actually tracking with what we're talking about here, and now it's a question of finessing the details in PRs. So I think we might not actually need a meeting: if you just tag me and Steve in whatever PR, you know, if we want to talk there... I think we know the way we're going; we can actually deal with the details asynchronously. But I'm open to a call too; I'm just saying it might...
A
Well, whatever works for everybody. So maybe an async review next, and then we'll sort of determine, based on the current feedback, whether we pull up again next week sometime. Right now our next meeting is two weeks from today, but if we need it, we can do another pull-up next week and see where we're at, depending on how things go offline. Steve, Christoph, does that work?
A
Okay, Christoph, I think part of this is, you know: how do you want the team to help and show up? So if you have a preference, if you think you would rather do an interactive walkthrough like this and continue the discussion, then don't hesitate to ask. I mean, I can't speak for Andrew, but for the Cosmonic folks: we want to help you get this done, whatever it means. If we could do a daily stand-up with you, if you wanted to do that, you know, to help get it out the door, right? I love what you're doing, and I think this is going to be awesome. So, let's hear what you think.
B
Yeah, I think I would go with Andrew for the time being, in that I think it's very productive to post questions and thoughts in the channel on Slack, because then I can work in parallel. If we talk, I cannot do anything, and I just have some evening hours. And then at a certain point maybe it turns, or when we change...
A
You know, the feedback he's got here, and getting a feedback loop going. I know we've got a bunch of people that are interested in chat, including some of the commercial customers who are kind of watching this, because they want to try plugging in their own ONNX models.
C
Yeah, I'll do that. I'll send a draft summary to Christoph, and I'm sure he'll have tweaks, and then we can post it. Sounds good.
A
Well, thank you both so much. And then, Christoph: is this checked into the branch that you have on your fork here at GitHub?
A
Okay, great. Well, I'll drop you a note on the DCO sign-off that's required for anything in wasmCloud, because we're in the CNCF. It basically just means that you are saying that, you know, all the code... the licenses are right, all that kind of stuff. It's like a legal checkbox we need to comply with, because wasmCloud is actually owned by the Cloud Native Computing Foundation; that's one of the things we did with the donation. So I know that is an extra step. And, Steve...
A
Yeah, okay, we'll drop details on that on Slack, Christoph, just to make sure that's not a blocker or anything like that. Okay, thank you, everyone, for an awesome meeting today, and I really appreciate all the hard work and collaboration on this. I'm excited to get, you know, the cat / not-cat going, and if we get it done, I'll dress up as the dog or cat and see if we can trick the machine learning algorithm on camera. I'll take that: I'll get the costume.