From YouTube: wasmCloud Working Group - Machine Learning 02/17/22
Description
wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, and that boasts a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
https://wasmcloud.com
A
Welcome to the wasmCloud machine learning subgroup for February 17, 2022. Christoph! Do you want to lead us off? Give us an update on where you are.
B
Let me share, then I will do so. I prepared an agenda proposal; I mean, it's almost bigger than the content, but my proposal is to go over the status quo from a 5,000-foot perspective and then iterate and zoom in, because at least that's how I like it: get the big picture first and then the details afterwards, until time is up, maybe. So, what's the status quo? We have some sort of implementation of all the technical stakeholders.
B
Now, let's briefly come back to the question: what are the technical stakeholders? That sketch is a bit older, but it's based on our summary from the first meeting. So we have a Bindle server; we have the capability provider, i.e. the inference engine; we have an inference actor (that's the "I"); we have an inference API; and then an HTTP server. So, more or less, we have a rather basic implementation of all of that. Then, concerning the capability provider, that's ready for being tested.
B
That was an issue during the last few days, because I had a bug which I didn't get resolved, so Steve helped me a lot. Basically, after Steve figured out that I have to upgrade dependencies, I tried that; it's not yet working, so maybe there's a little mistake somewhere, but I'm very optimistic that we have it. So we can start testing systematically now, concerning the contract.
B
Yeah, there's... no, I changed it from two weeks ago. I think it's not the final version, because Steve built in unions. I also have a little bug somewhere, so I have to upgrade dependencies there too, and then there are some minor things, like tensor type and index, that I think we want to change.
B
Okay, then this is the status quo, and what I really would like to discuss is this: I stumbled upon the inference actor myself; it has a contract which is very similar to the contract of the capability provider itself. So maybe we can re-discuss the contract of the capability provider, if there are some open points left. The other points, I think, are rather optional.
B
I have one question: if we really want to have that whole application, is it possible to have a manifest? Because all the examples I've seen are pre-OTP; I have not seen any examples with manifests in the OTP generation. That's basically it; maybe you have some comments about that.
C
I don't have much to add. Unless... I don't know what you are asking exactly; I'm happy to have Christoph keep going.
B
So let's maybe come back to that picture, that sketch. What did we say? Maybe we start from the left: there are requests coming in via HTTP, and they are routed somehow to the inference API; and then we said, okay, they will be routed to the inference actor and then routed to the capability provider, which comprises the inference engine.
B
When I implemented this, I also did the inference actor, and I did a contract for that, and that currently is kind of a copy of the capability provider's contract, and that seems awkward. So I understand that I need a contract for that inference actor, but there's no particular state, no knowledge, that the inference API or the inference actor has to have; they just route requests. All the state is in the capability provider, which itself gets its state from the Bindle server.
B
So that's cool, but what's the role of that inference actor? I mean, what value is it supposed to add? Maybe I've got a flaw in my head, but I do not see the added value, except that I know it should really validate the input it's getting. But to be honest, validating input we could also do in the inference API, right? I mean, that's what it gets from the HTTP server. So what's the role of that actor in that picture?
C
So the inference engine is, I think, the only thing... I mean, the provider on the other end of that inference API is the only thing we need.
C
I had imagined that if there was going to be an actor, it would be a different engine, sort of like one running on a mobile device or something; you know, kind of like the use cases for TensorFlow Lite, where you're running maybe a smaller model that doesn't need GPU access, that might not even have an Intel CPU, running on an edge or embedded device or something.
D
Is there a way to go directly from... so, looking at the numbers here, number one: the request comes in through the HTTP provider. Is it possible for the number two line to go directly to the engine, or...?
B
Yeah, that would be possible, and honestly it feels like that would be the way to go. Currently... I mean, I sketched that based on the summary after the first meeting; I didn't get any other ideas, so I just sketched it and, you know, implemented that, but now it feels doubled. As you said, Andrew, I would also go that way: the three would not be from the inference API to the inference actor; the three would be from the inference API to the inference engine.
A
Well, there are some reasons for that. You know, when we think about the model, all of the round circles here are stateless and reactive.
A
So, let's think about a situation where we wanted to scale this to thousands of requests per second. Now, obviously, in this particular case we may be slightly blocked by having the inference engine, right, and how many inferences can it take; but you would scale, then, at the actor level, both vertically within a wasmCloud host and horizontally across multiple hosts.
A
Then there's the ability here to have, you know, inference actors and receivers running in one place, and then they could be talking back to an inference engine that's back in the cloud. Because if we think of this as layers, there's a layer between the actors and the inference engines, and it's separated by NATS, right, across that NATS layer. Let me actually share my screen real fast; I'll pull up just a graphic here.
A
I probably need a better one than what I have here, but just let me paint this visually, as soon as I can find the share screen button. Oh my gosh, I've been on that many calls; I've just completely lost the button. Oh, it's right here! It's the one in green.
A
All right, so let's pretend that these edge devices were, you know, in schools or automobiles or something like that. We could have the HTTP providers here with an actor, and they could then dispatch to a larger inference engine that's running as a provider up here in the cloud, right? So that's the kind of model that we have there, and then that scalability would be: if, down here at this particular location, let's say, we need to accept, I don't know...
A
...however many requests per second, we could scale the actor here.
A
It probably makes sense in this case for us to think about having a queue between, you know, the HTTP and the inference engine, but that's probably a detail for later, or one that you could build on the fly for an implementation for the MVP.
A
I think, you know, even if it's all here in one server, an HTTP server talking to an actor that then dispatches to the inference engine is probably the right way to go. Steve, did I do an okay job explaining that, do you think, or is there another way that you might do it?
C
Yeah, I think... well, another way to think about it is that it might not always be the HTTP server that we use to invoke this. So you might have your business application running in actors; you know, maybe you've been scanning some images and you need to send them off for additional processing, or you've got a chunk of text that you need turned into audio through a text-to-speech engine. So there could be actors that do that, or you might have a protocol that works over NATS.
A
That's actually an additionally compelling reason as to why we might do that as well. Hey Maxim, welcome to the call, by the way.
A
Thank you, Christoph. If you want to go back to sharing your screen again, that was great. But yeah, so, from a detailed perspective, Andrew and Christoph: is it clear why we want that actor there? And I think we all agree that we only need the one actor here, so the one circle, in our implementation.
C
So in this case, are you talking about basically starting up a system from scratch and putting all the actors and providers in place, ready to run workloads?
B
Exactly, because in the workflow, the most important thing now is unit tests, right? I mean, I really have to test the capability provider; that's first, and we were debugging that, so that should be possible. Afterwards, for a demo case, we should have the whole application, and then I figured out that the link definitions, for example: we have to provide them, we have to start all the actors and the capability provider, and that's easiest done by that manifest, correct?
C
Yeah, so we don't have a manifest in the current system. There is something on the roadmap, but that's not going to be available for at least a few weeks, so I would say in the meantime you're probably looking at writing a shell script, maybe something like the run.sh that's in the example pet clinic, and I can help you with that.
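A minimal sketch of what such a startup script could look like, modeled loosely on the petclinic-style run.sh just mentioned. Everything here is an assumption for illustration: the OCI references, the actor/provider IDs, the contract IDs, and even the exact `wash ctl` subcommand spellings should be checked against the `wash` CLI version in use.

```shell
#!/usr/bin/env bash
# Hypothetical startup script: start the actor and both capability
# providers, then wire them together with link definitions.
# All references and IDs below are placeholders, not real artifacts.
set -euo pipefail

INFERENCE_ACTOR="registry.example.com/inference_actor:0.1.0"   # placeholder OCI ref
ML_PROVIDER="registry.example.com/mlinference:0.1.0"           # placeholder OCI ref
HTTP_PROVIDER="wasmcloud.azurecr.io/httpserver:0.14.10"

# Start the actor and the two capability providers on the local host
wash ctl start actor "$INFERENCE_ACTOR"
wash ctl start provider "$ML_PROVIDER"
wash ctl start provider "$HTTP_PROVIDER"

# Link definitions: actor <-> HTTP server, actor <-> inference provider.
# ACTOR_ID / *_PROVIDER_ID are the public keys printed by the start commands.
wash ctl link put ACTOR_ID HTTP_PROVIDER_ID wasmcloud:httpserver address=0.0.0.0:8080
wash ctl link put ACTOR_ID ML_PROVIDER_ID example:mlinference model=mobilenet
```

The point of the script is just to make the manual steps (start everything, then put the link defs) repeatable until a real manifest/deployment tool is available.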
B
Yeah, good idea, super; then that one is checked. I mean, then the next point would be a bit... not philosophical, but it's further ahead. I stumbled over one point; depending on the time, maybe after the HTTP interface. Let's come back to this sketch here, because on the one hand we have that sketch, that illustration.
B
So what does it mean? Requests would currently be sent, or will be sent, via HTTP, with the data that the inference engine should do a compute upon, right, and then it gets an answer. HTTP is not known for low latency or high throughput or so, and on the other hand, people are talking about their Coral dev boards and other things. So maybe, at least in the long run, we would like to have a different interface.
B
What I dream about (Liam and Bailey, if you're referring to cat/no-cat based on a camera image and so on) would be nice to have, at least conceptually: another capability provider in that system which provides maybe images, or whatever, maybe acoustic signals, or, in my case from my day job, maybe logs, and then provides them directly or indirectly to that inference engine in a very fast manner, so that only results are forwarded, either to a database or a KV store or whatever.
C
Yeah, yeah, you could definitely do something like that. You could have an input queue and a queue of results.
F
Sorry, go ahead. I was just going to say it feels like a WebSockets type of interface, yeah, or a QUIC interface, where it's boundless, you know; you're not...
A
I think what would be powerful is, if we can get the MVP up here with HTTP, we can reuse those components like building blocks to prototype a faster mechanism. So perhaps what it might look like is: there is a capability provider for a camera, and then that can feed an actor that just streams it directly into the inference engine, or something along those lines.
A
So I think those are some good... it's a good discussion exercise around how we might do that, and Steve, maybe we can brainstorm some stuff as a to-do with Kevin and see what Kevin's ideas are on the topic.
C
Yeah, I'm pretty sure Kevin would suggest using NATS for that, for that queue. So you could put all that on a NATS subscription and have an actor listen to those and shove them into the inference engine.
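The subscribe-and-feed pattern Steve describes can be sketched independently of NATS itself. In this illustrative sketch, an in-memory asyncio queue stands in for the NATS subscription and a stub coroutine stands in for the RPC call to the inference provider; both stand-ins are assumptions, not wasmCloud APIs.

```python
import asyncio

async def fake_inference(payload: bytes) -> str:
    # Stand-in for the ML capability provider; a real actor would make an
    # RPC call to the inference engine here instead.
    await asyncio.sleep(0)  # simulate an async round trip
    return f"result:{len(payload)} bytes"

async def actor_loop(inbox: asyncio.Queue, results: asyncio.Queue) -> None:
    # The "actor": listens on the subscription (inbox) and shoves each
    # message into the inference engine, publishing results to a queue.
    while True:
        payload = await inbox.get()
        if payload is None:  # sentinel: shut down
            break
        results.put_nowait(await fake_inference(payload))

async def main() -> list:
    inbox, results = asyncio.Queue(), asyncio.Queue()
    worker = asyncio.create_task(actor_loop(inbox, results))
    for msg in (b"frame-1", b"frame-02"):  # e.g. two camera frames
        inbox.put_nowait(msg)
    inbox.put_nowait(None)
    await worker
    return [results.get_nowait() for _ in range(results.qsize())]

print(asyncio.run(main()))
```

The actor stays stateless here, matching the earlier point that all state lives behind the provider; it only moves messages from the subscription into the engine.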
B
That sounds cool; maybe he can sketch something in that machine learning channel. I don't know if that already answers, at least conceptually, the question of how you would start that. You know, it's kind of a service, right? That inference engine provides a service, and I could kind of register it, or start it, or I could just leave it. And when I start it (I mean me, as a requester), the request could go over HTTP, but then the data flows differently; so I mean the starting and stopping of that service.
A
I think we've got a lot of different options here we could try, and we'd really need to kind of dial it in with...
A
...you know, some service level objectives for what we want to hit, as far as what we're trying to achieve here. But to me, I feel like we're on the path to get there by knocking out an MVP first. The capability provider isn't going to unbind itself, so, in the sense of wasmCloud, once you start it, it's there waiting for link defs to be attached to it, and then those actors...
A
You know, we already have the ability... the actors, the round circles, have the ability to talk to NATS. So, to Steve's point, instead of HTTP we're connected to NATS, and we could listen. We could have multiple strategies here: I think using one inference engine on the Google Coral TPU, if that's what we're going to go with, or the Jetson, and then having maybe multiple ways to input to it, or something like that. Steve, your thoughts?
C
Yeah, I think that's all almost implementation details. I'd love to see us get the model up and running with some protocol. I was going to ask about the model: are you using ResNet, or what is the image recognition model you're using?
B
Oh, I do not use any yet, but I would go with the examples. I think two weeks ago we had a look at the examples, what the radar guy uses, and I think it was MobileNet.
C
No, no, I think those are... I think that's great. I mean, there's a handful of decent models. I think it would be great if we could find one that has a TensorFlow implementation as well as an ONNX one, and MobileNet does satisfy that criterion.
D
Hey Christoph, I have a couple comments. While we've been talking, I've been looking at the wasmCloud artifacts repository, and, by the way, good work on getting all that stuff in; it looks like there's a lot of work that's been done there. Okay, I think, for me, the general design of the providers and the actors and stuff is pretty much there, right? I think what's missing from that repository is how to run this thing.
D
I think we need to spend some time running requests through the system and seeing what happens and what errors we get, because I suspect it won't be "send any bytes and you'll get a good response". I've had issues with this in the past, where you have to format the tensors in just the right type, right?
D
You have to decode the JPEG image into something that, you know, your backend understands, and I think there'll be a little bit of work there; I think it would be an advantage to quickly shift into trying to find those issues.
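As a concrete illustration of the "format the tensors in just the right type" point: a decoded image usually has to be rearranged and rescaled before the backend will accept it. The planar CHW layout and the [0, 1] float scaling below are illustrative assumptions; the actual expected layout depends on the model's input spec.

```python
def to_chw_float(pixels):
    """Convert an H x W x C image of 0-255 ints into C x H x W floats in [0, 1].

    Many inference backends reject raw JPEG bytes and want exactly this
    kind of planar float tensor; check the model's input spec for the
    real layout (NCHW vs NHWC) and value range it expects.
    """
    h, w, c = len(pixels), len(pixels[0]), len(pixels[0][0])
    return [
        [[pixels[y][x][ch] / 255.0 for x in range(w)] for y in range(h)]
        for ch in range(c)
    ]

# 1x2 RGB image: one pure-red pixel, one mid-gray pixel
img = [[[255, 0, 0], [128, 128, 128]]]
tensor = to_chw_float(img)  # 3 channel planes, each 1x2
```

In practice a library such as numpy would do this transpose in one call; the pure-Python version just makes the reshaping explicit.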
B
Right, yeah, I was hoping to... so, until yesterday evening, Steve and I were having a look at the test of the capability provider itself, and that doesn't accept HTTP requests; you know, the contract expects that the contract method is being called.
B
You're absolutely right that this should be the milestone. I hope to achieve it within the next sprint, so all the time between our two bi-weekly meetings; for me, it's like a sprint. So maybe, with a bit of luck, within the next sprint that should be done.
D
Right, and then, no pressure, though, no pressure; you're the one doing the work, so hey, whenever. I'm just saying that getting to the point where we can debug the inference part is going to be an advantage, because I suspect there are going to be some footguns there that we haven't yet seen.
A
I know you and Steve have been working on this a bit, Christoph; is there a demo that we could walk through? Do we have it up and working, even piecemeal, yet, or are we still trying to focus on that?
B
There's no demo yet, but, I mean, that goes along with what Andrew said, right? I mean, that would be the next thing to do.
D
Hey, I have a comment for the wasmCloud guys. I think watching Christoph do this is pretty valuable for you guys: to see how easy or hard it is to create the interfaces, to create the providers, to set everything up, to explain how to set everything up to other users, all that stuff. I'm saying this because I think at this point I get it now, a little bit more, but, like Christoph was bringing up earlier...
D
You know, no one said anything earlier about why we have the inference API and the inference actors and stuff; it's because we didn't get it. Right? We were just like, "Sure, okay, I guess we need two actors, I don't know." And the ability to see quickly, like, "Oh no, we don't need two actors," or "Oh, this is how this is going to run": that's going to be valuable for wasmCloud in the future.
A
I think that is a phenomenal callout, Andrew, and, you know, Steve and I absolutely are trying to pay attention to the developer experience here and figure out how we can streamline this and make this better. Your example, I think, is a great one to call out; there's probably even an opportunity to tell the story of how this was created, as an example...
A
...provider. And, you know, one of the things that actually just happened in the last two weeks (I think this was one of the contributing issues) was that we actually wrote up a developer guide for how to debug this stuff at scale, and a few other things. So, 100%, I agree with you that there is some developer experience stuff we could and should learn here, to make this a smoother experience through and through. Steve, I know you've been doing some hands-on on this.
A
What are your thoughts? Do you have any ideas, or things you think that we should be doing?
C
Well, Andrew's 100% right that there are some bumps in the road that people go through when they're learning wasmCloud, and some of them are conceptual, some of them are technical, some of them are like: how do I run this command? And, you know, what does this API mean? And all of those are things that can be improved.
C
It's one of the reasons why I'm working closely with Christoph here: all the things that we're running into, I'm feeding back into the pipeline and using that to improve the documentation and the tooling and stuff. So we definitely love that people are taking the arrows in their backs to get through this, and we want to make this really easy over time. We want to make this really easy for developers; that's our prime directive.
A
Yeah, you know, we actually are getting ready to relaunch our tutorials, Andrew, on Instruqt, so those are under development right now, and the initial walkthrough is very similar to what's at wasmcloud.dev.
A
You know, create an actor; it starts more as a user of the tools, but then I imagine we'll launch some content around transitioning in. So I've got a great team working on it: Brian Sletten, whose new WebAssembly book is out, is helping to write it, and then Jordan Rash from Capital One is working on the delivery. I saw a demo this morning; it's a great start. You get a live container on one side, with the tools already up, and then instructions on the other.
A
So you can, you know, just go through and learn how to do these things. That's the experience we want to give people, to get them going. I agree 100%; we're making the investment, and I would guess you'll see a first draft in early March, but it'll be up in prime time by KubeCon for sure.
D
Yeah, that's cool, because, just a little bit of background: when I was in IOTG at Intel (that's the Internet of Things Group), we had a system similar to this, where we wanted to move compute and data between different nodes and stuff like that.
D
And the complexity... that's already complex, and then you add in, you know, the specific complexities of whatever system you're building, like, "oh, here's how you have to do this and that, and where", and suddenly it gets just very complex for the users. In the worst case, you have to be the complete expert of the system in order to even build anything that's cool, you know. And so you guys providing documentation and on-ramp tutorials is huge, huge, for this community.
A
On the plus side, there's a lot of complexity here that does go away when you think about the ability for the actors and providers to "just work", TM. You know, our idea was that the part of the complexity that we swallow is the distributed application.
A
Right, and I think that alone is huge, especially for what we're trying to enable here, which is, you know, distributed machine learning. So I agree with you on all accounts: we've got a lot of work to do here to explain it conceptually, give people hand-holding walkthroughs, and I'm trying to do that in a way that is infinitely scalable, by investing a ton of time and money up front on great training.
D
Yeah, I think our failure in that project was that the routing and the underlying distributed infrastructure were more visible to the user, and in fact it became unmanageable for the users to understand the mesh.
A
Yeah, I think that's going to be the biggest uplift. When we've been meeting with teams, you know, having conversations about this, one of the things that we kind of ask is: well, what are you trying to do now that you'd want to do distributed? Because we just make the distributed piece easy. So yeah, great callouts all the way across the board; I really appreciate you thinking holistically and taking a step back on how this impacts the masses.
A
That addressed my points; thank you. Bailey, I know you've got a few things in the space going on. Did you have anything you wanted to add, or are you just following along to stay up to speed with it?
E
I want to try to eliminate networking as best as possible, and I haven't eliminated pushing down a WebAssembly module for inferencing into my database. And since I'll provide a SingleStore capability provider, open source, to anybody, that would be a capability, another trick in our toolbox, basically; that might be a slightly different approach from what we're coming up with here.
A
Yeah, that's really powerful. Now, I know that underneath the hood, when it's set up on a Linux box with the right kernel features, NATS does all the zero-copy stuff all day long and is really heavily tuned. But Steve, is there somebody on the Synadia side, maybe, that we could touch base with on what might be the best operational mode, to maybe think about how we would turn NATS into a streaming engine in this case? We would just bridge it with an actor internally; is that how we would do it in the wasmCloud world?
C
Are you talking about trying to avoid having to have an actor in between two capability providers?

A
I'm not, no. I think we have to do that. So, you know, we have the NATS capability provider now, so it's going to have to go from a capability provider to an actor to the machine learning capability provider. But...
C
NATS is already incredibly performant, so I want to know where... if there's a performance bottleneck, we could look at that, but I don't know where they are yet, because we haven't put this pipeline together. Okay.
E
We have a create-pipeline API for Kafka connections, for example, or S3 data stores that continuously get updated as they change. I would have a similar ingestion stream, I think, at least for data that's large, like images or video analytics; that type of data tends to cause some different types of bandwidth issues, definitely for IoT devices, at least with the partners I've been talking to about this use case. As far as Synadia people: I've been talking to Steve, mostly; is that your contact as well?
C
Kevin's been the person on our team who's worked most closely with them. I know he's spent a bunch of time with Derek Collison; he's one of the founders. And so I'm sure we could... if you don't think Steve is the right person, I'm sure we can introduce you to other folks.
C
Okay, one of the things, actually (it is slightly ML-related): we're looking at some ideas internally to get around the fact that NATS needs a smaller message size, and if you need to send larger payloads, you need to do a little bit of extra work. We're looking at ways to make that kind of bundling and unbundling of large messages transparent at the API level.
C
So maybe that'd be something you could even use in your ingest data stream.
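A minimal sketch of that bundling/unbundling idea. The per-chunk budget and the `(index, total, data)` framing are assumptions for illustration; a real transport would carry this metadata in message headers, and NATS's actual payload limit is configurable.

```python
MAX_CHUNK = 1024 * 1024  # assumed per-message budget (1 MiB), illustrative only

def chunk(payload: bytes, size: int = MAX_CHUNK):
    """Split a large payload into numbered chunks small enough to publish."""
    total = (len(payload) + size - 1) // size
    return [(i, total, payload[i * size:(i + 1) * size]) for i in range(total)]

def reassemble(chunks):
    """Rebuild the payload on the receiving side, tolerating reordering."""
    ordered = sorted(chunks, key=lambda c: c[0])
    assert all(t == len(ordered) for _, t, _ in ordered), "missing chunks?"
    return b"".join(data for _, _, data in ordered)

frame = bytes(3_000_000)  # e.g. one large camera frame
parts = chunk(frame)      # splits into 3 chunks under the 1 MiB budget
assert reassemble(parts) == frame
```

Making this transparent at the API level just means the publish call does `chunk()` and the subscribe side does `reassemble()` before handing the payload to the actor.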
E
Yeah, that's really interesting. Yeah, I intentionally got a little 4K camera; I can give you the specs for that, if you want. It was only 30 bucks, so just a tiny little thing, and even with that I did run into some payload issues. I do know that, for the most part, when people are doing inferencing with images, they usually drop the resolution, compress; you know, they do all kinds of things, turn it to grayscale, all that kind of stuff.
C
Yeah, cool. So yeah, we keep finding some great features in NATS; they've put a lot of thought into all the problems that people have with distributed systems, and they've solved so many of them, from security to reliability and different QoS settings. It's a really nice piece of infrastructure.
E
Yeah, mainly the part that I'm putting in is combining our distributed system with their distributed system so that we have serendipity. That's the interesting challenge for me.
A
That is awesome. Okay, well, Bailey, as we pull together what we call "project chunky boy" (I think it's all going to end up open source anyway, the HTTP chunking stuff), we do have a customer that's driving that request, so it's got high priority on our side. So expect to see that out in public, and if that's useful for you, all the better. And then, Christoph:
A
Thank you so much for all the hard work on this, and I've asked Steve to please just stay engaged with you as you need help.
A
You know, just be proactive and don't hesitate to reach out and ask, or set up time directly, to help you get this over the line. And then just my final call to action would be the reminder that KubeCon Wasm Day EU is May the 16th, and applications to speak are open until February the 28th, so there are still 11 days left. Talks aren't due for a couple of months, so you have plenty of time to finish up work.
A
The only thing you need is, you know, your high-level description and a few other things; it takes about 10 minutes to fill out an application. It's not that bad, as far as getting through, so keep that in mind. I'd love to get as many great user stories as possible, so I love when users have problems and visions that they care enough about that they solve the problems themselves.
A
That's really powerful, and this is a great case of that, Christoph: you have a longer-term vision for where you think you can go with this, and now, you know, you just take the hills; you just continue to knock them all down. So I think it's great. Does anyone else have anything? Andrew, did you have anything you wanted to mention? Or Maxim? Thank you for joining us today.
F
Nope, not for me. I have a curious question: in one of the talks earlier on, as I was watching some of these earlier recordings, there was a mention of NATS being a side loader, or a side-running service. However, I have done some, you know, general searching and stumbled upon nats.rs.
F
The nats.rs integration... am I looking at, and thinking of, two different things? Would NATS, as a kind of a client that this is piped through, be almost a requirement, whereas the nats.rs crate wouldn't even be suitable for this case as a replacement for that? Or am I completely confused by this?
C
The NATS protocol and nats.rs: that's how we connect to NATS from a Rust program. So our actors that are written in Rust and compiled into WebAssembly don't directly speak to NATS, but the capability providers that are written in Rust use that nats.rs library, and any other kind of services use that too; and our host, which is written in Elixir, uses an Elixir client.
C
I saw Andrew had a question about OTP: that's the Erlang/OTP implementation of the wasmCloud host, and it uses an Elixir NATS client. So yeah, we do use nats.rs; it's the Rust client.
A
Great questions. Andrew, did you get your OTP question answered, or do you want to dive into that a little further? All good.
A
Yeah, definitely some overloading of terms across the domains here.
A
Okay, well, super. Well, Bailey, thanks for the link on the camera. Christoph, did you get a Coral board, the Google Coral TPU board? You're on mute.
A
We can pick up on Slack, Christoph. Real quick, though, I would add that yesterday's meeting will be on YouTube soon; I'll put a link out in the channel. Brooks did a demo with the Coral TPUs; we're just getting them all ready, you know, wasmCloud packaged up and ready to go, in prep for this. So we're trying to solve in parallel some of the other things we know we've got to do for this, but excited, for sure.