Description
wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, and that boasts a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
B
Yeah, sorry, I was just having trouble finding the right window here. Let's see... so yeah, it looks like steve put a link to the adr, the architectural decision record, for using jetstream.
B
I guess before I get into the quote-unquote demo here, just a couple of quick words on it. I just wanted to mention that jetstream is essentially a sub-product of nats.
B
When you run it, it comes built into the nats server. So all you really have to do in order to enable jetstream is pass the -js flag to a nats server, and you're good to go. And what the adr mentions is that we've essentially decided to use jetstream as the means for storing all of the metadata that needs to be stored about, and in, a lattice.
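As a quick illustration of the one-flag setup described here (this is the standard NATS server invocation, nothing wasmCloud-specific, and it assumes the `nats-server` binary is installed):

```shell
# Start a NATS server with JetStream enabled.
# -js is shorthand for --jetstream; add -sd <dir> to pick the
# directory where file-backed streams are persisted.
nats-server -js
```
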
B
So we can share a screen here.
B
All right, so we can see my terminal window here (yep) and you can see all three boxes (yep). Okay, so in the bottom box, just so that we know there's nothing up my sleeves: basically, as you can see, it's just a nats server running with the -js option. And what the wasmcloud host does is this.
B
When you start it up, it will create a stream within jetstream, and we use that stream to store all of our cache data. You can use the nats tool to take a look at the list of streams, and so you can see that I've got one called lattice cache default, and that "default" actually corresponds to the lattice prefix.
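The inspection steps described here use the standard nats CLI; the stream name below is illustrative, matching the "lattice cache default" naming mentioned above:

```shell
# List all JetStream streams on the connected server.
nats stream ls

# Show configuration and current state for one stream
# (substitute whatever name `nats stream ls` reported).
nats stream info LATTICECACHE_default

# Page through the stored messages interactively.
nats stream view LATTICECACHE_default
```
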
B
So if you're running multiple lattices on the same infrastructure in multi-tenant mode, where each lattice has its own prefix, each lattice will also get its own completely independent stream. Among other things, that lets you have different configurations for different lattices: you could have an in-memory stream for one lattice, and for a different lattice a stream persisted on disk with three cluster members.
B
So in this case you can see we're using a subject called lc.default plus the nats multi-level wildcard character. Essentially, the trick we're using here revolves around the use of maximum messages per subject, which in our case is limited to one. So each message on each subject is a cached bit of information that we emitted. Think of these as immutable events, emitted whenever something takes place that corresponds to state that needs to change within the lattice cache.
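A rough sketch of creating a stream with those semantics via the nats CLI (the stream name and subject are illustrative; the one-message-per-subject limit is what makes the newest message on each subject the current cached value; the CLI will prompt for any options not supplied as flags):

```shell
# Keep at most one message per subject, so each subject
# always holds only the latest cached value for that key.
nats stream add LATTICECACHE_default \
  --subjects "lc.default.>" \
  --max-msgs-per-subject 1 \
  --storage memory
```
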
B
I believe... right, so I can also use the nats tool to view the contents of a stream. So what you'll see here is:
B
I'm having trouble here... there we go. So I'll scroll through here, and you can basically just see that I've got a pile of messages sitting in the stream, and these messages are actually the sum total of messages emitted during one of our integration tests. So what's really cool here is, I can fire this up...
B
And because this stream is up and running, what you'll see in a few seconds is this big stream of cached messages. We created what's called an ephemeral consumer for the lattice cache, and in jetstream terms that means the consumer is alive as long as the host process is; when the host process dies, the consumer dies as well. And like I said, this isn't really all that fancy a demo, other than to show you that I can bring hosts up and down and have the hosts automatically reconstitute their cache state from nats.
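For anyone who wants to poke at this outside the host, the nats CLI can create an ephemeral consumer too. This is a hedged sketch (stream name is illustrative, and exact flags can vary between CLI versions):

```shell
# Create an ephemeral pull consumer on the cache stream;
# it is cleaned up automatically once its client goes away.
nats consumer add LATTICECACHE_default --ephemeral --pull --deliver all
```
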
B
One of the main reasons why we chose nats for this is that we already use nats as our core messaging infrastructure, so it's already a requirement for wasmcloud. And since jetstream essentially comes with nats for free, it seemed kind of like a no-brainer for us to use it. I strongly encourage anybody who's interested to go check out the jetstream documentation.
B
The url for that is actually down here in the bottom box: it's docs.nats.io/jetstream. There's a ton of stuff you can do with jetstream that we're not even coming close to scratching the surface of, so I definitely recommend people play around with that. And, you know, just to prove that we've actually cached...
B
Yeah, so these are the cached claims. Everything that has a json web token within the lattice emits a set of claims to the cache.
B
So we can tell who issued it, what the name is, and what the public key of the entity is. Having that information in the cache essentially allows us to do all of our security checks and validity checks and things like that before facilitating remote procedure calls between actors and providers and so on. So again, just to show, I can fire this up, and, you know, I downloaded the cache. Certainly not the most exciting demo, but from an enablement standpoint...
B
It's not difficult at all. You just start it with -js and you get jetstream enabled for free. And like I said, if you allow the wasmcloud host to create the stream for you, you essentially get an in-memory stream.
B
So if I were to bounce this nats server, I would essentially lose the cache. But what you can do is, before you start any hosts, create the stream yourself; we'll have some documentation on how to create that up on our website once we get the first release of the otp version. You can create it yourself and configure it with disk-based persistence, and you can set the number of replicas for the stream.
B
I believe the options are one, three, or five. So in a large enterprise cluster, you could set up this stream so that there are five disk-based replicas running around in your cluster. The documentation goes into this as well, but you can mix and match, so you could have a lattice with, let's say, five nats servers that are running jetstream, plus ten more in that same cluster that are not running jetstream, and everything still magically works.
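A sketch of pre-creating the stream with file storage and replication before any hosts start (stream name, subject, and replica count are illustrative; as noted above, the replica count must be 1, 3, or 5):

```shell
# Pre-create the lattice cache stream with disk persistence
# and three replicas, so it survives NATS server restarts.
nats stream add LATTICECACHE_default \
  --subjects "lc.default.>" \
  --max-msgs-per-subject 1 \
  --storage file \
  --replicas 3
```
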
B
The nats message broker capability provider, by virtue of sitting on top of nats, is essentially compatible with jetstream. We don't have a capability provider that does things like create ephemeral consumers and so on; again, jetstream hasn't been part of the nats server for very long, so it hasn't been out all that long. But I would definitely love to hear people's thoughts on how they see features like nats jetstream, and other ones like kafka, fitting into different capability providers.
B
You know, we have an event streams provider as well, so I can see use cases where it might make sense to have a jetstream-based event stream provider that is more specific to that particular type of use case, rather than using the generic nats provider. But yeah, there's a whole bunch of untapped opportunity there; we just haven't dug into it yet.
D
Steve, I was just gonna ask kevin: the nats server that the wasmcloud host immediately connects to doesn't itself have to be running jetstream, right? At least as long as at least one of them in the cluster is. Is that right?
B
That's correct. So one thing that I was playing with earlier this morning, in kind of a mad scientist scenario, was starting a wasmcloud host pointed at a leaf node, which I imagine being...
B
...very common, if not a best practice, in large enterprise situations: you point it at a leaf node, and then the leaf node has its connection information pointing off to some other nats server or cluster. What I had set up was a leaf node pointing to the machine behind me, and the machine behind me was connected to yet another machine, so I had the nats jetstream server running essentially two hops away. And, you know, to your point...
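For context, a minimal NATS leaf node configuration looks something like this (a plain NATS config fragment with placeholder addresses, nothing wasmCloud-specific):

```
# leaf.conf: a leaf node that listens locally and connects
# upstream to a cluster where JetStream is actually running.
port: 4222

leafnodes {
  remotes = [
    { url: "nats://upstream.example.com:7422" }
  ]
}
```
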
B
The only requirement is that the topic space be visible from one client to another; that's really the only thing that matters. So as long as you've stitched together your nats infrastructure so that you haven't blocked the special jetstream topic space, everything works. Yeah, great.
A
Awesome work, kevin. Any other questions for kevin? And stuart, my apologies; I thought that was steve starting to chime in, I didn't mean to mix up the names there. Any other questions for kevin?
A
Great. Well, as always, more exciting features continue to stack up around the otp release, and I know it's gonna be a lot of fun to play with. Were there any other demos that we wanted to do today? I know we had a couple that were considered for this afternoon.
A
Okay, all right! Well, let's go ahead and move on into the agenda for today. Just a quick callout: the cfp for wasm day at kubecon na has completed. We have well over 20 submissions, so I look forward to getting some reviews in there, and I know a couple of people on the call submitted.
A
Thank you so much for those. Kevin still has his linux foundation stuff up, and this week we have another wasmtime meeting tomorrow for anyone that's interested in attending; there's a link in the notes, linked from the calendar invite. I guess, steve, I know you did an update in slack this week on the sprint from last week, sort of a quick retro. Is there anything you'd want to highlight as far as things accomplished, or priorities that we're working through this week?
E
Sure. So the jetstream stuff is real exciting for this week. We're still on track for this week's goals: to get wash merged with otp, and to have some sample code for actors and capability providers up there on github. So I would love to have people start playing with it.
E
I'll send out a note on the slack general channel when that's up, and I'd love to hear people's experience with it. And jetstream is now in the main branch for the otp host, so that'll be ready to play with, along with the actors and providers.
A
Okay, that's awesome. Any questions about the updates, either what was posted in slack or anything that steve just shared? Any questions about priorities or what's in the roadmap here?
E
Yeah, I'm happy to share the board. I think it's less informative than the text description, because it takes a lot of scrolling and a few seconds to get your head around what's on the screen. So I guess I would put it back to: does anybody have any questions, or want more information or more insight? And then we could go from there.
A
I think it sounds like folks are okay with what was posted in chat, so I would say let's not spend the time; I just said that to open up the floor. You know, stuart, I think you, david, and kevin have been chatting through a couple of otp-related things. Is there anything worth discussing now, live, that you guys would want to talk through? I know that we were talking through some potential timeout issues on nats and getting those options surfaced in the product.
D
Yeah, david unfortunately can't make it today, but we have been talking, and we'd really love to start working on some form of lattice control, or a lattice operator, I guess you'd maybe call it, as per our conversation with kevin in slack.
D
I think we wouldn't be able to start working on it for a couple of weeks, but we do have time after that, like the next six weeks or so, or maybe a month, to spend working on it, and, you know, work closely with you. It feels to me like elixir would be a good thing to do it in, but we're all elixir noobs.
B
Yeah, I go back and forth at least 20 or 30 times a day trying to figure out where I think this should land.
B
So far, I think the most compelling case for this ends up being something like otp, where it just feels like some of the concurrent stream processing stuff you get out of the box will be a smooth fit for consuming a bunch of deltas received over nats and materializing them into this observed state structure, and then running some timers and things like that to periodically determine what imperatives need to be emitted in order to convert the observed state to the desired state.
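A very rough Elixir sketch of the pattern being described, purely illustrative: none of these module or function names exist in wasmCloud, and `emit_imperative/2` is a hypothetical stand-in for publishing a command over NATS.

```elixir
defmodule LatticeReconciler do
  use GenServer

  # Observed state is folded up from deltas arriving over NATS;
  # a periodic timer compares it to desired state and emits the
  # imperatives needed to close the gap.
  def init(desired) do
    :timer.send_interval(5_000, :reconcile)
    {:ok, %{observed: %{}, desired: desired}}
  end

  def handle_info({:delta, key, value}, state) do
    {:noreply, put_in(state, [:observed, key], value)}
  end

  def handle_info(:reconcile, %{observed: obs, desired: des} = state) do
    for {key, want} <- des, Map.get(obs, key) != want do
      emit_imperative(key, want)
    end

    {:noreply, state}
  end

  # Hypothetical: would publish a command over NATS.
  defp emit_imperative(_key, _want), do: :ok
end
```
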
B
That also feels like something that otp will do fairly easily out of the box. I obviously haven't experimented with it, but a lot of the concurrent nature of what needs to be done feels like it would be head-against-desk stuff in rust, and maybe a little bit more of an easy fit in elixir.
D
It feels like supervised processes are the way to go with this, because it needs to be resilient and, you know, stay up. And I can imagine spawning processes to do specific things, like you say, in parallel, to gradually nudge reality toward the desired state on a loop. I think it sounds like exactly the right thing to do in elixir. I was thinking we could push it out there and do it in gleam or something; I know you've been talking to louis.
D
I'm not serious about this, but we've been talking with louis pilfold, who actually used to work at red badger for a couple of years. He has done a great job so far with this strongly typed elixir, or gleam, or whatever. But yeah, I think elixir's the right thing.
B
Yeah, gleam is definitely on my list of things I wish I had more time to play with. Just the very idea of a rusty elixir, or a strongly typed elixir, is very, very tempting.
A
Yeah, I love that the discussion is already into the how we would accomplish this, but I want to know if we should take a step back and maybe talk about the what we should do first. And I love that we've got multiple stakeholders here that are maybe interested in collaborating or committing. So this is a question, I think, for kevin and stuart here. You know, we're not doing real...
A
You know, like prds, product requirement documentation, for detailed planning, and I'm not suggesting that we need to. But do we feel that the way we've been putting out an rfc on github and then inviting comments is enough of a process here for us to align on the what we are trying to accomplish together, before we get into how we're going to do it? What is your feedback? Stuart, I think you followed along with a couple of the bigger ones we've done, and kevin...
A
I know you've driven many of the biggest rfcs that we've done, and steve and brooks, you guys as well; I'd love to know what your thoughts are. Is this a time where we might want to plan a little bit more? Because I could see this being important to the krustlet folks, represented by microsoft and matt, some of the red badger team, and obviously the wasmcloud and cosmonic communities as well.
D
I think that sounds like a great way to do it, and we should at least do that. And kevin, your thoughts already... we should transcribe those into an rfc of some sort so that we can discuss it.
B
Yeah, I completely agree. I like the rfc approach. Some of the rfcs have felt kind of like yelling into the void; an rfc is only as good as the amount of comments it gets, and the more people who comment on an rfc, the more useful that thing is.
B
You know, there's still some real value in just converting raw thought into a formal paper like that; it always helps clear things up. But yeah, I would definitely love to get a bunch of feedback on this, so I'll take it on as a to-do list item for myself to create sort of a starter rfc to talk about the work that we want to do and, like stuart said, describe some of the thoughts that I've already got on it, and then we can go from there.
D
That would be great. And I mean, we haven't got any bandwidth for the next couple of weeks anyway, so we can use that time to evolve the discussion around it, and we'll get to a point where we all feel comfortable we can start something, even if it's only a spike to start with, just to see what it looks like.
A
That's super, right, because I think this is, you know... if it's gonna manifest itself as some sort of a, you know, wasmcloud lattice controller that we're going to nest behind the crd...
A
I love the idea that this is portable and has use cases independent of the implementation that we're going to use to drive it. Is that my high-level understanding? Kevin, stuart, you know, kind of aligned there: is that a good tl;dr for how we're thinking about this?
E
Exactly.
A
Okay, well, kevin, as always... oh, sorry.
B
I did kind of want to mention, and I'll probably put this in the rfc as well: having learned what I've learned so far about jetstream, using it for essentially the lattice distributed cache, I think it also makes for a potentially really good place to store the list of desired states, essentially as a deployment store.
B
You know, kubernetes already has a way of storing that kind of stuff in a distributed fashion, but being able to put that into essentially a jetstream stream means that we would now have a durable, replicable, resilient store for deployments.
A
Great, awesome. Well, I think getting all this pulled together as a quick rfc and then driving the discussion is an awesome way for us to do this. And matt fisher, I'd love to do a callout to taylor and the team to chime in on this one, because I could see this being something that maybe has some bearing on krustlet, and maybe not, you know, because this is sort of specific to our...
A
...you know, application runtime there. But maybe, because I think we've accomplished a lot together, you know, in helping to push webassembly forward into cloud native in general. And stuart, is this something that you think... if you guys were to speak at wasm day, is this a feature that may be demonstratable in your talk, or is that maybe a stretch goal at this time, too far out?
D
I would love it to be, and if we could get to that point before then, that would be an amazing thing to show. But I think the demos that we've got from the poc we've worked up already are a good demo in their own right. So I mean, I think there's enough there anyway if we don't get that far, but it would be great to add that gitops-y, idempotent kind of way of deploying workloads; that would be amazing.
D
To your point earlier, I think we discussed that potentially both wash and a kubernetes operator, or any other mechanism, would forward manifests on to this operator. So I mean, it could be an amazing thing to demo.
A
Well, I think it also gives us a bunch of other opportunities here, because I see a ton of people innovating around the whole crd space in kubernetes land. You know, there are startups that are doing almost helm-type functionality, and, you know, building uis and all kinds of stuff like that.
A
Great. Well, let's maybe slide on down the agenda, and we'll put a pin in this one for now, or we could come back to it and spend time on it now if we wanted to talk through it. Are there any other feature sets that we feel like we should raise today, or that we'd want to talk about?
A
Okay, well, I think that puts a bow on our meeting for today. I'll go ahead and stop recording and, as usual, we can hang out and just chat a bit. Cheers.