From YouTube: wasmCloud Community Meeting - 27 Sep 2023
Description
Welcome to the wasmCloud community! Tune in live where we discuss the latest developments in the wasmCloud ecosystem, WebAssembly standards, and break out sweet demos.
Agendas for wasmCloud community meetings can be found at: https://wasmcloud.com/community
A: Nice, all right, hello! Everyone, welcome to the wasmCloud community meeting for Wednesday, September 27th. We've got a pretty sweet agenda today, starting off with a demo, and then we've got three potentially long-form discussions, so I won't waste any time here at the beginning. But before we jump into everything on the agenda, we actually have two new community members on the call, and I would love it if they gave an introduction. I already asked them if they were cool with it. No pressure, Marcelo, would you like to go first?
B: I'm working in Kubernetes right now, and I'm very interested in wasmCloud and how to use the technology in new processes. At the beginning I just tried to understand the technology; nowadays I'm trying to do this kind of thing with it, maybe with different concepts or different architectures or systems.
A: That was perfect, really. I just wanted to, you know... I love hearing why people come around, what they're interested in, and what their kind of background is. So yeah, really happy to have you here.
A: All right, and then we have one other community member who's joining for the first time. Apologies, I don't know if that's your name or a screen name, but if you want to go ahead, it's all you.
E: Hi, I'm Fon, I'm new here. I would say I'm a site reliability engineer. I've had experience working with Kubernetes, Docker, Redshift, and now I'm mostly working in AWS. I was in a CNCF program, and it talked about wasmCloud a whole lot, and I got really curious about it. So that's what made me join. I actually joined maybe two weeks ago, but I've just been looking around so far.
D: That's just the thing we all cover every week; we really want that demo slot. Okay, so actually I am excited to give the demo. Today we're going to talk a little bit about wasifills: where they're at, how things are going, yada yada yada, you know, all that kind of stuff.
D: Okay, so we have made some progress with wasifills, in case you didn't see it in the Slack, or if you're not a member of the Slack. We did get to a point where we have a working wasifill example, and we're working off of that to actually make this something that everybody can use. So what is a wasifill? A wasifill is a term we came up with.
D: It is not a real term that existed before we started using it, but now it is, because we decided it was. The wasifill is a thing that makes it so that you can use your actor components with any of the interfaces that are defined, especially custom interfaces. There are these common interfaces that everyone's working towards supporting and solidifying, called wasi-cloud, and those are your key interfaces, what we call the 80% use case, right? You have key-value...
D: You have HTTP stuff, you have those kinds of messaging services, the basic core things you'd expect to have, and we have those built into wasmCloud itself. But there are many times you're going to want to write custom contracts; that's something we've always supported inside of wasmCloud. An example of one that I'm working on right now, that maybe I'll get to show in a demo in a month or two (as long as I get time to work on it), is a GPIO contract, so I could start programming GPIO devices. So anyway, there are some interesting things around that. But basically, as it currently stands, or as we're in the middle of transitioning: the previous way of doing things inside of wasmCloud was that you would take a contract... Actually, I'll show you this inside of our examples repo.
D: We'll just use Blobby, because we love Blobby. Previously, what you had to do was use specific generated interfaces, plus this dependency on wasmbus-rpc, and these things gave us the ability to do all these detached dependencies and stuff. But what we had was dependencies you had to have for our platform, and we want to completely split that out and have this be entirely component-model based. So what it looks like now is...
D: So what it actually looks like now, when you do this, is that everything is defined via WIT, and you'll see that there is nothing here that has anything to do with wasmCloud. That's what we're going for, but to be able to do that we have to have this wasifill. These wasifills basically do all the work of encoding the data to be sent along the wire, and then decoding responses back for you, automatically.
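As a rough sketch of what a WIT-defined custom contract like the one described above might look like (the package and interface names here are illustrative, not the actual wasmCloud messaging contract, and WIT syntax has evolved since this recording):

```wit
// Hypothetical WIT package for a custom messaging-style contract.
// The actor calls `consumer`, the host calls `handler`; the wasifill
// satisfies both sides and does the wire encoding in between.
package example:messaging

interface types {
  record broker-message {
    subject: string,
    body: list<u8>,
  }
}

interface consumer {
  use types.{broker-message}
  publish: func(msg: broker-message) -> result<_, string>
}

interface handler {
  use types.{broker-message}
  handle-message: func(msg: broker-message) -> result<_, string>
}

world actor {
  import consumer
  export handler
}
```

Note how nothing in the contract itself mentions wasmCloud; that platform independence is the point being made here.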
D: So I just wanted to go over that again, because I think we have a bunch of new people in the community, and I want to make sure we're on the same page before I demo what's going on, or it just makes no sense. And yes, thank you: Victor put some things in the chat, which hopefully we can send into the live stream as well, about what the old interfaces look like, what the Smithy things look like. That's what it used to be.
D: So there's no more... you'll see (Jordan mentioned it in the chat too), there's no prelude, there's no nothing. But what we needed to do was support custom contracts, and we've done that through what we call these wasifills. Now, this wasifill example, I'll go ahead and link in the chat so that we can send it out. This is, like I said, an entirely artisanally crafted, organic, locally grown, hand-harvested, hand-picked, free-range example of a wasifill.
D: So this is not something you're going to have to do all the time; this was the proof of concept to show that it works. Basically, I'm going to actually show inside of wasmCloud where it's at. I showed this diagram before in a previous meeting, so sorry for those who've been here before; this looks a little bit like A Beautiful Mind, I'm sorry. But the whole point of what we're saying here is that, essentially, to interact with the wasmCloud host you have your WASI interfaces, and you have to have the ability to call the host to do something and to receive a call from the host. These wasifills fill in those contracts for us.
D: So if we have a contract called foo, or a contract called bar, the wasifill satisfies those interfaces for us. You'll see this here, because this example is just taking messaging. Messaging is built into the platform, so you'd not actually use it this way, but it makes a great example. When we define this, it gets defined saying: hey, I want to export this consumer interface that your code is going to call, and I'm going to change that into a call to the host. So, inside your actor...
D: You'll see here, let's see... this "wasmcloud" is only there because of the name of the contract, not because it's wasmCloud specific. And you'll see right here: I need to export something that handles a message, and I need to call something from a consumer called publish; so I'm going to try to publish a message in this case. So what does this export do?
D: It says: I'm going to export all the methods for you. It takes the code and exports out all the stuff for request, publish, and those kinds of things, so that it does all of that for you. You'll see that it's handling all the serialization and stuff you'd expect from wasmCloud, which wasmCloud was doing before in those libraries, but it's entirely external to your dependencies.
D: So we have a little script in there if you want to run this on your own. We'll just build this, and while it's building, it's just building all these little pieces of the actor, and then it will put all of them together. We'll give that a second, because I think I updated Rust.
D: So, in the end, what we get is a final actor module, or actor component, that can be run inside of wasmCloud, and all of those dependencies are injected right when you build it, rather than you having to worry about them when you're actually writing your code. And in the future, when we get to dynamic linking, all of this stuff that we link in will just be put in at runtime, rather than beforehand, which is what we're doing right now.
D: Okay, so now we have this component, and I want to show how this all works. I'm going to use one of the wasm-tools commands, wasm-tools component wit, and I'm actually going to look at the actor.
D: So you'll see here that it's exporting this handler, but it needs this consumer type. When you write an actor, it's saying: hey, I need to import this consumer, I need to use it, and something needs to give it to me. Now, for other types of WebAssembly platforms to support this, they would literally have to add this stuff into their host: they'd have to compile a completely new host and run it.
D: Okay, bigger. So you'll see here that the only thing we're exporting is wasmcloud:bus/guest, which is exactly what our host expects to call, and then it imports two things that our host implements for you. You don't have to understand all of it; the net benefit here is that all of that stuff about your custom logic and contract is now entirely encapsulated away, and this final module has no knowledge that it's doing anything specific with those interfaces. To the outside world, it's all entirely self-contained.
A: Yeah, it's me. So I just want to be clear with this wasifill stuff: this is the implementation of the feature wasmCloud has, where you can define your own custom contracts to do whatever you want outside of the standard contracts that we provide right now, for example messaging and HTTP. So if you wanted to do something super special, like reading off a sensor, or connecting to Bluetooth, or just something new...
D: That's exactly what it's for. And another important thing to point out here: you'll see that this stuff is written in Rust, but that actually doesn't matter in the long term. You could write this wasifill in any language that compiles to a component, and once again, you're only writing it once. Before, everyone had to generate one of these kinds of libraries for every single language; we generate this once. So, if this were a real contract, we'd say a component written in Go could use this exact component wasifill.
D: Most of our first-party ones, because we're already doing Rust, we'll write in Rust, but that doesn't matter to the actual downstream consumer anymore, because it's a component: all it has to do is satisfy these interfaces. And you can actually see all these different parts... where did it go... here's the export component's WIT, and you'll see that it looks a little bit different, right? It's exporting a consumer type.
D: All that matters is that those match: that you have an export for every import on your components. And that's how all of this actually works underneath the hood. Does that make more sense? Awesome.
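That "an export for every import" rule can be sketched as two hypothetical WIT worlds that compose; the wasifill exports exactly what the actor imports (interface names are illustrative, not the real wasmCloud definitions):

```wit
// Illustrative only: when the two components are composed, the
// actor's import is satisfied by the wasifill's matching export.
world actor {
  import example:messaging/consumer   // actor calls publish(...) here
  export example:messaging/handler    // host delivers messages here
}

world wasifill {
  export example:messaging/consumer   // fills the actor's import
  import wasmcloud:bus/host           // translates calls into host calls
}
```

Composition succeeds only when every import on one side lines up with an export on the other, which is the matching being described.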
A: Yeah, I always want to call special attention to the fact that this is the implementation for contracts you're developing: you know, you're developing a new contract or a new feature. But this whole pipeline with wasifills and things like that doesn't come into play when you're using the standard interfaces for things like wasi-cloud, because those imports and exports we can actually predict on wasmCloud's side.
D: Right, the wasi-cloud stuff is automatically provided by our host, and it's actually doing this same kind of thing underneath the hood to translate it into lattice calls.
D: Yeah, so how this works is: the actor is the only thing you'll ever touch, in most cases, which is what I'm going to get to next. Your actor has WIT dependencies, and these WIT dependencies right now are managed with a simple tool called wit-deps. I have messaging here embedded inside of a directory, but this could also be a URL or whatever that has the contract location, and then those are all the dependencies it needs.
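For reference, wit-deps reads a small manifest that maps each dependency name to a location, either a local directory or a remote archive; a hypothetical manifest for the setup described here might look like this (the path and URL are made up for illustration):

```toml
# Hypothetical wit-deps manifest (wit/deps.toml): each entry names a
# WIT dependency and points at where its definition lives.
messaging = "./contracts/messaging"
keyvalue = "https://example.com/wasi-keyvalue/archive/main.tar.gz"
```

Running the tool then vendors those WIT packages into the project so the bindings generator can find them.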
D: So that's what specifies the dependencies, like which contract it's using. And then inside of the code, this is taking those dependencies and saying: hey, I am going to generate the bindings for this.
D: Correct, yeah, and it generates all the code for those dependencies. So dependencies become a different thing, because they're now behind interfaces. This is something that's core to wasmCloud as well; the whole reason we created wasmCloud is that you used to have this thing where, okay, I'm importing Redis, right? You had to choose Redis and you had your Redis client embedded. In this case you're just saying: I am making a call, but I don't really care...
D: ...what's on the other end; I just care that it satisfies that interface. And that's what you're doing when you're coding against dependencies: you're coding against dependency contracts. You can still pull in regular dependencies, like serialization stuff or processing logic, other things that are specific to Rust or Go or whatever language you're using. But these key pieces of the system are done by contract.
C: All right, Taylor, so let me just explain it "like Liam," which the team jokingly uses as the lowest part of the hurdle to get over here. We start with: I want to create my own component, in, you know, JavaScript, or in Rust, or in Go, and I'm going to use the tooling that's going to make a portable Lego block that runs everywhere.
C: Is that right? Okay. And then I would want to be able to upgrade this into all of the things that wasmCloud does. So this wasifill is essentially like a harness that will automatically take that custom component and give it all of the extra features that wasmCloud provides, and then enable things like distributed access and all of those other features. Correct?
D: Yes. So, essentially, the nice thing about this is, once again, it is entirely platform agnostic. In this case it's a wasmCloud messaging contract, because I was using that as an example; but if I were writing my GPIO contract, that would now mean that anything that can implement that contract, I can use my code with. And I can also use it with wasmCloud, because I have these scaffolding pieces that go in. Okay.
C: So that's a great example; let me just repeat that. I have my Raspberry Pi here, and it's got all these little GPIOs hanging off it, off the top here, and I want to be able to, you know, program or use this GPIO component from over here, on remote wasmCloud hosts.
C: This is on my edge, say, but I want to have my business logic running in the cloud. Once I've taken the standard GPIO component and upgraded it with the wasifill, now I can access it from the cloud, right, or from anywhere the cloud reaches; I get that for free. So this is stupid powerful, because it lets you take standard components, which are the portable atomic building blocks that should work on all wasmCloud... I'm sorry, on all WebAssembly runtimes, and it gives you the application features that the wasmCloud application host provides.
C: Okay, thank you for just walking through it at a real elementary level. Yeah.
D: So the other thing I want to point out here is that we don't want people having to manually write these that often. You only really have to write them once, but we're going to be building tooling where you can basically say: here's where my wasifill is, pull it and put it together for you. But we're also going to generate them, and that's the other thing I wanted to show, which I'm halfway through at this point: rather than hand-rolling all this code...
D: ...we can actually generate it, and this is kind of what it's going to look like. I'm going to generate it via wash, and it's actually going to look for this thing at the top, so you can modify it for your own use cases. What happens is, I take this, I parse the WIT, and then I spit out a file. So I can actually show you, by running that over here...
F: Could you just go back down to a single pane? ...if I remember how to do that.
D: And you'll see that it actually generated code for me from my WIT interface. So this is generating that export piece, and all of this code is now automatically generated for you. Basically, you'll get a wasifill for free. Now, if you're an advanced user... let's say, I know there are some companies that have data encryption requirements...
D: We just use our default: we encode it to MessagePack in this case; that's the default for a lot of things inside of wasmCloud. But let's say you were encrypting this: you could encrypt the data inside of here. All that matters is that you basically have a deck of bytes that you send out to the host. We'll have documentation on this, but you can modify it and change it to do whatever you want. Or, the default use case for most people will be: wash generates this for you when you create a new interface, you push it up, you publish the wasifill, and you're good to go. That's pretty much all that it will require.
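The key constraint described above, that the host just sees an opaque "deck of bytes," can be illustrated with a minimal hand-rolled Rust sketch (this is not the actual wasmCloud MessagePack encoding, just a stand-in showing that any reversible byte encoding, including an encrypted one, would satisfy the contract):

```rust
// Illustrative stand-in for a wasifill's encode/decode step: a
// length-prefixed encoding of (subject, body) into a flat byte buffer.
// Any scheme works as long as decode(encode(x)) == x, which is why a
// user could swap in encryption here without the host knowing.

fn encode(subject: &str, body: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&(subject.len() as u32).to_le_bytes());
    out.extend_from_slice(subject.as_bytes());
    out.extend_from_slice(&(body.len() as u32).to_le_bytes());
    out.extend_from_slice(body);
    out
}

fn decode(buf: &[u8]) -> Option<(String, Vec<u8>)> {
    // Read a little-endian u32 length prefix from the front of a slice.
    let take_len = |b: &[u8]| -> Option<usize> {
        Some(u32::from_le_bytes(b.get(..4)?.try_into().ok()?) as usize)
    };
    let slen = take_len(buf)?;
    let subject = String::from_utf8(buf.get(4..4 + slen)?.to_vec()).ok()?;
    let rest = &buf[4 + slen..];
    let blen = take_len(rest)?;
    let body = rest.get(4..4 + blen)?.to_vec();
    Some((subject, body))
}

fn main() {
    let bytes = encode("wasmcloud.demo", b"hello");
    let (subject, body) = decode(&bytes).expect("round trip");
    println!("{subject}: {} byte body", body.len());
}
```

The host only ever forwards `bytes`; both ends of the contract agree on the encoding, which is exactly what the generated wasifill code encapsulates.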
So
this
is
just
one
side
of
it:
I
have
to
do
one
more
Wazi
fill
part
of
this
and
then
do
all
the
other
automatic
things,
but
this
is
generating.
So,
like
I
said
this
code
right
here
is
purely
generated
by
the
thing
I
just
ran
parsing,
my
wit,
which
is
the
exact
same
wit.
D
I
was
using
for
my
for
my
Wazi
fill
exports,
and
so
this
should
all
be
able
to
be
generated
for
you
automatically.
D: That's the current plan. So, I showed you the complicated things so everyone knew what was going on and why we were doing it, but basically, as most users of wasmCloud, all you're going to do is wash new interface, and then you'll say wash generate, and it'll generate this code for you, and then you can just push up your wasifills and it'll be ready to go. Now, obviously we're starting with just Rust for the automatic generation...
D: But as you saw, this is just Handlebars, so you could do this exact same thing with Go, or with Python, or whatever you wanted. We will obviously take any contributions, which will be welcome once we get this rolled out and working, but you can basically have it set up to build, or generate, a wasifill for every single language.
D: So, like I said, it started out complicated, and then: guess what, you're not actually going to have to worry about this in most cases, which is the real goal here. This should be done mostly automatically for you in the future, and it basically makes it so you can add whatever types of extensions, via contract, that you want on top of wasmCloud. So whatever your custom use cases are, whether it's your own custom project or something for work, it can be done this way.
D: Yeah, it's not at the host level. But, okay, all I'm doing to put this together is... you know, let me actually expand this.
D: All I'm doing to compose it is putting all the modules together: I'm taking the actor and putting it together with the export, then with the import, then with the multiplexer, which is something else I didn't really talk about today, and then it just generates it all into that final composed component.wasm. So that's what's actually happening when you put it all together; the components are still entirely separate from each other.
D: They just get, as we jokingly call it, smushed together when you compose them. Vance, you've got a question?
H: Yeah, I don't want to take up too much more time; I'm trying to understand some of the details of it. But the first question was: you do all of this in an interface project, right? So...
H: I've got an interface project which uses Smithy today. I can rewrite that in WIT, and then, what I'm not clear on is: is it generating Rust libraries that I can use in my actor Rust projects, or is it generating a component that is linked with the component I generate for my actor? And if the latter, is that linking done statically, or is it done at runtime?
D: It is basically a static linking step that will have the possibility of being dynamic in the future; right now it's just static. So we glue the components together. You can actually (I don't know if the tooling for this exists yet) technically explode those components back out if you want to, but in this case we're statically linking it into a final "this is the actor I'm going to run," and then from there...
H
Is
definitely
planned,
though,
it's
effectively
the
same
as
today,
and
when
can
I
use
this.
D: Hopefully... I'm trying to get it done by the end of next week. There are a lot of little loose ends to tie off, because I have to generate a couple different things, but this should be done in the next two to three weeks, is what I'm hoping.
A: All right, thank you, Taylor. I'm super excited to see that come along. I think we'll just keep... you know, not saying there's going to be a demo every week, but we'll obviously be super transparent about when we're moving our regular examples and things over to use the wasifills, or standard contracts, things like that.
A: All right, well, I know that we've got about 25 minutes left, and we do have a good bit of discussion left, so I'm going to propose that we go ahead and move on. I did queue up one long-form discussion that doesn't relate to an RFC, and then I wanted to bring up two of our RFCs that are out there; one was called out last week, and the other one is new as of maybe an hour ago.
A
So
it's
more
of
a
call
to
well
whatever
request
for
comment.
You
know
for
people
to
go,
look
at
it
and
and
say
what
you
think,
but
the
first
one
that
I
wanted
to
talk
about,
came
out
of
a
discussion
in
the
WASP
Cloud
slack
with
a
wasapod
member
named
Nikita,
which
I
can't
unless
I'm
getting
any
names
wrong.
I
can't
see
if
they're
actually
here
but
so
speak
up.
A
If
you
are
but
I'm,
probably
I'm
gonna
be
missing
something
on
Zoom,
but
what
I
really
want
to
talk
about
is
the
RPC
timeout
that
we
have
in
wasmcloud
today.
So
you
may
or
may
not
know,
wasmcloud
operates
off
of
this
very
special
timeout
that
you
can
configure
at
runtime
for
a
while
awesome,
Cloud
host,
where
whatever
value
you
set,
that
is
how
long
was
inside
will
allow
a
certain
chain,
a
certain
operation
to
execute,
for
this
is
kind
of
a
nested
timeout.
A
So
as
soon
as,
if
you're
doing
something
like
receiving
an
HTTP
request,
sending
out
a
message
and
then
operating
on
the
key
value
store
and
then
operate
in
a
SQL
store.
And
then
you
returned
that
two
two
thousand
milliseconds
or
two
seconds
is
the
entire
amount
of
time
that
that's
allowed
to
operate.
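The nested-timeout behavior described here, one overall budget shared by every hop in the chain rather than a fresh timeout per call, can be sketched in plain Rust (this is an illustration of the concept, not wasmCloud's actual implementation):

```rust
use std::time::{Duration, Instant};

// One deadline for the whole invocation chain: each step gets only
// whatever time is left, not a fresh two-second budget of its own.
struct Deadline {
    end: Instant,
}

impl Deadline {
    fn new(budget: Duration) -> Self {
        Deadline { end: Instant::now() + budget }
    }

    // Remaining budget, or None once the overall deadline has passed.
    fn remaining(&self) -> Option<Duration> {
        self.end.checked_duration_since(Instant::now())
    }
}

fn main() {
    let deadline = Deadline::new(Duration::from_millis(2000));
    for step in ["http", "messaging", "keyvalue", "sql"] {
        match deadline.remaining() {
            Some(left) => {
                // Each call in the chain would be capped at `left`,
                // so the budget shrinks as the chain progresses.
                println!("{step}: {}ms left for the rest", left.as_millis());
            }
            None => {
                println!("{step}: overall RPC timeout exceeded");
                break;
            }
        }
    }
}
```

The design consequence, discussed below, is that a single slow step anywhere in the chain starves every step after it.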
A
The
reason
why
just
generally
is
wasmcloud
is
a
distributed
system.
Everything
can
run
on
one
machine
or
it
can
run
across
many
different
machines,
and
it's
actually
really
hard
to
tell
if
an
operation
is
taking
a
long
time,
because
it's
timing
out
as
in
it,
it
can
take
an
arbitrary
amount
of
time
to
execute
versus
a
process.
That
will
never
finish
because
it's
made
some
requests
to
some
resource.
That
is
never
going
to
return
right
when
you
have
something
that
can
take.
You
know
something
on
the
order
of
magnitude.
A
You
know
say:
you're,
downloading
a
file
right.
That
is
something
that
can
take
an
arbitrary
amount
of
time,
depending
on
network
conditions
topic.
The
file
is
things
like
that,
just
a
general,
a
general
heuristic
there.
So
we
limit
these
types
of
executions.
We
prevent
infinite
timeouts
by
providing
you
know
kind
of
a
kill
switch
here
in
this
timeout.
This
is
actually
I
know
that
this
is
good
pre-reading.
The
dealing
with
the
Diabolical
distributed
deadline,
dilemma
blog
by
Kevin
Kevin's
distributed
systems.
Master
he's
got
some
really
awesome.
A
Thoughts
on
it
in
this
one
I
would
definitely
recommend
it,
and
this
has
kind
of
inspired.
Some
of
my
thoughts
for
today
talks
about
kind
of
the
problems
of
trying
to
solve
a
a
deadline
like
trying
to
limit
a
process
to
a
certain
amount
of
time
or
just
letting
it
go
free
in
a
distributed
system.
Now
there
are
a
couple
Solutions
proposed
in
this
blog.
Some
of
them
talk
about
like
the
the
overarching
solution
is
to
not
have
this
problem
in
the
first
place.
A
Obviously
it's
a
little
tongue-in-cheek,
but
the
you
know
talking
about
real,
like
architecting
your
application
in
a
way
that
you're
not
sending
out
a
request
that
Waits
forever.
But
the
real
thing
that
I
want
to
focus
on
in
this
discussion
is
just
talking
about
the
viability
in
the
use
case
of
long-running
processes
in
the
first
place.
I
think
we've
all
kind
of
generally
seen
that
webassembly
works
really
really
well
for
like
functions
as
a
service
like
serverless
style
invocations,
where
you
know
you
send
it
some
information.
A
The
webassembly
module
does
some
processing
over
not
too
long
a
period
of
time,
and
then
it
returns,
and
if
you
look
at
different
serverless
Frameworks,
they
solve
this
in
a
couple
of
different
ways.
I
think
one
of
the
biggest
examples
of,
or
one
of
the
most
popular
examples
is
AWS.
Lambda
and
Lambda
has
a
built-in
default
timeout
of
15.
A
minutes,
15
minutes
where
you
can't
execute
for
longer
than
that,
and
that
is
of
course
assumed
they
they're,
assuming
that
you're
doing
processing
the
whole
time,
so
I
really
hope
to
have
an
example
use
case
to
kind
of
build
up
this.
This
straw
man
around,
but
one
thing
that
we
don't
really
encourage
or
have
a
lot
of
examples
for
right
now
or
was
and
Cloud
actors
that
are
doing
long,
running,
processing,
long-running
computation
and
it
just
doesn't
fit
exactly
into
the
model
that
we
put
out
a
lot
of
our
examples
for
for
today.
A
So
I'll
pause,
I'm
curious.
If
folks
have
thoughts
on
this
particular
topic,
I've
got
some
things
that
we
could
continue
to
talk
about,
but
we'll
go
from
there.
Yeah
Curtis.
A: I think it's worth clarifying, to make sure: it is per call, but it's not accumulating separately; it's really the overall timeout. So, like this sequence diagram, for example (this isn't wasmCloud specific or anything): right at the top level, when you initiate that invocation, the process all the way down the call stack in the system, in wasmCloud, starts your RPC timeout; and further down the line it technically starts the same RPC timeout over and over for each subprocess, but the clock at the top level starts first.
A: Yeah, it is at the host level, though, and not at the specific application level. So if you have one set of actors and providers that form an application that you always expect to execute within, you know, a couple hundred milliseconds, and you have another process that you expect to work on the order of a couple of seconds...
A
You
either
have
to
set
that
blanket
RPC
timeout
at
the
higher
end
for
all
of
them,
or
you
have
to
run
it
on
on
a
different
host.
But
you
have
to
be
careful
because
if
you
go
between
hosts
that
might
be
working
with
different
RPC
timeouts.
G
Personally,
from
the
architectural
perspective,
I
I
do
need
per
client
control
over
the
timeout,
though
otherwise
you're
already
making
architectural
decision
and
assumptions
that
it
may
not
fit.
The
the
problems
that
I'm
facing
so
I
I.
Think
when
it
comes
to
this
thing
is
one
of
those
where,
like
give
the
capability
to
somebody
to
control
it,
trying
to
put
sensitive
defiles,
so
they
don't
shoot
themselves
in
the
foot
or
at
the
very
least
you
control
a
little
bit.
A: I'd like to hear that. I thought I saw you raise your hand, but I wasn't sure if that was left over or...
H: ...that, and in fact, the rest of the problem I ended up solving with a design that involved a special capability provider anyway. So...
A: I think that's been some of our guidance leading up to this point: if you want to do long-running processing, you can push that into a capability provider, because it can continue to execute. I think it works well, and I don't know if it's the best answer forever, just because I think people are going to want to write stuff, compile it to WebAssembly, and do their processing there, without opening the whole can of worms of where providers are going with all that.
D: Yeah, where I started with this was: I wrote an image resizer for my own, like, joy and profit, and the problem was, the bigger the image gets, the longer it takes, and I can't do anything about that, right? I could hook up my actor... I could externalize part of it to a provider that would alert when something was done, but what I would really like...
D: ...is something you can call back to and see if it's done, or, you know, "here's an event I'm going to send you," because I don't want everyone to have to reinvent that wheel every single time they want to do something similar. And so I wish there were a way we could say: hey, just leave me running; I might run for another 10, 20, maybe 120 seconds, who knows, but leave me alone over here and I'll get back to you with the results of this operation.
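The "leave me running and I'll get back to you" pattern described here is essentially a job handle plus a completion callback; here's a minimal, stdlib-only Rust sketch of the idea (purely illustrative, not a wasmCloud API):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Illustrative async-job pattern: the caller gets an acknowledgment
// (a job id) immediately, and the result arrives later on a channel,
// instead of the caller blocking inside one RPC timeout window.
fn start_job(id: u64, work_ms: u64, done: mpsc::Sender<(u64, String)>) -> u64 {
    thread::spawn(move || {
        // Stand-in for long-running work (e.g. resizing a large image).
        thread::sleep(Duration::from_millis(work_ms));
        let _ = done.send((id, format!("job {id} finished")));
    });
    id // returned immediately: the "ack" the caller holds on to
}

fn main() {
    let (tx, rx) = mpsc::channel();
    let job = start_job(42, 50, tx);
    println!("accepted job {job}, not waiting on it");
    // Later (or in a separate handler), collect the completion event.
    let (id, msg) = rx.recv().expect("completion event");
    println!("callback for job {id}: {msg}");
}
```

In a messaging-based system the channel would be a subject the worker publishes to when done, which is exactly the callback-style design discussed next.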
G: Yeah, I mean, that's what I would like to have from the architecture perspective, right, and you already have that underneath. You know, say you're a worker that is organizing paper. In any situation, maybe I don't want to be waiting five seconds, I don't have time, I need to move on right now; or maybe I can just sit here for the next five minutes...
G
For
you
to
finish
to
organize
the
paper,
the
way
I
would
architect
it
is
that
you
would
send
a
message
back
to
whoever
is
interested
to
say:
hey
I'm
done
with
the
papers,
so
it's
my
responsibility
to
listen
into
those
messages
and
say:
yeah
I
was
I
care
about
that.
Let
me
now
continue
doing
it
and
from
the
from
the
co
perspective
you
know.
All
you
want
is
your
simple
sdks
that
says:
hey
I'm
interested
in
this
message,
I
know
not
request.
Replies.
Does
that
for
you,
you
know
it
wraps
everything.
I: Sure, yeah, I mean, ideally everything's fire-and-forget, in which case you really don't have that problem. But what I would really like to see is something like that built into the host, such that there's some sort of training, a machine-learning-like ability, so it monitors actors and says: okay, the typical timeout of this particular actor under these conditions is this, and therefore I'll only report something if we start going outside of some preset guard rails...
I: ...that say: well, normally this thing takes two seconds, so that's what I'm going to go on; training period over, we're good, we're running in production. And then, if it starts straying out of that, only then might it start warning the system or something like that. So basically exporting that whole problem into another component, another module, or something like that, so the individual app does not have to deal with it, would be wonderful.
C: Taylor, maybe you and I can have a little sidebar here in DMs on the roadmap stuff. The comment that I wanted to make is about one of the gaps in WebAssembly at the moment that we're working towards... you know, we're right...
C
You
know
here
now
or
something
like
that
and
then
a
little
bit
later
this
year
we
hit
the
finalization
of
bossy
preview
2.,
and
at
that
point
you
know
things
will
stabilize
all
the
run
times
and
the
language
tooling
we're
going
from
there
and
maybe
tail
you
could
link
this
blog
post,
but
underneath
the
hood
right
now
you
know
webassembly
doesn't
have
you
know,
async
support
and
what
that
will
enable
is
Futures
and
streams,
and
now
how
that
manifests
itself
inside
of
wasm
cloud
is
a
bit
different,
and
maybe
that's
where
oh
bill,
you
don't
think
this
is
really
relevant
at
all.
J
I mean, it's really cool, but I would say that the async behavior within WebAssembly components is relevant intra-component, in that you've composed with other pieces and parts of your component, and so you're able to write different types of code when you're composed with others. So it would actually be really relevant to maybe even the wasifill part that Taylor was showing off, because that's a composed component. But with wasmCloud and our SDK, that async behavior is actually handled by our wasmCloud host. So we have basically all of these features and functions that really want the WASI Preview 2 stability, that is, to call out, but we perhaps aren't blocked on async being available in the component model.
D
You said streams, though, and it did make me think of another possible workaround here. This is not WebAssembly streams, those are good too, but maybe we should have the idea of a stream inside of wasmCloud, because then you can give that pattern of: hey, this is streaming data, so I'm sending something and I'm expecting something to be sent back. So it's basically a bidirectional stream: I can send something up, I can receive something back, and then we can do that kind of stream pattern of processing data: hey, do you have something for me? Hey, do you have something for me? That can be one of the patterns, or it can be: I'm just listening for a message on that stream, and as soon as I get a message, then I've got the data back. Anyway, really rough idea I just thought of, so it could be totally wrong, but that's one possibility I was just thinking of there. Yeah.
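The bidirectional-stream pattern described above can be illustrated with two channels paired together, so each side can both send chunks up and receive processed chunks back. This is plain Rust plumbing to show the shape of the idea, not anything from wasmCloud itself.

```rust
use std::sync::mpsc;
use std::thread;

// Rough sketch of the bidirectional-stream idea: one channel carries
// chunks from the "client" to the "worker", and a second channel carries
// results back, so both sides see a full-duplex stream.
fn main() {
    let (to_worker, from_client) = mpsc::channel::<String>();
    let (to_client, from_worker) = mpsc::channel::<String>();

    // The "worker" end: process each incoming chunk and stream results back.
    let handle = thread::spawn(move || {
        for chunk in from_client {
            to_client.send(format!("processed: {chunk}")).unwrap();
        }
    });

    // The "client" end: send something up, receive something back,
    // in the "do you have something for me?" rhythm from the discussion.
    for chunk in ["a", "b", "c"] {
        to_worker.send(chunk.to_string()).unwrap();
        let reply = from_worker.recv().unwrap();
        assert_eq!(reply, format!("processed: {chunk}"));
    }
    drop(to_worker); // close the stream so the worker loop ends
    handle.join().unwrap();
}
```

In a wasmCloud setting the transport underneath would presumably be NATS subjects rather than in-process channels, but the contract shape (send on one stream, listen on the other) is the same.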
G
One thing: at the infrastructure level you could put in the notion of a correlation ID, so you could simplify, more or less, what message you may be interested in, Taylor. Instead of thinking about "hey, I want to receive a message from this particular queue", it's my responsibility to understand what messages she's producing and what I should do about them. And that correlation is important because, if I pass an ID, the correlation ID that you pass through all the messages, I can say: this given message, because of the workflow that I initiated at a given point in time, is therefore related to the task that I asked you to do, and then you move on.
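The correlation-ID idea above can be sketched as follows: every message in a workflow carries the ID minted when the workflow started, so a fire-and-forget listener can tie any completion message back to the task that asked for it. Types and names here are illustrative only.

```rust
use std::collections::HashMap;

// A message as described in the discussion: a payload stamped with the
// correlation ID of the workflow that produced it.
#[derive(Clone, Debug, PartialEq)]
struct Message {
    correlation_id: u64,
    body: String,
}

// Tracks workflows this side has initiated, keyed by correlation ID.
struct Workflows {
    next_id: u64,
    in_flight: HashMap<u64, String>, // correlation_id -> task description
}

impl Workflows {
    fn new() -> Self {
        Self { next_id: 0, in_flight: HashMap::new() }
    }

    /// Start a workflow: mint an ID and stamp it on the outgoing message.
    fn start(&mut self, task: &str) -> Message {
        self.next_id += 1;
        self.in_flight.insert(self.next_id, task.to_string());
        Message { correlation_id: self.next_id, body: format!("please: {task}") }
    }

    /// Handle an incoming completion; returns the originating task, if any.
    fn on_reply(&mut self, reply: &Message) -> Option<String> {
        self.in_flight.remove(&reply.correlation_id)
    }
}

fn main() {
    let mut wf = Workflows::new();
    let request = wf.start("organize the papers");

    // Some worker, elsewhere, finishes and publishes a completion
    // carrying the same correlation ID it received.
    let done = Message { correlation_id: request.correlation_id, body: "done".into() };
    assert_eq!(wf.on_reply(&done), Some("organize the papers".to_string()));

    // Messages from workflows we never initiated don't match anything.
    let stray = Message { correlation_id: 999, body: "done".into() };
    assert_eq!(wf.on_reply(&stray), None);
}
```

With NATS underneath, the correlation ID would typically ride in a message header or a reply subject, but the matching logic on the initiating side looks like this.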
D
Yeah, a streaming thing would be interesting, because this is where we get into... the desire for wasmCloud has always been to be as platform-agnostic as possible, but when you start doing something like a stream, that means it's something very wasmCloud-specific that the code would have to know about. We might be able to find a way to abstract it away somewhat, but the code would have to know about that. So what I'm trying to think of is balancing that: you're trying to keep it as agnostic and portable as possible, because right now, as I showed today with the wasifill stuff, if you write something to run in wasmCloud, all that you need is the wasifills; your code could technically go run somewhere else, as long as you have that same contract support. And I know we have streams inside of WIT, right, so we'd have to represent most of the contracts purely through streams and maybe hook it up that way. I like that; that was my thinking, Bailey, when I was thinking about actual implementation details. But it would also mean that contracts would have to reflect that, right? Some things might not fit: an HTTP service, say, is eventually expecting an HTTP request. So it's the contract reflecting it. There are going to be contracts that don't necessarily have streams, that don't make sense with a lot of streams, and then, if they want to take advantage of longer-running stuff inside of wasmCloud, they'd have to have a separate version of the contract that's more stream-focused, which wouldn't be the worst thing in the world. That's just me thinking out loud here.
J
Yeah, what I was going to say is that you can pull one off with some of the ways you would expect an actor, written as a WebAssembly component, to hit the wasi-io streams. Eventually we're going to have a type that is `stream`, but right now we're able to describe the APIs that you would expect of a stream, including getting a polled list that the host can then implement backpressure and buffering on. And so within our host implementation, underneath, I would expect us to implement something like NATS streams for the underlying actual implementation of this, but it seems like the modeling is well-defined. So I think this is probably how we would want to start defining contracts. We don't yet have support for this, but I think we should.
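To make the stream-shaped contract idea concrete, here is a hypothetical WIT sketch. It assumes the future component-model `stream` type lands as discussed; the package, interface, and function names are entirely made up for illustration and are not part of any wasmCloud or WASI contract today.

```wit
// Hypothetical: a stream-focused variant of a contract, written against
// the proposed component-model `stream` type (not yet standardized).
package example:streaming;

interface byte-processor {
  // Bidirectional in spirit: consume a stream of input chunks and
  // produce a stream of processed output chunks, rather than a single
  // request/response pair.
  process: func(input: stream<u8>) -> stream<u8>;
}
```

Until `stream` exists in WIT proper, the same shape can only be approximated with the wasi-io `input-stream`/`output-stream` resources and polling, as mentioned above.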
D
Let me just see what it looks like, and we can use that as a basis for an RFC, because right now I don't think we have enough knowledge to be able to say: this is what it's going to look like, and this is how it's going to work. I think we're going to need some spiking once it lands for this to actually be something where we can then say: okay, community, here's what we found. You know, where we all come together and say: we've tried this, we've done it this way, we've done it this way, what are the trade-offs, and then have some sort of good solution in place for people to do that. So I think, basically, the outcome of this would be: we're going to wait for a couple of months, and then, as soon as we have streams land, we can try it, yeah.
J
Roman's working on the adapter part for that right now, so we're helping upstream that work. So we're not just waiting; we're getting it done.
A
And to be clear, we're thinking that actually implementing streams will give us the ability to have a legitimate, long-running wasmCloud process, like: in my actor logic, I want to do a computation that takes 10 minutes, and that's a possible thing. I think that sounds pretty reasonable. I've been considering whether or not we want to use GitHub Discussions a little bit more, but I think we talked it out pretty well, and I think this would probably be a good thing to add; it can be a discussion, or it can be an issue that we put in the later section of the roadmap. Up to you.
D
I think that part convinced me we should make it an issue, because it's just going to be mostly me called out as a placeholder, but that way we know where it's at, and then as soon as Preview 2 stabilizes, we can start experimenting with it. And, like I said, if anyone wants to be real bold and just try it with what's there right now, feel free, and you will have the coveted, as Brooks called it, demo slot, guaranteed, to talk about it and show it off. So anyway, if you want to try it, try it, but I'm going to say, just with how much we're trying to finish and stabilize the wasifills and the provider SDK stuff and everything else that's going on, it'll probably be best to pick it up when WASI Preview 2 stabilizes, from the core maintainer perspective.
J
And to be clear, if you were to build something, you would want to use the pull request that I sent that uses resources, which changes a lot of the way these underlying interface definitions work. But I will say, as far as we know right now, this is It, TM, for the thing that we're planning on standardizing. So when we say "land", what we're talking about is: first, we've got to get all the interfaces that depend on this piece right, because streams is a fundamental underlying piece and we've built a lot of things on top of it, like wasi-http. So wasi-http needs to point to the new thing; the WIT parser, the WIT syntax, Wasmtime, all of those different pieces that happen within the Bytecode Alliance have to land it, and then we would consume that within wasmCloud. But there's nothing stopping you from entering the fray here.
A
Awesome, I think that's good. That sounds great. Throwing it in the later section of our roadmap is kind of where I propose we put anything that needs additional spiking, has external dependencies, or that we can't reasonably prioritize right now. And if you do it based off Bailey's PR, then you can DM me anytime up to 12:59 PM Eastern time on a Wednesday, and you can do the demo.
A
Okay, we have zero minutes left, so if anybody has to jump, thank you so much for coming. I think we'll keep this part of the discussion pretty light; it's really just an additional callout. We have a couple of open RFCs, but we have two recent, soon-to-be-considered RFCs to accept. One is around metrics, which Patrick wrote up and contributed, which is awesome.
A
We talked a little bit about this on the call last week, but there's been some good discussion here on the issue. I won't read out all of it, because there are a lot of words, but please go take a look at that.
A
We're talking about implementing Prometheus metrics in wasmCloud and the things that should be instrumented as metrics, things like that. And then I actually just put in an RFC this morning that we've been talking about for a little bit but haven't really gotten to execute on until now, now that we have Wasmtime instantiating pre-compiled components and modules, which is super awesome.
A
This RFC talks about the ability for wasmCloud to autoscale based on request load and effectively scale to zero when not in use. "Scale to zero" specifically refers to the fact that you can start an actor and, when it's not actively executing, it's not passively taking up any CPU or memory resources at all on the host. We have something close to this today.
A
This RFC outlines a little bit more about what we do with having a set of instances or a list of instances. The core of this RFC is proposing renaming the count field to something like max scale, max instances, or max concurrent, so you can effectively bound the upper limit of concurrent execution on a single actor. So we'd definitely value some thoughts on this.
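The "max concurrent" semantics described above can be sketched as a simple counting gate: instead of a fixed count of pre-started instances, an upper bound on how many invocations of one actor may execute at once, with the count returning to zero while idle. This is an illustrative sketch in plain Rust, not the actual host implementation or RFC wording.

```rust
use std::sync::{Arc, Condvar, Mutex};

// Illustrative counting gate for the proposed "max concurrent" field:
// at most `max` invocations run at once, and the running count drops
// back to zero when the actor is idle ("scale to zero").
struct ConcurrencyGate {
    max: usize,
    state: Mutex<usize>, // number of currently running invocations
    cv: Condvar,
}

impl ConcurrencyGate {
    fn new(max: usize) -> Arc<Self> {
        Arc::new(Self { max, state: Mutex::new(0), cv: Condvar::new() })
    }

    /// Block until a slot is free, then claim it for one invocation.
    fn acquire(&self) {
        let mut running = self.state.lock().unwrap();
        while *running >= self.max {
            running = self.cv.wait(running).unwrap();
        }
        *running += 1;
    }

    /// Release the slot when the invocation finishes.
    fn release(&self) {
        *self.state.lock().unwrap() -= 1;
        self.cv.notify_one();
    }

    fn running(&self) -> usize {
        *self.state.lock().unwrap()
    }
}

fn main() {
    let gate = ConcurrencyGate::new(2);
    gate.acquire();
    gate.acquire();
    assert_eq!(gate.running(), 2); // bounded at the configured maximum
    gate.release();
    gate.release();
    assert_eq!(gate.running(), 0); // idle: nothing running, nothing resident
}
```

The point of the rename in the RFC is exactly this shift: the field stops meaning "keep N instances alive" and starts meaning "never let more than N invocations run concurrently".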
A
I tried to go really in depth on some of the things that we can change in a backwards-compatible way, so as not to break things. I think that'll be really nice: if we do everything right, I think this can all go through with the next minor bump in wasmCloud without breaking existing scripts and things like that.
A
But yeah, this one is open as of, like I said, an hour ago; it's just pre-community-call, so I don't expect anyone to have read it yet, but it's open for comments now. I think I've got another couple of minutes to hang out, so if anybody has any initial thoughts on either of these RFCs, go ahead and throw them out now.
A
Yeah, I noticed a question which is kind of interesting, asking about keeping an actor kind of hot and running ahead of time. This is essentially why we had instances running beforehand. I'm actually taking the stance that we don't even need to support that use case, and let me tell you why: WebAssembly is pretty fast. It's really quick to instantiate; the cold-start penalty of not keeping an actor hot is at the single-digit-microsecond level of latency, as long as you are performing the right optimizations, which we, of course, as a host runtime, will be responsible for. But I see very little tangible benefit to keeping an actor hot, which is actually really, really cool, and I hope you're getting some of the excitement rather than sarcasm from this. I'm not shooting down your idea, but I'm looking forward to coming out with some numbers there, because it's really just core WebAssembly.
A
It may be that the very first instantiation takes slightly longer, like when you start the actor and we do the pre-compile step and all that. But the really cool thing about this, especially through the lens of how long a thing can be executing, is that WebAssembly is so fast to instantiate and get started on work that we don't need to consider running hot instances, which is pretty cool. I'll just leave it at that.
A
All right, well, I think that's as good a time as any to wrap up the stream. Thank you all for coming; anybody who's watching live, appreciate you watching, and for our two new community members who are on the call today, thank you so much. Everyone is welcome on this community call to come and join in the discussion, or just lurk and hang out. But other than that, I think we'll go ahead and stop it here, and we'll see you all next week.