Description

wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, and that boasts a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
B: I'll start with the interface. I'm going to walk through an interface definition in a Smithy file, then show some of the code that gets generated from it, then an HTTP server capability provider and an echo actor, and then we can try to run it, cross our fingers, and hope it all connects.
B: So this is the Smithy file for the HTTP server capability contract. The metadata at the top is mostly there to give some hints to the code generation. Every Smithy file has a unique namespace.
B: This one has a service called HttpServer, and the wasmbus annotation right above it attaches some attributes to the service: the contract ID, which is the capability contract, and actorReceive, a directionality indicator that says this service is something an actor would implement the handlers for, for the operations of this service.
B: In fact, we can use the built-in Smithy linter and validators that come with that toolset, and we can define annotations like this. This particular service has one operation, HttpRequest, sorry, HandleRequest, which has an input structure and an output structure, and these are almost exactly the same as they were for wasmCloud 0.18; we made two minor changes.
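As a sketch of what that code generation might produce on the Rust side (the type and trait names here are illustrative, not the actual generated code):

```rust
// Hypothetical sketch: a Smithy operation's input/output shapes become
// plain structs, and the service becomes a trait whose handlers an
// actor implements (that's the actorReceive direction).
pub struct HttpRequest {
    pub method: String,
    pub path: String,
    pub body: Vec<u8>,
}

pub struct HttpResponse {
    pub status_code: u16,
    pub body: Vec<u8>,
}

pub trait HttpServer {
    fn handle_request(&self, req: HttpRequest) -> HttpResponse;
}
```

An actor then supplies the body of `handle_request`, and the host routes the provider's incoming requests to it.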
B: Same fields. One thing you might notice here is that we always generate the field names for the structures in the idiomatic form for the target language. So the underscored snake case is what's used for Rust field names, but the name on the wire is the name that's declared in the Smithy file, which should be consistent everywhere, and this serde flag tells the Rust compiler that. It also defines a trait.
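For illustration, the snake_case/wire-name split can be sketched like this (a dependency-free stand-in; the real generated code uses a serde rename attribute, and the names here are assumptions):

```rust
// The Rust struct keeps the idiomatic snake_case field name, while the
// serialized form uses the name declared in the Smithy file. The real
// generated code gets this from `#[serde(rename = "statusCode")]`;
// this sketch spells the mapping out by hand to stay dependency-free.
pub struct HttpResponse {
    pub status_code: u16,
}

impl HttpResponse {
    pub fn to_json(&self) -> String {
        // "statusCode" stands in for the wire name declared in the interface.
        format!("{{\"statusCode\":{}}}", self.status_code)
    }
}
```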
B: Okay, now I'm going to look at the HTTP server provider.
B: So this imports that interface; here's the place where we import the request and response structures, and here's the main file. Capability providers, instead of being a .so library, are executables, which is part of what gives us the ability to run in any language. When main runs, the host actually starts up the capability provider as its own process and passes to it, on standard input, a bunch of parameters.
B: It needs those to connect to the lattice. Inside this, provider_main, which is a library function that we've built so you don't have to implement it, subscribes to several NATS subscriptions: for health checks, for the link definitions, for the shutdown message, and so on, and then it just hangs around waiting for a shutdown message. You pass into it your implementation of the server interface, which is this HttpServerProvider.
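As a rough sketch of the stdin handoff described above (the payload format and field names are assumptions for illustration; the real host sends its own structured configuration):

```rust
use std::collections::HashMap;

// Parse a simple key=value configuration blob of the kind a host could
// hand a provider on standard input (illustrative format only).
fn parse_host_data(input: &str) -> HashMap<String, String> {
    input
        .lines()
        .filter_map(|line| {
            let (key, value) = line.split_once('=')?;
            Some((key.trim().to_string(), value.trim().to_string()))
        })
        .collect()
}
```

A real provider would read something like this from `std::io::stdin()` at startup and use it to connect to the lattice.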
B: And in this case, what we do with that is spin up a new HTTP server listener on the interface and port defined in that link definition. Delete-link says we're not connected to the actor anymore, and we stop that HTTP server; and then shutdown stops them all. So those are the four messages you need to handle as a provider, and then there's the interface-specific stuff, which in our case is responding to the HTTP request. I won't go into that detail.
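Those four messages can be sketched as methods on the provider, with a map of active listeners standing in for real HTTP servers (all names and shapes here are illustrative assumptions, not the wasmCloud API):

```rust
use std::collections::HashMap;

// Illustrative link definition: which actor is linked, plus its
// configuration values (e.g. the address/port to listen on).
pub struct LinkDefinition {
    pub actor_id: String,
    pub values: HashMap<String, String>,
}

#[derive(Default)]
pub struct HttpServerProvider {
    // One entry per linked actor; the bound address stands in for a
    // real HTTP listener in this sketch.
    listeners: HashMap<String, String>,
}

impl HttpServerProvider {
    // Health check: report whether the provider is alive.
    pub fn health_request(&self) -> bool {
        true
    }

    // Put-link: spin up a listener on the address from the link definition.
    pub fn put_link(&mut self, ld: &LinkDefinition) {
        let addr = ld
            .values
            .get("address")
            .cloned()
            .unwrap_or_else(|| "127.0.0.1:8080".to_string());
        self.listeners.insert(ld.actor_id.clone(), addr);
    }

    // Delete-link: the actor is no longer connected; stop its listener.
    pub fn delete_link(&mut self, actor_id: &str) {
        self.listeners.remove(actor_id);
    }

    // Shutdown: stop them all.
    pub fn shutdown(&mut self) {
        self.listeners.clear();
    }

    pub fn active_listeners(&self) -> usize {
        self.listeners.len()
    }
}
```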
B: That's over here. So this is the entire code for the actor, or some of it; some of the code is handled by the magic of these derived functions. But what we've done is say we have to implement this HttpServer trait, so we have a handle_request function to respond: it takes an HttpRequest parameter and returns the HttpResponse. What the echo actor does is create a JSON string of the input structure and send that back as the body of the response. We also implement a health check which says, oh, actually, I'm not sure why, but I was probably testing the health request: it returns unhealthy.
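The echo behavior can be sketched as a free function, with hand-rolled JSON in place of the derived serialization (struct shapes and names are assumptions, trimmed down from the real interface):

```rust
pub struct HttpRequest {
    pub method: String,
    pub path: String,
}

pub struct HttpResponse {
    pub status_code: u16,
    pub body: Vec<u8>,
}

// Echo: serialize the incoming request to a JSON string and send it
// back as the response body. The real actor derives this serialization;
// it is written out by hand here to keep the sketch dependency-free.
pub fn handle_request(req: &HttpRequest) -> HttpResponse {
    let json = format!(
        "{{\"method\":\"{}\",\"path\":\"{}\"}}",
        req.method, req.path
    );
    HttpResponse {
        status_code: 200,
        body: json.into_bytes(),
    }
}
```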
B: So that's the actor! Oh, go ahead.
C: [inaudible]
B: Actually, there was one other thing I was going to show you: we've also made an effort to simplify the makefiles. I don't know if you're familiar with the old makefiles for the capability providers, where you had to build the par file and all that, but here's the whole makefile for the HTTP server. You define your variables, and then we've taken out the boilerplate code. So, this wasmCloud release, what we're all working really hard to do is get rid of boilerplate.
B: All those rules for building the par file are the same for every makefile, so we just put that into a common file. All right, so now I'm going to go to the host. One thing I have done, because we haven't yet implemented the wash commands or a manifest file, is I actually created a manifest file in Elixir. This is a workaround that I came up with on my own, where I have an array of actors, an array of providers, and then the link definitions. It's the same kind of content as was in the manifest from the 0.18 host, but that was my workaround. So I'm going to start the host.
B: And I curled the HTTP server port, and the echo actor has returned this JSON response. So there it all is, plugged together.
C: One other thing I want to point out here, which we got sort of for free when we switched to using an executable binary for capability providers, is that there was no libc used anywhere in this example. So we no longer have to deal with the burden of trying to figure out how to build Docker images and runtime environments that have every version of the C runtime on them, which was a huge pain in the butt for the 0.18 stuff.
A: This is amazing; it's really good. Yeah, look, Dean, you know, we went back and forth. I mean, Kevin, I felt like you agonized on this, the pain of making all these changes after all the feedback and experience we had with 0.18. And I know it's asking a lot of people to say that we're putting in all this effort to do another big rewrite, the second big rewrite in a year.
A: You know, from the 0.18, or was it the 0.16 release that we'd done in December, where we made that move: understanding it then, getting that experience, and then looking at this on the way out. And Steve, I really appreciate all the thought and perspective and leadership you've had on this particular topic. The workflow is just so much more polished than where we were before, and I think we're aligned now to an ecosystem that gives us the tooling and has this whole community.
A: You know, the one that Amazon has been driving, which speaks to the scale of the vision that we have here. Because ultimately we're going to need to be able to have libraries of these things and manage these APIs across hundreds of things; in the future, that's the scale that we'd like to be able to build for. So, a great demo, Steve; I really liked it. Are there any other questions for Steve?
A: Oh, that's hilarious. All right, awesome demo, Steve! Thank you again so much for the demo. Just a couple of quick community notes: we have until August the 7th, or is it August the 8th, to get your submissions in for Wasm Day.
A: I really encourage you, if you have a submission, to pull it together and get it in. I would say we're a little bit behind where we were last year for KubeCon EU at this time, although we do have some really cool ones. Somebody is looking to run Doom in the database, which I think is, you know, part of a toy piece of their example, but there's some really cool stuff that people are putting together.
A: I'm really excited about that, and we'll start doing some final calls here to get those out. So if you have any ideas or if you have something, all levels of content: any ideas, a case study. You know, Red Badger folks, exactly what you're doing; doing the Kubernetes one, I think, would be incredible.
A: If you guys wanted to pitch the discussion that you just did here, I would be super enthusiastic about that, because we are all thinking the same thing, and you guys are spending the time to work through the details and have so much valuable and powerful experience to share.
A: So if you haven't pulled together a submission, please do so and submit it, because I know there's a large community of people thinking through this, and in fact I may point a couple of people who have pinged us at your repos, or maybe even try to facilitate some cross-introductions along those lines. So please get those in. The Wasmtime bi-weekly meetings are still chugging along; notes are linked in the doc. And then I think that's all I have on community. Any other?
A: All right, well, I think now, Steve or Brooks, I'm not sure, if somebody wants to pull up our sprint for the week and just give a call-out to where we are and what's going on. Steve, I think you did it last week, with the ZenHub board.
B: Yeah, yeah, give me a sec here.
E: If I had to do it, I would just have to ask you for screen-sharing permissions, Liam, and I didn't know if you wanted that.
A: All right, I'll just give them to you.
B: Anyway... oh yeah. So we closed 17 issues last week, so we've got good velocity here. The sprint backlog for this week: Kevin has been doing a lot of work on the distributed cache and consulted with the NATS development team on the architecture for that. We are consolidating a bunch of functionality into weld, sorry, into the wash CLI. So we're still going to have the wash CLI, but not the REPL.
B: The REPL part is deprecated, but you'll be able to issue a bunch of commands, and we're also going to be using it for code generation from the Smithy models.
A: Steve, last week we talked about researching some way that we could automatically dump this into Slack on, like, a weekly basis or something. Did we look into the paid version of ZenHub or anything like that, or is there some sort of report that we can generate after we do planning on Mondays?
B: Yes, there is some reporting that I've played with. I wasn't really crazy about it, so one of my tasks for this week is to figure out how to get the right reports out. By the end of this week, I should have the answer to that.
A: All right, and that's something that maybe we'll add to our agenda for Mondays; maybe, as we pull things together, we can start dumping out to Slack, just to keep people up to date. Yeah.
B: Will do. I'll dump some reports into the sprint-planning Slack channel on wasmCloud.
A: All right, that's awesome. Any questions across the team? Brooks, Kevin, anything you guys would want to throw in here?
E: Yeah, the only thing I wanted to call out, which I think is kind of cool, is that we have a lot more visibility here. Whenever somebody opens an issue in one of our repos, whether or not GitHub decides to notify us, it pops up right there in the new-issues list, which is great.
D: We managed to find a way to do this with GitHub Actions, but yeah, it doesn't do it out of the box; GitHub Projects is terrible by default, and it took an awful lot of work to get even basic things like that working. So this ZenHub does look quite nice.
B: Yeah, one of the things that's neat about it is that the issues are still in GitHub Issues, so everybody can have visibility. Some of the metadata fields, if you think about it that way, like sprints and epics, are in ZenHub and not GitHub; that's why we need to write some scripts to pull those things together to make reports. But yeah, it serves the function, and it's free for open-source projects, but the reporting takes a little bit of script work.
A: Any open-floor items that anybody would want to bring up today? They've recovered, okay. Well, anything in closing?