From YouTube: wasmCloud: Control Interface in Go Demo, ML Effort Update, wadm Update, Community Callout - 01/11/22
Description
wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, and it boasts a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
A
Welcome to wasmCloud Wednesday for Wednesday, January 12th, 2022. We've got a big meeting today, but we've also got a couple of new folks. Will they introduce themselves? Nikki, do you want to say hello?

B
Hi, my name is Nikki Sponpar. I'm a principal security architect at T. Rowe Price, and I'm just very interested to learn about this technology. I think the small footprint is pretty fascinating. I've just started with the demo, so I'm really very much in ramp-up mode right now.
C
Hey everybody, I'm Ross. I'm a software engineer turned security engineer, and I'm now one of the early security hires at Weights & Biases, which is an MLOps platform. I'm very interested in the machine learning capabilities of wasmCloud and, you know, looking into how we could potentially leverage both.

C
Hey everyone, I'm Dan Norris. I recently started as infrastructure lead over at Cosmonic, so I'm pretty excited to dive into wasmCloud and all the potential that this technology has. I've done a little bit of work with what used to be waSCC, so I'm in a similar boat, I think. A lot of people here were [audio cuts out] and I'm stuck full time.

A
You cut out just a hair at the end there, but I think we got the gist. As usual, we'd like to start our meetings with a demo, and Jordan, I think you've got a demo for us this week.
D
Screen... oh, cool. Does everyone see two windows here? Yep? Cool. So what I did: I had a little free time on my hands this week, so I went looking at how I could do a little wrapper shim, I don't know what you call it, more or less a Go port of the control interface client that we already have over here on the right.

D
So what I did was, I went through and pretty much followed this structure and code naming convention, you know, as closely as possible, and I came up with a functioning Go library that implements pretty much all of these.

D
Calls to the underlying NATS, you know, in the same manner that, as Brooks told me, wash and the washboard do with Rust and Elixir. This is more or less the Go implementation of that.

D
We can walk through the code here in a minute if we want, but you're not going to see anything too special. All you're really going to see is, you know, the topics that are very well documented on wasmcloud.dev, and I pretty much followed the exact logic that you can find in the Rust code here. Now, if we were to look at, you know...
D
This,
the
spain.go-
I
I
wrote
you
know
I've
just
written
this
poor
man's
cli
that
just
reads
kind
of
like
the
r
as
we
go
in.
We
just
start
throwing.
You
know
the
start
act
of
the
stop
after
the
fa
after
and
I
just
wrapped
all
of
them
very
lightly
and
then
more
or
less
you
know
we
get
a
a
functioning
yeah
like
go
rapper
that
can
you
know,
call
the
call
the
host
get
information
about
it
and
I
have
hard-coded
all
of
the
all
of
the
values
in
the
main.go
here.
D
D
It
is
taking
the
http
server
provider
starting
it
and
then,
if
I
do,
link
start
and
link
start
is
not
really
an
accurate
term,
for
you
know
what
it's
doing,
but
for
the
sake
of
this
demo
me
remembering
what
the
commands
are.
I
named
it
start,
then
it
goes
through
and
it'll
actually
link
them
together
and
if
we
look
here
in
a
second.
D
Oh,
I
didn't
expose
the
port.
Well
if
we
were
to
go
into
the
into
the
into
the
docker
network
and
run
that
the
echo
actually
does
work,
I
rebooted
it
right
before
this
to
get
a
clean
environment.
So
yeah
I
mean
that's
really
where
I'm
at
today
the
code's
all
sitting
here
and
I
would
I
would
love
any
feedback
from
any
go
folks
out
there,
because
I'm
not
the
best
software
developer
in
the
world.
So
there's
no
real
error
handling,
there's
no
real
anything
right
now.
It
does
a
lot
of
assumptions.
D
It
grabs
the
first
host
but
yeah.
Hopefully
I
can
just
start
the
start.
You
know
a
little
bit
of
a
go
effort
and
see
what
we
can
do
with
it.
That's.
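The poor man's CLI described above reads a command and maps it onto a control interface call. A minimal sketch of that dispatch shape in Go follows; the command names mirror the demo ("start actor", "link start", and so on), but the operation suffixes here are hypothetical placeholders, not the documented wasmCloud topics:

```go
package main

import (
	"errors"
	"fmt"
)

// dispatch maps a CLI command onto a control-interface operation
// suffix. The suffix strings below are illustrative placeholders;
// the authoritative topic layout is documented on wasmcloud.dev.
func dispatch(cmd string) (string, error) {
	ops := map[string]string{
		"actor start":    "cmd.la",       // launch actor (placeholder)
		"actor stop":     "cmd.sa",       // stop actor (placeholder)
		"provider start": "cmd.lp",       // launch provider (placeholder)
		"link start":     "linkdefs.put", // set a link definition (placeholder)
	}
	op, ok := ops[cmd]
	if !ok {
		return "", errors.New("unknown command: " + cmd)
	}
	return op, nil
}

func main() {
	op, err := dispatch("link start")
	if err != nil {
		panic(err)
	}
	fmt.Println(op)
}
```

A real wrapper would then format the suffix into a full NATS subject and issue a request, which is essentially what the demo's library does under the hood.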
A
All right, Jordan, thank you so much. You bring so much to the table. For those of you that are new, Jordan also runs a lot of our community stuff. He, you know, makes sure that our meeting recordings get pulled together and posted in all the places, on Reddit and YouTube, and he also did a bunch of the training stuff that's out there, his design cheat sheets, all kinds of stuff. So, Jordan...

A
I thank you so much for being such a wonderful community member and continuing to figure out ways that you can jump in and help. Brooks, you wanted to jump in, maybe?
E
Yeah, I had a question and then maybe some comments. So, I'm not the goofiest gopher, or whatever the Go people call it, but Jordan, what's the structure of this wasmcloud-go repo?

D
Yeah, so I actually wrote a second one here. It's also in the examples folder, where it is doing just that, and I shared a Gist in Slack somewhere that, you know, imported the library as you normally would. But yeah, you don't have to; it's written in the form of a library using Go modules, so you can just pull it in with that right there. And this right here, if we were to just, you know, go run this...
E
Okay,
awesome
because
yeah
I
was
wondering
because,
with
the
like,
the
way
that
you're
interacting
with
it
on
the
command
line,
it's
very
like
wash-esque,
which
is
cool,
and
then
you
know
something
that
you
could
like.
We
could
definitely
leverage
with
all
the
go
code
out.
There
is,
if
you
can
use
this
as
a
library
which
you
are
doing
like.
That
would
be
awesome
to
be
able
to
like
from
a
go
application
administer
somewhat
awesome
cloud
cluster.
D
E
No,
it's
I
mean
it
could
be
it's
it's
nice
and
useful,
so
I
I
did
want
to
say,
like
jordan,
thank
you
for
like
starting
on
this.
I
know
that
we
kind
of
talked
about
it.
I
think
it
was
yesterday.
E
I
think
you
whip
this
up
really
fast,
but
you
know
this
is
something
that
kind
of
lends
itself
like
with
the
community,
to
adding
support
for
for
different
languages
and
not
necessarily
for
actors
or
capability
providers,
but
getting
some
of
the
tooling
a
little
more
formalized
like,
especially
with
go
because
for
things
like
the
control
interface,
we've
technically
had
support
for
any
language
on
the
control
interface
that
has
a
nats
library.
E
If
you
hook
into
the
right
topics,
then
you
can
issue
control
interface
commands,
but
having
these
kinds
of
libraries
really
help
the
barrier
to
entry
and
not
making
you
have
to
learn
about
a
nat's
library,
not
making
you
learn
about.
Maybe
proprietary
is
the
wrong
word,
but
like
the
exact
structure
of
like
yeah,
you
need
to
do
a
request.
Multi
on
this
topic,
a
request
on
this
topic
to
publish
on
this
topic.
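In other words, any language with a NATS client can drive the lattice by formatting the right subject and issuing a request on it. A hedged sketch of that in Go: the "wasmbus.ctl.&lt;lattice-prefix&gt;.&lt;operation&gt;" shape below is an illustrative assumption, and the authoritative topic layout is the one documented on wasmcloud.dev.

```go
package main

import "fmt"

// controlSubject builds a wasmCloud control-interface NATS subject.
// The "wasmbus.ctl.<lattice-prefix>.<operation>" shape used here is
// an illustrative assumption; consult wasmcloud.dev for the real
// topic layout before relying on it.
func controlSubject(latticePrefix, operation string) string {
	return fmt.Sprintf("wasmbus.ctl.%s.%s", latticePrefix, operation)
}

func main() {
	subject := controlSubject("default", "get.hosts")
	fmt.Println(subject)

	// With a NATS client (e.g. github.com/nats-io/nats.go), the
	// command itself is just a request on that subject:
	//
	//   nc, _ := nats.Connect(nats.DefaultURL)
	//   msg, _ := nc.Request(subject, nil, 2*time.Second)
	//   // msg.Data would be a JSON payload describing the hosts
}
```

This is exactly the boilerplate a per-language client library hides from you, which is the barrier-to-entry point being made here.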
E
So
you
know
reducing
the
barrier
to
entry
by
being
able
to
bring
in
more
people
is
definitely
nice,
and
I
think
that
kind
of
a
next
step
for
for
an
effort
on
the
go
side
is
to
start
looking
at
the
the
effort
of
adding
code
gin
for
our
structures,
because
things
like
the
start,
actor
command.
Jordan,
I'm
sure
you
probably
had
to
hard
code
that
or
set
that
up
in
in
the
library,
but
if
we
can
generate
it
from
our
smithy
structures,
then
that
again
reduces
the
barrier
to
entry.
E
It's
something
that
we
of
course
have
to
prioritize.
As
far
as
when
we're
we're
getting
to
the
point
where
we
can
create.
A
You know, for the folks that are new, the nuance here is just that Golang does not have incredible support for WebAssembly at the moment, and it's not supported right now as a sort of first-class language in the ecosystem, so we do have people that come and ask for that support.

A
Jordan, one ask that came up: I had a sidebar with Steve, based on the Slack conversation that we had, seeing if you and Steve could maybe shepherd some of this forward a little bit, if that might be a direction you were considering going, adding support for, you know, auto code generation and things like that. And we thought, honestly, the most important thing to capture in that would be the process of...

A
...how do you add a language to wasmCloud? And I thought that would be something... you've done so much wonderful work around, you know, lowering the barrier to entry for wasmCloud, that that might be something that you and Steve could do a sidebar on, you know, as a project. And I don't need an answer now; I'm not trying to put you on the spot. So maybe you guys can follow up on Slack on that particular topic and go from there.
A
You're so awesome, Jordan. So, Steve, maybe you can grab some time with Jordan, and you guys could sketch out what the steps might be in that work stream. And I think this would probably be a new section over at wasmcloud.dev on, you know, if somebody wanted to, say, add AssemblyScript, what would be all the key steps that they would need to follow? For example. Kevin, did I capture that correctly?

A
Yeah, I think so. Okay, super, great. Now let's turn over to... we have a new sort of sub-effort of folks that have come to the community that are working on the machine learning stuff, and Ross, I know that was one of the things you were interested in talking about. We're still in the design and scoping phase of this, but Steve, you were at a meeting that we had yesterday afternoon that I think was incredible. The meeting was recorded, and we will have that up on our YouTube channel shortly.

A
If people want to get caught up on it. But Steve, I think you have a sort of summary that you can walk people through. I probably need to give you the ability to share your screen, don't I? No? That's why you're not sharing your screen.
F
So, yes, there's some exciting... oh.

F
So I can't share it without restarting Zoom, so I'll talk through it. There's a machine learning repo inside the wasmCloud org on GitHub, and there's a new subgroup that has just formed that's really interested in moving forward with getting machine learning support in wasmCloud. Christoph took the initiative to create this document here and describe some models for how we might support it. And Liam, could you scroll to the first diagram, the one that has the inference engine and the model?
F
So the idea is to support multiple models at runtime, and part of that is so that an actor can make a decision to switch between models, for example, so you could do A/B testing. And we want the inference engine to be able to work with different libraries, like tract and TensorFlow, that can take advantage of native CPU instructions, and GPUs, which can really speed up machine learning models.

F
So the inference engine would be implemented as a capability provider, and that'll let it have access to those native CPU capabilities. In addition, we talked about using a Bindle server. Bindle is an effort created by Deis Labs that stores an archive of files in a server. It has some similarity to an OCI registry, and models could be loaded into the Bindle server.

F
The inference engine could pull them down and run them. Another source of inspiration is the wasi-nn interface that was created by Andrew and team at Intel.

F
One of the things that that interface does is abstract the backend engine that you're using, like whether it's tract or TensorFlow. What we want to do is take that interface, turn it into a Smithy interface, and make that a capability contract for this inference engine capability provider, and that will allow actors to invoke machine learning models in wasmCloud.
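As a rough sketch, a wasi-nn-style contract expressed in Smithy might look something like the following. Every name and shape here is an illustrative assumption, not the actual wasmCloud interface under discussion:

```smithy
namespace org.example.mlinference

// Illustrative only: an inference capability contract in the spirit
// of wasi-nn, with the backend engine (tract, TensorFlow, ...)
// hidden behind the provider.
service MlInference {
    version: "0.1",
    operations: [Predict]
}

/// Run inference on a named model with an input tensor.
operation Predict {
    input: PredictRequest,
    output: PredictResult
}

structure PredictRequest {
    @required
    modelId: String,

    @required
    tensor: Blob
}

structure PredictResult {
    tensor: Blob
}
```

The point of the contract is that an actor calls `Predict` without knowing which engine the provider links against, which is what makes model switching and A/B testing possible.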
F
So I definitely want to mention the contributions by Christoph, Bailey Hayes, and Andrew, who I'm not sure is on this call. There's a lot of excitement in this space, and we welcome all participants, and we'll keep updating this repository with progress.

F
Did I miss anything in that overview, or are there any questions?

A
I've got a couple of questions, Steve. You know, when we think about the first engine to be supported, we've got a number of options: OpenVINO, which is out of Intel; TensorFlow, which is obviously originally out of Google; or the ONNX folks. Was there a discussion around what folks were looking at supporting first through the wasi-nn sort of interface?
F
Yes,
there,
there
was
definitely
a
lot
of
discussion
about
that,
and
the
document
that
kristoff
wrote
here
mentions
tract
in
the
community
in
the
ai
community.
Tensorflow
really
has
a
lot
of
momentum,
and
that's.
There
are
so
many
people
working
on
that
that
that
we
have
confidence
that
it's
going
to
be
around
for
a
while.
F
So
we
think
that
would
be
a
good
place
to
invest
efforts,
but
you
know
like
a
lot
of
things
in
in
wasm
cloud
we'd
like
ultimately
for
people
to
be
able
to
make
choices
and
try
out
different
implementations.
Some
people
like
pytorch
some
people
like
tensorflow,
so
ultimately
there
will
be
several.
I
I
think
it's
it's
like
almost
certainly
the
first
implementation
will
be
either
tracked
or
tensorflow
and
between
those
it'll
probably
be
tensorflow.
A
Okay,
that's
yeah!
That's
wonderful,
steve!
Thank
you!
So
much
for
giving
everybody
the
summary
and
again
the
meeting
actually
went
on
for
I
think
over
an
hour.
I
will
get
that
up
on
youtube.
It
looked
really
interesting.
I
had
to
jump
off
a
little
early
brooks
thank
you
for
staying
on
and
recording
that
yesterday.
A
I'm
super
excited
about
this.
We're
absolutely
going
to
try
to
get
this
ready
for
kubecon
wasmday,
which
is
a
good
segue
to
our
sort
of
community
updates
and
just
fyi.
A
Now
I've
got
some
even
better
news
in
that
a
cloud
native
wasm
day
for
anybody
that
is
buying
a
full
or
a
kubecon
admission
anyway,
will
be
free
for
online
attendance.
There
will
still
be
a
charge
if
you
plan
to
attend
in
person
in
valencia,
spain
in
may
I
plan
to
be
there
and
and
go
from
there.
We've
sent
out
invites
to
a
program
committee,
which
is
we're
trying
to
do
a
more
european,
focused
program
committee
to
bring
folks
along
and
we're
working
now
on.
A
Agenda
new
to
the
program
this
year
will
be
some
training
that
we're
going
to
try
to
facilitate
I'd
like
to
get
it
facilitated
in
a
way
that
is
very
inclusive
and
allows
people
to
participate,
whether
they're
on
present
or
remote,
and
that
so
not
only
online
but
also
free
as
well.
So
we're
still
working
through
a
few
options
there
but
expect
additional
updates
there.
So
I
really
encourage
everybody
in
the
community.
A
That's
working
on
on
interesting
things
to
tell
your
story
share
what
you're
working
on-
and
I
know
we're
looking
forward
to
continuing
to
make
this
event
bigger
and
better
with
each
passing
month
any
questions
about
community
day.
A
I
also
know
that
there
is
a
webassembly
event
that
is
being
planned
for
the
eu
in
london
in
the
next
in
the
next
few
months,
but
I
think
it'll
be
next
week
or
so
before
we're
ready
for
any
details
that
we
can
share.
We
can
pass
on
about
that
for
anything.
That's
happening
there
just
open
floor.
Are
there
any
other
web
assembly
events
that
people
would
want
to
mention
or
sort
of
put
out
on
the
community
call.
A
Okay,
super:
let's
go
ahead
and
just
open
up
the
floor.
I
think
we've
got
a
few
different
development
efforts
that
are
underway
under
the
open
source
umbrella.
Wadom
really
comes
to
mind,
jonathan
kevin.
I
know
I've
seen
a
commits
going
by
there.
Is
there
any
sort
of
questions
or
a
work
that
we'd
wanna,
maybe
socialize
or
bring
in
for
discussion
on
the
call.
At
this
point.
G
I can show what I did for Mnesia. It was, I guess, a call-out for help that Kevin actually had when he did his commit. It is mostly implemented, but it needs to be documented, and we'll see where it goes from there. If I can present, that would be great; I'll share my screen.

G
So this is the change that actually went in, and I'll call out a couple of things here, and then I'll actually go through a test that shows... can you zoom?

G
So Mnesia is a database, I suppose, that's built into Erlang as part of OTP. When I show some of the information, I'll actually show that it goes to the file system.
G
What's really cool, at least as far as I understand, is that when you configure Mnesia, you can actually have replication go through from one VM to another. It is surprisingly fast. I don't know why it's so fast, but it's faster than running unit tests, which I was really surprised by. So the first thing we're looking at here is a store, and the call-out here is that I'm actually making these disc copies.

G
That is, not just RAM copies, so that actually gives you the persistence, as opposed to everything disappearing completely. And then the second call-out is that I ended up creating, well, I did create an interface with a callback, so it's modifiable, I suppose, and we can introduce other backends in the future, if that's really what we want to do.

G
But since this is just a test, I had it in such a way that right now it's just going to the Redis cache, and then we can add something using cache, which is a module that we can call out to, and the rest of it is code. So if I jump into my repository here, some of the things you see on the side here are me creating the database.

G
So this is what actually gets created. They're not readable, and I haven't actually looked at interrogating them using any of the Mnesia commands yet. And then here I have a...
G
...working configuration that shows the whole thing being instantiated, since the lattice store... if I go to definition, which doesn't work, it's under here. So this is what the store looks like.

G
I have a DSL that, at the end of the day, actually creates all the attributes. Then I'm indexing on a prefix, as opposed to an ID. I did notice that this was mandatory: even though it actually formatted it up here, id is mandatory. If it isn't there, the tests actually don't create the database; it dies silently.

G
I haven't looked into exactly why that is the case, but it is required, even though in the code we actually use prefix as our index, which is what I've done here in the implementation, I suppose.

G
Then, if I look at the test, which is here... jumping back in here, this is using Cluster, an extension for Mix which actually lets you go from cluster to cluster, which is this guy here, and that lets you test across clusters and also see what happens between the two, which is how I actually checked, or found out, that the replication's really good.
G
So,
in
this
case,
I'm
doing
a
quick
dirty
write
here
as
opposed
to
a
transaction
and
then
a
dirty
read
as
opposed
to
transaction
as
well,
and
in
the
second
case
I
am
doing
a
distributed
system.
So
I
have
two
nodes
running
and
then
I
do
all
this
stuff,
which
is
a
bunch
of
asserts,
but
I'm
writing
to
one
node
and
I'm
reading
from
another
node,
and
that
is
surprisingly
so
that
was
the
call
out
that
kevin
actually
had.
G
So,
if
I
run
the
test
here,
which
is
just
running
the
entire
one
well,
which
is
what
you
just
saw
before,
but
it'll
run
and
pass
the
whole
thing
so
that
gets
called
out
when
we
want
to
write
a
lattice
out
in.
I
guess.
So.
This
is
the
cache
which
is
an
interface.
Then
we
have
the
reddish
cache
which
actually
now
implements
that
interface
and
then
the
amnesiac
cache,
which
is
doing
the
same
thing,
but
through
a
transaction
here
and
then
the
code
that
actually
uses
it
is
what
actually
works.
G
I
don't
know
where
it
is
well,
it's
actually
up
here
where
we
call
it
in
this
file.
So
on
the
deployment
monitor.
G
And
yeah,
so
that's
kind
of
the
gist
of
everything
that
is
there,
and
then
it
comes
down
to
actually
hooking
it
up
and
actually
testing
it
out
and
seeing
whether
there
are
changes
that
we
need
to
make
in
terms
of
like
is
the
id
actually
in
the
conflict
or
not
as
an
example,
even
though
that's
actually
not
set
as
a
primary
key.
A
Hey, Jonathan, do me a favor: when we play this back on YouTube, it's going to be a little hard to see this.

A
Drop just a couple of links into chat that we can capture, or maybe DM them to me and Jordan on Slack, and what we'll do is actually maybe drop in little subtitles to point people at least at where this is, in case anybody is following along and wants to jump in on the wadm stuff.

A
I know that you and Kevin have been making great progress on this together, but I think this is possibly interesting to multiple people, and it just may help to make sure that we bring folks along with us.
H
Yeah, this is amazing stuff. When I originally put in that commit, it was like, you know, it would be nice if I could get Mnesia to work; I didn't actually expect anybody to come in with a PR to make it work. So that's awesome.

H
The thing that I couldn't get working in my local branches, with Mnesia in general, is that when you look at the Mnesia interface where you configure it, the list of nodes that it uses for replication is fixed.

H
So, you know, if I start up my cluster with a known, fixed set of nodes, then Mnesia works wonderfully. What I couldn't get working is when I use both libcluster and Horde in auto-member mode, where libcluster is using dynamic discovery of nodes. What happens there is, Mnesia, at least when I tried it...

H
...didn't update its own list of connected nodes. It's essentially unaware of libcluster's dynamic discovery, and so once it started up with a fixed node list, it would never append to its list of nodes. And if you see, like, you've got the Mnesia supervisor there: you see how it calls Node.list().
H
So
if
I
start
up
a
node
using
discovery
as
the
first
node
in
my
cluster,
that
node
will
never
become
aware
of
newly
discovered
nodes,
because
the
the
amnesiac
supervisor
doesn't
is
unaware
of
updates
to
the
cluster
membership.
That
cluster.supervisor
does
so.
That
was
like
the
key
thing
that
I
could
never
figure
out
how
to
get
work
and
get
working
locally.
G
G
H
So
I
figured
out
so
what's
not
included
in
here
is
like
where
you
look
at
where
we've
got
hoard.registry
there
there's
a
another
flag
you
can
pass
to.
It
called
where
you
can
say
members
and
then
send
send
at
the
atom.
H
Auto
cluster
also
has
the
same
option
where
you
can
set
members
to
auto,
and
then
you
can
supply
like
a
gossip
protocol
for
how
the
cluster
thing
discovers
other
members,
and
so
what
I
was
using
was
a
cluster
in
gossip
mode
and
horde
in
auto
member
mode
and
those
two
work
together
perfectly
fine.
So,
like
the
horde
registry
discovered
all
the
other
nodes,
but
what
happened
was
amnesiac
wouldn't
update
when
new
when
new
nodes
joined,
and
so
I
couldn't
figure
out
how
to
get
over
that
problem.
H
So
that's
kind
of
why
that's
where
I
was
stuck
when
I
added
that
comment
to
my
commit
there.
G
I was really surprised. I wasn't expecting my test to pass, and I was like, whoa, okay. Yeah, it's impressive for sure, but yeah, I will look into this stuff.

A
I made a big mistake when I kind of handed the mic over to you, Jonathan, and Kevin. Not that I handed the mic to you, but that I didn't really give an intro, like a reminder, around what wadm is. Jonathan, would you maybe give us two sentences, in your words, like just the elevator pitch: what does this enable, in the big picture, for the ecosystem? And then maybe, Kevin...

A
...would you also throw in two sentences? Because I'd love to hear, like, you know, just a summary of what we're really doing here, because I know we just, like, dove down into the code here without sort of saying, like, hey, this work stream is really about...
G
So
the
genesis
of
this
was
to
facilitate-
I
guess,
the
user
to
migrate,
or
at
least
use
in
parallel
kubernetes,
as
well
as
whatever
and
provide
an
easy
migration
pot.
So
the
part
here
was
to
take
an
oem
spec,
an
open
application
model,
spec
use
that
to
instantiate
or
initialize
actors
and
providers
in
otp.
A
What
are
the,
what
are
the
couple
of
big
standards
that
you
guys
are
aligning
to
as
far
as
like
you
know,
at
the
at
the
50
000
foot
level
that
people
would
be
have
heard
of
before
that?
Are,
you
know
perhaps
part
of
the
cncf
or
something
along
those
lines.
H
So
there's
there's
not
really
anything
specific
here,
that's
you
know
a
big
giant
industry
standard
or
something
like
cncf,
so
I
mean
basically
what
bottom
is
is
an
implementation
of
an
autonomous
controller
pattern
and
what
we're
doing
is
wadum
will
observe
a
lattice,
and
then
it
takes
your
desired
state
which
comes
in
the
form
of
a
big
slab
of
yaml
or
json.
H
And
then
you
know
it
uses
the
a
control
loop
to
ensure
that
you
know
whatever
it
is
that
you
wanted
to
be
deployed
is
deployed.
So
you
know
jonathan
mentioned
that
you
know
you
can
use
the
the
wasm
cloud
washboard
to
manually
deploy
an
actor,
but
what
you
want,
what
you
can
do,
or
what
you'll
be
able
to
do
with
wadum,
eventually
is
be
able
to
say
things
like.
I
want
a
total
of
20
actors
and
I
want
those
spread
across
n
number
of
hosts
that
all
have
the
label.
H
This
set
of
label
constraints-
and
you
know
things
like
that
and
then
whatever
will
auto,
maintain
that
application
for
you.
So
if
something
goes
down,
it'll
bring
it
back
up
if
something
scales
too
much
it'll
scale
it
back
down
and
and
that
sort
of
thing
it's
like
a
babysitter
for
your
wasm
cloud
applications.
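The "big slab of YAML" follows the Open Application Model's application/component/trait shape. A hypothetical spec for the kind of request described above might look like this; the `apiVersion` and `kind` come from the OAM specification, while the component type, trait name, and property fields are placeholders, not a documented wadm schema:

```yaml
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: echo-app
spec:
  components:
    - name: echo
      type: actor                                 # placeholder component type
      properties:
        image: registry.example.com/echo:0.1.0    # illustrative OCI reference
      traits:
        - type: spreadscaler                      # "20 actors spread across labeled hosts"
          properties:
            replicas: 20
            spread:
              - name: on-edge-hosts
                requirements:
                  host-type: edge                 # label constraint on target hosts
```

wadm's control loop would then diff this desired state against what it observes in the lattice and issue control interface commands until the two match.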
A
The softball I was trying to lob over to Kevin with the "alignment" question was to set you guys up to talk about the Open Application Model, OAM, CNCF stuff.

H
So, I mean, OAM is just, like, a tiny little piece. It's just a way that we're standardizing on representing an application model, and that's really the only involvement there.
G
Kevin, if I may hijack this a little bit: this is one of the things that I probably haven't wrapped my head around, and I think you have it in your head, but I don't quite understand it all the way yet. So some of the stuff that we discussed just now was basically something like this, right? So you have an OAM spec that comes essentially into host core, which is the OTP part that we're talking about, and then it gets provisioned.

G
So the wasmCloud host, which is the part that we're calling... the washboard has a state that it uses to update the user interface, so to speak, that has an actor, a provider, and a link; and then we have one which also has a state, which goes to the lattice observer. So this is the part that I don't quite understand: how this link actually works, and whether we are actually segmenting those, which I think is the way that we're going.
H
Yeah, so I think if we go to the spot in the diagram where you mentioned host core, I think that's probably where the pattern diverges. So host core is not responsible for anything in the OAM spec; host core doesn't know what wadm is, it doesn't know what an application model is.

H
So the way that this is planned to work is that when you bring up an instance of wadm, you will then deploy an application spec to it, and then it will issue lattice control commands to the lattice in order to bring about the state that you desire in the application spec.

H
Host core remains just a, quote-unquote, dumb scheduler for wasmCloud entities, and eventually what will happen is that the observed-lattice code inside washboard will actually just be migrated to use the lattice observer, and the stuff that washboard observes should be considered, like, for convenience only. It's just sort of a nice-to-have, in order to give the washboard UI something to show you. But the lattice observation that wadm does is actionable, based on your multiple deployed application specs.
H
The state is not segmented; it's just that both wadm and the washboard are accumulating state using the same set of raw observations, but they're accumulating state for entirely different purposes.

H
The stuff that washboard uses is disposable and for convenience only, in order to give the UI a little bit more information, whereas wadm's state is used to trigger control interface commands.

G
I mean, this is, I guess, one of the PRs that I kind of questioned before: whether there is a need to do an integrity scan, where one thing knows, well, one thing is actually the record, I suppose. And I just don't know whether there is going to be a divergence case or not. That's kind of...
H
So I guess, to your other question: no, we don't need to do an integrity scan where wadm and washboard compare notes and see which one has the most accurate representation. Whatever wadm sees, that is the authority of record.

H
So wadm is designed to support multiple deployments (and by "deployment", that's an instance of an application spec model) as well as multiple lattices. So you can have wadm observing multiple lattices and then maintaining multiple application specs per lattice.

H
So, as long as the NATS credentials point to the same shared topic space, you can have wadm maintain applications in, like, a test lattice and then, you know, some other lattice, and as long as they have different lattice IDs, wadm will observe both. So the logic that you saw in there, the last time I put a commit in, was: when you start a deployment manager, which is responsible for the control loop...

H
That has yet to be put in a PR. It'll have wadm kick off a couple of things across the lattice so that it can query the lattice state. So, if it's the first time this thing has ever run, it needs to do a probe so that it can figure out the lattice contents.
A
So, Jonathan, did you feel like your questions were answered, as far as, like, the few things that you were... so you can keep moving forward on this? Or...

A
Yeah, and Nikki, to your questions: my understanding is that this takes care of the reconciliation between desired state and current state, and then you could use something like a horizontal autoscaler on Kubernetes to drive that, eventually. Kevin, did I communicate that correctly?
H
So
when
you,
when
you
give
it
an
application
spec,
you
tell
it
how
you
want
your
actors
spread
over
the
list
of
available
hosts
and
if
you're
running
in
kubernetes,
then
you
can
use
the
higher
level
scheduler
to
start
new
wasm
cloud
hosts,
which
would
then
so
because
it's
running
in
a
control
loop,
like
what
you
could
see
happen
is
once
you
have
a
spread
defined
in
your
application
model
and
kubernetes,
then
spins
up
five,
more
wasm
cloud
hosts
that
could
affect
what
whydom
wants
to
do
with
where
it
schedules
your
actors.
A
Got
it
yeah
and
I
definitely
understand
that
this
could
be
driven
with
or
without
kubernetes.
I
was
only
just
using
that
as
an
as
an
example.
Nikki
did
that
answer
your
question,
or
did
you
have
any
other
discussion
you
wanted
to
wanted
to
ask
about
this
at
eye
level
or
any
level
really.
B
No, I mean, again, I'm just trying to piece things together, but it just seemed to me that, you know, you're adapting your actors based on a spec, and it seemed that that spec was kind of rigid, and so I was just wondering, you know: was the future intention... you know, again, I'm thinking in terms of something... the analog would be something like an auto-scaling group, right?
A
Yes, and wasmCloud does have some metrics that you can source out, you know, for current status, but I think we probably want to have a broader discussion around what some of those generic pieces might look like, if we wanted them to function independent of wasmCloud. We developed this as a clearly scoped set of components that needed to exist to enable the downstream work. And of course, what we're really driving at is that one of the amazing properties of wasmCloud and WebAssembly is that these actors are incredibly tiny, and they have a really small memory footprint.
A
From a size perspective, they're looking at, you know, 20 kilobytes to two megabytes depending on language, so you can do a scale event very quickly once you want to load them. I'm not sure if I shared the paper with you or not, but there is a wonderful publication that compares the time to scale plain WebAssembly processes versus optimized containers. The optimized containers were, you know, four to eight seconds to scale, versus milliseconds for WebAssembly.
A
So we really want to make sure that we have the right tooling and infrastructure to help. Our goal with wasmCloud is to give you a framework for building functional microservices quickly. What React does for HTML, we want to do for microservices and WebAssembly. So part of that means having not only the framework for building apps quickly, but structuring our logical components together into the idea of an application that, you know, has multiple...
A
You know, different components, and then to give you the facilities to commit to that, and then obviously to scale that, or change that definition, as you're moving through the life cycle of a microservice.
A
Sort of figure out where this was. Look, this is the fire hose, for sure, and I appreciate that you showed up, you joined on Slack this week, and I appreciate the desire to come along and learn about what we're doing. I think it's really powerful, and I wouldn't be all in on this if I didn't genuinely believe that actors were the next epoch of computing. To me, it's clear.
A
You know, we're walking up the stair step of that infographic: we virtualized CPUs, we virtualized operating systems, we virtualized clouds; that's VMs, containers, Kubernetes. Virtualizing each individual process feels like the next step, with WebAssembly, so that we now have CPU independence and a portable security model. And then the step after that is virtualizing the libraries. So that's what we're trying to do.
A
Okay, awesome. Well, busy meeting today, and we're almost at time. Any last calls or other topics that people wanted to raise or bring up?
A
Okay, wonderful. Well, Ross, Nikki, thank you both so much for coming.
E
All good. I have one community call-out for today. Oh, and I do need screen sharing, but I'll explain it while I wait for it. So for those of you who are new, we like to do a thing every week at the community meeting called a community call-out. These are just issues across the wasmCloud organization that have "good first issue" on them, and they're a great place to jump in if you're looking to contribute for the first time.
E
So when you go to start a capability provider with wasmCloud, the way that you can find the newest version of a certain capability provider is by coming here to the repository and finding the latest. So if we wanted to use, say, the NATS capability provider, and we take a look at the Cargo.toml, we see that the latest version is 0.11.7. Or we can come to the releases page, and then we can try and search for NATS and, that should work, find the latest version, 0.11.7.
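The manual step described here, scanning Cargo.toml or the releases page for the highest version, is easy to get wrong with a plain string sort, since "0.11.10" sorts below "0.11.7" lexically. A minimal sketch in Go of picking the newest tag, assuming plain MAJOR.MINOR.PATCH versions with no pre-release suffixes:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a "MAJOR.MINOR.PATCH" string into its numeric parts.
// Pre-release and build metadata are deliberately not handled here.
func parse(v string) [3]int {
	var out [3]int
	for i, p := range strings.SplitN(v, ".", 3) {
		n, _ := strconv.Atoi(p)
		out[i] = n
	}
	return out
}

// latest returns the highest version by numeric (not lexical) comparison:
// "0.11.10" beats "0.11.7" even though it sorts lower as a string.
func latest(tags []string) string {
	best := tags[0]
	for _, t := range tags[1:] {
		a, b := parse(t), parse(best)
		for i := 0; i < 3; i++ {
			if a[i] != b[i] {
				if a[i] > b[i] {
					best = t
				}
				break
			}
		}
	}
	return best
}

func main() {
	fmt.Println(latest([]string{"0.9.2", "0.11.7", "0.11.10"})) // prints 0.11.10
}
```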
E
But we're kind of forced to do this, because a lot of OCI registries don't support the content discovery part of the OCI spec, so it's a little bit hard to find the latest version of a capability provider.
E
So that's kind of the point of this issue. We would love a little bit of help making it so that the specific OCI URL we publish the capability provider to can be attached to something like a shields.io badge that's right inline with the repository README. I don't know if that's... let me see if I can find one. I don't know if this is common knowledge or not, but a shields.io badge is something like this, where it's just a little badge.
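As a concrete sketch, a static shields.io badge is just a Markdown image whose URL encodes a label, a value, and a color; the provider name and version here are placeholders, not the actual badge the issue calls for:

```markdown
<!-- label-value-color; literal underscores in the label are doubled -->
![nats_messaging](https://img.shields.io/badge/nats__messaging-0.11.7-blue)
```

The dynamic variant the issue describes would swap the hard-coded value for one generated at release time.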
E
That's dynamically updated on the README. We'd love to have something like that added to the capability-providers repo, and all that information is available within our GitHub Actions, so it's just a matter of getting it out and putting it in the repository.
E
That would be awesome, so that people are aware of newer versions of capability providers, things like that. So if you're interested in taking on something like this, I'm going to post the issue link in the community Slack, as usual; just drop a comment on that, or drop a comment on the issue, and I'm happy to help out.
A
Thank you so much, Brooks. Any last words before we end for the week? Busy meeting today. Thank you so much, everybody, for coming to wasmCloud Wednesday. As usual, I'll stop recording; we can hang out for a minute or two for our next meetings. Cheers.