From YouTube: wasmCloud Community Meeting - 31 May 2023
Description
Welcome to the wasmCloud community! Tune in live where we discuss the latest developments in the wasmCloud ecosystem, WebAssembly standards, and break out sweet demos.
Agendas for wasmCloud community meetings can be found at: https://wasmcloud.com/community
A
All right, looks like we're live now in a few places. Making sure YouTube's up... okay, YouTube's up. Did it? Cool. All right, so hey, welcome to wasmCloud, our weekly community meeting. We have a shortened agenda today. This was a shorter week: Memorial Day in the States, but also bank holidays in some places in Europe, so some of us were on vacation.
B
All right, thanks Bailey. Hey y'all, my name is Victor, I work with the team over at Cosmonic, and I wanted to show some of the work I've been doing. Let me share — oh, Bailey, if you want to stop your screen sharing — I wanted to show some of the work I've been doing on the wash command restructure, so some of y'all might have seen this RFC come through. Oops, all right, awesome. Let me see if I can share my screen here.
B
Good, yeah. So this is the issue here — this is the RFC with some of the changes that we want to get into wash, the command-line tool for building your actors, signing your actors, and interacting with wasmCloud locally. What we wanted to do — you can see there's this huge sprawl of commands that have all been added over time, and they all do something, as commands often do, but we're trying to streamline this setup and make it a little bit easier for people to run workloads on wasmCloud and to introspect the cluster, because of course running something is just day one.
B
So thanks to Stephen — Stefan, Steven, I think — and everyone who's contributed; Matt's in here as well. What I've been doing is putting in some of the basic things that we thought would help and would streamline the experience. So you've got, you know, folding wash link in, and moving a bunch of subcommands out of ctl — and it depends on how you want to say it, whether that's "wash control" or "wash cuddle" or whatever else, wash ctl — we're looking towards removing that altogether and getting as much as we can in as top-level commands. I'd love to have some feedback if anyone has some, and more importantly, I'd love to know what the biggest pain points are: what commands do you run that often fail, or that you always have to look at help for?
B
At wasmCloud we really value the developer experience, and developers generally don't like opt-out analytics or any kind of calling home, so we can't really know what goes wrong the most when you run a wash command locally. We're really depending on the community to tell us, in the wasmCloud Slack or anywhere else.
B
Basically, what's the thing you run into that you wish the CLI was a little bit more helpful on? If you have to use --help, that's basically something we'd like to fix. It'd be great if you never had to use --help or anything like that.
B
In a perfect world, you wouldn't have to read the manuals — everything would just work. But yeah, these are the changes we've been going through, and I'd love to hear from anyone in the chat here — verbal questions are cool as well, if you're comfortable with that — about any pain points you've seen and would like to call out. I'd love to add them to this work while I'm in here.
A
Cool, thank you, Victor. It's been really fun seeing your changes come through. I think one way to summarize it is that we're consolidating the API surface, making sure the commands that at least we use every day are at the top level and ergonomic. And I think the other goal is to get this done quickly: this is one of those Band-Aids you want to rip off once, since it's painful changing your muscle memory for what you run in a CLI. So we want to get this right as quickly as possible and then try not to change it, as best we can. For me this is one of those things that's on my mental map for what's on the road to hitting 1.0, since we knew we needed to fix a lot of this stuff because of how things have grown over time.
B
One more thing to add about how the rollout is going to work: there will be a period of time where there's a deprecation notice on some of the older commands, and then we'll update the documentation as well and make sure all the general flows that you'd expect to work, work. Of course, as these changes go in they're all tested, so that's all good, but it will be slow and somewhat gradual.
B
So you don't need to worry: if you've written a bunch of scripts that use wash ctl, they'll still work for quite a bit of time and you should have a pretty smooth transition. If you don't have a smooth transition, let us know — complain loudly — and we'll get on it.
D
Yeah, sorry, I think you pretty much answered it. I was going to ask if we were going to keep the ctl subcommand around for a while and either print out a deprecation notice, or tell people what the new command is, or both.
A
Yeah, perfect, I really appreciate that. I think some of that is outlined in the RFC if I'm not mistaken, but if it's not, we should probably highlight that that's our strategy.
B
Yeah, just to summarize, we're definitely going to do both. We're going to try and tell everyone, so generally we'll have new people using the newer commands, and we'll also add the deprecation notices and then give lots of time so people can switch over at their own pace, so to speak.
A
Okay, cool. Thank you for the update, Victor. Taylor — before I even ask you, let me make sure you have the power to present. Okay, now you do. Taylor, are you game to give us an update on some of the stuff you've been working on?
E
This was just a little joke: we discovered that Google Meet has a cowboy mode now, so I decided to pull out my real cowboy hat and go full old-timey with a sepia filter. So anyway, you all get that enjoyment today, and since we're live streaming it'll be recorded for posterity forever and ever. So, just one quick update. First things first, before I get to wadm: we released a quick bug fix to wash 0.17 — you weren't able to install it using cargo install because of an interesting compilation error — so 0.17.4 is maybe out.
E
No, it failed, of course. Anyway, it'll be out soon — oh, I know what I did. It will be out soon; pretend it doesn't exist right now, and then that'll be out for everyone if you're having trouble installing it. So that's the first thing, just making sure you're all good to go, and now I'm going to give the update on wadm. I'm working on one last little feature this week and then I will be good to go.
E
Okay, so right here we have our wonderful thing running. I can show you all the hosts — we have a provider on four of them. I just had our simple thing running, but last time I tried to show a complex example, so I'm going to do that again this time. First, I'm going to undeploy the one I was currently running, just so you can see that's all working, and then I'm going to pbcopy it.
E
So, once again, if you didn't see this last week, we have a manifest spread across five different areas. This time I actually made sure my YAML is formatted properly, rather than missing a tab, because it's YAML. So I'm going to go ahead and send that to the server.
E
Now, if we come back here, it'll take a second, but we're going to see it pop up everywhere. These ones do not get spun down, which is something I'm debugging, but anyway. So now we can actually see that we have a provider running here. It's going to take a second because, on my machine — if you don't remember from last week — it takes forever to download capability providers for some reason. On the one machine where it was already cached it started, but you can see that all the actors started up, the link definitions are set, and it's actually spread across like two, two, one — we have one on each, which is exactly what we wanted it to do. So that is the full demo actually working. Like I said, there are just some small things, like some actors didn't get stopped like they should have, and a couple of little things I'm working through right now, but really they're just minor things. So anyway, that is all there.
E
All that's left is some stuff around the provider, these little bugs, and e2e tests, so I'm hoping we can actually get it done, because fixing this whole thing with the jitter and everything I was talking about last week was pretty gnarly. That's all done now and merged. That is the update for wadm — so, looking forward to this week, having it all out for you.
A
Yeah, so that'll basically be the first release of wadm that we want people to start building on and giving us feedback about. What's another way that you would describe wadm to folks? Because some people might be tuning in for the first time.
E
Yeah. So, wadm, if you're coming to this for the first time: the easiest comparison — they're not the same thing, I'm going to emphasize that very clearly, but to start off — is that it's similar to what a Kubernetes Deployment can do. You say, I want to run X number of replicas with Y requirements; that's the basis of what it is. But this is for a full application, and not for a single thing inside of it. So you define it — let's go back to that manifest.
E
We can look here and see that we're defining essentially a whole application. We're saying: hey, I have this echo actor I'd like to run, and I'm downloading it from here — that's the actor I'm talking about — and then I say, okay, I'm going to spread this everywhere, so I'm going to spread five of them, putting them in basically different regions, and I'm linking this to something called an HTTP server. So I define something called an HTTP server right here, and it is downloading this,
E
and it's also spreading across all the different servers, and then it links those things together for you. If you're familiar with the wasmCloud space at all, you've probably manually spun up something and connected it all, which is very good for prototyping, but once you have it working you don't want to do that every single time. So wadm lets you do this declaratively with a YAML manifest, just like you're expecting to. It gives a sense of familiarity.
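Editor's note: the manifest being walked through above isn't reproduced in the transcript, so here is a rough, illustrative sketch of a wadm application manifest of the shape being described — an echo actor spread across regions and linked to an HTTP server capability provider. The component names, image references, version numbers, and exact trait and field names (e.g. spreadscaler, linkdef) are placeholders and assumptions, and may not match the demo or the current wadm schema exactly.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: echo-demo                                   # placeholder application name
  annotations:
    version: v0.0.1
    description: "Echo actor spread across regions, linked to an HTTP server"
spec:
  components:
    - name: echo
      type: actor
      properties:
        image: wasmcloud.azurecr.io/echo:0.3.7      # placeholder actor reference
      traits:
        - type: spreadscaler                        # spread five replicas across regions
          properties:
            replicas: 5
            spread:
              - name: us-east
                requirements:
                  zone: us-east-1
              - name: us-west
                requirements:
                  zone: us-west-1
        - type: linkdef                             # link the actor to the HTTP server provider
          properties:
            target: httpserver
            values:
              address: "0.0.0.0:8080"
    - name: httpserver
      type: capability
      properties:
        image: wasmcloud.azurecr.io/httpserver:0.17.0   # placeholder provider reference
        contract: wasmcloud:httpserver
      traits:
        - type: spreadscaler                        # also run the provider on multiple hosts
          properties:
            replicas: 5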
E
That's why what you see right here looks Kubernetes-esque, but it's not exactly the same. The Open Application Model kind of came out of that, and it's just a great open standard for defining what an application looks like, so it was a perfect fit for what we were doing here. We have specific components that aren't of the types you'd expect in some other OAM implementations, but we use the model, and it's very useful for us.
A
All right, yeah — okay, we'll cut it out. Yeah, I'll stop. Okay, okay, we'll stop! So next up we're going to talk about another RFC that Kevin filed. Kevin, I'll share the image if I can — here it is. Are you able to see it? Yeah? Okay, cool. Hey Kevin!
D
Sure. The first thing I got out of this is that my diagrams don't work well in dark mode.
D
We've got to figure out how to do two-tone diagrams. So the super short version of this RFC is that there are a number of ways the link definitions stored in the key-value bucket can fail to match what other components in the lattice think is there.
D
You can have a mismatch between what's in the key-value bucket and the capability provider's memory; you can have a mismatch between the bucket and one host's memory; you can even have a mismatch where two different hosts have two different opinions of what a link definition is, and both of those are different from what's in the key-value bucket.
D
If you want to read all of the gory details of how those various failure cases are triggered, I've written down most of them — I don't think I got all of them, but these are a pretty good generalization. So there's basically a three-pronged attack for how we're going to try and fix this. The first is that, instead of pushing link definitions to a capability provider and hoping the provider deals with them properly, the provider becomes responsible for pulling the configuration.
D
The provider can pull it when it starts up, it can pull it when a link is established, and it can pull it right before an invocation happens. So you can have a scenario where a capability provider has 100 or so links in it and doesn't actually provision any of the client resources until the first invocation happens.
D
So we kind of get a performance boost out of inverting the configuration source, but we also gain a much more reliable way of having the provider get that information, because it's always going to get the most up-to-date information — it'll ask the host that spawned it, not the key-value bucket.
D
The second is a link ping, where capability providers will be required to respond to a ping on a particular link definition. We'll send the ping to the providers — what's the status of the link between this actor and you, on link name "default" or whatever — and that's a scatter-gather operation, so we'll be collecting the status and the ping responses from all of the capability providers.
D
Currently, the host listens for changes to the key-value bucket, and when the key-value bucket receives a delete, the host tells the capability provider: you should go delete this. But the same problem that pervades the whole system is that there's no synchronous acknowledgment of whether those operations actually succeeded, so we don't have a way of knowing whether a capability provider has actually gotten rid of the resources for a now-deleted link.
D
So the last one, the third prong here, is to be able to tell a capability provider: purge whatever you have for this link. The capability provider can then dump all of its client resources, but it can also potentially put that link in an error state if it sees fit, so the next time you ping the links for that provider you'll see that it failed the purge, and it might even be able to supply an error message in terms of why it failed to do that.
D
So, long story short, there's a whole bunch of things we need to do to make our link definition system more consistent than it is today. The risks here, obviously, are that we need to make changes to the capability providers in order to make this happen, but the proposition is that the effort will be worth it, and it'll be worth the breaking changes.
A
I haven't read through everybody's comments yet — is there anything worth calling out in some of these?
D
I think it was Connor who asked in here what the various states might be for a given ping status, so I wrote down some thoughts there. Obviously you'd have up, or green, or whatever it is if you're using a traffic-light scheme. Pending is a state that we would determine as a consumer, where we know the participants for a link aren't there or aren't running — so you might have a link definition that is there, but the actor for it isn't, and maybe the provider isn't either. Unknown is another interesting one, because this would tell us that we think a capability provider is supposed to have a link for a given actor, and it could reply with unknown, saying no, it doesn't have that.
D
Finally, the down or error state is what I was talking about before: if a capability provider fails to start the client resources for a given actor, or fails to shut them down on a purge, then it can return this type of status so that we can get decent error messages out of it.
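Editor's note: the RFC text itself isn't reproduced in the transcript. The following is a purely hypothetical, illustrative sketch — not the actual RFC schema — of what a scatter-gathered link ping report using the states described above (up, pending, unknown, down/error) might look like; all field names and identifiers here are invented placeholders.

# One entry per capability provider response, gathered from the link ping.
- actor_id: MAAAAAAA            # placeholder actor public key
  provider_id: VAAAAAAA         # placeholder provider public key
  link_name: default
  status: up                    # link established and client resources provisioned
- actor_id: MAAAAAAA
  provider_id: VBBBBBBB
  link_name: default
  status: pending               # link definition exists but a participant isn't running yet
- actor_id: MCCCCCCC
  provider_id: VBBBBBBB
  link_name: backend
  status: unknown               # provider reports it has no such link
- actor_id: MAAAAAAA
  provider_id: VDDDDDDD
  link_name: default
  status: error                 # provider failed to provision or purge client resources
  message: "failed to purge client resources for link"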
A
Well, that all makes really good sense to me. Does anybody on the call have any questions? Oh hey, here's a question from Jordan: is the plan to do this all in a ping request to the provider?
D
I'm not sure what you mean by "this all", but there are three things this RFC covers. One is that the providers query the configuration from the host rather than getting it pushed from the host. The second is the support for ping, which becomes the responsibility of capability providers. And the third is the support for purge, which is also the responsibility of the capability provider.
F
There are like six microphones on this computer, yeah. So, my question about the ping is: we already have a way of doing that that we don't utilize at all — I think I asked about it a while back — but we already query our capability providers every 30 seconds for a heartbeat, and it has a whole message section we're not even using. Why don't we just dump link status into that, since it goes primarily unused?
D
The difference is between one host knowing whether a capability provider itself, as a whole, is failing, versus the entire lattice being able to have visibility into the status of all the individual links. So the purpose of the heartbeat requests that we have now wouldn't go away.
D
They're specifically designed to ask the provider if it's running, and that request is done independent of any link definitions, so a capability provider could fail that heartbeat request even if there were no link definitions and it didn't have any client resources established for links.
D
This becomes something where, at any given point in time, I need a real-time query of what a provider thinks its link statuses are — and the heartbeat exchange between the host and the provider is private between the host and the provider.
A
All right, one little update. I haven't done it yet, but I intend to take our ADR repo and move it into the wasmCloud repo, so it'll be at the top level. Once we've gotten the right level of feedback on these RFCs that Kevin's been filing — RFC stands for request for comments, if you're not familiar, so we really want feedback from the community — we'll convert them to an architecture design record, and that'll be in the top-level repo.
A
So people can look at it and kind of understand why we made these decisions at the time and the things that we explored for the implementation of them. I think that's the main update there. Roman has started work on the Rust host RFC implementation, so if you haven't seen that one, definitely check it out and provide comments there, especially if there are certain things that you want to see. And I think that is basically it — that's the roundup.
A
Does anybody else here on the call have anything they want to share? Oh hey, we got a question: will the capability provider updates coincide with the component model timing, too?
A
I guess my thought is: probably. What we were talking about there with the most recent RFC is really more about how hosts respond to linkdef updates, so beyond capability providers needing to support the ping and purge events, I think that can happen independently. But John, what are you thinking? Are you hoping that maybe we land these at the same time, so there's only one big change, or are you okay with us taking little small steps at a time?
A
So basically, we're getting started on the component model. I would say that if you want to get started on any capability providers this month, then don't wait — go ahead, and we'll help with providing scripts and things to easily migrate from one to the other.
A
The component model work is still churning in wasmtime right now, which is the first WebAssembly runtime reference implementation of the component model specification.
A
This is me, Bailey, now wearing my Bytecode Alliance hat rather than my wasmCloud hat: I am super hopeful that the wasmtime release at the end of June, which usually lands around the 23rd or 24th of the month, will include that first component model experimentation where you don't exactly have to stand on your head to be able to use it. At that point we can start consuming it, because we embed wasmtime in our host and can then enable that component model flag. We do have a version of wasmtime that has it, and I've shown demos of it before, but like I said, it's hard to get it going right now until we have the right stuff working with the default providers and that kind of thing.
A
So if you want to get started this month, I guess, please don't wait. I would expect us to see lots of cool updates, basically in the month of July, around the component model.