From YouTube: wasmCloud Community Meeting - 05 Jul 2023
Description
Welcome to the wasmCloud community! Tune in live where we discuss the latest developments in the wasmCloud ecosystem, WebAssembly standards, and break out sweet demos.
Agendas for wasmCloud community meetings can be found at: https://wasmcloud.com/community
A: So today's meeting we have kind of a short agenda, but I think it'll be a really fun demo, a good discussion, and then a late addition to the agenda that we'll talk about when we get there, as long as we have some time at the end for a discussion on wash. To get started, we're going to have Lachlan give a demo on some of the work that he's done on the wash UI — the detached washboard. I want to set the stage really quick before I hand it over. This is all development stemming from this RFC, which I will put in the Zoom chat, and then add to our various streaming things here in just a moment, around decoupling the wasmCloud dashboard — or colloquially, the washboard — from the host. The main wasmCloud host that people will use — the standard today is the wasmCloud OTP host, but we have also been working diligently to transition.

A: There's tons of rationale, a couple of different choices around the technology that we may use, the requirements, and all of that, so I'll leave that as an exercise for the reader afterwards. But I just wanted to set the stage. We pitched this RFC, I guess, a little over — almost two months ago, and we're starting to get working on it. Lachlan, who has much more UI expertise than at least me, I think, has some great opinions and thoughts on how this should work, and he's been working on it a little bit. So I think I will go ahead and throw it over to him.
B: All right, good morning, good afternoon, whatever time zone it is that you're in. I'm Lachlan, working at Cosmonic on the UI side, so I felt a good place for me to be able to contribute is on the UI side in wasmCloud. I'm going to quickly share my screen and we'll see what's going on here. You'll see my screen there — cool, yep. So I'm running from a little bit of a fork — my own fork of wash here — and what I've done is added a UI command. We were just wrapping that up this morning with Brooks, so we'll see if it works the way it should.

B: I'm hoping that it comes through — I probably should have built that first. Okay, there we go. Ideally what should happen is it starts running locally, and — yeah, great — it does; it works as expected. So I've got a host running here in the background. That host may actually have decayed at the moment — I don't know — there it is. It was seen less than a minute ago.

B: If you're familiar with the existing washboard, it is built into the host, and it uses Elixir and the Phoenix server to sort of build that up and display it. This, by contrast, is fully static files: it's a React application, and it uses the NATS websocket client to connect directly to the NATS lattice.
B: So this host right here — this information is all coming through the lattice. I get a little overview at the top that gives you a quick rundown of the system. And what I'll do is put that one over there — okay, cool. So this is the comparison, I guess, between the two versions of the washboard. I'm going to start some stuff here — I'm going to start an actor, and I believe I have the...
B: ...and then you can see it show up pretty quickly in there. I'm going to start another host, so I'll just pop over here and do a wash up, and that should — there we go — it shows up here as well, and this information will update as it gets a heartbeat. Let's start an actor on that one.

B: We'll do five on there, and you can see it bumps up the total number of actors running. But what you can also do is break it down and see the breakdown of the actors running on each host.

B: What I'll probably end up doing is changing this to actually display the friendly name rather than the ID, but this works for now — you can still identify both of those using the ID there. Let's get a provider started.
B: It seems to be having a bit of a problem starting, but that's okay — we get the demo. And then there is a link definition that I've already put in place.

B: That link was already set up and running, but I can also configure that just as I would now. Ah, a port problem — okay, thanks. Oh yeah, it'd have to be on a different port, yeah.
B: Makes sense, yeah. So I can start a link — I've already got one there. You might notice this is only a view at the moment; I can't actually create anything. But because of the way this setup works — connecting directly to the NATS lattice — I should be able to issue commands just the same way the existing one does: deploy things from registries, set up link definitions, and that'll be the next thing that I build out in here.
B: For the look, I'm following on with the wasmCloud theme colors. I will probably change this a little bit further — Bailey kindly pointed out that there are actually a few more colors than just the green in our color palette, so I might actually make use of some of those. But the most important feature of any UI is that there is a dark mode and a light mode, so we will get both of those working, yeah.

B: Ideally, because it's detached, you'll be able to start this UI anywhere that you have access to a lattice, and then put in the details of the lattice — the URL and the credentials. As long as the NATS leaf is running with the websocket port enabled, then the UI should be able to connect to it. So that's what we've got so far.
A: This is awesome. You already got the feedback from YouTube that this is — I really appreciate the style. It's simple, but I think it shows the information in a little more digestible format than the washboard has currently. We were just kind of talking about some of the ways that things are indexed and how that can cause confusion, and I think this presents it really well.
C: Yeah, I just wanted to provide a little bit of historical context. The biggest reason why the washboard doesn't seem to fit the current way of interacting with the lattice is that the washboard was originally built to show one host at a time, and that's the layout it used from the beginning, so it needed to be refactored to deal with multiple hosts anyway. I think we might actually have had an issue in the GitHub repo for it.

C: But what Lachlan has here builds the dashboard from the ground up with the idea of interacting with the lattice rather than with just one host, which makes an awful lot more sense.
G: Hey now, I was just saying that I've actually been working on one that is extremely similar, and if Lachlan wants to share notes, I'm happy to.

B: Yeah, always great to see more ideas — share them through, cool.
A: So Lachlan, I have a couple of questions. The first one: you said that this is just static assets and you built it using a React app. Do you want to talk a little bit more about why you picked that? And if you want to share your code, feel free — whatever helps illustrate it. Yeah.
B: For sure, yeah. We did discuss a couple of different options — things like using Tauri, which is an alternative to Electron built on top of Rust. We also talked about which frameworks we could use: do we use something like Astro or Next or any of the other back-end frameworks?

B: Do we stick with React, or do we look at Angular or — what am I thinking of — Svelte, or any of the other frameworks that are growing in popularity? We sort of settled on React, not because those other ones are bad by any means, but mostly because we want to be able to provide a project that has maintainability, and the React community is by far the largest of those projects. Not that it'll always stay that way, but it is.
B: It is definitely currently the largest. So it's more than likely that if someone's going to contribute to something, they will have come across a React project in the past or they will in the future. It's a good skill to have and build on top of, and being that it is a large community, it was relatively easy to get things moving quickly with the UI.

B: It's not particularly complex, but things like tables — managing them, adding sorting and filtering and all that sort of stuff — there's definitely a lot that was enabled by the fact that it's a large community. We use Tailwind as well for the UI.
B: It's just so straightforward. And underneath that there's actually a UI framework that's built on top of a project called Radix, which provides what they call UI primitives that don't really have too much style attached to them.

B: But they add a lot of the accessibility requirements that don't get thought about until someone gets audited for accessibility, and then it's like, oh well, we've got to go and change all of these other things because we're not meeting the ADA requirements and the — what is it — WCAG requirements for web accessibility. So that particular package, Radix, has all of that thought about and covered out of the box, so it was easy to move quickly and get that up and running in there.
B: As for some code, I will have a PR up probably this afternoon, so before the recording goes up you should be able to see it — we'll get that added to the notes or something. It effectively lives inside of the wash repo: there's a packages folder, which has the React application with a bunch of stuff in it. It builds into a dist folder, and then we take that dist folder and bundle it into wash so that wash can serve those static files as soon as you type wash ui, which made the most sense, yeah.
A: With that wash ui command, were you planning on putting it behind the experimental flag, or are you proposing it as a let's-just-do-it, brand-new thing?
B: Yeah, I could go either way with that one. It's probably not a bad thing to throw behind the experimental flag, given that I'm the only one that's tested it at this point. But yeah, it's in there. Where does that live? It lives in the crates in here — there's a little new UI module, and it effectively just uses the warp crate to serve up those static files.

B: So those rust-embed files — it takes those static files, bundles them up, and then serves them back up through warp-embed, which is another crate that uses warp to serve static files from an embedded asset, yeah.
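As a rough illustration of the embed-and-serve idea described above, here is a minimal, stdlib-only sketch. The real wash ui command uses the warp and rust-embed/warp-embed crates; the `INDEX_HTML` constant here is a hypothetical stand-in for the bundled dist assets, and the hand-rolled HTTP handling is purely for demonstration.

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Stand-in for an embedded asset; rust-embed would generate this from dist/.
const INDEX_HTML: &str = "<html><body>washboard</body></html>";

// Look up an "embedded" asset by request path.
fn asset_for(path: &str) -> Option<(&'static str, &'static str)> {
    match path {
        "/" | "/index.html" => Some(("text/html", INDEX_HTML)),
        _ => None,
    }
}

// Build a minimal HTTP/1.1 response for a request line like "GET / HTTP/1.1".
fn respond(request_line: &str) -> String {
    let path = request_line.split_whitespace().nth(1).unwrap_or("/");
    match asset_for(path) {
        Some((mime, body)) => format!(
            "HTTP/1.1 200 OK\r\nContent-Type: {}\r\nContent-Length: {}\r\n\r\n{}",
            mime, body.len(), body
        ),
        None => "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n".to_string(),
    }
}

fn handle(mut stream: TcpStream) {
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf).unwrap_or(0);
    let req = String::from_utf8_lossy(&buf[..n]);
    let resp = respond(req.lines().next().unwrap_or(""));
    let _ = stream.write_all(resp.as_bytes());
}

fn main() {
    // Bind an ephemeral port; `wash ui` would pick a fixed default instead.
    let listener = TcpListener::bind("127.0.0.1:0").expect("bind");
    let addr = listener.local_addr().unwrap();
    thread::spawn(move || {
        for stream in listener.incoming().flatten() {
            handle(stream);
        }
    });

    // Exercise it once, like a browser hitting the washboard.
    let mut client = TcpStream::connect(addr).unwrap();
    client.write_all(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n").unwrap();
    let mut out = String::new();
    client.read_to_string(&mut out).unwrap();
    println!("{}", out.lines().next().unwrap()); // HTTP/1.1 200 OK
}
```

The point of the pattern is that the UI ships as compile-time data inside the binary, so serving it needs no files on disk — exactly why `wash ui` can work anywhere the CLI is installed.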
A: Nice, yeah. I haven't taken — or rather, I don't take — a hard stance that any new feature needs to come in under experimental. I think for this one it will be good, just so we can tease out some of the kinks, to throw the experimental flag on it. Then we can release it with wash 0.19, the next minor release, and then officially stabilize it or whatever. I would be really interested in having people have this sooner rather than later.

A: Yeah, I've got more, but I'll let Jordan go. You can finish — no, no, it's fine, look, I'm already below you; nope, I'm lowering my hand. You go, okay.
G: Why are we building this into the CLI? Why can't this just be a package we start — or, sorry, a wadm app we start or whatever? Because, to go back to my thing last week: wash is getting very big and it's doing a lot, and we have this super powerful wadm thing that we're now starting by default. So why don't we just use it, right? Can't it be one?
A: If you wanted to — like, if you were to go into the washboard assets folder and then run something like python3 -m http.server or whatever, that will launch the washboard.

A: If we wanted to take this and, you know, stick it in an actor and then do wash app deploy washboard, and that deploys an HTTP server and stuff, I think that would do the same thing. I think we've talked a little bit about limitations of front ends with wasmCloud — just little stuff like running multiple copies, because a web browser will request multiple things at the same time. I guess all I'm really saying is that this command is just making that a little simpler, but there's no reason why we couldn't do wash app deploy washboard too.
B: Right, totally. I definitely think there are ways that we could make this more simple — sorry to cut you off, Brooks. One thing that we had discussed is that, the same way wash will download NATS and run it, we could also do that same thing with this. That's what we were sort of talking about when we discussed Tauri: you would type something like wash ui and then it would go and download the UI.

B: We could do the same thing for this. Tauri is effectively just a browser wrapped around some static files anyway, so we could do that. We could do it, as you said, as an actor, but you could just throw it up there — it's static files. As long as the client itself has access to the NATS lattice, then this can be deployed anywhere. That's the thing, right: the client — the browser or whatever's displaying this — needs to have network access to the lattice.

B: At the moment it's running on localhost, and the websocket port is localhost:4001, which is what I've got it set up to. So as long as the client itself has network access to a lattice, then Bob's your uncle — you can access the lattice and do whatever it needs to. Wherever these static assets end up is totally flexible.
A: And Jordan, I do still hear your concern, which we touched on maybe two or three weeks ago, around the security of wash — all of the things that wash can do around manipulating a lattice, doing RPC calls, and, you know, something like this, setting up the washboard. It puts more — not necessarily privilege, because you still have to access the lattice — but more and more capabilities into wash. I think it's worth mentioning that wash ui on main is something that we're still proposing. There can definitely be alternatives for the best way to deploy this: it could come under wash up, or it could be not inside of wash at all. I think the important thing would be the accessibility of this UI. And I have a question to lead through that — Lachlan, I was wondering if you could describe the scope of the washboard: in your view, what is the wasmCloud dashboard for?
B: Yeah, for sure. The way that I viewed it as I was building it out was similar to what it does already: it is a snapshot of the current state of the lattice, and a way to potentially just view that and maybe throw a couple of extra things on the canvas.

B: Can you hear me all right? Yeah, okay — did you step on your headphones, Kevin? Yeah. So I've viewed it as a way to quickly see, through a UI, an overview of the lattice, which is why a couple of those things got moved up to the top: the overview panel, with a couple of different pieces about the components that are running within the wasmCloud lattice, and then also the hosts that are running — those hosts pop up at the top as well, yeah.
A: I think so, yeah. I think that was something that when we first launched the wasmCloud dashboard — in the OTP host, the way that it is today — there were some questions around: is this an observability tool? Is this a production monitoring tool, for seeing what things are running in a lattice?

A: I think there are still some questions and discussions that we should have around how to best serve the washboard — how it is easiest and best for users to deploy it — because I really like Jordan's idea of being able to stick this in an actor and deploy it using wadm onto a lattice: here's the washboard, and then, you know, I have an HTTP server and I'm running this locally so that I can hit it. I like the idea of having an easy way to launch it using familiar tooling — I think most of the time people have wash installed — but around using this to connect to a remote lattice URL, where you're interacting with a development or production environment, I think there are still some more thoughts there.
A: We are about halfway through the call, so I want to make sure that we have time for the other discussions, so I'm going to propose that we move on. But Lachlan, since you've been working on it, if you want to put up that PR in wash, maybe we can do a little back and forth there.

A: Sweet, thanks everybody — and thank you, Jordan, for bringing that up. I think that's a really great point that we should tease out. There's some more discussion in the Zoom around being able to connect over NATS and all of that stuff, but I think we can continue that in GitHub or in Slack.
A: So next on the agenda, we have a discussion queued up, which Vance elegantly boiled down to actor lifecycles. We had a really good thread earlier in the week — or maybe it was late last week; with the holiday I forget what day it is — and I thought it would be a great opportunity, or I think Kevin thought this and then I agreed, to talk about it in the community call. It's just kind of great to put these things out in front of people and talk it out in person. So I would like to turn it over to Vance, if you wouldn't mind talking a little bit about the thread that you put in the wasmCloud Slack, where you're talking about impedance matching: you were having a really good time developing with wasmCloud, and you hit a little bit of a roadblock around something that you're used to doing. I think that would be great.
H: Yeah, okay, sure. So implementing a REST service in wasmCloud turns out to be really easy and straightforward — Bob's your uncle. We got that prototyped and up and running really quickly. We use the HTTP server capability provider, then we use the key-value store as the backing persistence, and the actor is just running the procedures for managing a collection of items in a REST collection. Great, no problem: an HTTP GET comes in, call key-value store get, get the value back, return it to the HTTP server, and on you go — and vice versa, take a POST and put it into the key-value store.
H: However, when you really start to get into things, they become a bit more complex, right? My challenge is that one of the patterns that we have to support is that we're going to get a GET of the whole collection, and usually you'll have a query on that, right? So you're not asking for the whole collection — or rather, you are asking for the whole collection, then you're querying to filter across it and asking for all of the items where A equals B. Great. So you do that, but in either one of those cases — and maybe just look at the pathological case where the user just wants the entire collection.
H
Well,
if
the
collection
does
not
reasonably
fit
in
in
one
HTTP
response
and
and
by
the
way
there,
there
is
no
limit
to
the
size
of
an
HTTP
response,
but
there
are
practical
limits
like
what's.
H
The
default
Allowed
by
Apache,
for
instance,
would
be,
would
be
one
consideration,
so
there
is
some
practical
limit,
although
it's
not
part
of
the
HTTP
spec,
you
know
how
much
is
a
reasonable
thing
to
put
into,
or
into
one
response
and
and
it's
pretty
easy
to
see
that
you
can
have
applications
where,
where
you're
not
going
to
be
able
to
put
that
in
so
use
cases
when
everybody
likes
use
cases.
So
let's
look
at
a
use
case.
H: I've got an actor which manages a collection of DNA sequences. We put them in a key-value store or a blob or whatever, but they're indexed, right? So I've got Brooks's DNA sequence, I've got my DNA sequence, I've got Kevin's DNA sequence, etc. It's a collection of those sequences, and each one of those things — I just looked it up — turns out to be like three and a half gigabytes.

H: So now I've got a request for one of these things. I get it, it's three gigabytes, I return it. Okay — well, I could hand it back to the HTTP provider, and then the HTTP provider can chunk it out, right? But before we even get into that: is three gigabytes a reasonable amount for an actor to return to a provider?
H: So that would be the first question — where are the limits here? But I'm just going to say that it doesn't matter what that answer is, because then I'll just double it, right? So you really kind of need a pattern here where you support a continuation. And what's a continuation? Well, if you have a stateless function, you call the function and it gives you a result, and the function is not allowed to have any state. So if there's any state, the client needs to keep the state. Now the client wants the function — the actor — to continue processing from where it left off, so one of the arguments to the function is the continuation: here's the data, and here's a pointer to where you were in the data. It's basically the state, and so that model works, right? But where is that continuation held?
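The continuation pattern Vance describes can be sketched in a few lines. This is an illustrative, stdlib-only example, not wasmCloud API: the item strings and the plain `usize` offset token are assumptions (a real token might encode a query cursor or key), but the essential property holds — all state travels in the request, so any stateless actor instance can serve the next page.

```rust
// A page of results plus the continuation token the client echoes back.
#[derive(Debug, PartialEq)]
struct Page {
    items: Vec<String>,
    next: Option<usize>, // continuation: index to resume from; None when done
}

// Stateless handler: all "state" arrives as the continuation argument,
// so any instance can serve the next request.
fn get_collection(all: &[String], cont: Option<usize>, page_size: usize) -> Page {
    let start = cont.unwrap_or(0);
    let end = (start + page_size).min(all.len());
    Page {
        items: all[start..end].to_vec(),
        next: if end < all.len() { Some(end) } else { None },
    }
}

fn main() {
    let data: Vec<String> = (1..=5).map(|i| format!("item-{i}")).collect();

    // First request carries no continuation.
    let p1 = get_collection(&data, None, 2);
    println!("{:?} next={:?}", p1.items, p1.next); // ["item-1", "item-2"] next=Some(2)

    // The client echoes the token; the actor resumes without holding state.
    let p2 = get_collection(&data, p1.next, 2);
    println!("{:?} next={:?}", p2.items, p2.next); // ["item-3", "item-4"] next=Some(4)
}
```

The open question in the discussion is exactly where the work behind `all` happens: if the actor must re-fetch and re-filter the whole collection on every call to honor the token, the pattern is correct but impractically slow, which is the problem raised below.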
H: So the basic problem that I have — the real-life problem — is that I'm not doing DNA, but it doesn't matter what it is: I'm managing a REST collection. We support RFC 7232, which is the basic mechanism for content range.

H: So if you're not going to provide three gigabytes in the one HTTP response — if you're only going to provide, say, one gigabyte or one megabyte or whatever it is — then you indicate that in the header, and you say these are units, which are usually bytes. So you can say these are bytes one to a million of three gigabytes, and then the client would have to either be satisfied with that — I didn't want the whole thing anyway — or come back and say: here's the same request, but the content range should begin at byte 1001 and go from there, or whatever it is. So how do I support that mechanism? That's really the question.
A
Yeah
I
I
heard
a
few,
a
few
things
in
there
which
I
just
want
to
make
sure
to,
or
do
we
lose
Kevin
darn
a
few
things
in
there
that
I
just
want
to
make
sure
to
clarify
going
forward.
I
think
three
and
a
half
gigs
is
it's
plenty
big.
We
can
simplify
and
call
it
a
gigabyte
if
we
wanted
to
pick
something
that
was
sufficiently
large
enough
to
not
want
to
return
for
what
it's
worth,
I
think
in
our
HTTP
server.
A
We
have
tests
going
up
to
like
50
megabytes
or
something
like
in
in
the
body,
and
the
real
constraining.
Factor
right
now
is
the
notion
of
chunking.
So
if
you
wanted
to
return
a
if
you
wanted
to
return
something
that
was
a
gigabyte
in
wasmcloud
now,
the
biggest
limitation
is
the
amount
of
time
that
it
would
take
to
do
that
and
there's
there's
that's
a
whole
other
discussion
around
optimizing
that
but
effectively
you
have
some
database.
A
You
pull
the
data
out
of
the
database
and
give
it
to
the
actor,
and
then,
since
that
crosses
a
message
boundary.
We
essentially
chunk
that
up
automatically
into
slices
of
about
a
megabyte
each
and
then
we
do
the
same
thing.
Sir.
You
mean
on
the
lattice,
that's
correct
on
the
last
and
that's
done
for
in
the
wasm
cloud
host.
So
it's
not
something
that
you
have
to
do
and
then
hand
it
to
the
HTTP
server
and
then
the
HTTP
server
could
assuming
have
you
know.
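The transparent chunking described here can be sketched simply. This is an illustrative example, not the host's actual implementation: the 1 MiB `CHUNK_SIZE` matches the rough "about a megabyte" figure from the discussion, not an exact wasmCloud constant.

```rust
// Split a payload into fixed-size slices, as the wasmCloud host does
// transparently when a large message crosses the lattice boundary.
const CHUNK_SIZE: usize = 1024 * 1024; // ~1 MiB, per the discussion

fn chunk_payload(payload: &[u8]) -> Vec<&[u8]> {
    payload.chunks(CHUNK_SIZE).collect()
}

fn main() {
    // A 2.5 MiB stand-in for a large key-value store result.
    let payload = vec![0u8; 5 * CHUNK_SIZE / 2];
    let chunks = chunk_payload(&payload);
    println!("{} chunks", chunks.len()); // 3 chunks: 1 MiB, 1 MiB, 0.5 MiB

    // Reassembly preserves every byte.
    let total: usize = chunks.iter().map(|c| c.len()).sum();
    assert_eq!(total, payload.len());
}
```

The key point from the conversation is that this happens inside the host, so neither the actor nor the provider has to implement it — but it addresses message size, not the time cost of moving gigabytes through the lattice.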
A
So
I
think
that
that
is
internally
solving
for
the
problem
of
something
being
too
large.
But
along
your
point
around
meeting
the
RFC,
which
the
number
I
know
you
have
in
in
the
thread,
but
I
missed
is
that
you
know
essentially
you
as
a
client
being
able
to
say,
hey
actor.
A
I
would
like
to
get
this
piece
of
data
and
then
7232.
Thank
you,
I'd
like
to
get
this
piece
of
data
and
then
the
actor
you
want
the
actor
to
be
able
to
return
a
part
of
that,
and
then
the
client
can
ask
for
more.
Essentially.
H
Yeah
so
that
well
that's
the
starting
point
right.
So
the
in
actual
fact
I'm
not
doing
bite
ranges.
So
we
we,
our
units,
are
when
I
say
we,
the
the
that
would
either
be
Sig
scale
or
the
TM
Forum
right.
The
the
apis
that
we're
using
that
set
of
apis.
Have
this
pattern
and
the
the
pattern
is
that
collections
are
collections
of
Json
entities
and
so
a
collection
is
a
collection
of
items.
An
item
in
this
case
is
a
Json
object,
and
so
the
units
are
items.
H
So
they
are
these
Json
objects,
and
so
we
we
we're
when
we
talk
about
a
Content
range,
we're
saying:
okay,
the
the
like,
if
I,
do
a
get
on
the
on
the
raw
resource
path
for
the
collection,
the
the
the
response
would
be
content
header
item
one
to
one
thousand
of
three
thousand,
and
so
then,
when
we
do
a
query
right,
we
we're
generally
querying
to
get
items.
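The item-based content range described here can be made concrete with a small sketch. This is illustrative only: the header-string format and the 1-based page helper are assumptions modeled on the "items 1-1000/3000" example above, not the SigScale or TM Forum implementation.

```rust
// Build a Content-Range-style value for item-based units, e.g.
// "items 1-1000/3000"; a byte-based range would use "bytes" instead.
fn content_range(unit: &str, first: usize, last: usize, total: usize) -> String {
    format!("{unit} {first}-{last}/{total}")
}

// Slice one page of a JSON collection and report its range (1-based, inclusive).
fn page<'a>(items: &'a [&'a str], first: usize, last: usize) -> (Vec<&'a str>, String) {
    let last = last.min(items.len());
    let body = items[first - 1..last].to_vec();
    (body, content_range("items", first, last, items.len()))
}

fn main() {
    let items = vec![r#"{"id":1}"#, r#"{"id":2}"#, r#"{"id":3}"#];
    let (body, header) = page(&items, 1, 2);
    println!("Content-Range: {header}"); // Content-Range: items 1-2/3
    println!("{} items returned", body.len()); // 2 items returned
}
```

Computing the header is trivial; the hard part Vance raises next is that a stateless actor has nothing cheap to slice from — it must rebuild the filtered collection on every request unless the state lives somewhere.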
H: So, whatever the underlying mechanisms are: if the actor is stateless, it means that when the next request comes in, it would have to get the entire collection and start over again. Since the request indicates where you're starting from, it could replicate the work it did before, get to the point it was at, and then return the next range — but then on every call it would have to do the entire thing over again, so the time consumed...
H
Well
yeah,
it's
not
workable
right.
So
so
it
really
comes
down
to
this.
The
we've
we've
got
and
here's
the
chesterton's
fence,
the
the
actors
are
stateless
and
and
and
and
it's
it's
it's
it's
a
problem
for
me.
So
how
do
you
deal
with
that?
Now?
I've
been
thinking
about
it,
a
lot
and
and
the
the
one
solution
might
be
that
the
that
you
don't
use
the
normal
HTTP
server
provider.
You
use
a
provider
that
is
aware,
and
the
provider
is
going
to.
C
H
Involved
in
in
the
heavy
lifting
and
and
I
guess
today,
that's
probably
the
only
way
to
to
Really
approach
that
problem,
but
all
I
can
say
is
that
in
the
model
that
I'm
used
to
that
is
low
impedance,
it's
not
that
strict.
So
the
when
we
we
in
in
erlang
I,
normally
code
in
erlang
in
erlang,
the
normal
pattern
is
I.
Would.
C
H
Processes
are,
are
instantaneous
and
and
have
very
little
overhead.
You
just
start
a
an
ephemeral
process
for
everything
you
want
to
do,
and
so
that
looks
very
similar
to
the
actor
model
we're
using
here,
except
that
in
this
case,
that
process
would
hold
the
the
the
the
state
in
its
in
its
memory,
and
it
would
it
would.
It
would
handle
the
subsequent
requests,
and
so
that's
kind
of
the
pattern
I'm
trying
to
well.
It
is
the
pattern
I'm
trying
to
replicate,
and
it
that's
where
it
just
suddenly
got
hard
right.
A
Yeah
I
I
think
that
makes
a
lot
of
sense,
especially
knowing
around
the
question
was
that
I
was
going
to
ask,
is
in
a
normal
the
way
you're
used
to
doing
this,
where
the
memory
lives,
because
I
could
see
this
solving
this
in
a
few
places.
A
The
first
or
you
know
my
knee-jerk
thought
around.
This
was
the
same
as
yours
around
having
the
you
know.
Essentially,
the
provider
is
involved
in
this.
You
know
we
have
an
HTTP
server
that
understands
when
something
is
too
large
and
or
you
know
it
accepts
or
looks
out
for
when
it
needs
to
be
able
to
handle
requests
like
this
and
then
subsequent
requests
coming
in,
don't
hit
the
actor
in
the
same
way
or
at
least
hands
the
data,
the
the
relevant
data
to
the
actors.
A
So
you
don't
have
to
query
the
database
or
the
you
know
as
a
database,
but
I
just
mean
generically
a
data
store.
A
Now
one
of
the
sticky
things
is
that
the
pattern
that
you
use
in
erlang
around
having
the
process
hold
all
the
state
in
memory.
There's
not
a
strict
limitation
around
webassembly,
not
being
able
to
do
this.
Webassembly
modules
can
you
know,
set
up
places
in
its
own
or
you
know
it
can
request
memory
inside
of
its
own
little
linear
slice,
and
then
you
could
read
from
there
in
subsequent
requests.
A: Kevin, of course, wrote an RFC around supporting stateful actors — I might have to find it after the call; he had to drop, his power went out, unfortunately — and that might put us closer on a path to a solution: if we can do that same pattern, where an actor actually has state and you can control which actor is going to receive a request. Some of the details are slipping my mind at the moment.
H
What
I'm
saying
is
that
we
need
we
need
to.
We
need
to
be
stateful
and
and
persist.
So
that's
why
I
said
life
cycle
model,
although
since
I
suggested
that
I
I
thought
of
some
other
things,
one
of
which
was
was
the
the
component
model.
H
The
real
problem
here
is
the
lattice
right
it
as
long
as
all
this.
All
this
stuff
has
to
flow
over
the
lattice
and
it
becomes
impractical
to
just
it
like
in
in
straight
computer
science
theory.
If
we
have
a
stateful
function,
and
so
it
needs
to
stash
its
state
and
then
then
another
instance
can
run
it.
Can
it
as
long
as
it
can
access
the
state,
and
so
we
can
just
juggle
that
state
around.
But
if
juggling
that
state
around
means
putting
it
through
the
lattice,
then
my
my
DNA
doesn't
work.
H
So
you
could,
if,
if
we
had
a
provider
that
was
was
not
using
the
lattice.
If
it
was,
if
it
was
a
component,
then
you
know
maybe
that
works
and
then
the
whole
model
is
just
fine.
But
but
in
that
context.
A
Right
yeah
part
of
the
difficulty
comes
from
the
fact
that
well,
the
way
the
lattice
is
designed
is
that
you
can
send
a
request
and
have
it
be
answered,
no
matter
where
the
the
actual
Thing
Lives,
which
is
great
for
flexibility,
but
requires
a
lot
of
care
when
it
comes
to
sending
a
large
amount
of
data,
because
that
can
get
inefficient
very
quickly.
A
I.
If
you
want
to
respond
to
that,
that's
fine
I,
just
feel
like
I've
been
talking
a
lot
and
we
have
some
hands
up,
so
I
want
to
make
sure
I
hit
them.
E
Hey
so
I
guess:
I'll
lower
my
hand,
I'm
assuming
you
can.
You
can
hear
me
fine,
but
I
I
was
I,
wanted
to
to
jump
in
and
note
that
it
seems
like
there's.
There
are
multiple
things
being
discussed
at
the
same
time,
and
it's
kind
of
it
would
probably
be
helpful
to
to
separate
them
like
this.
Is
this
just?
This
is
a
caching
problem.
E
This
sounds
like
a
caching
problem
right,
and
so
there
are
multiple
places
you
can
solve
the
caching
problem,
there's
like
at
the
at
the
point
of
Ingress
right,
which
is
that's
we're
talking
about
the
provider
there
on
your.
You
can
solve
it
Downstream,
but
then
you
also
need
to
know.
Like
one
question
I
had
about
the
the
problem
was:
how
often
does
that
data
change?
It
sounds
like
like
new
line
delimited
Json,
but
how
often
does
it
actually
change
like
is,
is
Cash
eviction
like
a
a
problem
here.
H
E
Of all the, you know, events, essentially, that exist... I'm just calling them events, I don't know if they are events, but there are objects in this sort of, you know, stream of things, and you want to read some amount of them, right? Like, I don't know if it's a contiguous range or, like, chunks, yeah.
H
The requester provides intent, but it's only the actor who knows how to translate that intent.
H
What's in the backing... the actor takes what's in the backing store, applies the filter, where the filter was the intent provided by the client, and then returns a result to the client. So the actual implementation of how you map the stored content to the return...
E
Okay, so that sounds like you're getting, sort of, let's say, a filter or a query, right? So a query comes in from the outside, yeah.
H
E
The actor gets it and then reaches out to some backing store to pull the actual information that needs to be served up? Yes, okay, and...
E
A
I
Yeah, okay, yeah, I just want to make sure, because I heard a few things too. As far as size: like, returning three gigabytes, pretty soon you're going to hit the 32-bit address space limitation, and I think the heap of an actor might be limited to one gigabyte in wasmtime. That might not be exactly right, but for large requests you're going to have to do some kind of streaming or chunking between the actor and anything else in the lattice.
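A minimal sketch of the chunking idea mentioned here, assuming a hypothetical envelope with a sequence number and a last-chunk flag (none of this is actual wasmCloud API, just an illustration of the shape of the problem):

```rust
/// A hypothetical chunk envelope: sequence number plus a flag for the
/// final chunk, so the receiving side knows when reassembly is done.
struct Chunk {
    seq: u32,
    last: bool,
    data: Vec<u8>,
}

/// Split a payload into fixed-size chunks so no single lattice message
/// has to hold the whole body inside a 32-bit linear memory.
fn chunk_payload(payload: &[u8], chunk_size: usize) -> Vec<Chunk> {
    let total = payload.chunks(chunk_size).count();
    payload
        .chunks(chunk_size)
        .enumerate()
        .map(|(i, part)| Chunk {
            seq: i as u32,
            last: i + 1 == total,
            data: part.to_vec(),
        })
        .collect()
}

fn main() {
    // 10 bytes in 4-byte chunks yields chunks of 4, 4, and 2 bytes.
    let chunks = chunk_payload(&[0u8; 10], 4);
    println!("{} chunks, last flag = {}", chunks.len(), chunks.last().unwrap().last);
}
```

The receiving side would reassemble chunks in `seq` order and stop after the one marked `last`.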
I
So that's one issue. The filtering, for the "a equals b" case, is probably going to have to be implemented in a back end. So an actor could receive the HTTP headers for a range request, even a byte-range request, but the back end would have to be able to pull that out of the database. You'd have to use a database or some middleware that can handle the range request.
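For illustration, parsing the simple single-range form of an HTTP `Range` header (RFC 7233), which an actor could forward to such a back end, might look like this; only the `bytes=start-end` form is handled, and open-ended or multi-range forms are deliberately rejected:

```rust
/// Parse the simple single-range form of an HTTP Range header,
/// e.g. "bytes=0-499" (RFC 7233). Returns (start, end), inclusive.
/// Open-ended ("bytes=500-") and multi-range forms are not handled.
fn parse_byte_range(header: &str) -> Option<(u64, u64)> {
    let spec = header.strip_prefix("bytes=")?;
    let (start, end) = spec.split_once('-')?;
    let start: u64 = start.parse().ok()?;
    let end: u64 = end.parse().ok()?;
    if start <= end { Some((start, end)) } else { None }
}

fn main() {
    // The back end would turn this pair into a bounded database read.
    println!("{:?}", parse_byte_range("bytes=0-499"));
}
```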
I
As far as state, I haven't read Kevin's RFC, but I wonder if a KV could be... effectively, if there was some kind of component that could be like an in-memory cache, and it would have to be replicated, so maybe with NATS KV it could be replicated. I wonder if that could effectively be the in-memory state, not for managing a cache, but for managing sort of the state of a request.
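As a sketch of that idea, a per-request cursor held in a key-value map, standing in for a replicated bucket such as NATS KV (the request-ID scheme here is invented for illustration):

```rust
use std::collections::HashMap;

/// Stand-in for a replicated KV bucket (e.g. NATS KV): maps a request ID
/// to the cursor of the next item to serve. In a real deployment the
/// bucket would be shared, so any actor instance could pick the request up.
struct RequestState {
    cursors: HashMap<String, u64>,
}

impl RequestState {
    fn new() -> Self {
        RequestState { cursors: HashMap::new() }
    }

    /// Advance the cursor for a request by `n` items, returning the old
    /// position, i.e. where the next slice of the backing data starts.
    fn advance(&mut self, request_id: &str, n: u64) -> u64 {
        let cursor = self.cursors.entry(request_id.to_string()).or_insert(0);
        let start = *cursor;
        *cursor += n;
        start
    }
}

fn main() {
    let mut state = RequestState::new();
    // First page of 100 items starts at 0, the second at 100.
    let first = state.advance("req-abc", 100);
    let second = state.advance("req-abc", 100);
    println!("{first} {second}");
}
```

Because the cursor lives in the shared store rather than in any one actor, any instance could in principle serve the next page of the same request.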
I
H
H
A
Vance, so I want to... the one thing I wanted to say, around applying the provider, and apologies for jumping the queue here: I think that Bailey's ideas really match up with the way that we should be solving this scenario.
A
I think that pushing the logic into a provider is sometimes the right answer, but sooner rather than later these providers are going to be WASI modules; they're not always going to be something that just has, you know, different constraints than WebAssembly modules.
A
Bailey's ideas around native support for streaming data, and having stateful actors in general, I think would address this issue pretty well: if you could stream the data instead of holding it all in memory, and make sure that requests make it to a stateful actor that understands the state of sending this data back in multiple requests.
A
That would better address the solution in a way that's not just specifically matching your use case, Vance, around the way that you're used to doing it in Erlang. I think this is more of a generic case that would apply really well for, like, Blobby, for example, where we could be serving arbitrarily large files from a blob store. Or, I know that Taylor has an example: you know, he is a photographer, so those pictures get really large, and you know how big pictures can be.
A
They can be, you know, many, many megabytes in size, especially at, like, the wild resolutions. So we have to address this at the actor level.
A
D
Hi, yeah. I was thinking about this, and the problem almost applies at two different levels. The problem that Vance so eloquently described is like an application-level thing, where really the actor wants to open a stream, or the actor and the provider want to share a stream and keep it open, so they can, you know, do application-level stuff. But there's also, like, an HTTP-level, you know, network transport, layer 7 kind of problem as well, with things like HTTP keep-alive and server-sent events, etc.
D
So you could imagine that an actor wants to stay responding for as long as the HTTP connection is open, just emitting events or whatever back to the client. So that's more of an HTTP-level concern, but it's a similar sort of thing.
D
I guess the thing that, like, sets the alarm bells off in my head is that while that actor is busy with that request, it can't serve any other requests, and if you've got 10 actors, I mean, you know, and all 10 actors are doing that, then you can only serve 10 requests. Whereas in Erlang, you know, the BEAM engine...
D
D
Actors are cheap as well, but, you know, having a defined number of WebAssembly actors might be limiting, and maybe, you know, the sort of approach that Lunatic takes, which tries to emulate the Erlang BEAM engine with WebAssembly actors, where the number of actors is very flexible and can grow and shrink according to the requirements, might be better. But yeah, either way, whether it's HTTP keep-alive and server-sent events at the protocol level, or whether it's the application itself owning a stream and saying, I'm going to chunk you JSON objects or whatever.
H
C
H
To speak to what, in our implementation, in the Erlang model, we do in this particular use case with HTTP REST collections: RFC 7232 has an ETag, right? So the resource has a serial number, and if you're going to make subsequent requests, you have to have the same ETag.
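The ETag precondition described here (RFC 7232) can be sketched as a simple comparison; a mismatch would map to a 412 Precondition Failed and force the client to restart the paged read:

```rust
/// Compare a client's If-Match value against the resource's current ETag,
/// as in RFC 7232: if they differ, the collection changed underneath the
/// client, and the paged read should be restarted rather than continued.
fn precondition_ok(if_match: Option<&str>, current_etag: &str) -> bool {
    match if_match {
        None => true,                      // no precondition supplied
        Some("*") => true,                 // "*" matches any current version
        Some(tag) => tag == current_etag,  // must be the same serial number
    }
}

fn main() {
    let etag = "\"v42\"";
    // Same version: the next page request may proceed.
    println!("{}", precondition_ok(Some("\"v42\""), etag));
    // Stale version: would be answered with 412 Precondition Failed.
    println!("{}", precondition_ok(Some("\"v41\""), etag));
}
```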
I
H
They need to connect to that. So this idea of streaming seems to... sorry, I guess my thought there was simply that the lifecycle model might not need to change if the communication model does. The lifecycle model is stateless: you send it a request, it sends a response, and it is destroyed. Maybe that model doesn't need to change, as long as there can be more than one response.
C
D
A
I know that we're at the top of the hour here, so I just want to... I'm actually happy to stay on for another couple of minutes while we wrap this one up. I just wanted to thank everybody for coming; if you have to jump to the next one, you know, we'll see you next week. But I'm happy to hang around. Vance, are you available to stick around for another couple of minutes?
A
Great, I would just love to have you on as we keep talking. Yeah, I can sit here for another five minutes or so. Victor?
F
Yeah, so I guess I was responding, or I put my hand up, for the server-sent events and the questions around that. And I guess one of the things that pops up as we're talking through this is: what processing inherently has to be in a single actor? I think, coming from other actor systems, there's a lot more flexibility, whereas in wasmCloud...
F
You know, we take this approach of trying to do, oh, I'm forgetting the word, like, append-only, and the data flowing through the system isn't... I mean, it updates, but I think that, as actors are stateless right now, there are certain design decisions that have to be made earlier on when you're building things. So, like, maybe you give an actor... it's not happening in a single execution right now.
F
What would happen, which is why we're kind of saying, oh, maybe it's a provider instead for now, since we're lacking streams. But I guess what I'm trying to get at is that for server-sent events, like, to me...
F
That's also, like: oh, is that a queuing solution that needs to be there instead? You know, like, is that two actors, where one is a key and there's a queue in the back end instead, versus it used to just all be in the same code base? But maybe you get to scale those components separately by splitting it into actors and providers, yeah.
D
I think when it's time to act it's definitely a messaging problem, but when it's, like, a web page just sitting on an open connection, wanting to be notified when things change, you know, if it's just a single-page application, like a React app or something like that, that's quite a common pattern: just open a connection to an HTTP server and sit there, leaving it open while the events stream back to you.
D
Yeah, I mean, the Erlang solution is... well, the model there is different, because with their actors, you're allowed to have an actor for every single user that wants one, if you like, and that's not a problem. You know, even if you've got millions of users, you can have a process sitting there with state, server-side state, streaming responses back to you or whatever it wants to do.
F
Yeah, I think with our HTTP server right now... I think we're looking at adding WebSocket support and, like, streaming support there, but I don't think it has it today. So if you sit there with that connection open, you're still in that scenario where the client has to issue a request.
D
H
If we double down on... okay, if we double down on stateless actors, and that shall never change, then we're ruling out use cases; there are some things that just aren't going to work that way. So, you know, that really becomes the question: is that a necessary thing? Is that really what we need to do, or is there a stateful actor solution that doesn't compromise things?
E
Okay, so I just wanted to jump in and say again: I think we've just added two more separate discussions; we've had, like, five orthogonal discussions. So first, to address the ETag thing: that's a caching mechanism, right? So if you're using the caching mechanism to address it, that's one thing, right? That's, like, the first thing that came up, or not the first thing, but that's a thing: so, like, can you address a message to an actor specifically?
E
That's one thing. Then there's the question of the workload that we were talking about. It didn't actually sound like a streaming workload. At first it sounded like, oh, I want some sort of range-request kind of thing; it's clearly not that, I guess. But it didn't sound like streaming at first, because it sounded like the provider streams, or sorry, I shouldn't say provider: the back end that holds the data streams, and then sends stuff back, right?
E
That's not streaming, necessarily, but streaming is another use case that could happen. And on that, that is a problem back at the HTTP provider level, and we have the messaging interface, right? There's nothing that says the HTTP provider can't expose the messaging interface and then turn the messages it receives into SSE packets on the other side, right? And same with WebSockets. This is a matter of what the HTTP provider can do.
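The message-to-SSE idea can be sketched with just the `text/event-stream` framing from the WHATWG spec; the provider-side plumbing is omitted, so this only shows the wire format a hypothetical HTTP provider would emit per received message:

```rust
/// Encode one message as a text/event-stream frame: each payload line
/// becomes a "data:" line, and a blank line terminates the event.
fn to_sse_frame(event: Option<&str>, payload: &str) -> String {
    let mut frame = String::new();
    if let Some(name) = event {
        frame.push_str(&format!("event: {name}\n"));
    }
    for line in payload.lines() {
        frame.push_str(&format!("data: {line}\n"));
    }
    frame.push('\n'); // blank line ends the event
    frame
}

fn main() {
    // A subscription message forwarded to the browser as an SSE event.
    print!("{}", to_sse_frame(Some("update"), "{\"id\":1}"));
}
```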
E
So there's that; that's, like, the streaming thing. Then there's the actual use case Vance discussed, which is really more like a database, right? You get a filter or a query coming from the outside, and in a traditional database, what happens is that your query gets turned into an execution plan. And what does the execution plan actually talk to? It talks to the storage. So here the actor is the thing turning the query into an execution plan, and the execution plan should be passed to something that can hold the data reasonably. And as Steve brought up, which is a really good point:
E
The Wasm address space just isn't big enough to hold, you know, X gigs, over three, right?
E
E
That's one way that would make sense. And then, in addition, there's the point about how the actors will sort of fit together and how they get automatically spawned, right? So, the act of back pressure: if you know that the current number of actors are overloaded, or can't serve any more traffic, can we spin up more actors? That's a separate thing, or feature, and that's, like...
E
Can the host detect that actors can't serve messages and spin up more in response? All of these are great features; they're just in so many different directions, and they're all awesome, but they're all things I feel like we could work on, and we should definitely have. I just wanted to call that out, because there are so many good ideas all just whizzing by in this discussion.
A
We've talked about different use cases, and how we may solve this, and what we have now and what we don't, and I want to try and wrap up this discussion, because I know we're over time, and we kind of started it around talking about the actor lifecycle picture. I'm going to try to grab all those things that you said and put them in our post-community-call notes, so we can kind of tag them out and then maybe create RFCs or discussions, all that kind of stuff. Yeah, I'll...
E
I'll help you with that, exactly. I think I just have to watch this again, yeah.
A
H
A
A
I think that the quickest solution, in order to get something to just work, may be to build it into the provider, you know, kind of have it do the thing for you. That may be Erlang, or it could be something that we support in wasmCloud, and I think that would work. I think that would work for this use case.
A
I think, in general, though, the ideas around having stateful actors, actors that can operate on the same piece of data across different requests, and the idea of being able to stream data throughout a lattice, to remove, not necessarily the burden of going over a network, but so that data doesn't need to go through so many hops, would be the two things that can better address this problem generically, for many different use cases. Because there are certainly other use cases that revolve around a simple requirement: I want to serve a large file without things falling over, or I want to operate on 100 megabytes of picture data, things like that. So I just wanted to wrap it up and call out the things that I took away as what we should discuss.
A
I will try and hunt down Kevin's RFC around stateful actors, because I think that would be a really interesting one to keep the discussion going on, because I know that we've had other community members and developers ask for things like that, and then also talk about streams.
E
One more, one more that I forgot to mention: I put it in the chat, but the idea that Wasm, like, the actors, will only be able to do...
E
One thing at a time is also not going to be true forever, because WASI has threads, and that work is actually out for certain platforms: like C, if you're using the wasi-sdk from C, and in browsers it's usable. It's just not usable in Rust just yet, and I'm not sure exactly where the Go implementation is. But that's another direction that will change, or is evolving.
E
So then that changes the problem fundamentally, right? Because you can have a single actor and surpass what you can do with just an Erlang process. You can have a single actor...
A
Yeah, yeah, that's going to be really exciting to add, when that makes it into Rust and just generically into Wasm. All right, I'm going to propose that we call it here, just because, well, we could probably talk all day about it, but we've kind of shot past the end of this meeting.
A
I don't think that we've gotten all the way to having a solution yet; there's more discussion to be had, but I think we've got some good points to take away from here.
H
If I can just wrap it up in terms of my question: it really comes down to this, then. I think we've identified that wasmCloud is a friction-free way of implementing these REST patterns, in our case OData components, and so I'm part of this Catalyst project and we're doing a proof of concept on these things. From a business perspective, it gets down to this.
H
It works. It works, it's quick and easy, as long as you don't have to do pagination, and then it no longer works. And simply, that's the case: it no longer works. And it turns out pagination is pretty important, so it will be a deal breaker in terms of ever being able to go into production, and that's the reality of the situation.
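For reference, the kind of pagination being asked for can be kept compatible with stateless actors by using an opaque cursor that the client threads through each request; this sketch uses a plain offset as the cursor, which a real service would encode or sign (the token format here is invented for illustration):

```rust
/// One page of results plus an opaque cursor for the next request.
/// Here the "cursor" is just the next offset rendered as a string.
struct Page<'a> {
    items: &'a [&'a str],
    next_cursor: Option<String>,
}

/// Serve one page statelessly: every piece of state needed to resume
/// the read arrives with the request itself, so any actor instance can
/// answer it and be destroyed afterwards.
fn fetch_page<'a>(all: &'a [&'a str], cursor: Option<&str>, limit: usize) -> Page<'a> {
    let start: usize = cursor.and_then(|c| c.parse().ok()).unwrap_or(0);
    let end = (start + limit).min(all.len());
    Page {
        items: &all[start.min(all.len())..end],
        next_cursor: if end < all.len() { Some(end.to_string()) } else { None },
    }
}

fn main() {
    let data = ["a", "b", "c", "d", "e"];
    // The client makes two stateless requests, threading the cursor through.
    let p1 = fetch_page(&data, None, 2);
    let p2 = fetch_page(&data, p1.next_cursor.as_deref(), 2);
    println!("{:?} then {:?}", p1.items, p2.items);
}
```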
A
Yeah, I think definitely manual control of pagination is not there yet, but I think we've got some really exciting things to talk about and support. You know, we're doing a lot of new things with WebAssembly here, ideas around contracts and things; there's definitely room to improve, and that's why it's really important for us to tease this out, because things like pagination are hard requirements for people running in production. All right, everyone, thank you for coming to this call.