Description
wasmCloud is a platform for writing portable business logic that can run anywhere from the edge to the cloud, boasting a secure-by-default, boilerplate-free developer experience with a rapid feedback loop.
https://wasmcloud.com
A: All right, hello everyone, welcome to the wasmCloud community meeting for Wednesday, July 13th. I have a very exciting demo to start off with today, so let's go ahead and get that started. I'm going to share my whole desktop because it's got all kinds of things, so you'll have to let me know if you can see everything and if it looks normal; I know it's a wide monitor. So please stop me if anything is too small or you can't see anything, etc.
A: Sure, I think I should be able to do most of it in the browser, so I'll just do one window at a time. Let me see if I can organize it into roughly the same size, because I know that affects things.
A: Okay, yep. I will shamelessly show my geeking out over the web stuff. Great wallpaper; I'm surprised macOS didn't prevent me from setting this 150-megabyte wallpaper, but they were cool with it, so I will take it. So today I wanted to show off some of the work and how you can take a look at what we've done with OpenTelemetry tracing and wasmCloud.
A: Let me get this Zoom thing out of the way. I wanted to make it really easy for anyone to start out and look at what traces look like. I know that we've demoed them in the community call before, but I wanted to outfit one of our most popular examples with tracing, so that you can just run a one-liner and see what tracing looks like, and then I can talk about how you set this up for yourself.
A: If anybody has any questions as I go along, feel free to just stop me, but I'll pause at the end too. So, in order to take a look at tracing in wasmCloud, all you need to do is download or clone the examples repo, and in the pet clinic directory we've done a little updating of our run script to start some of the latest actors and providers, the providers that support tracing. Additionally, under docker, we have a Docker Compose file that's going to run the database, NATS, and then additionally Grafana and Grafana Tempo.
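For reference, the Compose setup described here would look roughly like the following sketch. The service names, image tags, and ports are assumptions for illustration, not the repo's actual file; check the examples repo for the real Compose file.

```yaml
# Hypothetical sketch of the Compose services described in the demo:
# the database, NATS, and the Grafana + Tempo tracing stack.
version: "3"
services:
  postgres:
    image: postgres:14            # pet clinic database
    environment:
      POSTGRES_PASSWORD: petclinic
  nats:
    image: nats:2.8               # lattice messaging
  tempo:
    image: grafana/tempo:latest   # trace storage and query
    ports:
      - "4318:4318"               # assumed OTLP/HTTP ingest port
  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"               # Grafana UI
```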
A: The last thing you'll notice down here are two environment variables for the wasmCloud host. These actually make it so that those traces get exported to the Tempo container.
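Concretely, the two variables follow the standard OpenTelemetry SDK naming; a minimal sketch, assuming the Tempo container from the demo's Compose setup is listening on its default OTLP/HTTP port:

```shell
# Standard OpenTelemetry exporter settings; the endpoint value is an
# assumption matching the demo's Tempo container. Adjust for your collector.
export OTEL_TRACES_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318

echo "exporting $OTEL_TRACES_EXPORTER traces to $OTEL_EXPORTER_OTLP_ENDPOINT"
```

With these set in the host's environment, spans from the host and providers are shipped to whatever OTLP-capable backend the endpoint points at.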
A: So really, those are the only things you'll need if you want to set this up on your own. Of course, it's worth mentioning that you don't have to do this in Docker; it's just a little bit easier for anyone coming to this for the first time to spin it all up. There's a Tempo and a Grafana container, so you don't have to install anything locally, and we were already using Postgres in a container because it's kind of a pain to set up across different operating systems. So all you need to know is, once you've cloned the examples repo...
A: We have this little checkup script that we'd recommend running ahead of time, just to make sure you have wash and some of the terminal utilities that we use. But all you do is run `run.sh all`, and this will launch your Docker containers for the database and the wasmCloud host, insert some good test data from some of the Cosmonic employees, or rather the wasmCloud maintainers, and then start up all of the pet clinic resources.
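Put together, the quick start narrated above amounts to a few commands. This is a sketch of the demo's steps; the repository URL and directory name are assumptions based on the narration, so check the wasmCloud examples repo for the exact paths.

```shell
# Clone the examples and start the instrumented pet clinic demo
# (directory name assumed from the narration)
git clone https://github.com/wasmCloud/examples.git
cd examples/petclinic

./checkup.sh   # optional: verify wash and the other terminal utilities
./run.sh all   # start the Docker containers, the wasmCloud host, and the app
```

Once the script reports that the pet clinic started, the app is on localhost:8080 and traces begin flowing to Tempo.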
A: This is great, actually. If you haven't run pet clinic before, a couple of updates to this script make it nice and quick and fun to get started. And after a few seconds here, as we start actors and start providers, the pet clinic example will be ready. So as the HTTP and SQL providers are starting up (let me increase the zoom a little bit more), the last line here says that pet clinic started and is available. Yay. So we go to our browser, I can refresh the wasmCloud dashboard, and you can see our entire pet clinic application, and on localhost:8080 is the actual pet clinic. So you see Brooks, Kevin...
A: ...and Connor and I; we have all of our dogs in here, a couple of different vets, and all that fun stuff. So this launches the pet clinic application for you, but the real thing I think you'll be more interested in is the tracing part. So if we go back to our terminal here, you'll notice under the docker folder there is this logs.txt; I'm basically taking all the logs and putting them in a text file to make them easy to find.
A: We can tail that and take a look at some of the logs from Tempo. You'll see a few things here, including a span ID and a trace ID. If you haven't used OpenTelemetry tracing before (I hadn't before we did it with wasmCloud), a trace ID identifies an associated set of function calls, basically a whole window into an operation, and then an individual span is a smaller unit of logic, like a single function call. Pretty much the way we approach this in wasmCloud is that a span is a single invocation.
A: So if I hit enter just to do a little delineation here, and I click on the Vets tab, this makes an HTTP request to the API actor, which then makes an actor-to-actor call to the Vets actor, which then issues an invocation to the SQL database provider.
A: And if you go to the Explore tab and select Tempo here, we can actually query that trace ID. If I can get Zoom to move out of the way, I run this query up here in the top right, and you can see the full trace through our entire wasmCloud system for that endpoint. This is really cool, because it starts at the HTTP request, goes from the HTTP server provider down to the clinic API actor, which then does a couple of logs and then actor-to-actor calls.
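Aside from the Explore tab, a trace can also be pulled straight from Tempo's HTTP query API once you have its ID. The port and path below follow Tempo's documented defaults, and the trace ID is a placeholder; both are assumptions about this particular setup rather than something shown in the demo.

```shell
# Fetch a single trace from Tempo by ID (placeholder ID; Tempo's query
# API listens on port 3200 by default)
curl -s "http://localhost:3200/api/traces/2f0f9a8e6c1d4b7a" | head
```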
A: You can see the pet clinic Vets actor on the Vets.ListVets operation, then the further traces in the Vets actor, and eventually all the way down to the query in the SQL database provider. So this is super valuable for seeing how all of this data, and how all of the function calls, flow through a wasmCloud system. And there are a few other things that make this really awesome that I'd like to show. So this is a successful trace...
A: ...a nice example of what a full call through the wasmCloud system looks like. I'll go to Owners just to keep this easy, but one thing this can really help with as a developer is debugging. So say you're going through your CI, you're deploying this application, and you've left the Vets actor commented out, so you don't start the Vets actor at all.
A: We can come and grab the trace from that. You can see on your own in the logs that there was an error, but if you grab the trace ID associated with that error and come back to your tracing backend and take a look, you can see that there was a problem with the trace all the way down. If you check out something like the host call (maybe it's the outbound RPC; I think it's the host call)...
A: ...you can see that Vets.ListVets actually did have an error, and specifically that the message timed out, so then you can go on to debugging and finding out why that happened. Or, for example, say you have the Vets actor actually running but you didn't put in a link definition for it; you started everything correctly, you just didn't link the Vets actor to the Postgres provider.
A: Then, if we go to the Vets endpoint, we're going to see something similar in the application, but if you take a look at the trace from, let's see, can you see it here? We can trace that issue and see that it makes it a little bit further, but as soon as we got to the wasm guest call, the Vets.ListVets operation failed because there was a missing (let me see if I can zoom in a little bit) a missing link definition for wasmcloud:sqldb.
A: So what this really helps with, from a developer perspective, even when you're writing an actor for the first time, is that it can be really nice to see exactly how far these calls make it in your system.
B: Did you get that, Andrew? I know you and I have had some offline conversations on this and have been excited about it coming along. One of the areas, what Brooks is trying to show in this demonstration, is how we can zoom all the way into, say, the difference between an invocation request and waiting for something like a machine learning model to run.
C: I think so. Can I ask a question? Maybe I didn't completely understand: the spans and the traces will only work for tools that have been instrumented with these spans and traces, right?
A: That's right, and I have a little more on that, but I'll let you continue; it sounded like you had a two-parter.
A: Unless the TensorFlow backend is actually instrumented and does its own exporting, you'll see the trace stop where you call into the TensorFlow backend and then resume afterwards. So yeah, you'll see the block of time; you just won't see what's taking a specific amount of time inside TensorFlow.
C: Okay, well, I think this is extremely useful, then, for the entire system flow. And then, if you wanted to profile inside one of these steps, you'd probably have to bust out a different kind of profiler. Is my understanding correct there?
B: What you will see, if we take the machine learning example that we've been building around, the BMW example from Christoph, the one we've been presenting: there's kind of a three-step process. There's the preparation actor, so you'll get how much work it is to prepare the images for the machine learning model, all the insight there. Then you'll get the invocation to the machine learning model, the request itself, and then you'll get how much work there is in parsing the request and interpreting the output of the model, and things like that.
B: Well, I think for the question that we were discussing (and I pulled some extra context here into this discussion, Brooks, I apologize; I'm kind of down a deep rabbit hole and I'll get out in just a moment), what it'll let us do is assume that those models are a black box, Andrew, and ask: are we running them on the edge, or are we running them in the cloud? And then use all the other context around that to say, here are maybe compelling reasons for where we direct those.
C: Yeah, will you be able to tell from these traces where stuff is running? I mean, I guess if you look inside the... yeah.
B: Yeah, we would be able to tell that. Cool.
A: Just to address that most recent question (Kevin, I saw you had your hand up): for example, you can see this Postgres provider. You can see what may be a cluster ID, but you can see where these things are running, what their ID is, and actually even what line of code the trace is happening at. It's some pretty neat visibility.
D: Yeah, I just wanted to mention: I don't know where the TensorFlow stuff is being run; I'm not sure I have that context. But let's say the TensorFlow stuff is being run inside a wasmCloud capability provider. All wasmCloud capability providers have the ability to automatically emit trace data to the same place the wasmCloud host is emitting to, so a machine learning provider could emit a whole bunch of profiling information before and after handing off to TensorFlow, so that you can shrink the amount of black box you have in that span.
A: There are two more things I wanted to mention, talking about instrumenting things and what you, as a wasmCloud developer, actually need to do. If you look at this instrumentation, the things that are instrumented here are the HTTP server provider, the spans coming from the wasmCloud host (the kind of wrappers around an actor that allow it to get a NATS message and things like that), and the SQL provider. You'll notice these are all things that you, as a wasmCloud developer, don't need to touch, and that leads exactly into my next point: in order to take advantage of this, all you need is version 0.55 or later of the wasmCloud host, the latest versions of the capability providers that are instrumented for OpenTelemetry, and a tracing backend like Grafana Tempo to export to. There's nothing you actually need to do to hook your actors up to this tracing, and it's our ultimate goal for actors not to have to care about it at all. Furthering our mission of no boilerplate, you just get this tracing included for free, if you'd like it. And if you have your own custom capability providers, I'd recommend taking a look at our documentation site. It's likely going to merge right after this community meeting: there is a page on tracing under App Development and then Developer Workflows.

It's called OpenTelemetry Tracing; you should be able to search for it as well. It walks through the example I went through today, so if you'd like to replicate this and remember what you need to do, it has the instructions for running the pet clinic sample and getting the same kind of trace you were looking at, and then, if you're setting up tracing for your own project, how to do that, and how to instrument your functions in a capability provider. As always, our examples and example capability providers would be a good reference for that. So really, the TL;DR is that if you want to see this happen and you have your own wasmCloud project, just update the OCI references for your providers, update the wasmCloud host, and then run a tracing backend, and you'll be able to check out some traces in your app.
A: Now, I did want to mention one more thing. Andrew, I think you brought up a really great point, and I wanted to show it to you in this trace. You said: when you're looking at a function and there's a little bit of a gap, and you say, okay, this is maybe a potential flow thing or maybe a NATS thing, how do you actually get visibility into that? This has already helped us immensely as developers of wasmCloud. I wanted to point out that async-nats 0.17 was just released (happy to show up in the README here), but we noticed, when we took a look at our traces in Tempo, and I apologize for not actually having one for you, we were seeing that connections using the async-nats crate could take over two seconds just to create a NATS connection, and NATS connections usually take a couple hundred milliseconds to create.
A: So that was a little strange, but using the traces we could narrow down exactly where in the NATS library the bottleneck actually was. I mean, I forked the async-nats library and added some print lines, but we could see exactly why something that should only be taking a couple hundred milliseconds was taking three or four seconds in some of our applications. So this is huge for that. And you can see, even in this example we're looking at here, this entire request took around 90 milliseconds.
A: This is all running locally, and if you look at what actually takes up time here, you see most of it completes by, what is that, around 30? Sorry, I can't do that. Most of it completes by this mark here, which I guess is somewhere around 15 to 20 milliseconds, and then you see this long waiting period before the wasmCloud outbound RPC ends up picking it up and sending the request back. Most of it finishes quickly, and so what this gives us is a hint: hey, maybe we should go take a look at any function calls that happen after the query that aren't instrumented, or drop into the NATS library and look for specific bottlenecks there, for why this has a waiting period. So this is huge, even if you don't own the library, for finding possible performance things to improve. And please take this in no way as a criticism of the async-nats library; the folks there have been awesome with putting it together and super responsive.
A: They just released that fix yesterday in async-nats 0.17. I reported it yesterday, so they fixed it and got it out, and we're looking forward to working with them a lot more.
B: Let's see: the logs were an X-ray machine, and this distributed tracing instrumentation that's now included is like having an MRI. We've just got a whole other understanding of the whole call stack, top to bottom, and it's been so powerful. We have a bunch of apps at Cosmonic that we have built on top of wasmCloud, and it was just so powerful for us when we turned it on.
A: Yeah, so I hope you all go and try this out, take a pretty screenshot of a nice big trace, and give it a try for whatever wasmCloud apps you actually have running. And as usual, please reach out to us on Slack if you have any trouble getting this set up. We try to make it as easy as possible, but there are always some friction points with setting up a new service, like if you've never set up Tempo before, for example.
A: Yeah, a requirement: it needs to be too big for an ultrawide monitor to actually render; it needs to break the Tempo CSS.
D: Yeah, it either needs to break the Tempo CSS or it needs to exceed the resolution of a 35-inch monitor.
A: So as far as I know, we don't have any Grafana license or anything to set up; you just run Grafana. I include some YAML manifests in here, and also the Tempo container. That was one of the reasons why we picked it: it was so simple to set up.
D: Everything we're using in those Docker images is the free, open source stuff. It doesn't use any of the paid services.
A: Okay. I figure it's also worth mentioning, and I mention this a little bit in the documentation, that you don't have to use this with Grafana Tempo. This is compatible with any tracing backend that supports OpenTelemetry. For example, at Cosmonic we're taking a look at things like Honeycomb and Datadog, and I know there are other open source products, like Jaeger and Zipkin, that do the same kind of collecting and visualization.
B: You know, Brooks, I think it's super helpful if we can be explicit. Does it make sense for us to create a quick FAQ page for, like, here's how to set this up with Honeycomb, here's how to set this up with Zipkin, here's how to set this up with Jaeger? Or do you think that's all, you know, more...
D: ...more their docs? I think it's definitely more their docs. The big benefit we get is that if the collector supports OTLP, then we can emit to it, so we can probably just add a little note that says: if you want to find out how to do this for your favorite collector, just look up how to configure OTLP for it.
A: Yeah, all you need to do from the wasmCloud side to get this to export is set these two environment variables: the OTEL traces exporter is set to otlp, and for the endpoint I have it going to the Tempo container here, but you could send it to a console address or what have you. But that's a good call; I can add this to the OpenTelemetry documentation that we're adding and just say, hey, these are the two things you need to set up for the host.
E: Yeah, sorry, audio challenges today. This is something that comes up a lot at work, and not just work, for fun too. Let's say you have multiple InfluxDB instances: locally, maybe in the cloud, and other places. It'd be nice if you almost had a plug-in with data source failover, so that you can route your queries locally first, I guess, but then, if you can't get the data you need locally, from an actor maybe, go to the cloud, to skip hops, kind of thing. Right now that's kind of a pain to manage with most of the tools out there that I've found.
D: So there are a couple of different levels where that problem can be addressed. The easiest is with NATS leaf nodes: if you're using a NATS leaf node and the capability providers are reasonably aware of that behavior, then a wasmCloud lattice will automatically let you query local and then, if there's no local listener, query remote.

On the tracing side, those backends tend to run behind real-time traffic, and in fact, in really large production systems, they're often not even recording all traffic; they configure a sampling rate and only sample a subset of the traffic. So if you were to make load-balancing decisions based off of Grafana data, you might end up making decisions based on how heavy the load was five minutes ago.
A: Yeah, I personally have heard a lot about Grafana, but this is actually my first time using it for this kind of tracing backend. I know there's a ton more to the product than just Tempo, so I'm glad you had some insights there, Kevin.
A: All right, well, thank you all for tuning in to the demo. We're really proud of some of the tracing stuff, and I think we're going to nerd out over the biggest trace; I like that challenge. It's already really helpful for us developing both wasmCloud itself and applications with wasmCloud. So it's been a lot of fun, and I will take little to no credit for actually setting up a lot of the tracing.
A: All right, well, that was what I had for the demo, and I guess we've got about 20 minutes left, so unless anyone has some kind of short, five-minute demo, we can go ahead and move on to any community updates, things happening in the Wasm community.
A: Oh, Jordan, do you have a demo or a question?
F: Just something to share, if I can figure out how... nope, I can't share. Help me, Brooks.
F: So, as everyone knows, YouTube was down for 10 months, because it definitely wasn't me, but the last two months of videos have now been uploaded, and the last one's processing right now. So in about an hour, if you've missed anything dating all the way back to April, you'll be able to catch up on it. So that's one.
F: Second one: I don't know if any of you are familiar with Simple Icons. More or less, it's a pretty cool little package and CDN plug-in you can use for importing SVGs and whatnot, and, kapow, in the next release we are actually going to be part of it.
F: Brooks did a really good job of helping me explain to them, like, hey, we don't meet your metrics yet, but we will, and they said okay. So I think probably in the next day or two you'll be able to just pull in one of these jsDelivr packages, throw wasmCloud right there, and use our SVG as you see fit. And I think they have Angular, Vue, and all these libraries. So that's really all I wanted to share; small update.
F: And the other thing, Brooks, if you want to share it; give me one second to get back to the right channel. If y'all were looking at Brooks's terminal: I dropped it in the random channel the other day (where do I screen share in Slack? yeah). So if you use Starship and you throw that in your configuration, you'll get the fun little wash prompt.
A: Yeah, I would love to show that off. Let me change something real quick; it's actually a fun demo.
A: I'll be real quick, I promise, but you may have seen that I have a wash prompt right here at the front, because I also use Starship and immediately put this in as soon as it was shared. If I take a look at the contexts that I have: I have one that I was using to debug an NGS connection, one to help Liam do demos, one for host config, which just automatically uses the local host, and then GCP free.
A: Hopefully my thing actually... yep, that worked. This is actually reaching out to a host that I have running on GCP. It's the free tier; I love running little demos on there. I have my dice thing running there right now; I don't know if I've actually shown that. It's fun: if you curl this endpoint (I've used one of the first crates Kevin wrote with somebody else, for a dice roller), it's a little actor that rolls a D20 for you. It was really fun. Okay, I went off on a little side tangent here, but...
B: Just a few notes, lots of good things. You can now get registered for KubeCon, if you were interested; early registration is open. The premium registration, by the time you get to the last minute, is 1700 bucks. I think we're still two discounts deep, so it's close to a thousand; the really early bird registration is like six or seven hundred, but I think that already passed. And, of course, the CFP is open for Cloud Native Wasm Day, so please feel free to sharpen your pencils.
B: I really encourage you to think about submitting whatever you're working on to Cloud Native Wasm Day. I don't know if I'll be chair again going forward, but I was co-chair for the last year, and we really work hard to recruit a diverse set of stakeholders and attendees and participants.
B: Last year we also had wasmCloud talks at multiple other cloud native events, so I could see a distributed tracing talk, maybe around Open Observability Day, or something at Cloud Native Telco Day, or any of the other events. Of course, the wasmCloud team would be delighted to support you in your activities; if you're going out and want help preparing a talk or reaching out to the community, please don't hesitate to let us know and ask.
A: I had one more fun thing that I would love to share from the general Wasm community. Justin, you actually put this in our Slack yesterday, and I thought it was really fun: wasm32-wasi support for Tokio. Tokio is used so ubiquitously across the Rust ecosystem that it's pretty exciting to see them adding this. It is worth mentioning as we...
A: It is worth mentioning, as you look at this PR, that this is not full Tokio support for wasm32-wasi. If you take a look at the meta issue, which I actually found at the bottom of this issue here, there are a couple more tasks to stabilize this. Some of these are little things that they need to do, but very notably, WASI lacks the API for connecting a TCP stream or binding a TCP listener.
A: That's probably the biggest blocker to this getting fully supported, but it's really interesting to see, and I'd love to keep track of it as they proceed, because these are, of course, the kinds of things that we look at with wasmCloud for better ways to support what the general community is doing. I don't know if anybody had any comments or questions on this specifically, but I thought it'd be fun to bring it up as we come across things like this.
A: I think that is part of it. If there were full Tokio support for Wasm, then the spawning of tasks (which, as long as Wasm is single-threaded, may just be kind of a facade), things like accessing time through WASI or through the Tokio time library, and then eventually opening sockets and things like that, is what would be needed.
D: Yeah, so if you're looking at writing an actor in today's Rust SDK, you'll notice that there are async functions in there, and we're not using WASI for that. So what ends up happening is those end up being polled by a single-threaded executor, and hopefully in the future it'll be a smooth transition, so that we can just compile our multi-threaded code and have it magically work in WASI.
B: I already talked about it, remember, the whole joke. Yeah, I didn't have sharing permission. Oh, I did have something else; thank you, Brooks. You two stick together; kick me in the shins, Liam. Actually, Justin, you mentioned this earlier, but just looking around, there are a number of companies that are hiring folks to work on and around the ecosystem, including where I work, at Cosmonic.
B: So if anyone is interested, there are a number of reqs out now, and there are more reqs on the way that we're posting on our website. The wasmCloud community has just grown leaps and bounds over the last year especially; our core contributors went from a dozen or so to over 120, at least the last time I looked, and it's still just continuing to grow.
B: So I encourage anybody that's interested to look around, and I would be humbled if anybody would want to come and work with us on wasmCloud and the ecosystem.
G: I will pull a link and drop it in chat real quick, but if you have any thoughts or concerns or whatever, we're leaving it open the rest of this week, I believe, was my plan, and then next week we'll be merging it in. So please give any feedback there if you have any, and that'll be it.
A: All right, yeah, thank you, Liam. Let's see, now I think it's time for open floor: anything anybody wants to talk about, any questions from today or about anything else?
C: So, Brooks, I've been following along with the pet clinic example. I think I have it almost working; once we're done recording, can we stay on for a couple of minutes and make sure I actually have it working?
B: Andrew, would you mind sharing your screen and just doing a live debug session? I have no problem with hanging on at the end of a call. I think that for folks following along at home (we usually have a couple, you know, 100 to 200 people that watch the videos from these meetings), maybe somebody's been following along, and they may have the same questions you do.
C: All right, so, here's what's going on: I grabbed the wasmCloud examples, went into the pet clinic directory, and checked out the branch that you had, Brooks. I was not able to run checkup.sh initially, so I installed Postgres and then I installed wash. Just a note: I went with the cargo install wash-cli option; I tried one of the others, and it was installing too many things, so I just went with cargo install, and I hope that's okay. Then I ran run.sh all, and that didn't actually completely work.
C: Over here, is it port 4000? Yep, yeah, that's it, the dashboard.
C: Yeah, we may or may not be able to get to that. For some reason certain ports are specially protected, so, whatever, but I was able to see the dashboard when I physically logged into that machine, and after the second run.sh run I was able to see actors and links. With your advice, Brooks, I was able to go in here and just sort of pick a random trace ID from somewhere; this one.
A: So here's what I would recommend if you wanted to see a better trace: were you able to hit the pet clinic at localhost:8080?
A: So that is, I mean, it's kind of cool when you do this, but this is going to be a cooler trace, actually, if you want to paste this one in. This one is actually going to the pet clinic API, and then it should reach out to the pet clinic UI actor. So you can see the actor-to-actor call to the UI actor, which is the one that Taylor wrote, which will fetch the JavaScript and everything for the pet clinic UI.
G: I love how, when everyone gives that example, they still talk about it like, oh, Taylor just put in the UI stuff, and it's like: I literally just took this big garbage can of UI stuff and dumped it into the Wasm module, and everyone's like, yay. So I just get a kick every time someone talks about it.
A: So yeah, Andrew, if you wanted to see the SQL side, for example, you could curl localhost:8080/vets, and that'll invoke the Vets actor, and you'll get the same thing that we were looking at.
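That curl call would look like the following; the /vets path is inferred from the narration here, so treat it as an assumption about this example's routes.

```shell
# Invoke the Vets actor through the HTTP server provider
# (the /vets path is inferred from the demo narration)
curl -s http://localhost:8080/vets
```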
A: Yeah, I mean, to be perfectly honest, I don't know how we survived up to this point without tracing. But you know, this is the kind of stuff that, now that we have it, gives us so much more information for doing things like performance tuning for specific cases.
A: Well, yeah, cool. So, as you saw, Andrew, there's nothing extra that you really need to do, and I look forward to seeing some of the ML inference or the WASI-NN type of stuff plugged in with this. That would be cool.
C: So I don't know if Steve's still on... yeah, it looks like you're still on, Steve. I can't currently run everything that Christoph has built in a way that's easy to run, and I think you were working on something similar to what Brooks did here, which would make it easier, or at least document what's necessary to make that example run.
H: Yeah, that's still in the queue. We'll try to turn that around pretty soon for you, because that's important for seeing where the latency is coming in.
A: All right, everyone, well, we're a little bit over time. Thank you all for coming today. I am happy to hang out for a little bit, but I'm going to go ahead and end the call, and we'll see you next Wednesday.