From YouTube: SIG Cluster Lifecycle - Cluster API 22-05-04
A: Hello everyone, today is May 4th, 2022, Star Wars Day, and we're here for Cluster API office hours. A reminder: Cluster API is a project of SIG Cluster Lifecycle. We have a meeting etiquette, which just comes down to raising your hand when you want to speak up, and being kind to each other.
A: All right, so let's keep going with the agenda. Let's go through the open proposal readout. Are there any updates throughout this document that we should talk about?
D: I just want to call attention once again to MachinePool Machines. We had a great meeting on Friday; a lot of you were there, and thank you very much. I think the two main changes are: first, we decided not to be prescriptive about what resource a provider would have to implement to back a MachinePool Machine. Earlier we were saying you'd have to invent a new resource and it would have this specific name. We did away with that, so you can use an existing InfraMachine or create a new resource if you need to. The other change is for the cases of managed machine pools that can't support individual deletion. We were going to just leave them as is, and that would create kind of a split user experience, but I think we had consensus that we're going to go ahead and create MachinePool Machines even in that case.
A: Thanks, Matt. Jeff?
E: Yeah, real quick, I just wanted to call out that we had a very animated discussion last week about the Cluster API add-on orchestration proposal using Helm libraries. Jonathan went through an initial demo of his prototype and got tons of great feedback. I just pasted a link to the recording, so that one's moving along.
F: Yeah, so Richard and I actually created the proposal for managed Kubernetes in CAPI, and we got some initial feedback from the CAPG folks, but we actually need feedback from the CAPI folks, and also from any other providers who are interested in offering managed Kubernetes.
A: Okay, I'm sure that there's going to be a lot of feedback for that one. Is the proposal in here, though, or not yet? Oh, okay, sorry, I missed it. Okay, so this needs review, and I assume feedback as well.
A: Okay, that sounds good. Folks, please give it a read and we can go from there.
A: Are there any comments on this proposal specifically before we move on, or open standing questions that we should use this time for?
G: Yes, hello, my name is Laura, and also here on the call is Ishmeet. We're actually following up on a project SIG Multicluster was working on, which got a PSA in this very meeting back in January 2021. I remember talking to Vincent and Cecile way back then. Oh, and apparently I didn't set the sharing permissions properly.
G: Let's fix that; change it to "anyone with the link". There we go, that should be better. We brought a couple of slides to kind of keep ourselves on track, and I'll actually hand it over to Ishmeet to give the overview really quick.
H: Hi, this is Ishmeet. I'm not sure if I would be able to share the screen.
H: So basically, we have an agenda about the About API, which has just gone to alpha and is one of the subprojects of SIG Multicluster. We wanted to group around and see if there would be any potential use cases for this particular controller and CRD in the cluster lifecycle. So what exactly the About API is, is well documented in KEP-2149.
H: I would request that you review it when you have some time, but basically what it's trying to say, in an opinionated and Kubernetes-native way, is how a cluster can self-identify in a group of clusters.
H: One of the projects for SIG Multicluster was the Multi-Cluster Services (MCS) API, and there was a requirement for a cluster to self-identify within a ClusterSet. A ClusterSet is, you know, a group of closely related clusters. So this CRD is cluster-scoped, and what the KEP provides is basically an opinionated way to store cluster metadata, and what it would contain. Originally it's described for SIG Multicluster purposes only; the basic key for this is id.k8s.io.
H: Some of the examples, as you can see on the right side of the screen: it should have a metadata property id.k8s.io, and it has a value, which is actually its kube-system namespace UID, but this is pretty flexible. You can also store arbitrary properties in a ClusterProperty as well, and you can have more than one ClusterProperty for a particular cluster. As you can see in the example on the right, it's showing a fingerprint value being stored in a property, but there are some guidelines to it.
H: It should not conflict with any well-known properties, and it should use a suffix (.com or whatever you want to use), but it cannot use any of the reserved suffixes for Kubernetes, such as k8s.io or kubernetes.io, or other well-known suffixes. And we have an example here I want to show you of how it looks in the current setup, when a cluster registers itself with this particular CRD.
H: This is how its ID looks, and for now it's used by the MCS API. This cluster ID should be unique for a particular cluster, and it should live for the lifetime of its membership in a ClusterSet, or for its lifetime as a cluster. So for CAPI specifically, we were thinking about what other information you might store: specific nodes, or maybe a node label or a pod label, that could be stored as a ClusterProperty.
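For reference, a ClusterProperty manifest under the About API described here (KEP-2149) looks roughly like the sketch below; the group/version reflect the alpha release being discussed, and the UID value is a made-up placeholder:

```yaml
apiVersion: about.k8s.io/v1alpha1
kind: ClusterProperty
metadata:
  # The property name is the key; id.k8s.io is the well-known cluster ID property.
  name: id.k8s.io
spec:
  # Placeholder value; the talk mentions using the kube-system namespace UID.
  value: 721ab723-13bc-11e5-aec2-42010af0021e
```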
H
Probably
it
could
be
a
service
or
node
part
side
ranges
that
could
be
stored
as
cluster
property
that
are
unique
to
a
particular
cluster,
or
you
can
store
some
cloud
provider.
Related
information
like
this
cluster
is
born
in
aws,
with
that
particular
arn.
Some
arn
information
could
be
useful.
H
Let's
say
for
aws
or
for
azure
a
subscription
id
or
subscription
information
could
be
useful
to
be
stored
as
cluster
property,
which,
which
would
be
unique
for
that
cluster
or
any
cluster
specific
information
that
could
be
stored
in
you
know
a
resource
annotations
or
labels,
which
is
like
distributed
all
across
different
resources,
but
we
want
to
centralize
it
and
give
it
a
more.
You
know
kubernetes
way
to
store
that
information,
so
that's
about
it
for
the
project
laura.
If
you
want
to
add
something.
G: We wanted to share what the current status of this is, and also just generally tell everybody that it's in alpha and can be used. But I would love to hear from the group here whether the potential use cases that we discussed resonate at all, or if there's something else that we're missing.
G
I
know
that
some
of
the
things
we
were
even
talking
about
obviously
exist
already
in
the
provider
provider's
apis,
but
if
there's
some
value
to
some
of
those
things
or
some
of
the
things
that
might
be
currently
in
annotations
being
put
into
the
crv
instead
for
the
cappy
project,
we're
really
interested
to
hear
if
you
have
any
of
those
ideas
top
of
mind
and
would
also
love
to
collaborate
on
it.
E: Hey Laura, what comes to mind immediately would be an identifying ClusterProperty that says "this cluster is managed by Cluster API". That would be very useful to have on the workload cluster itself, and one can imagine pairing that with admission-webhook-type foo to prevent any sort of direct modification of core cluster stuff, if it's managed by a central authority somewhere else.
A: Well, the management cluster will keep track of workload clusters, and their names, but I don't think we have a globally unique identifier.
A: The closest thing that I could think of which is related to the lifecycle of the managing cluster is the UID of the Cluster object, which, you know, in a backup-and-restore scenario would be regenerated, so it's not really a great way to do it.
A
But
the
combination
of
the
management
cluster
plus
the
namespace
and
the
cluster
name
is
kind
of
globally
unique
within
cluster
api.
I
think
this
would
be
probably
like
a
good
way
to
just
push
down
some
information
about
like
where
the
workload
cluster,
maybe
like
precise,
like
maybe
like
some
other
information
like
about.
H
Yeah
yeah,
I
was
thinking
like
if,
if
you
are
storing
information
about,
let's
say
it's
a
cluster
which
has
been
spin
up
in
eks,
then
any
arn
information
or
any
cloud
provider
information
that
you
want
to
store
for
that
cluster
could
be
potentially
one
use
case
over
there
or
different
arn
like
for
subnets
and
stuff,
like
that.
That
you're
using
could
be
a
cluster
property.
H
If
you
want
to
pull
that
from
cluster
property,
and
that
would
stay
till
the
cluster
for
its
lifetime
right
like
when
it's
deleted,
then
those
values
would
change.
You
know
that
could
be
a
placeholder
for.
A: Well, in that respect, there is a network configuration in the cluster spec that you can refer to, but there are a couple of problems with that. One is that it needs to be pushed down to the cluster, and the other is that, if I remember correctly, we have an IPAM proposal as well, for other purposes, which we might want to reconvene on.
H
We
have
heard
that
in
the
past
that
when
they
they
were
so
there
was
a
setup
that
they
were
creating
multiple
clusters
in
different
cloud
providers
and
they
wanted
to
pull
that
information
from
each
cluster
or
you
know,
or
basically,
a
central
way
to
know
what
ciders
are
being
used
in
what
clusters,
what
subnet
ranges
are
being
used
for
each
like
service,
node
and
pod,
so
that
I'm
guessing
like
that
would
probably
also
be
an
ask
for
somebody
who
is
using
a
cappy
down
the
lane
to
know
more
information
from
the
cluster.
A: That could be something that we can definitely explore. I think it would be great to capture a bunch of these use cases, I guess in an issue, and go from there, in the way of next steps.
G: I can segue, just because I put myself down as the next topic too, and the question is very related to all this as well. We were just talking about how the management cluster can identify each cluster uniquely today, and I know that part of that is the cluster name, which I assume comes from the provider.
G
Basically,
and
I'm
wondering,
is
there
any
validation
on
cappy's
side
for
the
cluster
name
that
you
get
from
a
provider
in
particular?
G
The
reason
I'm
asking
is
because
sigmult
cluster
is
still
debating
whether
a
cluster
name
in
a
cluster
property
must
strictly
be
a
dns
label,
and
this
is
something
that
I
think
many
providers
already
restrict
you,
but
I
would
love
to
benefit
from
your
potentially
wild
past
stories
of
whether
that
is
truly
true
or
not
and
or
how
safe
it
is
for
us
to
make
such
a
claim
for
the
cluster
property
idcapate.io.
A
I
left
the
cloud
for
us
because,
like
the
in
terms
of
the
cluster
name
specifically
like,
if
I
do
remember
correctly
like
right
now,
we
are
actually
still
allowing
dots
in
it,
but
I
know
that
azure
doesn't
like
that
cecil.
Do
you
want
to
speak
to
that?
A
Class
yeah
the
cluster
named.
I
think
there
was
a
requirement
right
for
azure,
for
example,
to
not
have
dots
inside
of
the
cluster
name,
because
it's
just
yes
names.
I: Yeah, so it's not that Azure has a requirement for the cluster name; it's just that we use the cluster name by default to name certain Azure resources, like the resource group that the cluster lives in. So if you don't provide a resource group name explicitly, and you name your cluster something that is not a valid resource group name because it has special characters...
A: Yeah, to answer your question, Laura: I was trying to look right now, but we don't actually specifically validate the cluster name on its own. It's going to be whatever metadata.name allows through.
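To make the DNS-label question concrete, here is a minimal sketch (not actual CAPI code) of the RFC 1123 label rule SIG Multicluster is debating; a dotted name can be a valid Kubernetes `metadata.name` (which allows DNS subdomains) while still failing the stricter single-label check:

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1123Label matches an RFC 1123 DNS label: lowercase alphanumerics
// and '-', starting and ending with an alphanumeric.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)

// isDNS1123Label reports whether name is a valid single DNS label
// (at most 63 characters, no dots).
func isDNS1123Label(name string) bool {
	return len(name) <= 63 && dns1123Label.MatchString(name)
}

func main() {
	fmt.Println(isDNS1123Label("my-cluster")) // prints "true"
	fmt.Println(isDNS1123Label("my.cluster")) // prints "false": dots make it a subdomain, not a label
}
```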
A
The
other
problem
that
we
have
faced
with
name
specifically,
is
that
two
clusters,
with
the
same
name
in
some
cloud
providers,
can
cause
trouble
because
they
will
create
resources
with
the
same
name.
Truthfully
honestly,
we
should
have
hashed
all
of
these,
but
it's
also
like
another
great
user
experience
when
you
type
something
as
your
name
and
then
you
see
something
else
in
the
console.
For
example,.
G: Okay, so if I understood this correctly: it is possible, with the providers that you work with right now, for cluster names to be subdomains, or basically to have dots in their name, and that has caused Cluster API problems when that name is used to create other resources. But otherwise, there's no validation or enforcement that a cluster name set in some provider is a valid DNS label, partially because that's not even necessarily true for all the providers that you support.
G: All right, well, thank you so much. I think we'll open an issue, I guess in CAPI, about some of these ideas that we talked about, in case people want to collaborate on using the About API to store the fact that a cluster is managed, or any other properties of the cluster that might be interesting to put there. So we can hash that out there. Again, thank you for your time, and feel free to hit myself or Ishmeet up on Slack or whatnot.
J: Only if requested. So this is something kind of tangentially related that some folks in my group have been working on, and I thought it'd be fun to show here, because we've been using Cluster API as a substrate for it. We're working on making WebAssembly more accessible, running it in a cluster, making it easier to do, and one of the routes we've gone down, and I think it's been pretty successful so far, is making a containerd shim to execute WebAssembly.
J: So basically, it packs in a WebAssembly runtime and executes modules that are packed up into OCI images.
J
So
you
put
it
out
and
it's
it's
actually
pretty
easy
to
put
together
with
cluster
api,
and
if
interested,
I
would
be
delighted
to
show
how
it
kind
of
fits
together-
and
you
know
possibly
if
folks
are
have
continued
interest.
I
would
love
to
drop
in
a
pr
into
like
image
builder,
and
just
you
know,
by
default,
if
folks
want
to
go
and
turn
it
on,
like
maybe
add
an
environment
variable
or
something
image
builder
folks
can
have
a
webassembly
enabled
clusters
that
can
run
containers
and
webassembly
side
by
side.
J: Cool, demo number one: let's see if it's gonna work.
J
I'm
working
on
it
and
I'm
I
may
actually
have
to
restart.
I
think
yeah
I'll,
be
right
back
sorry,.
E: Mine is not nearly as exciting as David's, so mine's going to be, in fact, very boring. I have been consuming a lot of the CAPI e2e test framework convenience functions, for the benefit of CAPZ in particular.
E: Thank you to everybody who has written those conveniences and allowed them to be reused. What I've observed is that a lot of those functions are sort of Ginkgo-native, so inside the execution flow you'll see Ginkgo expectation assertions, which can make it difficult to introspect what's going on at times, and also basically impossible to branch your code in response to errors occurring at runtime.
E: In an ideal world, what I would propose is that we refactor all of those to return the error to the caller, instead of just throwing an exception inside the execution flow and short-circuiting the entire test. But because these are exported functions, as evidenced by the fact that I'm reusing them in a different project, that would be breaking for a lot of folks. So what I would love to do is maybe target an audit of this code, and for all error types that are retriable, introduce
E
You
know
some
sort
of
retry
loop
inside
to
at
least
avoid
error
flakes
causing
a
short
circuit
of
the
entire
test
flow,
which
would
require
re-test,
so
yeah.
That's
sort
of
the
background,
any
questions
on
what
I'm
talking
about
or
any
thoughts
on,
the
proposal
to
sort
of
retry
the
errors
before
throwing
an
exception.
D: I was just gonna say, I feel like this has come up before, and I think we stopped short of doing it, but I totally understand what you're describing, and I think eventually we need to go there, because if something essentially ends in an Eventually, then it's a black box when you're using the helper method. So yeah, anyway, I think it'd be great if we could do that; it just sounds like a huge amount of work.
E: Well, there could be a good reason to keep them: there are terminal error states, and if CAPI wants to enforce those in its own test suite, then really it's just up to the client. If you don't want that behavior, then just don't use this exported interface.
E: My main concern right now, the lowest-hanging fruit, is flakes. Things like grabbing a cluster's kubeconfig secret can fail one time, and I'd like to give those calls a few more retries, an opportunity to make forward progress in the tests.
E: So it sounds like that's not a controversial proposal, and you may see some targeted PRs from me. I'll certainly make the PRs small, so it'll be one function at a time.
E: Cool. Thanks for the issue, Cecil; I'll read through it, but I assume the decision was made in a prior spike that refactoring it to return an error was not really practical. I'd actually be happy to do that work. It'd be a decent amount of work, but I'm weird; that kind of stuff is satisfying for me. It would just mean we'd be breaking everybody, so I assume that's not really something we want to consider right now.
A
To
return
the
error
like
yeah,
we
did
discuss
it
like
at
length
over
the
course
of
all
like
few
of
a
year.
I
guess,
like
I
said,
like
there
was
just
like
some
pros
and
cons.
A
Like
you
know,
the
pro
that's
like
you
can
like
kind
of
act
on
the
error
instead
of
like
getting
failures,
but
the
con
would
also
be
that,
like
it
gets
like
exponentially,
also
more
like
harder
to
just
like
write
these
tests,
because
then
you
have
to
like
kind
of
handle
every
error
instead
of
like
just
letting
closer
api
to
figure
those
out
for
you,
but
having
it
retry,
mechanics
like
I,
even
if
it
even
better,
if
it's
configurable
across
the
board.
A
Awesome
thanks
jack.
Thank
you
for
bringing
that
up
any
other
questions,
comments
and
concerns
on
this
topic.
J: Give me another shot. Okay, just a reminder: we're gonna look at, well, not actually a lot of WebAssembly, mostly just cluster stuff. So let's go with it.
J: Yep, looks good, fantastic, thank you so much. Okay, so I'm gonna hop into the terminal, and I'm just gonna say "get cluster api". And just to preview, I actually have a Cluster API cluster; there are no tricks up my sleeve, and here it is. So we have a cluster ready. I've actually deployed out a workload, and so I'm just gonna set KUBECONFIG equals
J
Cube,
okay
got
pods
and
we
can
see
in
our
running
workload
cluster.
I
have
an
nginx
deployment
and
a
wasm
spin
deployment.
So
I'm
just
gonna
go.
Take
a
quick
look
at
my
services,
and
here
we
have
two
extremely
well
balanced
services.
I'm
gonna
go
take
a
look
at
the
wasm
service,
real
quick
and
the
wasm
service.
I'm
just
gonna
curl
it
and
say:
give
me
go
okay,
so
that
was
from
go
asm
and
rs.
This
is
from
russ
wesson.
J
So
what
we
have
in
these
pods
is
we
actually
have
these
pods
deployed
where
they're
running
this
thing
called
spin
spin
gives
us
really
nice
development
experience
for
running
basically
has
a
router
and
then
routes
to
different
web
assembly
modules
within
the
pod.
So
we're
going
to
describe
let's
describe
pod
wasm
service.
J
Oh
actually,
why
doesn't
we'll
just
take
this
one?
So
these
pods
having
are
running
this
particular
demo
image?
They
they
look
pretty
normal,
there's
really
not
much
to
it.
They
act
like
regular
container
pods.
What
we
end
up
doing
is
in
here.
I'm
gonna
go
up
to
here's,
here's
a
pretty
familiar
site
for
everybody.
Do
I
need
to
blow
this
up
a
little
bit,
or
is
it
big
enough
that
folks
can
read
it
all
right?
So
here
what
we're
doing
in
pre
comedie
m?
J
Just
because,
like
we
don't
have
this
baked
into
the
image
right
away,
we're
gonna
go
through
and
we're
gonna
mutate,
the
container
d
config,
so
that
we
can
register
a
new
container
d
shim
we
go
through.
We
pull
down
the
container
d
shim
that
we're
gonna
be
using
drop
it
into
user
bin,
our
user,
local
bin
user
local
bin.
Now
we
just
restart
container
d.
J
So
what
that
does
is
that
gives
us
our
new
container
d
shim,
that's
going
to
be
that's
enlightened
to
run
wassup,
so
we're
using
wasm
time
and
western
time
is
one
of
many
run
times
that
that
you
can
take
advantage
of
this
one's
from
the
bike
code
alliance
and
it's
kind
of
the
one
that
we're
using
right
now.
J
We
then
tell
kubernetes
about
the
runtime
class
how
to
take
a
pod
and
then
align
it
to
execute
with
that
shim
through
container
d,
and
we
do
that
by
applying
the
runtime
class
here
next
we
set
up
our
deployment
and
service
just
the
same
way
as
like
we've
done
with
you,
know,
pods
or
you
know
positive
deployments
for
a
long
time.
Here
we
have
our
service.
J
We
just
set
up
load
downloads
for
service
just
like
normal
and
what
we
end
up
doing
is
we
just
end
up
saying:
here's
our
run
type
class
and
we're
gonna
do
use
wasm
time,
and
then
it
executes
webassembly.
What
does
that
actually
end
up?
Looking
like
so
I'm
gonna
hop
over
to
here.
J
And
here's
one
of
our
shims
and
we
have
images
that
we
made
here
so
for
running
this,
this
actually
in
the
docker
file.
Let
me
go
back
to
the
docker
file,
so
docker
file
just
copies
in
this
directory.
So
let's
imagine
we
were
running
this
and
then
this
ends
up
specifying
some
of
the
spin
configuration
and
then
where
it
routes
to
and
the
wisem
module
that
it
routes
to
the
shim
starts
this
up
and
we
get
a
running
web
application
running
application
in
the
pod.
J: This has nothing to do with Azure or Microsoft or anybody else, other than writing code and putting it out into repos, so it should work equally everywhere. Yeah, any questions?
A: Thanks, David. Speaking for the image-builder side, the only concern I have about integrating it into image-builder is: are there any plans to have these shims be part of either containerd, or a different repo that is community-driven, maybe donated to the CNCF or something like that?
J: That's definitely where we're targeting. We don't want to keep this private; we're actually just iterating on it right now, and then we'll submit it into, you know, the containerd area most likely.
A: Yeah, a huge plus one; this seems really interesting, and I'm really thinking about the Runtime SDK and how, in the future, we can leverage this to make it even easier to deploy extensions on top of Cluster API specifically. But even when you think about the future of applications on top of Kubernetes, it just seems like a new area that could be cool to explore.
J: Yeah, if folks are interested, I dropped a link to the Spin "kitchen sink", I believe; if not, I'll drop one into the chat. Basically, you can see Python examples, there's Go, Rust, all sorts of different languages that are coming up with support, C#, you know. So it's not just Rust and AssemblyScript; it's really starting to grow. If folks are interested, it gives a really nice way to run wasm on your existing Cluster API clusters.
A: Going once, twice, three times. Awesome, we are at the end of the group discussion. We have some provider updates for today. Are there any last-minute group topics that we should chat about before moving on?
K: Yep, just wanted to give a quick update. We've released a version that now supports ClusterClass, so we're excited about that, and we had our first office hours, which we didn't super publicize because we wanted to do a trial run. We now feel a little bit more confident about hosting office hours, so next month we'll broadcast it a little more widely, and feel more confident talking with all the users.
A: Awesome, thank you. Who has the next one?
I: Cecil here. Not an update yet, but a soon-to-be update: we'll have the 1.3 minor release likely out by end of day, if not end of week, and we're also having discussions in CAPZ about aligning on a release cadence, kind of mirroring what's going on in CAPI right now.
L: I put it in the chat, but is there a specific place people are planning on meeting up at KubeCon EU? I don't know if there is anything specific, whether it be the contributor summit, or, I know I'll be hanging around the Microsoft booth a lot, so everyone's welcome there. But yeah, if there's any particular CAPI presence planned for KubeCon EU.
A
Absolutely
I'll
come
to
say
hi,
I'm
sure
the
other
folks
who
will
be
there
as
well.
It
will
be
fun.
I
guess
first
kubecon
like
since
2019,
for
I
like
in
person
at
least
so
it's
a
good
idea.