Libby, you did a great job. Thank you so much. It's impossible to sort of guess how you pronounce my name, because I was born and raised in Seattle, Washington, and spent most of my life in San Francisco.
While we do that, though (I'm just another human on this planet), I was born and raised in Seattle, spent most of my life in Seattle or San Francisco, but my grandfather was born in Calabria in southern Italy, and so my last name is actually Italian, or Calabrese, as they say. But that was in 1910, and my whole family's been in the United States since then, so I was raised with it being "Squill-iss," because most Americans would look at it and go, okay, "Squill-ace," you know, kind of thing. But in fact, somewhere along the line my father decided he wanted to be more Italian, so he called it "Squillachee," and so I spent my whole life pronouncing it "Squillachee" until I met my wife, from southern Italy, and my wife promptly informed me that I do not pronounce my name correctly at all; it's more like "Squillaje." So nobody pronounces my name correctly, including me.
So let's, let's play with this. So feel free to interrupt me. I'm going to go ahead and start the presentation, and the first part of it won't take that long, but I want to sort of establish a groundwork of why we are in this space that we're in, and we'll go from there. Okay!
You tried this before... and there we go. And then I go ahead, and you've got Inception, and then I hide it, and then I go ahead and do this. So, in theory: Libby, confirm audibly that you can see the screen, because otherwise...
All right, great, okay. So this is what we're gonna do; it's gonna be quite informal. I am Ralph Squillace. I'm a principal program manager on Azure Core Upstream, and my team is the DeisLabs team. The larger Azure Core Upstream team handles pretty much everything that funnels into any container- or OCI-oriented service and upstream projects.
So on the Kubernetes side, that will be things like Gatekeeper and service mesh work, and I used to be the Helm PM for years and years, and various other things: the VS Code Kubernetes extension, which is in the CNCF, interestingly enough. But we also do things like the open source work that goes into Azure Kubernetes Service and so forth. And so if that team finds a bug they think is a Kubernetes-related bug...
...we'll help them do a repro, and then we'll take the fix and push it upstream so that everybody can benefit. That way the AKS team can concentrate on the actual service and still get a fix right away. And so that's sort of where I sit. Now, DeisLabs is an interesting part of that: we actually only do open source stuff, to help fill niches in development that we think are critical for moving forward, or doing new work that we couldn't do before.
So it's not, strictly speaking, container work, although, as I say, I used to be the Helm PM, so a lot of it was, and still is. I'm the Porter PM, if you're familiar with Porter and the Cloud Native Application Bundle; but if you're not, mostly what I do now is WebAssembly and Kubernetes, and it turns out that's going to involve containerd shims. So let's talk a little bit about the agenda, which is: WebAssembly, what the heck? Because we've got to sort of square the circle and understand why we even care. Kubernetes is the JavaScript of containers, and by the way, I claim the trademark on that, and I will come and collect royalties if anybody uses it. So please use it, because I need more royalties.
Containerd turns out to be, at least for me, the magic sauce. And, you know, I'll try and provoke Giuseppe later on, because he does crun integration in a different way, and this is very early days, so it's worth talking about, or thinking about, all the ways you might do it, and then we can let the yelling begin; we can open it up. But, as I said, that's sort of a normal agenda, and this agenda should be interruptible. All right, just to set the stage here...
In the beginning we used to do native code all over, and then, when the cloud sort of started, they were like: let's take a VM, because we had things like VMware kind of early on in native, when you were in Rackspace or something like that.
There was some hypervisor that was used, actually, after bare metal, and eventually we got to cloud things, and the VMs were the thing that were operable there. Which meant that the developers were really always delivering native code, the state of which could have changed at any time between when the developer last touched a button and when it was running in a VM. In addition, it would acquire state while it was running: things would change in the environment, the running environment. And so things like that were troublesome, shall we say, but we worked around them because, basically, if you pay humans money, they'll do all kinds of crazy things.
The shareability of it... they were smaller, much, much, much smaller, and they were much, much, much faster, which meant it wasn't just the shareability; it was the pure speed of something you didn't know about. So I always demo (when we were doing conferences in person, and I hope to do this again), I always demo a container that does the Matrix, right, like the Matrix screen. I have no idea how to do the Matrix screen, but containers are fantastic, because all I need to know is...
Excuse me. Where is a container that does the Matrix? And I can call it in, and boom, it's on my screen, and it's just fantastic. And you can do that with runtimes, production workloads, and so forth. So containers still are the greatest things since sliced bread, and that's not really why we're having this conversation. Sort of the reason we're having the conversation is because all was well, but over the last 10 years, the practitioners (I noticed I misspelled "practitioners," because that kind of stuff drives me crazy; please mentally insert an "i" in there)...
There are other operating systems in the entire world software environment. Yeah, sure, you can talk about Windows, but really, it's not just that. There are things like, oh, real-time OSes, and there are BSDs, and there are Unixes, and there are all kinds of things. There are new operating systems being built all the time, because the environment we work in is always changing. So POSIX is great, but it turns out it's not everything, in addition to things like Windows, which is obvious, since I happen to work for Microsoft.
We also noticed that, like, architectures ruled everything, but that was not as nice as it seems. So it's really great when you're looking at an AMD64. Then, if you've got an Arm, it's okay, but then you realize: is it aarch64? Is it arm64? Is it v7 or v8? And we're pretty sure that RISC-V is coming, and is that really going to be the only architecture, if you're interested in it? We don't really know. And so when we think about, like, hyperscale, like the world we're in right now (and I work for Azure), so think about something like Azure or Amazon or Google or DigitalOcean or Alibaba or OVH...
It doesn't matter what cloud you're talking about. That approach tries to standardize the architecture and standardize the operating system, so that containers work really well in all these places, and it turns out, if you do that, it really does, which is fantastic. But in the rest of the world it doesn't work that way, and so what you end up having is this weird desire to run Kubernetes in little environments all over the place. And to do that, you actually have to rebuild all your containers, and, you know, if the operating system is different, you have to figure out whether containers even exist, and then you have to port Kubernetes to that. And so all of a sudden it gets to be a little bit more difficult.
So, those scenarios: it's fantastic to be able to dump the code in, but on the other hand, if we start running really critical workloads, how do I know that this image really is okay to run? You might be well aware that there's lots of stuff going on in the world about supply chain and security and signatures and so forth, but that problem is so large that even when we get things like the big ones we hear about now, SBOMs and signatures, it turns out neither of those things really solves your vulnerability problem.
It's merely one step forward in the vulnerability problem, right? Moving from HTTP to HTTPS was a step in the right direction, but it doesn't prevent you from going to the wrong URL. You still may have that problem, and it may also be that the URL is, by definition, malicious. You think you're going to the right URL, and it may even be the right URL, and you'll know...
...one thing about that URL: that it is the one that you know is serving SSL to you. But that doesn't mean you're not going to get hacked or extorted or phished or whatever it might be. So all the work we're doing right now has to do with all that complex code. It's possible, and I believe it's true, that the way forward for a good chunk of our code (not all, by any means) is to actually deliver less of it, far less of it. And part...
Finally, the last thing was that darn kernel, right? The great thing about the container ecosystem is that you were given cgroups and namespaces, but you can hit the kernel, the shared kernel, and that meant that you just needed the kernel to be the same, and that's pretty easy with POSIX and Linux, which is great. So that was a tremendous benefit, but being able to hit the kernel meant that we spent the last six years in distributions, both physical distributions and also things like, you know, EKS and GKE and AKS, and anybody else's cloud distribution.
You don't really want to own the kernel if you've got other things running on it. So these are problems that we live with today, but they don't tell us that somehow containers are bad. That's not what they tell us. What they tell us is that there are some things we want to be able to do, right, that we can't do as easily, if at all, with containers.
So those people (geez, another typo: I need an "n" here, and an "i" I forgot, and an "n," okay, I've got to get them down), they yearned for things like super fast cold start times. So the one thing that native had is you had the ability to start up pretty darn fast, whereas the whole container system was built for, sort of, like, long-running processes in a data center.
That's what containers... containers are much faster than VHDs, and in that sense they seem like they're instant, and for developers that experience was fantastic. But in production, when you talk about, like, really hosting fast functions, for example, it turns out that they're really not that fast, especially compared to native. And so what you really want is: how can we get stuff to move faster than containers generally do?
And if you put the Kubernetes ecosystem around the containers, then the gearing is a two-to-three-second delay, no matter what it is. That's just the way Kubernetes worked, and it's a good thing for its design, but not for other scenarios.
We really thought that developers would find a Dockerfile easy, and so what they would do is they would actually develop their Dockerfiles. But that's not actually what happened. What happened was people would code natively and copy it into a Dockerfile.
It's not their fault at all. It's not that they can't go back and optimize the image; it's that they just don't have the time to do it. They've got to go and build another image tomorrow, or there's a tire fire in their service, and boy, whatever their background task was today, it's gone now, and it could be gone for an entire week while the service is stabilized. So that results in big containers.
It's just too hard to spend the time to make them smaller. People do, and you can, right, and you can speed up their start times and so forth, but it's just really hard. And now, as a developer, instead of working on your application or your business feature, right, you're now working on some little teeny optimization that, from your point of view, has nothing to do with your service; it has to do with getting smaller or doing, you know, fast starting. And so it just didn't happen.
WebAssembly is an interesting solution here, but before we get there, I just want to point out that size became a problem. For those... probably, I don't know if I'm supposed to say this, but quite frequently people complain about the size of a Java or a .NET container. No, I'm not even going to say the one I'm supposed to say; you can't drag it out of me. But .NET and Java containers are routinely 500 megabytes to a gig, and that's just because there's a lot of stuff there. I mean, it's just the way it is.
If you want to dump a complex Python app in there, you're going to bring in so many Python scripts it's not even funny. And so even Python, which in theory shouldn't be, or Java or Node or whatever: these are big chunks of code, and really all they're doing is delivering strings in files. So it's fascinating. And by default, we want no kernel access. We learned this because we spent a whole bunch of time doing mitigations in Kubernetes; I, and any number of professionals, have spent a good chunk of time doing mitigations.
All of this is really designed to prevent you from using the kernel, or to control which kernel aspects you use. But even if you use a kernel aspect and you're being controlled, if you know a zero-day in that syscall, you still own the kernel, even if that is the only thing you've been given permission to. So it's a complex problem when, by default, you get access to the kernel. Well, we don't want that. We don't want that.
We mean things like, you know: can you bring the code to the data? But at the same time, we really wanted all the usual container benefits. We want, you know, immutability; we want signability. We want to be able to attach a bunch of metadata that says where it came from, and all that SBOM stuff, because it will provide value as we go forward. And we want it to be able to, you know, do the things that we do with containers.
We want to be able to store it in some sort of registry and, you know, pull it down with the stuff we already had. So that's kind of interesting.
So while we were doing all of that in Kubernetes and container land, there was hostile browser land, and JavaScript became, during that period, clearly the way to program the browser. But in those environments they very rapidly understood that they actually wanted faster stuff, because JavaScript is an interpreted language. So the first thing that the browser has to do is have a JavaScript engine that grinds the actual text down to an abstract syntax tree and then spits that out in some sort of intermediate code that can be run in the VM...
...you know, ground down into machine language and run in the VM, eventually. And boy, that's fine for popping up a color or a little pop-up and stuff, but if you want to do something really complex, that's pretty slow, right? Okay.
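That source-to-tree-to-bytecode-to-VM pipeline can be sketched with Python's own parser and bytecode VM as a stand-in for a JavaScript engine (purely an analogy for illustration, not how a browser actually works internally):

```python
import ast
import dis

source = "1 + 2 * 3"

# Step 1: grind the source text down to an abstract syntax tree.
tree = ast.parse(source, mode="eval")
print(ast.dump(tree.body))  # BinOp(left=Constant(value=1), op=Add(), ...)

# Step 2: lower the tree to intermediate bytecode for the VM.
code = compile(tree, filename="<demo>", mode="eval")
dis.dis(code)  # prints the stack-machine instructions

# Step 3: the VM executes the bytecode.
result = eval(code)
print(result)  # 7
```

Every page load repeats some version of those steps, which is the cost the talk is pointing at: the more text you ship, the more grinding the engine has to do before anything runs.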
No access to the operating system: so the whole syscall thing is right out, and that's because obviously they're being attacked. And so if somebody can get into the process in some way, you know, through the front door (that is, through the browser, usually), they can't be permitted to go ahead and download a script that then goes and hoses the operating system. That's very, very bad. And then, finally, of course: the more JavaScript you have, the more text you have, and pretty soon your web request slows down and users get frustrated.
They have to wait, you know, three seconds to, you know, make a million dollars, and that's very disappointing, because you want to do it in, like, three milliseconds, right? So it has to be very, very small. And there are other things that came up during that period, but I'm sort of trying to bucket the process in hostile browser land, and what came out of that is WebAssembly.
And so if you think about WebAssembly, and in your thinking it's from the browser and it's a browser thing, you're sort of right: that was where it was developed. But before we go somewhere, I want to really kind of talk about WebAssembly outside the browser. Right, first of all: the key elements of WebAssembly that I think are critical, and I'd love to have an argument about this, right. First of all (and this deck we'll make sure we share, and everything like that), this is a link to the VM specification.
It's a stack-based, abstract VM. So when we were thinking earlier, we didn't want access to the kernel, and we thought that sounded like a hypervisor or a VM of some sort, you know, some sort of emulation or whatever. This is exactly what that is, and you can think about it like the .NET framework, you can think of it like the Java VM, whatever it might be. It is not, strictly speaking, emulation; it's its own operating system statement.
You know, in a way, as a way to think about it, which is why "assembly" sort of, sort of fits, even though it's not really assembler either. But it's been in all major browsers since 2018, for things like Google Earth and Adobe Lightroom and things like this. In fact, I'm almost certain that this underlying platform that we're showing right now, that's streaming my video to you, almost certainly is built with WebAssembly, from browser to browser. Right, I don't actually know, but I would lay money on it.
So some of the critical things are that the host, in WebAssembly in the browser, that's a JavaScript engine; but outside of the browser, anything could run one, including a JavaScript engine. Like, you can code V8 or SpiderMonkey or any other JavaScript engine to host WebAssembly modules and execute them. All code in that environment is deemed untrusted, and the sandbox for WebAssembly is unbroken. There have been...
...there are security areas where it's problematic and you have to pay very close attention, but the sandbox itself works. It's not a sandbox problem, and that's really, really great.
There are problems, things it doesn't have. There's no threading, okay, so you can't do things like thread pooling. And GC: there's no GC or memory management, and there's a bunch of other things that are not there. And it's only 32 bits, so your addressable memory is, like, you know, four gigs. But, and I do want to be quoted as being the first to say...
...really, nobody should ever need more than four gigabytes, so I'll, you know, claim that one. Most of these things have proposals that are on the way, but they won't be here for a few years, and so the real question is, like, okay...
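The four-gig figure is just the 32-bit address width worked out:

```python
# A 32-bit pointer can address 2**32 distinct bytes of linear memory.
addressable_bytes = 2 ** 32
one_gib = 2 ** 30
print(addressable_bytes // one_gib)  # 4 (GiB)
```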
So: what can you do with it, and what should you probably not do with it? And that's the way to think about it, because then we're thinking about engineers. The other two interesting areas that I think are critical for what we're doing, especially in Kubernetes, and for the kind of definition of the problems that most of the ecosystem runs up against, are the Wasm System Interface, or WASI, which is really sort of an abstract syscall spec. So you can think of it as, like, an operating system's set of kernel calls, right, but they're virtual.
They're not specific to any particular thing, so they're not POSIX or anything like that. And the component model, which is a way to create tightly constrained interfaces that define capabilities that the module can perform, or requires the host to perform on its behalf. And so there's two sides of a component interface, and the way it works is basically that the WASI syscall set, right, that specification, will sit on top of the component model: they'll be component model interfaces, or at least that's the way it's being looked at and discussed right now.
So what do those elements provide? Well, strangely enough, they provide a lot of the problem feature areas that we identified as being an issue with the container ecosystem. And so we love containers, but then, if we need these particular things, we might have to mix containers with something else, and so when we do that, we look at these categories. Because the host owns the module, the host can permit or deny anything it wishes.
A module cannot request any feature or any behavior from the host unless the host permits it, right. And the interface model constrains any ability of the module to make calls. So if you have a component and it implements one interface, for example, if you're talking about an HTTP request, the interface might say: I must consume...
...you know, I implement something that receives... one function that receives a request, and that function returns a response, and that's it. And if a host implements the opposite side of that contract, right, then that module can be coded to that contract, and you can code it in any language, and you can see that's part of the portability.
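A loose sketch of that contract idea, in plain Python rather than real component-model tooling (all the names here are made up for illustration): the module's entire surface is one request-in, response-out function, and the host decides whether, and how, to invoke it.

```python
from typing import Callable

# Hypothetical "interface": exactly one function, request in, response out.
Handler = Callable[[dict], dict]

def module_handle(request: dict) -> dict:
    # The module's entire surface area: no sockets, no files, no syscalls.
    return {"status": 200, "body": f"hello, {request.get('path', '/')}"}

def host_invoke(handler: Handler, request: dict) -> dict:
    # The host owns the module: it decides whether to call it at all,
    # and it can inspect the returned value before letting it go anywhere.
    response = handler(request)
    assert set(response) <= {"status", "body"}, "module returned an unknown field"
    return response

print(host_invoke(module_handle, {"path": "/demo"}))
# {'status': 200, 'body': 'hello, /demo'}
```

The point of the shape is the asymmetry: the module implements the contract, but only the host can grant the other side of it.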
That means, though, that both the host and the module, right, can only perform that function, which is fantastic. So interfaces constrain the abilities of a module to do whatever it wants. So even if a module has malicious code, even if it, you know, might find a way to invoke a call, it can only invoke a call through the interface. And not only is the shape known by the host, but the host could dynamically inspect those interfaces as well and make decisions on...
...you know, what kind of content the actual return value had, and do something good or bad depending on what the host decided. The modules have implementations, but they own nothing, and that is a great stance. So the other one is the speed-size combo: because they're binaries, right, and because they're not containers, they're optimized for speed. The speed and size combo is amazing. So you can treat it all...
It really depends on your workload, but we're talking about somewhere between, hey, a 10x reduction in size, to a 50x, 60x, 70x reduction in size. That's really amazing, and native cold starts are approachable. So, optimized, you can actually start a WebAssembly module and enter a function in low nanoseconds; it's entirely possible. But most importantly, it's really important to realize: microseconds, low microseconds, is, like, out of the box. And so the throughput there, with the size, that is: the module gets delivered very, very fast.
You can have density, and you can cold start a new module per request, so your multi-tenant locus of focus becomes the module and not the whole host process, which is very, very advantageous.
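The per-request model can be mimicked in a few lines of Python (a toy stand-in for a real Wasm runtime, not how one is implemented): every request gets a freshly instantiated module, so no state ever leaks between tenants.

```python
class ToyModule:
    """Stand-in for a freshly instantiated Wasm module."""

    def __init__(self):
        self.scratch = []  # instance-local state, thrown away afterwards

    def handle(self, request: str) -> str:
        self.scratch.append(request)
        return f"handled {request} (saw {len(self.scratch)} request(s))"

def serve(requests):
    # One cold-started instance per request: the module, not the host
    # process, is the unit of multi-tenant isolation.
    return [ToyModule().handle(r) for r in requests]

print(serve(["a", "b"]))  # each instance only ever sees one request
```

This only makes economic sense because instantiation is microseconds, not seconds; with containers, the same pattern would spend most of its time cold-starting.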
So now you can start to see why things like Cloudflare and Fastly and Netlify and Vercel (and even Akamai does this now, and I'm sure all of the major clouds will do this soon) use this for their CDN functions. That's how they...
Absolutely, that is the risk. So here's the deal: the reason I love WebAssembly, the reason I'm betting our company's investment on it and supporting the upstream. So, for example, my company is a sponsor of the Bytecode Alliance Foundation, which is doing most of the W3C work for the specifications involved here, things like that. My team contributes to those, but we also support, for example, the Python compilations: we support the infrastructure for building and testing the Python builds to WASI, for example.
If you put a web server in a pod in a container, then create pods and host them in Kubernetes, right, those web servers have their own threading pools, they have their own connection pools, all that kind of stuff that was built for architectures like three-tier models 25 years ago. And if you think about it, the container model enables us to continue building applications that way, but WebAssembly does not. But the benefit it gives you for not building...
...that way is that you can do all kinds of things you can't do with a container. So that's sort of the dividing line: the super fast, super small, single-threaded, do your work and get back to business, and create another module for the next work item. If that sounds sort of function-y or serverless-y or microservices-y, it should; that leans into that kind of approach, technologically. But what happens when you want to actually run a big, long-running process as a WebAssembly?
Well, remember, you only got four gigs, so you can't do it now. But if you get 64 bits, now you've got the memory space; if you get threading, if you get async, all the things that enable, you know, concurrency in a large process, like an OS-style process, what you're going to end up with is: everybody who knows how to build three-tier models and three-tier software (and it is still being built that way) will just port the same stuff to WebAssembly. Now, will it be good? Yes. Will it be better?
A
Yes,
it
will
just
in
the
same
way
that
containers
were
better
than
native
okay,
but
it
is
not
actually
going
to
be
so
much
better.
It
is
going
to
be
the
same
code
with
the
same
environment,
for
example,
you're
going
to
see-
and
you
already
have
seen
extinct
experiments
with
full
web
VMS.
In
fact,
there's
one
called
webvmi
dot,
IO
right,
which
I
think
is
chirps
attempt
at
a
like
a
full.
A
You
know
container
in
or
VM
in
webassembly,
and
the
thing
about
that
is
it's
fantastic
for
what
it
is
and
maybe
that's
the
feature
you
want
so
I'll
use
you
Nigel
as
an
example.
So
maybe
Nigel,
that's
the
feature
that
you
want
right,
I,
don't
want
it
I,
don't
want
it.
If
I'm
going
to
run
a
long
running
server
process
that
wants
all
that
stuff
right,
I'll,
probably
do
it
native
or
I'll
do
a
container.
A
Maybe
I'll
do
a
web
assembly,
but
only
if
I
need
that
webassembly
to
run
on
any
architecture
and
I.
Don't
care
about
speed,
I!
Don't
care
about
any
of
that.
Other
stuff
I,
basically
just
want
to
get
the
kind
of
default
security
and
the
portability
that
I
get
out
of
web
assembly
and
those
may
be
enough
right,
but.
A
You're
not
going
to
get
that
for
a
while
and
frankly,
if
you
do
that,
you're
going
to
still
run
into
the
same
problem,
we
have
with
VMS
and
containers
now,
which
is.
A
We
have
no
idea
how
to
secure
that
entire
operating
system,
and
so,
unless
you're,
going
to
write
code
that
leans
into
webassembly
and
a
and
be
a
binary
instead
of
a
collection
of
processes
with
an
environment,
you're
gonna
bring
things
with
you
that
you're
going
to
end
up
having
to
mitigate
anyway,
even
though
you're
inside
a
web
assembly
and
remember
that
those
big
processes
now
you're
going
to
have
to
open
up
all
the
kernel
features
to
the
module
that
essentially
start
to
at
some
part
of
the
gradient
start
to
negate
the
constraints
that
the
security
model
of
the
component
interface
starts
to
bring.
There will not be faster and bigger chips in all the little devices we're talking about, around the world, floating or not: in 5G towers, in space, in submarines, in ships, rowboats, my cell phone. These things are going to get bigger, but they're going to get more electrically efficient, and slower, at the same time, per unit cost, and that will be good. We need them to burn less electricity. But in that environment, you don't want to bring your data set.
I expect... there is a GC proposal. Dan, I know, is in... if Dan is still here, he could probably drop the PR for the proposal, the issue for the proposal. There's GC, there's memory management; all the things that people will want are coming. And so what I'm really saying is, you know, Dan might have a different opinion, but my opinion is those features will not arrive in anything concrete before two years from now. And maybe I'm pessimistic, but I actually think I don't want them.
I don't need them, all right. I want a new module; I don't want a new thread. That's the way I look at it, okay. So these are the portability benefits and so forth, and, you know, we'll go on. And, you know, this is Werner Vogels doing the AWS thing: Prime Video basically uses WebAssembly right now for more than 8,000 device types, and that's the kind of radical portability you get with a single module.
That's incredible. And of course it doesn't mean that the module is the only thing that's on those devices, right; you've got to have something that runs it. But the point here is that the service's job is to make sure that the thing that runs it is there, but the developer only has to think about the package for the process, which is WebAssembly. And it's like... you compile to WebAssembly, so it's more like a native process: there is no Dockerfile to build...
...you know, for WebAssembly itself, although I'll show you that there's more stuff to be done to make it work in Kubernetes at the moment. All right. So Kubernetes (and this is my take, part of my hostile provocation) is the JavaScript of containers, and by that I mean: you don't necessarily like it, you might not have built it that way yourself, but you've got it and you use it, and so does everybody else, and then they gripe about YAML and service meshes and a few other things.
Oh yeah, so ingresses are not good, and, you know, it's, like, this kind of stuff. But we use it, and it turns out that, network-effect-wise, in the world, it's incredibly useful, right. So the question is, like: what do you want to do, right? You want to integrate, and people tell us this right now. So, speaking from the perspective of Azure, all I hear from customers is: I want to run Kubernetes on my automobile, I want to run Kubernetes in my house, I want to run Kubernetes in my...
A
Is
okay
and
then
they
say
but
like
when
I
try
and
like
run
kubernetes
it's
either
too
big
or
I,
don't
I
have
different
machines
or,
like
my
I,
have
got
arm
and
Intel
and
I've
got
you
know
various
other
things.
In
other
words,
what
they
end
up
saying
is
some
combination
outside
of
the
hyperscale
environment,
where
everything
is
the
same.
Skew
you've
got
to
choose
everything
and
line
up
your
cluster.
A
thousand
nodes,
the
same
way
they're
identical
clusters,
work
great
there
and
containers
work
great
there.
That's what they're asking for, but what they don't know, because they haven't been told (and that's partly my job), is that Kubernetes could work in these strange environments, where either they're heterogeneous, by OS or by architecture, right, or by language, something like that, or they're constrained: they don't have a lot of room.
They'd have flaky networks, or very thin, attenuated network connections, and they may, you know, barely have any RAM, let alone disk storage, and things like this. So the size has to be really, really small, and the performance has to be really, really good per unit of artifact, right. Those environments don't work really well with containers, and like I said, the point here is not that containers are bad; it's that we're talking about compute environments where containers don't really work well, and for sure in CDNs...
...it turns out that's true for pure cold start and security. So all of those environments are WebAssembly. Okay, that makes sense: those are technological choices, not religious battles or emotional feelings. And so people sit there and think: okay, I want to do Kubernetes, but with WebAssembly. And so there's lots of work here. So, you know, Giuseppe's on the line...
I know he's done a lot of work; we've been squabbling recently, socially, about crun integration, right. Like, so crun can run containers or WebAssembly, which is a cool way to do it. Or some, like wasmCloud, which is an open source, kind of actor-based model: for the most part they have a microservice orientation, but the sweet spot for them is actors.
A
They have a Kubernetes integration that allows their actor model to essentially be scheduled from a CRD; you use the CRD, and that's their integration with Kubernetes. But in reality you can use it in Kubernetes if you like the model and you have Kubernetes control planes; it's really designed to run anywhere, and it does its own scheduling and networking, things like this. So you can do all kinds of cool, you know, transparent networking and neat mesh scenarios like that. That's one way to integrate with Kubernetes, and the question is whether it's the useful one for you; maybe it is, if that's your kind of thing. There are also CRI implementations and a kubelet implementation, and I'm going to talk about those in a second. And of course the interesting thing about WebAssembly is that, because it can be embedded very, very easily, it's already, probably, in your Kubernetes cluster.
A
You just don't know it, right? Envoy network filters can run WebAssembly, and that's now, you know, all the rage, so almost certainly some of your Envoy network filters are WebAssembly-based. Kubewarden and a couple of others do policy enforcement using WebAssembly.
A
So if you use those projects, you're using WebAssembly; it's already there. It strangely already exists pretty much everywhere, even in Kubernetes, but people aren't conscious about building WebAssembly for Kubernetes, or using Kubernetes to orchestrate it, and that's what we're interested in. To do that, we use a CNCF project called containerd, which is our magic sauce.
A
So what we did is we failed a lot. When I say "we", I'm calling back to my team at Azure, which is the Deis Labs team; we work completely open source, and we failed to integrate Kubernetes in a project called wok. So if you go to https://github.com...
A
You will find a CRI implementation there, and boy did that not work. The short version is very straightforward: CRI really is a container runtime interface; it wants to be containers, and it's very much hard-coded to containers. When we implemented it, we were basically faking out all the API calls that are container-specific and had nothing to do with WebAssembly, and we realized that the abstraction wasn't there.
A
We don't continue to invest in Krustlet, except to review people's PRs and so forth. Several people run with it; many people use it for the Rust-based kubelet, and also for the Rust-based OCI distribution crate.
A
So that's something, and it's very useful for that. There's also a state machine crate in there, called Krator, that is really, really good. But the problem with Krustlet is that it tends to treat a node as if it's a special node: you actually have to say "this is the node that runs WebAssembly and nothing else", and that gets to be problematic, because we don't need Kubernetes to teach us more things to pay attention to. We need Kubernetes to slowly but surely let us forget about more and more things.
A
I would like, in five years, for Kubernetes to finally settle on a default configuration that's a simple but manageable and robust orchestrator. We're on that path, but it's going to take time to get there, and Krustlet doesn't add to that; it adds complexity to your operations, even if developers have a clear path to using it. So for us, a containerd shim turns out to be the right path. In vanilla Kubernetes, containerd is the wedge, the hinge, between the kubelet, which calls the containerd API to schedule a workload, and the actual implementation of scheduling, which could use different binaries, make different adjustments, do different things and so forth. containerd is just the API facade over an implementation inside, which is what you typically call a shim. So, for example: Giuseppe, I think crun has a shim.
A
The default shim in vanilla Kubernetes is the runc shim (I think it's called containerd-shim-runc), and so we basically used that same functionality to plug in a different runtime, which was Wasmtime, a Bytecode Alliance runtime, and we're then able to schedule WASI workloads. It's runwasi, not runwasm: runwasi does not run just any WebAssembly.
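For context, a runtime like this gets wired into containerd through its shim naming convention. A minimal sketch of the registration, assuming a shim binary named containerd-shim-wasmtime-v1 on the node's PATH (the runtime name and binary name here are illustrative, not necessarily the exact ones runwasi ships):

```toml
# /etc/containerd/config.toml (fragment), illustrative only.
# containerd resolves runtime_type "io.containerd.wasmtime.v1" to a
# binary named containerd-shim-wasmtime-v1 found on $PATH.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.wasmtime]
  runtime_type = "io.containerd.wasmtime.v1"
```

With this in place the kubelet keeps calling the same containerd API; only the handler behind the facade changes.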
A
It has to conform to the WebAssembly System Interface model, and use components in that regard as we go forward, because the sweet spot is not "let's run some custom module that has custom interfaces" or something. If you want to do that, you're more than welcome to grab the shim, including runwasi, and actually rip out the innards, modify it, and do something custom. But we want to enable an ecosystem where WASI implementations of wasm components can be used on any, you know, standard Kubernetes cluster, pretty much anywhere, and it just works, and it brings the same level of guarantees that I talked about earlier. So that's our objective there: to scale out the possibility that Kubernetes can be used in these weird areas. That's what we're doing, and containerd lets us do it. Now, there is a little bit of history here, and I want to sketch it out.
A
Where are we on time? I think we're at... what?
A
So the critical thing here is that runwasi (our runwasi) uses Wasmtime, and runwasi was forked by Second State, because they were working with Docker Desktop and wanted to use Second State's WasmEdge runtime. So there is a Second State runwasi, which is what the Docker Desktop project uses, and then there's ours.
A
The Deis Labs runwasi was accepted into the containerd project in the CNCF, and Second State and Docker immediately sat down and said: hey, let's figure out how to bring composable runtimes to runwasi in containerd. So it's really the same shim, and we're all working on that together. We're also working on OCI compliance, and OCI artifacts are on the roadmap, and demos.
A
But I'm going to switch gears, come back to Inception, and basically stop sharing. Oh, it starts in five minutes! No wonder I timed it for this; this is great. So if there are any other, you know, questions, go ahead and ask them, either in chat or in the Q&A; the chat's fine. But I will do one quick demo so you understand what WebAssembly really brings, and I can also do a quick demo that helps you understand it in Kubernetes.
A
Everybody vote right now, or ask questions: which demo do we want? Do we want to see what WebAssembly can do (that's really, really important, and makes sense of why you would want to run a Kubernetes cluster for it), or do you want to see how it actually runs in Kubernetes, so you can try it out? Go ahead, if you're still here; there are 22 technically still here, but some may have the TV on. Okay, over here: the latter, how it actually runs. Okay.
A
So here's what I'm going to do: I'm going to share the screen.
A
So what I'm doing is showing you k3d, and this uses our runwasi shim, so you should expect that in the future you'd be able to do this with the Docker Desktop release, or any other desktop release. We're going to use k3d, and you can see right here that what we're calling in up here is not a shim but a container: in k3d the nodes are modeled by containers, and that container has the shim already on it.
A
So we'll go ahead and create it; here's how this works. You should be able to do this yourself, and in the deck that we'll share we'll drop in the quickstart for this, but you should be able to find it very easily on the web; this is just k3d. This is basically real time, so you can start this pretty much anywhere you can find k3d, and you should have, roughly speaking, the same experience.
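As a rough sketch, the quickstart he mentions boils down to a couple of commands. The image reference below is a placeholder, not the published one; check the runwasi or containerd-wasm-shims quickstart for the real name and tag:

```shell
# Create a k3d cluster from a node image with the wasm shims preinstalled.
# The image reference is a placeholder for illustration.
k3d cluster create wasm-cluster \
  --image ghcr.io/example/containerd-wasm-shims-k3d:v0.0.0

# The "node" is just a container; the shim is already inside it.
kubectl get nodes
```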
So what we're going to do here is deploy two Wasmtime-based applications, one using Fermyon's Spin and one using Deis Labs' slight. But before we do that, we've got to tell Kubernetes that the shims exist, and right there we've told it: there are two runtime classes, and those are the names, one for each app model.
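Telling Kubernetes that the shims exist means creating RuntimeClass objects, and a workload then opts in with runtimeClassName. A hedged sketch; the class names, handler strings, and image below are placeholders rather than the exact names from the demo:

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin            # must match the containerd runtime name for the Spin shim
---
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-slight
handler: slight          # likewise for the slight shim
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spin-hello
spec:
  replicas: 5
  selector:
    matchLabels:
      app: spin-hello
  template:
    metadata:
      labels:
        app: spin-hello
    spec:
      runtimeClassName: wasmtime-spin   # schedule through the wasm shim
      containers:
        - name: spin-hello
          image: registry.example.com/spin-hello:v1   # placeholder image
```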
A
When we deploy the application, you can see we've got five slight applications and five Spin applications, and the interesting thing is that we're now actually waiting for a container workload. I'm going to stop that and note: we are waiting for the ingress container to load. We have already started every single application, but we can't reach them, because there's no network path until the Traefik ingress container is mounted and running. So if we sit here and wait just a second, the container comes in.
A
There it is. So now we can go ahead and start curling. Here I sped up the demo a little bit, but basically I'm waiting for the Traefik container to create the localhost path. Okay, I've got it. This is just a hello world, and now you can see both of them there. That's one real easy thing; I'm going to show you both curls, but from here you can see you can do any kind of thing. This is hello world.
A
Now, from here on, this demo is going to tear things down. If I go over here, one of the things I want to show is how it looks in Kubernetes and what you can do, and here I'm in AKS; you could run this in any cluster, but AKS has a WASI node pool service that supports this infrastructure, and I've now got two node pools, Arm and AMD. So I'm going to load a regular container app, the voting app, and you can see that it gets scheduled to wasipool2 and it's failing, but there's a reason: it's being loaded on an Arm node. Now remember, Arm nodes cost about 30 to 50 percent less and they consume less energy. So what happens if I do this with five replicas of the same workloads that I ran on k3d?
A
For the container I'd now have to build and maintain two copies, but the module just doesn't care; there's only one module, it's not like there are two or anything like that, and I can demonstrate that. But now I'm going to go in and choose wasipool1, which is the AMD version, and I'm going to delete it right out from underneath the cluster. Okay, if I do that, now watch (this is not sped up): it terminates the nodes, and all the applications are redeployed.
A
Every single one is now on WASI node pool 2, which means I've just migrated my entire workload from AMD to Arm, and it never stopped. There was no redeploy for the five that were already distributed to node pool 2, which means your workload continued to be serviced while you redeployed all the AMD ones to Arm, and it happened just that fast. You didn't touch it; Kubernetes did it. And all the while the Azure vote app, which is the regular container version, just doesn't work, because I haven't done the extra labor I need to do in order to migrate the workload to AMD. It's not that I can't do it; it's not impossible; it's just one of those things. So that, in theory, should be good. I'm going to stop sharing, and I want to thank you very much for coming. The last thing I'm going to do is drop in that link to the stuff. Well...
B
Thank you, everyone; grab that link before we end our session, and you can also hit Ralph up on Slack, on the CNCF Slack channel, absolutely, if you need any other information or links. Thank you all once again for joining us, and thank you, Ralph, so much for hopping in and providing such a great webinar today. Again, this will all be posted later today on the website.
B
You can catch the recording, all that good stuff. Thank you, everyone, for joining us, and have a great rest of your year, too.