From YouTube: Kubernetes Multitenancy Working Group 20210406
Description
Vigorous discussion about hostile multi-tenancy versus soft/hard concepts.
A
Hey everybody, and welcome to your regularly scheduled multi-tenancy working group for Kubernetes. Today we're going to talk about hostile multi-tenancy, which Adrian will briefly define for us.

B
So, I think it was: was it a Microsoft blog post that we saw yesterday morning? Do we have any folks from Microsoft on this call? I don't think we do.
B
Oh hi there, hi, Tommy. Were you involved in any way in the chat, or the blog post? I think it was about hostile multi-tenancy.
A
Okay, so Adrian, you just reminded me; I had completely forgotten about that. So where this came from was: someone at Microsoft who's very senior was tweeting and blog-posting about hostile multi-tenancy versus what he was calling enterprise multi-tenancy, and I just thought that was a really odd way to describe it. Because what we talk about in this working group, the language that we've sort of coalesced around historically, has been soft versus hard multi-tenancy. And so the idea is that soft multi-tenancy is where you can sort of trust the other users.
A
So this really aligns with the idea of enterprise multi-tenancy in what we were reading, because the idea is: I'm an enterprise, I'm internal IT, I'm providing Kubernetes as a service for my developers to leverage, and now I can kind of trust people not to abuse the system or DDoS each other or whatever, because they'll get fired, right. And I just thought that was a really odd phrasing, to say enterprise. And then what he was calling hostile multi-tenancy was the idea of: maybe I'm providing Kubernetes as a service on the internet.
A
And so maybe I have a free trial and people can come in, and I might have all sorts of people, and so I have to be prepared for this hostile mindset. And I just thought that was a really strange delineation of the two ideas. Because, you know, if you talk to anybody in tech: malicious actors internally, if you're at a significant scale or if money's involved, like bitcoin, you've got hostile all over the place.
A
Right, like I've talked to some bitcoin tech companies years ago, and they were saying they can't even trust their own physical security team, because people are infiltrating so aggressively, in every possible way, to try to get access to bitcoin, that they see themselves as a security company first and bitcoin as sort of a side project.
A
So
I
thought
that
was
kind
of
weird
and
then
the
other
piece
is
you
can
accidentally
ddos
your
co-workers
and
it
still
hurts
just
as
much
as
if
you
intended
to
and
so
not
having
like
those
lower
level
controls
and
like
separation
and
like
trying
to
kind
of
control
the
firestorm,
when
you
have
like
more
than
say
two
people
working
in
any
any
environment
or
even
just
two
like
that,
was
like
a
little
odd.
So
I
think
I
went
on
this
rant
on
slack
and
then
adrian
was
like.
A
B
B
I think I would agree. I mean, I have my own problems with soft and hard, but as long as we understand that they're on a spectrum, that it's not one-dimensional, it's, you know, at least one and a half dimensional... as long as we understand that it's on a spectrum, I think it becomes a lot more palatable, at least for me, to talk about the degrees to which you trust the other people on your cluster. And yeah, so it's not just, you know, external actors.
B
It could be internal threats as well. It could be accidental or malicious, and simple scale can drive a lot of that as well. But yeah, when I think about the types of multi-tenancy that are completely distinct, I think of things like multi-team tenancy, where everybody who's using the cluster is a team that, you know, is working at the same company.
B
So that's one whole attack surface that you can basically just shut down. But otherwise you can assume that they are mutually hostile towards each other, like actively hostile, perhaps to a greater extent than you would for different teams in a small, let's say, WordPress hosting business. And so in that case you need a lot. You might not need control plane protections like RBAC, or network... well, maybe network policies, but you don't need things like RBAC or quota as much, but you certainly need things like...
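The network policies mentioned here are often applied as a per-tenant default-deny rule; a minimal sketch, where the namespace name is hypothetical:

```yaml
# Illustrative default-deny policy: selects every pod in the tenant's
# namespace and allows no ingress or egress until other policies open holes.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: tenant-a        # hypothetical tenant namespace
spec:
  podSelector: {}            # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
```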
B
I think the Microsoft blog post says a hypervisor is the only acceptable level of security. I don't know how that fits in, whether they would consider something like gVisor or Kata Containers to be hypervisors.
B
I don't know where that would fit, but you certainly need something at the data plane, because otherwise any vulnerability, somebody can exploit it and break out.
D
The one that threw out the hypervisors... and the problem with gVisor, right, is it's a virtualized kernel, but you're still not guaranteed, like, CPU-level isolation. Yeah, and so that's kind of where that came from: if you care about the hardware level, CPUs right now literally only have provisions for hypervisors to give you that isolation, and so that's where that comes in. That being said, both Intel and AMD are working on container-based isolation at the CPU and memory level, so that's something.
C
I am... I'm a cloud solution architect, and so I work with different companies that are migrating their applications over, and I was very interested in what you had to say about the SaaS multi-tenancy, because that's the realm that I typically live in. And so I'll have to admit that I was curious as to what Microsoft was defining as the malicious, sorry, the hostile multi-tenancy, because I was curious if they came up with that idea, or if this was something from a Kubernetes standard or, you know, the SIG.
A
I think it was just someone being super random and off the cuff, and I think the only reason we remember it is that I had such a strong reaction to it.
A
But because of the way you're phrasing it, you're like: well, you don't need this hostile thing, you need enterprise multi-tenancy. And that's kind of like, well, not really sure about that. But yeah, I have a feeling it was sort of a random post; I would never presume that this is Microsoft's governing strategy with regards to how to actually achieve secure cloud. But actually, Adrian, when I put this as our topic...
G
Yeah, and one of the challenges, I think, though, with this notion of any form of multi-tenancy being friendly, or being something where you can relax security constraints, is a problem, right? Because, like we talked about even last week, independent of whether it's a single tenant in a cluster or multiple tenants...
G
Most of the security concerns you would have, related to pod security, role bindings, access controls, etc., still apply and still need to be configured correctly, right? Because even if it's a single tenant on a cluster, maybe there are some shared services which are managed centrally.
G
You don't want the tenant to be able to accidentally take those down, so you're still going to use namespaces, just perhaps to a lesser degree. So there's no such thing as friendly multi-tenancy or hostile, right? It's just... right.
D
And I mean, I think Kubernetes in general has made it clear that hard multi-tenancy is guaranteed to be impossible right now. Like, depending on, I guess, the attack vectors you're worried about, soft multi-tenancy is really the only form of tenancy you can technically have in Kubernetes, the only thing that's available from a single-cluster perspective, right? So, yeah, you want to lock it down as much as you can, regardless, because, you know, you've got one cluster, right?
D
No, so they had a bug, right, where, at the control plane level, they were using default ports and they weren't properly securing them. And so it was just like that: bitcoin miners were literally scraping the internet looking for open ports, and they were injecting bitcoin-mining containers into Mesos clusters in production. It happened to a lot of people, and, you know, that sort of thing is possible, and so you need to treat your Kubernetes cluster like...
A
Yeah, you know what I realize sometimes, when we start having these conversations: we did a pretty big breakdown of different use cases of multi-tenancy that I can pull up, that we can kind of go through.
A
What I would say is... so we have kind of different people trying to solve different problems, who are all part of this group. So I think, Adrian, the set of problems that you're trying to solve might align the most with kind of what Tommy's thinking about, because you're basically running, like, the GKE service, right? And so that's more like SaaS, where people are coming... Is that what you're thinking, Tommy? Like PaaS? Or are you thinking: I'm a SaaS provider who happens to use Kubernetes to deploy containers?
D
So anything like that, where you can potentially have, you know, breaking out of the container, or breaking out of the application to get access to the container. If they have open access to do whatever, right, like the service account can list all namespaces or whatever, then eventually, you know, with enough effort, they could break through. And so, I mean, at that level you're really only as safe as the application you're running in that container, and that's why, you know...
B
So you don't have to worry about quotas; you can just assume that whatever... I assume that your customers will be building some kind of automation that, you know, deploys the Helm chart for them, because they're not doing it by hand every time a new customer onboards. So you'll have some kind of automation, and you can trust that automation, and so then you're worried about, yeah, you're worried about people breaking out of the containers, and container escape.
G
Exactly, but that could also lead to access to the control plane, right? Because if you can get out of the container, you can access secrets in the namespace, or, let's say, you get to any host resources; that gets you to other config. If you happen to, you know, figure out where the control plane nodes are running, you can even get to etcd from that, right?
G
So, to me... and, you know, Adrian, we had this discussion also, and perhaps we can segue into the blog post and resolve some of the comments on that. Oh...

G
Yeah, so to me, I don't really... You know, I feel that in both cases, even if the end consumer is using the application as a SaaS, the tenancy models are still applicable and still seem just as important. Because if you don't have the right controls underneath, any exploit, or any vulnerability in the container which is exploited, could lead to bigger challenges and issues, like what we're describing. The other kind of comment on that is: even if the end consumer is using the application as a SaaS...
G
Typically, what we've seen is there are customer success teams and other teams assigned to different customers, and they may need access to just those namespaces to manage that customer, right? So, ultimately, you still need to figure out if you're doing clusters as a service or namespaces as a service; each kind of model has to be secured correctly.
B
If I remember, there was this great talk, I think at KubeCon North America 2019 or so, called something like "What if your attacker knows parkour?" And it's like: what kind of mischief can you get up to if you can escape the container onto the node? Was that the keynote from...?
D
Oh gosh, I can't remember their name, yeah, but I think that's what you're talking about, anyway, yeah. It wasn't...

D
It was... Rice is the last name, and I can't remember.
B
...know what you're talking about, yeah. All right, it's got the word "parkour" in it; it's pretty easy to search on YouTube, but it was a great talk. I can't remember... like, if you can escape your container, then yeah, what do you have access to? Do you have access to, like, any secret and any service account that's ever been used on that node?
D
So that all comes down to how you've actually written your container, right? That's the problem, right? A container is just a cgroup, and if your container is, you know, running as UID 1 or as root, and you break out of that container, that UID translates over to that node, right? So then you suddenly are user 1, or UID 0, whatever; you're root on the node, and then you have all the root things.
D
So that's the problem with container breakout. But even if you just break out of the application into the container, you have a service account mounted, right? And whatever that service account can do and say to the cluster, that's your access. So, you know, you can give too many permissions on your specific containers at the service account level, because it's like: oh yeah, let's let this container list all the namespaces, because who knows what they need, right?
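The alternative being argued for here is a namespace-scoped grant instead of cluster-wide listing; a minimal sketch, where the namespace and service account names are hypothetical:

```yaml
# Illustrative least-privilege Role: the workload's service account can read
# pods in its own namespace only, rather than listing all namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: tenant-a        # hypothetical tenant namespace
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Bind the Role to the workload's service account in the same namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: tenant-a
subjects:
  - kind: ServiceAccount
    name: app-sa             # hypothetical service account
    namespace: tenant-a
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```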
D
Suddenly, you know, they have everything. And the main thing, right, from the container level, is what tools they have. So if you just have the binary running, and they break out of that application into the binary, it's pretty much impossible to do a container breakout from that, you know, unless you're doing some pretty crazy things in the application.
D
So if you don't have things like, you know, bash and all the Linux tools, if you're not using Alpine, it makes it a lot more difficult to break out. So, you know, limiting what you put in your containers is super important.
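One way to act on that advice is to run a shell-free, minimal image; an illustrative sketch, where the image and binary path are assumptions:

```yaml
# Illustrative pod using a distroless-style image: no shell, no package
# manager, just the application binary, which makes post-breakout
# exploration much harder.
apiVersion: v1
kind: Pod
metadata:
  name: minimal-app
spec:
  automountServiceAccountToken: false        # don't mount a token the app doesn't need
  containers:
    - name: app
      image: gcr.io/distroless/static:nonroot   # assumed image; any shell-free base works
      command: ["/app"]                         # hypothetical binary path
```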
G
It's also possible to run other containers with higher privileges, if that namespace allows that, right? Because once you're there... right, yeah.
D
So, yeah, I mean, if you break out of a container, though... I mean, the host mounts and stuff kind of facilitate that, but a container breakout means you kind of, you know, break out of that...

D
And then, well, yeah, and that's why it's also important to enforce at a cluster level that you always have to pull new containers. You know, caching them on the node sounds convenient...

D
Well, it depends, right. You can still technically... and I'm not a security expert, clearly, right, but you can still break out of a container through, like, CVEs that have existed; we've totally seen them in Kubernetes. But if you broke out, you still only have the privileges of the user that you are. That's why, if you just give a UID and a GID that doesn't exist on that node, or, like, a high one...
D
That's why a lot of people just do something in the thousands, because unless you have, you know, users at that level, then you basically, you know, have no permissions to do anything, right? And so then, even if you did break out, you have some sort of prevention there, right? You know, you might have read access some places and you can kind of glean some info, but it's not as much, but like...
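The "something in the thousands" idea corresponds to a pod securityContext along these lines; the UID/GID values and image name are illustrative:

```yaml
# Illustrative securityContext: run as a high, non-root UID/GID that has no
# matching user on the node, so a breakout lands with minimal permissions.
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-app
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 10001         # arbitrary high UID, unlikely to exist on the host
    runAsGroup: 10001
  containers:
    - name: app
      image: example.com/app:latest          # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```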
D
Yeah, oh well, yeah, and that's the main problem with the original pod security policies, right: they're cluster-level, and so anyone deploying a container can use your, you know, your least restrictive PSP. So if you suddenly got the capability to do what Jim was saying, deploy a new container, and you had a pod security policy that said, oh yeah, we let this one container have privileged mode...
D
Well, you know, I can just request that PSP when I deploy my new container, and suddenly I can be a privileged user.
B
I thought the big problem with them is that they're either on or off at a cluster level, and so you can't incrementally enable them. So if you haven't set up your entire cluster, you can't gradually turn it on: you turn them on, and every pod on your system will suddenly stop running.
G
Yeah, that was one of the challenges. The other challenge was this sort of, you know, somewhat of a conflict of interest, right, where you're trying to... you can assign PSPs based on roles, but ultimately it's, like, a Deployment or some other pod controller running that pod, not a role, right? So you get into situations like that when trying to manage PSPs correctly across different applications or workloads.
G
So I think the new proposal now is just doing this based on namespaces, like we discussed briefly last time, just to make that boundary clear: the only strong isolation boundary is a namespace, and anything within that namespace... if you can run a pod in that namespace, you can get the same privileges as the highest-privileged pod in that namespace.
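The namespace-based proposal being described is what later shipped as Pod Security admission, where the enforced level is a label on the namespace; an illustrative sketch with a hypothetical namespace name:

```yaml
# Illustrative namespace using Pod Security admission labels: every pod
# created in this namespace is checked against the "restricted" profile.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a             # hypothetical tenant namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```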
B
So it's not like open season, but... it does, I think, to Jim's point: it's hard to use, and so it's probably quite likely that, in real life, a lot of service accounts would be allowed to use a lot of policies, just to...
B
Oh, resource name, in the... yeah, you set, basically, a resource name, so you can actually name individual policies.
D
For a while, I mean, it's a lot easier to work with, but... and it kind of seems like Kustomize is almost... like they're kind of removing it from kubectl, aren't they? Or it seems like it's not as pushed as it once was. I thought I read something, I don't know.
G
So, do we want to go through the remaining comments on the blog and get that resolved? Yeah, if most of us can, you know, get the... I guess, add the approval on the PR, and then I'll circle back with the blog reviewers.
G
That's all right, I think this is good feedback, and I did, yes, I reorganized. And so let me... Tasha, I'm not sure if you have the link and want to share, or, if you want, I can share, but I might need permissions.
G
And yeah, on the PR: so, I think, yeah, we were just talking about teams versus tenants, right? And so that's one thing we should decide on, the terminology. And I'm fine with just... if you want to remove "multi" from the title, we can kind of just call it "two common tenancy models". I think that was one of your suggestions too, Adrian.
B
Sorry, I was muted, yeah. I'm gonna back down on calling it multi-team tenancy, so yeah, let's just take that off the table. I did think of maybe calling it just "tenancy models", as opposed to "multi-tenancy models", just because I think most people consider lots of different clusters to not really be multi-tenancy.
B
I've heard a couple of names used for that: I've heard, like, "multi-single-tenant", or just, like, "cluster per team", or something like that, or a cluster per something, but yeah, I don't know, it was just a thought.
D
I mean, I know a lot of people, too, where, like, the computer science definition of multi-tenancy is only hard multi-tenancy, and anything else is not real multi-tenancy. So yeah, it does put some people off, just by using that term when you're not talking about, you know, the strict definition.
G
Yeah, so I'm fine with that too, like, if you want to pull out virtual clusters from clusters as a service. So right now it just says, you know, namespaces as a service and clusters, and the thinking there was that the end user may not care whether the cluster is virtual or not. But of course there are some differences, so, yeah, any other thoughts on that? Should we stay with two, or what would we call them?
B
I think if you just... just "clusters as a service"... I mean, Fei can chime in, or Chris here, if they want to, but "clusters" and "virtual clusters" seems like a reasonable distinction. Because, to me, the big difference between those two is that with clusters as a service, or clusters, like multi-cluster, no two pods can ever sit on the same node, whereas with VCs they can; that's kind of the point. And so with clusters you get an additional level of security separation and noisy-neighbor separation.
B
So it is a sort of compromise between the two, so I don't think it needs to be drastically rewritten. It's just, like, right, yeah, right around that paragraph there, you can just say, like, "virtual clusters".
B
Fei, I take it you would not object to us calling out VCs with more prominence?
H
So, I'm okay with the current writing, but I see the point. So you want to make a distinction, which was, you know, dedicated cluster versus dedicated control plane, right? So there is a slight difference there, right? Dedicated cluster means, you know, as you said, tenant pods cannot arrive on the same node.
G
Yeah, so one way of titling it, and I think I had that at one point: you could say something... and, Tasha, you would use these terms too, like "shared cluster", "dedicated cluster" and then "virtual clusters", right? So those could be three, but then it moves away from the "as a service", so, I guess... which is fine. We could, we could.
F
That was exactly what I was going to say. I basically refer to VC as "control plane as a service", versus "cluster as a service", just because that's really what you're getting out of it. You share a cluster at the outset, but really it's just a dedicated control plane that you get to muck with.
D
And then, I think, Jim, we still need a third approval from...
D
Like, who... was it that the blog team needs to... do we get a technical editor and a grammatical editor? Is that it?
G
I believe so. I think Tim linked the process, but I don't know who, yeah.
G
Okay, so obviously, like, the first thing: once we're all settled on any changes we want to make, I can go back to them and see... yeah, check with them and others, because they'll have to go through it again, you know, because of some of the changes, and once they sign off, they'll figure out who's the last reviewer.
G
Yes, so we have Tim, this gentleman Tang, and then Karen Bradshaw, who signed off.
G
Okay, yeah, so I'll check with them and see what's going on, and just mention that we want... you know, if we can get this published before KubeCon EU, that would be awesome. And so I don't know what the exact release timeline is, but yeah, based on when that goes out, we could time it before or after.
A
Yeah, no, we're about 16 feet off the floor, yeah, but yeah, cool. Well, my mouse briefly worked, so I'm gonna stop the recording while I still can.