From YouTube: Kubernetes UG VMware 20211202
Description
December 2, 2021 meeting of the Kubernetes VMware User Group. Free-form discussion around app and service availability, mapping Kubernetes availability features and issues to vSphere.
A
We've got one agenda item for today's meeting, suggested at the end of the last meeting, and that is to hold a discussion and share ideas about best practices for high availability when running Kubernetes-hosted applications and services on top of VMware infrastructure. I did not get an authoritative speaker for this topic.
A
It was a little trouble trying to line something up after the U.S. Thanksgiving holidays, but I thought that maybe we could just have open-ended discussion, share ideas here, and focus it a little. Then maybe sometime next year, if we've targeted specific areas, I should be able to recruit expert-level speakers on those areas. I'll say a few remarks just to get things started, and then I think Miles told me that he had a few things going too.
A
You know, one of the things I've pointed out to others in the past who asked me about high availability for this is that there are often some prerequisites involving things you could maybe run in Kubernetes, but perhaps, if you're at an edge location or an on-prem site that is subject to disconnection or bad service from the cloud layers or the internet above you, one of the issues with high availability is putting in place these prerequisite features, which I'd say include identity management, a load balancer for the Kubernetes API, DNS, and IPAM. So those are some things where, for the most part, it's entirely possible to host things like your identity management in Kubernetes, but it can lead to these chicken-and-egg scenarios where you need it at a restart but it isn't available yet, because the thing hosting it isn't there. Luckily, with VMware infrastructure you may be able to host those on VMs, taking advantage of high-availability features, and it's possible that would be a good idea in some scenarios. So those are just a few.
B
Yeah, so with regard to availability and stuff like that, I guess it just makes sense to chat about what availability is in the context of a modern app on Kubernetes, and then in the context of a modern app on Kubernetes on vSphere. Because we assume, since vSphere is underneath it, that you're going to use the vSphere features as well as the K8s features. But there are circumstances where, you know, HA at the vSphere level plus HA at the Kubernetes level does not equal more.
B
It actually equals less, and harder: you know, greater operational complexity and that kind of thing. So I think it might be worth talking about when it makes sense to use certain features in K8s and when it makes sense to use some of those features natively in the hypervisor or vSphere.
B
Also maybe from that layer as well because, like Steve was saying, if you're having network disconnections and it's an edge location or something like that, you know: if you have something on that premises, say it's a retail store that has a bad uplink or whatever, and it still needs to process transactions, how do you do that while the central location is offline, and how does that sync back up over time as well?
A
So the table's open for anybody else to make comments or try to steer this in a direction you're particularly interested in; with questions or observations or recommendations, just go for it. It's a small enough group that I think we can maybe just try speaking up when you feel like it, and we'll see how that works. If we get a lot of people bumping into each other, we'll go with hand raises or something.
C
So this topic is interesting for me right now, because I'm busy writing some TKGI designs for some customers. One of the things that I noticed, and one of the reasons I thought this was an interesting topic to cover, is that you don't see much on it.
C
So what I've been doing in some of these design documents is creating chapters specifically on availability, where these worlds do meet. And it's interesting, because you usually end up basically mapping onto DRS in some way and taking the effects of that into account. TKGI is even a little bit more interesting, because it's got BOSH underneath, and BOSH has various features and functions like the BOSH Resurrector.
C
But I can't remember what the current state is in 1.4 (I'm sure Scott will know exactly) of the support for either DRS or at least the ability to do placement. I remember they introduced some kind of placement logic.
D
vSphere topology with availability zones is available in TCE and Cluster API upstream. If we're not going to talk product necessarily here and be a bit more open: it's the same thing in Cluster API as it would be in TKGm or TCE, or anything else built off of Cluster API vSphere, where there is support for availability zones, which can be either clusters or host groups.
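(For illustration: a minimal sketch of how an availability zone can be declared in Cluster API vSphere, assuming the v1beta1 CRDs; every name, tag category, and server address below is a placeholder, not something from the meeting.)

    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereFailureDomain
    metadata:
      name: az-1
    spec:
      region:
        name: dc-east                 # region: a datacenter (or cluster)
        type: Datacenter
        tagCategory: k8s-region
      zone:
        name: az-1                    # zone: a cluster or a host group
        type: HostGroup
        tagCategory: k8s-zone
      topology:
        datacenter: dc-east
        computeCluster: cluster-1
        hosts:
          vmGroupName: az-1-vms       # VM/host group pair that pins VMs to hosts
          hostGroupName: az-1-hosts
    ---
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: VSphereDeploymentZone
    metadata:
      name: az-1
    spec:
      server: vcenter.example.com     # single vCenter, as discussed below
      failureDomain: az-1
      placementConstraint:
        resourcePool: rp-az-1
        folder: k8s-vms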
D
And then you have regions, which would be a data center or a cluster. In Cluster API there is still no support for multi-vCenter clusters, which you can do in things like Rancher. I don't believe OpenShift supports that, but there are other distributions; if you were to just do pure kubeadm, you could do it. It's not in the Cluster API abstraction.
D
As of this point, nor is it in TKGI, because BOSH is also single-vCenter. But basically, what we're seeing, at least from our side, is more use of host groups, because that's something that plays well with, in the end, HA in vSphere. And the same issue that exists in BOSH exists in Cluster API, because there you have machine health checks, and there it's even worse than the Resurrector, because it actually just deletes the VM and recreates it, and it tries to delete it while it's in the middle of doing an HA failover, and a lot of fun can happen there. But in general, what we're seeing is that host groups make a lot of sense, because the last thing you want is for all of your applications to run on the same ESXi host, and that host crashes.
D
But that way, when you combine it with pod disruption budgets, you can actually deploy your pods so that they are always running on nodes that are on different ESXi hosts. And then you make sure that, even if an ESXi host were to fall, and you had, let's say, four worker nodes running on there, only one replica went down if one ESXi host goes down.
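(For illustration: a sketch, with placeholder names, of the pattern described here; replicas spread across zones, with host groups surfaced as node zone labels, and voluntary disruptions capped by a pod disruption budget.)

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 4
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone   # one zone per host group
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: web
          containers:
          - name: web
            image: nginx:1.21
    ---
    apiVersion: policy/v1               # PodDisruptionBudget went GA in Kubernetes 1.21
    kind: PodDisruptionBudget
    metadata:
      name: web
    spec:
      minAvailable: 3                   # lose at most one replica at a time
      selector:
        matchLabels:
          app: web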
D
So I think that HA and DRS are less important a lot of the time, especially in the immutable-infrastructure type of Kubernetes deployment. Say, Rancher is using Cluster API now; OpenShift is using the Machine API, which is similar to Cluster API.
D
It's really becoming the standard these days for on-premises deployments, as well as cloud deployments that aren't managed services. But making sure how to separate your pods onto different hosts has become the real key factor, and I think host groups are the way of doing that today in the least complex way.
B
The reality is that whenever you deploy stuff on K8s, if you're using things like PDBs (pod disruption budgets), or specific placement that allows things to be placed across K8s nodes, like anti-affinity or pod anti-affinity or whatever, you have to be able to assume that each of those K8s nodes is on a different physical host underneath, to ensure that that availability concept actually maps through. Because otherwise, you know, if you have DRS rebalancing stuff and you've got two K8s nodes on the one host and it goes down...
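(The anti-affinity B mentions looks roughly like this inside a pod template; labels are placeholders. Note it only guarantees separate Kubernetes nodes, which is exactly why the node-to-ESXi-host mapping underneath still matters.)

    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web
            topologyKey: kubernetes.io/hostname   # separates K8s nodes only;
                                                  # DRS/host-group rules must keep
                                                  # those nodes on separate ESXi hosts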
D
Exactly. That's always been the issue that we've seen many times with any Kubernetes running on vSphere: but we set it up to run with 10 replicas! Yeah, well, all 10 worker nodes were scheduled on the same ESXi host; sorry, your application is down. And that's really the difficulty. When you talk about the cloud, you don't need to care about this, right? When you talk public cloud, you don't care what hypervisor they're using or how it's running there.
D
You just know that you get the availability, and that's all dealt with behind the scenes. Here you kind of have to deal with that as well, which requires a lot of tagging and things like that, which brings up the really interesting question of how it was done.
D
You know, back when we did the session on VEBA in the user group a few months ago, one of the things I'd shared there was a function that we actually built internally that tags virtual machines and places them into different host groups as well. So even for systems that may not be Cluster API based with availability zones, there are ways of doing this too, in order to set up those host groups.
D
You know, affinity things. But the one consideration there that's very important to know, because most of these solutions are tag-based: be careful of how many tags you create in large clusters. It can get complex.
D
vSphere has a wonderful new REST API and an interesting SOAP API, but its tagging is very strenuous on the API and can cause issues. So you have to be very careful what tags you set and how many tags you set, and it should always be the bare minimum possible to get what you need, because it can destroy the vAPI endpoint in large environments.
B
Scott, do you know, in CAPI... not CAPI, but CAPV rather: is there any work going on with regard to the machine sets allowing you to set, for example, an affinity policy for the nodes within a machine set, or across machine sets, or anything like that?
D
So the answer is not really, but what is coming in is the ability to tag the virtual machines. So that would allow it with, like, a VM rule that you could create as a DRS rule that would just say "separate", and then you could do that tag-based, kind of. So there are ways. There is a PR that I believe was merged.
D
I think it's in v1beta1 of Cluster API, though, so that would be v1.0.0; so it's not in any products out there today from any vendor using CAPV, but hopefully soon.
D
The issue, and the reason I don't believe that's in CAPV or in any of these providers today, really, is because there's a very strong notion that availability zones are mapped so that a machine deployment is in one availability zone. There is no way to stretch a machine deployment across availability zones.
D
Creating multiple machine deployments is the idea of how that is supposed to be set up. KCP, on the other hand, the kubeadm control plane, is meant to stretch availability zones, because those are your control plane nodes, and it does have that capability.
D
Machine deployments do not, and there was actually talk about that yesterday, in this week's Cluster API meeting; it's not planned for the near future either. So tags would be the way to do it.
B
Would you view AZs as vSphere clusters within a vCenter? Open question to anyone: is an AZ a vSphere cluster within a vCenter, or is it a higher-level concept with individual vCenters, completely separate sites, utterly control-plane disconnected from each other? Because, I mean, the true definition, in my opinion, is that there should be no shared component, even vCenter, between AZs. But that does make scheduling challenging, because then how do you schedule K8s clusters onto those?
B
You know, you would have separate instances of CAPV, or whatever it is that you're using, to schedule them. But is vCenter that much of a... I suppose it is from a storage perspective, because CNS exists within it. If you have a vCenter failure, you can't provision volumes, and that would affect all your AZs. I'm just curious what people's opinions are on what an AZ is.
D
It's an interesting one; I think that's an interesting question. But in the end, I'm not sure vCenter falls under that, because the AWS API is singular across every region and availability zone, and if the API were to go down, if the EBS API were to go down, then you don't get EBS provisioning of volumes either. I think the API level can be separate; it's the infra that needs to be separated into availability zones, right? And do DRS and HA both work if vCenter is down? DRS doesn't, but HA does: HA will still work if vCenter is down. So you still have that, in terms of availability. I view availability as: is my application that's working right now going to continue to work? Meaning, can it be standalone and continue to run? And then there are higher levels of what you need in terms of functionality: okay, can I provision new volumes? Can I provision new nodes? Can I do a scale-out? Can I do all of that?
D
That's already above the idea of an availability zone; that's a system-wide issue, at least from my perspective. And I think, for availability zones, the way that CPI has viewed it in vSphere is: whatever you tag. So the three objects would be either a host group, a cluster, or a data center; Cluster API limits that to cluster or host group currently, though. And CPI does support multi-vCenter. And then, in the end, what we have to realize is that it's like the public clouds, right: there's the availability zone and the region, and the two of them have to exist in order for it to work in Kubernetes. So the region is the vCenter or a data center, and then, accordingly, the level below that would be your availability zone.
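(In practice that mapping surfaces as the well-known topology labels on each node, set by a zone-aware cloud provider; the values here are placeholders, and older clusters used the failure-domain.beta.kubernetes.io/* keys instead.)

    apiVersion: v1
    kind: Node
    metadata:
      name: worker-az1-0
      labels:
        topology.kubernetes.io/region: dc-east   # region: vCenter or datacenter
        topology.kubernetes.io/zone: az-1        # zone: cluster or host group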
C
I think it also kind of depends on what the risk profile is that you're trying to mitigate against. I mean, this is true just in a general sense. A lot of customers we're seeing are coming from the infra side, right? So they may not even be familiar with AWS's concept of an availability zone, and they view things very much from, well...
C
You know, at least in terms of what we're trying to protect against, it's either a single host failure or it's a data center failure. Very rarely do I see discussion of anything in between, right? There's usually no talk of a rack failure. Within a campus you might have a data center failure, and you've still got somewhere else on the campus, but in the Netherlands at least, you're usually talking about a different city.
C
So when it comes to mapping these concepts, the constraint there immediately becomes vCenter. But you're right, Scott, because there is a difference between "is my app still running" versus all these other ancillary functions that I would be used to from a vSphere environment. The funny thing is, again, most customers I've talked to don't consider that difference. They simply say: look, you know, yeah, the application needs to keep running, but to be able to provide the service that we as infra are providing...
C
I also need some of those ancillary functions to function. If they're not functioning, then my service is not functioning. And depending on the environment, I mean, some of that stuff can be very, very critical. It kind of depends. I mean, I can imagine plenty of kinds of processes that rely on more than just a pod being up, but require some kind of storage provisioning to be available on demand.
C
You know, on a minute-to-minute scale, and all the other stuff you would lose. So it's kind of hard to map. What I've seen with TKGI, at least, is that it's very often within one data center.
C
The scale of customers in the Netherlands is usually not very big, so we're doing host groups within a single cluster, or it's separate clusters, but there aren't many ESX hosts. And then I do wonder: what exactly are they protecting against?
B
The way we used to construct vSphere clusters was either completely arbitrary or, when I worked at an MSP, was based on the hardware that we were using: so a cluster with our R720s, a cluster with R730s, a cluster with R740s, and they were managed separately because we had different patching baselines for them, and firmware.
C
Against exactly that. And this is where it becomes a conundrum, because one of the most attractive features that the vSphere ecosystem has when it comes to data center redundancy is a vSAN stretched cluster, yeah. So naturally people will ask this question: hey, can I install Kubernetes, right, on a vSAN stretched cluster? And of course the answer is: well, I mean, technically it works, but it's not supported. And there's a wonderful discussion in the vExpert Slack on this.
C
Where I basically tried to explain to a few people: look, this is a very logical question people are going to ask, especially coming from the infra side. It is a sexy feature; it's a great feature. It's one of the best availability features VMware probably has in a product. Of course people want Kubernetes on it; it's the way we've always done things, exactly. So yeah, and then it's up to me to say: well, look, I mean, it'll kind of work, it'll work from one perspective, but it won't... it doesn't.
B
Yeah, I mean, I've been neck deep in these conversations for a while now. Stretched cluster is our number one requested feature for TKGm and TKGS, and I can understand why: because we talk to the vSphere crowd, and that's the way they've always done things. K8s is just another workload; it should run on a stretched cluster. And, like Robert was saying, yeah, it'll run on a stretched cluster, you can install it, but the operational complexity and the failure modes are so complicated and so contrived that you would...
B
You know, due to etcd bugs, kubeadm bugs, CAPI bugs, the whole way through the chain. Because, simply, K8s was never designed to be failed over like that. It is designed for applications that know, or rather, more critically, don't know, that they are part of different data centers or whatever; these should be completely decoupled.
B
So there are lots of good reasons why we don't support it. That doesn't stop it being our number one requested feature, because that's the way people have always done things.
D
Is it above datastore clusters, or CSI?
B
Right, exactly. But this is the thing, right: I find that whenever I challenge people as to "okay, why do you want to run on a stretched cluster? For what reason? What has the app team or the platform team told you that says you need to have a stretched cluster here?", there is no answer to that, because they haven't asked and they haven't been told. They do not know that this is a requirement, because it's not a requirement.
C
Yeah, and of course people forget what stretched cluster was invented for, right? It's a storage play, mostly. It only becomes available as a concept if you have storage replication in some form. So, you know, stretched cluster was a thing before vSAN was a thing, right? I did stretched clusters running on EMC VPLEX, and that worked fantastically.
C
You could run Kubernetes on that as well; you'd have the same problems. But of course the irony of that is that Kubernetes, you know, doesn't care about storage in that way; the relationship it has with storage is very different. So, actually, this is where my Kubernetes knowledge starts to break down a little bit, simply because I haven't done enough with it in practice.
C
So when we're talking about persistent volumes, right: how does this map onto this three-way AZ concept? Because, again, what I'm used to, what I've seen, is that the storage is always shared, because I'm always doing it within one data center. But if you were to actually go with the Amazon model, how does that work then with persistent volumes, if the storage is also three separate layers? Or is that currently a no-go?
D
Yeah, so that means that for all of your pods you set which availability zone they're going to run in, and wherever the persistent volume is, that's where the pod's going to run. So if you have nine worker nodes and there are three in each availability zone, and you bring up a pod that needs a persistent volume and it gets scheduled in availability zone one, then that pod, from then on, or any pod that is going to use that persistent volume, will be in availability zone one.
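(A sketch of the storage side of that behavior: with late binding, the volume isn't placed until the pod is scheduled, so pod and disk land in the same zone. The provisioner shown is the vSphere CSI driver; the topology key and zone values are placeholders and vary by driver version.)

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: zoned-block
    provisioner: csi.vsphere.vmware.com
    volumeBindingMode: WaitForFirstConsumer   # bind after the pod is scheduled
    allowedTopologies:
    - matchLabelExpressions:
      - key: topology.kubernetes.io/zone      # key depends on CSI driver version
        values: ["az-1", "az-2", "az-3"]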
D
What you can do in, like, a StatefulSet, though, Robert, is set a distribution of the StatefulSet so that you get, say, three replicas of a RabbitMQ cluster: you could have one in each availability zone, so you would have one persistent volume in each availability zone and one pod always running in each availability zone. That all exists, and it does make...
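(A minimal sketch of that RabbitMQ-style layout, one replica and one volume per zone; all names are placeholders, and the storage class is the hypothetical zone-aware one sketched above.)

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: rabbitmq
    spec:
      serviceName: rabbitmq
      replicas: 3
      selector:
        matchLabels:
          app: rabbitmq
      template:
        metadata:
          labels:
            app: rabbitmq
        spec:
          topologySpreadConstraints:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone   # one replica per zone
            whenUnsatisfiable: DoNotSchedule
            labelSelector:
              matchLabels:
                app: rabbitmq
          containers:
          - name: rabbitmq
            image: rabbitmq:3.9
            volumeMounts:
            - name: data
              mountPath: /var/lib/rabbitmq
      volumeClaimTemplates:
      - metadata:
          name: data
        spec:
          accessModes: ["ReadWriteOnce"]
          storageClassName: zoned-block    # hypothetical zone-aware class
          resources:
            requests:
              storage: 10Gi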
D
I will tell you: availability zones in Kubernetes make the life of a developer terrible, because they need to write the YAML that deals with availability zones. Like, Kubernetes is a pile of YAML; now add 15 piles of YAML on top of that in order to deal with availability zones correctly. But they have that capability. Then, yeah, I think it depends.
B
If you are using a storage system that is a cloud-native storage system, say Cassandra, Redis, MinIO, whatever: self-replicating, understands AZs itself, can do its own replication, can, you know, synchronously or asynchronously copy data, then you do not need that, because then you just use the local disk that's on that site. You do not fail over across AZs; you fail over within an AZ, right? The persistent volume and pod concept is designed to protect against a host failure within an AZ, but you don't ever fail over across AZs.
C
That was exactly my next question. Because, again, what we're seeing with our customers, so, you know, just seen through our lens, is that a lot of people are just trying to play with Kubernetes now. So a lot of what they're playing with is non-cloud-native application stacks; they're monoliths being put in Docker.
C
Who has which responsibility when it comes to exactly this layer of platform management? Because, you know, some of these developers are just going to start with Kubernetes now, and to unload all of these extra requirements on them, which translates to, you know, like 15 extra YAMLs, is a huge burden. One of the things I'm wrestling with is: where do you split the difference between a platform team and a developer team?
C
How much of this can we unburden the developers of by implementing it in policy in some way, you know, taking care of some of that abstraction layer, basically acting like you're in Amazon, right? Because this is exactly the kind of stuff the developers did not want to deal with and did not want to have to deal with: thinking about storage, storage topology, thinking about AZ topology. They just want to deploy their thing.
C
Of course, we can force them in certain directions with policy, and by just making agreements with them, right, and just making a rulebook that says: this is how you deploy an app in production; thou shalt not do it a different way. But that's really tough, because the amount of experience out there to do this properly is still so limited. It's something I'm really, really struggling with.
B
Sounds like you need a platform, honestly. This is why you build a PaaS, or this is why you buy a PaaS: you've got these concepts, these availability concepts, these distribution concepts, whatever it is, that you want to be part of all of your apps, and it's boilerplate.
B
Frankly, you know, it should just happen. So you build or buy a PaaS that takes the container and, you know, wraps the YAML around it and deploys it for you. I don't think the developer themselves, or the person that's interacting with, you know, building the container, is the person that needs to be responsible for building the K8s manifests and deploying them as well. Because, like you say, if you're telling them "here are all these requirements", you're relying on them to follow the guidelines and all that kind of thing.
C
And then, if you think about the storage: what are the storage layers that do this properly, that are AZ-aware, that do the storage replication for that, that create, you know, an intermediate storage abstraction layer? Well, you know, things like Rook and Ceph, toolsets like that, MinIO. Who's going to manage them, right? So again, do I place the burden of suddenly learning all those extra infrastructure-ish tooling layers...
C
Do I place that burden, again, with the developers or with the platform team? And again, no one knows where that responsibility should be. I mean, we saw the same thing in Cloud Foundry, where it's like: all these additional services you need around Cloud Foundry. Oh, I want my MySQL broker hosting MySQL as well; suddenly you have to be a MySQL database admin.
C
Oh
and
a
red
is
admin
and
a
rapid
mq
admin
by
the
way
and
all
this
other
stuff-
and
the
same
is
going
to
happen
now
with
platform
teams
that
are
rolling
out
kubernetes
for
their
developers.
Same
questions
are
being
asked,
and
so
it's
it's
either.
You
know
so
so
you
end
up
with
the
just
the
plain
yellow
you
I'll
also
end
up
understanding.
These
kind
of
you
know,
services
layers.
C
It really makes the barrier to entry so much higher, because it really only makes it possible to adopt Kubernetes at a certain scale, when you have the people and the expertise to actually be able to do a proper platform deployment. And, like, yeah, I mean, it would be great to have a PaaS today that did all that for me; we're not quite there yet. And I mean CF, you know, PCF especially, was great in that way, but you know that's kind of going away at the moment.
C
I understand the intention there, but we're not there yet.
C
But even if we were, right: even if you bake a lot of those layers into a PaaS, you boilerplate it, you build up all these extra features inside of, you know, what we might end up calling the new version of what is now TAS...
C
Even
then,
you
know
you
still
need
the
expertise
somewhere.
You
still
need
to
be
able
to
troubleshoot.
You
used
to
be
able
to
manage
all
those
layers
and
the
burden
that
places
on
infrastructure
teams
well
on
platform
teams,
if
they're
coming
from
infrastructure,
they
understand
the
bottom
half
of
the
stack,
but
they
are
clueless
about
the
top
half.
If
they
come
from
the
developer
side,
they
understand
the
top
half
of
the
stack,
but
the
clue
there's
about
the
bottom.
A
Yeah, I think you made the point before in the user group, exactly, you know: these are likely different people, so if something isn't working right, you're going to have to bring in a team and go through the meeting cycle of figuring out where the problem is, maybe with observability challenges. So you've talked about the difficulty of 15 different YAML files.
A
But that's just even the act of bringing home the puppy, not maintaining it. You know, that gets it running in the steady-state, everything's-working condition, but really the harder problem is probably the troubleshooting when, inevitably, something goes wrong. And are there any big-picture observability solutions? There are a lot of products that contend that you steer all the logs here and some kind of AI machine learning is going to help you out to diagnose issues. Sometimes they save you; sometimes they don't.
B
I think that, yeah, part of the challenge is that a lot of this is custom, because a lot of it is app-centric. So it inevitably ends up that you need to have custom monitoring and custom logging added in, and you need to build custom dashboards. And realistically, you know, K8s is more of something for when you're building a SaaS app that you're going to resell than for building an internal website, for, you know, a database front end or warehouse management or something like that. It's some of those things...
B
Yes, monoliths are bad, but we have, you know, answers for failover and HA for monoliths that don't require a massive team of people to look after them. And I think, realistically, if you're going to use Kubernetes, you're probably more looking in the area of "we are going to sell software", not, you know, issuing it internally to internal customers. So, I mean, I could be completely off the ball there.
D
I think in terms of who Kubernetes was aimed at, you're completely right, or who it's right for. I think that, unfortunately, in today's world in high tech, everything is buzzwords. So: we're going to the cloud. Why? Because we need to be in the cloud. Why? Because the CTO said we should be in the cloud. Why did he say that?
D
Because it's a buzzword. Same thing with Kubernetes: we have people that are moving applications that are monolithic into a container, the entire monolith into one single app, not following any of the twelve-factor app principles or whatever, just to say "we are containerized, we are cloud native". No, I'm sorry, you aren't; you're in a container running a monolithic application.
D
Unfortunately, though, that is the situation we're in in the world, and that's where a lot of these issues come from, because availability zones are easy when you're dealing with twelve-factor applications: eleven of the twelve factors don't need any persistence, and you're all good. The issue is when you're running applications that are not built for this in Kubernetes, and then, you know, that's where I think things like Kyverno, or, thank god, finally, Gatekeeper...
D
...which just added mutation policies. There are ways, even without a full-blown PaaS, that you can bake in some of these things: by mutating any deployment that comes in and adding a pod disruption budget, automatically adding to a StatefulSet the separation amongst availability zones, and setting up anti-affinity rules, right? There are ways to do this, but they're going to be specific to the customer's environment, or the environment you're deploying into, because they may have three availability zones...
D
They may only have one. So the changes that you have to make are different, but there usually is a relative standard in an environment of what that best practice should be. The issue is getting that documented, because I think the knowledge exists out there.
D
It's just that no one has actually sat down and said: okay, these are the ten general topologies of vSphere environments we see; what are the best practices? That doesn't really exist in this world of, you know, policies, at least. But I think mutation is definitely a way of getting around this, because you don't put the burden on the developer.
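(A rough sketch of the mutation idea, assuming Kyverno's v1 policy API; the policy name, the label convention, and the spread settings are all assumptions to be tuned per environment, not anything specified in the meeting.)

    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: default-zone-spread
    spec:
      rules:
      - name: add-topology-spread
        match:
          resources:
            kinds:
            - Deployment
        mutate:
          patchStrategicMerge:
            spec:
              template:
                spec:
                  topologySpreadConstraints:
                  - maxSkew: 1
                    topologyKey: topology.kubernetes.io/zone
                    whenUnsatisfiable: ScheduleAnyway   # soft, so single-AZ sites still schedule
                    labelSelector:
                      matchLabels:
                        # assumes incoming pods carry an "app" label
                        app: "{{request.object.spec.template.metadata.labels.app}}"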
B
Well, I was actually just thinking that maybe, you know, when it comes to Kubernetes and operationalizing it and coming up with, like, a set of guidelines, it's like developing software itself. You know, we've got to have an MVP, we've got to start somewhere, because it's very easy to rabbit-hole Kubernetes and go, okay...
B
The
ultimate
design
is
completely
separated
multi-az,
you
know
it's
got
opa
and
gatekeeper
and
you're
using
you
know,
services
toolkit
or
whatever,
to
have
separate
clusters
for
different
services
and
all
these
kinds
of
things,
and
you
can
think
about
it
for
literally
years
and
not
actually
do
anything.
But
I
think
like
having
a
starting
point,
which
is
right:
we've
got
a
single
cluster.
B
What do we do within a single cluster to provide availability? PDBs, anti-affinity, StatefulSet affinities, all that kind of stuff, right? And we go from there, and we say: okay, that was one cluster; how do we solve that for three clusters, or whatever? You know, the same maps across three clusters, and we just build it up bit by bit. And then we can look at more perfect scenarios, like: okay, instead of deploying stuff all into these, yeah...
B
You know, single clusters: how about we have services clusters, and we have abstraction clusters that provide an API that you request a service back from, and you get, like, a CF-style binding, right? I want my SQL database, or, that's a bad example, I want a Cassandra database, for example, and it gives me a standard database that provisions somewhere else on a different cluster, but what I get back is the URL. You know, so I think it's one of these things that we can build up over time.
A
Let me ask: what would be a good form for this? Somebody threw out the idea that there are maybe ten different permutations; is that within a reasonable ballpark? And would you start it by just sort of classifying apps, or get very specific, like "here's a best-practices white paper for Cassandra on vSphere for high availability", and focus it at the app level rather than at kind of generic characteristics of an app that would potentially apply to a bunch of them? And also, what would the sell-by date be on these, if we compose these white papers or whatever they are? You know, is it like bringing home the puppy, where they go stale in a year? And then, you know, bad information is sometimes worse than no information, if things keep moving enough that they go bad, and we'd really be signing up for maintaining them in addition to doing the first edition.
D
The availability zone code in Kubernetes is relatively static and isn't changing much. There isn't much change happening around that, because all the knobs exist, and Kubernetes is not about building higher-level things above those knobs; it's about exposing every single knob, and you figure out, or bring, a platform that best suits those knobs. I don't think that a white paper like this should be about, you know, Cassandra or Mongo or anything like that.
D
You know: single rack, multi-cluster, multi-vCenter, multi-data-center type topologies, and then, accordingly, set out with those three application profiles what the best practices would be. And the benefit of doing something like that, also, is that I think a lot of this knowledge would be helpful for AWS and Google too; it's not just a vSphere-specific thing when you start talking about, okay, what are the best practices for pod disruption budgets and anti-affinity rules when running a persistent application that doesn't deal with replication itself, running on Kubernetes in a multi-AZ environment?
B
Yeah, I think it could actually be even more generic than that, Scott, as well. And, like you say, a lot of the K8s APIs are v1, or are stable at this point, and are not going to evolve in any real significant way, so, like, PDBs and anti-affinity for pods and all of those kinds of things, like the region and zone APIs...
B
Those are all fairly static, and I think we can be generic about it and say: what is the best way to distribute an app, and use PDBs and use anti-affinity to ensure availability, for, like, those stateful workload types that Scott mentioned, non-replicating and self-replicating?
B
What is the best way to run those? Because that, again, like Scott said, is generic to all clouds; it doesn't matter if it's on vSphere. If there is a right way to run a non-replicating stateful app inside a cluster using PDBs, anti-affinity, StatefulSets, that is generic across all K8s, on cloud or on-prem or whatever. So if you take it, you know... you eat an elephant a bite at a time, little bits and pieces like that, and then you can say: right, what is the best way to bring that all together?
C
What I do know for a fact, to bring it back to the VMware world, is that there's nothing out there today that maps that onto the VMware solution stack. And that's kind of where my first thought goes, because I'm thinking of all the infra teams that are currently being asked to go down this Kubernetes path; they're the ones looking for that.
C
So
you
know
it's
more
obviously
more
vendor-centric,
but
it's
it's
kind
of
the
world
we're
more
in,
and
so
I
you
know
that
one
of
the
things
I'm
trying
to
achieve
with
with
the
customers
I'm
working
with
now
is
to
create
it
create
something
like
this,
but
for
the
so
it's
a
very
it's
very
constrained
scope
right.
C
It
is,
it
is
a
best
practices
set
for
tkgi
and
tkgs
and
dkjm
for
the
kinds
of
customers
that
we
deal
with,
which
are
pretty
small
scale
stuff,
because
I
need
to
be
able
to
explain
it
to
infra
teams,
customers
and
I
need
to
explain
to
info
consultants
on
our
side,
so
it
has
to
be
so
so
that
is
what
so,
I'm
kind
of
already
doing
this,
but
it
is
a
very
constrained
scope
and
it's
very
vmware
oriented,
obviously
and
broader
than
that.
C
I totally agree, by the way. If there's one thing I have noticed, just broadening it to the Kubernetes community, from what I've seen of it, it's that not enough thought is being put into exactly this kind of stuff. And if the thought is going into it, it is coming top-down, right? It is very much from an application architecture point of view.
D
And I think, what is it they say: the two hardest things for an open source project are finding a name and writing documentation. In the end, there's a reason that SIG Docs in Kubernetes is the SIG that has been requesting more people to join for the longest time, and I don't think they'd had any new people in the last, probably, decade. No, sorry, to all the new SIG Docs people...
D
It's awesome that a few people just joined now. But SIG Docs, and anything that has to do with documentation, is always the hardest part in any tooling system, especially something as wide and open as Kubernetes, because it really is the platform of platforms, as Joe Beda likes to call it, right? It isn't a platform for applications; it is a platform for platforms to run applications, like Knative, like OpenFaaS, like dozens of other systems out there that are built to make it easier, because Kubernetes is the hardest system.
D
I think it's just an unsolved issue, because no one has been able to tackle the documentation, because usually the people with deep enough knowledge to actually say what the best practices are, are probably the people that are the worst at writing documentation.
D
A.k.a. engineers. Engineers are great at writing code and terrible at documenting, as a generalization, I think. And, you know, it's just a hard thing, especially in such an evolving world like Kubernetes, where every few months we have a new release, with, what is it, I think 52 new features that came in in 1.23, which is going to be released next week. It's just insane, the amount of new capabilities we're getting.
A
You'd think the documentation is cheap in terms of human resources, but it never is. I mean, it's like coding, where you can't just code it and assume it's good. You really have to test that documentation, or have written it while you're cutting and pasting from an actual working example, because there's nothing worse than bad documentation that leads you down a path that just flat out doesn't work.
B
Yeah, that seems really interesting as a concept. I mean, I would love to contribute to something like that, you know: best practices for a single K8s cluster, you know, "my first K8s cluster" or whatever you want to call it, and you have your stateful app that doesn't do replication. Okay, right, we'll cover that, because that's where most of the VMware customer base is going to be.
B
And we'll say: this is probably the best that you can get, but, you know, there are a lot of drawbacks. And then we can illustrate other examples, like: okay, this is if you take that a step further and you make your persistence layer nicely self-replicating; what's the best way to do that, and what advantages do you get from that over the previous model? It's not a trivial change, but if you do make the change, here are the benefits. You know, that kind of thing. It's just a colossal amount of work.
A
So, you know, I'd love to see it, but has anybody got ideas for how we'd cause it to come about? I mean, it's fairly trivial, I think, under the auspices of this user group or Kubernetes in general, to publish the things once they're written and get them exposure, maybe get blogs and presentations at conferences. But the hard part is getting the actual documentation written, testing it, and, you know, finding people who are authoritative to do this documentation in the first place.
C
That's what I'm going to start with within my company: just, you know, create a standard set of guidance across these topics, saying, okay, these are the layers involved. You know, it's like the minimum set of policies you can introduce for any Kubernetes PoC you do that will enable these and these and these gates, and it maps onto these and these things in vSphere.
C
You know, just within my company. At some point, you know, if I have the energy for it, I'm going to stick it on Git, on GitHub, and then it'll be there, and I'll probably do a post about it, and it'll probably make a brilliant, you know, VMUG session. Well...
A
Maybe do it, because so far Miles and I, under the auspices of this user group, have been getting a maintainer-track session at the KubeCon events, but there's nothing wrong with members taking over that role or doing joint presentations. I'd be happy to, you know, put that on a future KubeCon as the submission for this user group's presentation at KubeCon.
A
If you wanted to get a forum to get a big reach, the nice thing about those is, I think, that the CNCF itself actually ends up publishing the deck and the video, and it gets really good search engine placement by virtue of being underneath the CNCF, rather than just, you know, some random blogger out there, so you'd reach a lot more eyeballs.
A
The deadline for KubeCon Europe is, I think, December 17th for a regular CFP, and then for maintainer tracks, I'm guessing they haven't actually even put out the invite for those yet. But if history's a guide, that's going to be January-ish in terms of when the deadline is. So you can't go in there last minute and land one of those KubeCon sessions.
A
So let me know if you do want to go in that direction, or anybody else on this call, or anybody watching the video of this meeting afterward: reach out to me, because even if you write something up internally, I'd be happy to sign up and do some work to kind of genericize it, you know, proofread it, and maybe turn a written document into a PowerPoint deck for presentation.
A
Maybe, if there are multiple people doing that, we could do this as a panel at a future KubeCon, or any of the other tech conferences, for that matter. Yeah.
B
We are going to make this available on K8s, and this is how we make it: we have a Git repository that shows the different iterations. Just deploy it as a pod; put it into a Deployment, a ReplicaSet, a StatefulSet, a pod disruption budget, and just piece by piece build it up, you know, layer by layer, adding more and more complexity, so people can see the different iterations and what pros and cons each layer has. Maybe that's the approach, because that's nice and iterative, small bites that we could take. Yeah, I agree.
D
Yeah, I agree completely, and I think that if we do want to go down that white-paper type of, you know, best-practices docs or whatever, I would be more than happy to join any, like, working group or whatever and try and help out there, and all of that. I mean, maybe we can set up a kind of working group on a doc like that, if there are enough people that are interested, to, you know, get someone who's an expert in vSphere, someone who's an expert in Kubernetes, someone who's...
D
You know, whatever, in the availability world of Kubernetes, even if it's not on vSphere, but someone who really understands vSphere. I can get people to just sit together in a room, or a virtual room on Zoom, and kind of brainstorm all these things and talk about, like, what you were mentioning, Miles.
D
I never even thought about that issue with the stretched cluster on vSAN, right? But once something like that comes up, or what the issues are with storage DRS clusters, which I know the CSI team is working on, like all of these different things that have to do with availability and whatever, all those things, you bring them up, and then, you know, we can come up with what the best practices are and then start to document.
D
But I think it's really just a bunch of brainstorming sessions that have to go on, really talking about the edge use cases: oh, and this happens here, and what do you do in this case? Or what are the failure modes of host groups, and what are the benefits or negatives of that? Or, I mean, what is the best... I've seen two different topologies with clusters: either people do a cluster where every host is in a different rack...
D
I've seen clusters where every host in the cluster is in a different rack, usually in very large environments, so that the cluster doesn't have a single point of failure, and then a VM, a monolith that's running in there, can always fail over to a different host. Or do you design your application to be spread across multiple vSphere clusters, where each cluster is actually like a traditional availability zone, where it's a single rack, right? And you have these different topologies, and each one has a plus and a minus. What works better for Kubernetes?
B
We could bring it up in the modern apps expert group as well, because, you know, there would be people that are keen to contribute in there too, I'm sure. Definitely.
A
Okay, I think we're reaching the time limit here, but it's been a great conversation. I think the summary is that we've got a plan to make a plan to do something about this, but the plan's not there yet; very good to follow up on this. One thing I want to bring up before we close is that the scheduled next meeting in January is the week of January the 1st. I'm not sure how many people are even likely to be working that week, and whether we should consider just canceling it and having the next meeting in February.
A
What I find is that half the people show up at the original time without having noticed the move, and then hardly anybody shows up at the moved date, but there's no harm in doing it; I'm willing to. Okay, even though I'm not sure it's unanimous: do you want to just move it, or should I put a poll out? Maybe a broader group would respond to that.
A
The choices will be, I guess: hold it normally, cancel it altogether until February, or move it. And I'll put it out on the Slack channel. Okay, with that said, I'm going to call this to a close. I'll upload the recording sometime. Thanks for coming; you've been great. Some great conversations and great ideas contributed. Bye.