From YouTube: Kubernetes UG VMware 20220505
Description
May 5, 2022 meeting of the Kubernetes VMware User Group with discussion of upcoming events at KubeCon Europe, alternatives to the way user groups operate under the Kubernetes project, and discussion of the VEBA event broker and limitations when this is used with multiple vCenters.
A: The agenda for today's meeting so far is kind of light, but we're going to talk about upcoming events at KubeCon Europe, which is about a week away, maybe a week and a few extra days depending on whether you count the co-located events as part of the official program or not. Then we're going to move on to a proposal that's coming from a few people on the Kubernetes steering committee about changing the way user groups operate underneath the Kubernetes project.

A: At this stage these are just proposals, but I wanted to make sure people are aware that this is going on; you're entitled to an opinion and to influence how this comes about. So I'm very much interested in people's thoughts on this. You can give them directly to the CNCF and the Kubernetes steering committee, but I'd be happy to pass things along too. And then there's the talk that this group will be giving at KubeCon Europe.

A: In anticipation of that talk, I wanted to drop in a little bit and discuss zones in Kubernetes, which is a feature that, among other things, can be used to achieve high availability. The reason I'm talking about it now instead of during the session itself is that the session is only allotted 35 minutes, so we don't want to spend a lot of time just covering zones and have very little left to cover the specifics. I thought maybe by dropping a little bit of discussion about that here,

A: in this meeting, we can kind of lay some groundwork that would be useful to people. So with that said, the first item is KubeCon Europe.

A: There is a session I'm doing with Michael Gasch on optimizing the experience of using Kubernetes on vSphere by using event-driven automation, and Michael is going to cover the free, open-source VEBA appliance. A lot of the VMware infrastructure already emits events that can be monitored through APIs, and what this appliance does is capture these things and publish them in the form of something called CloudEvents.

A: And I guess, if you're really new to this: events can be notifications that something happened, or they can be data. If they carry data, the expectation is that it's a small amount, but they inform you that something happened, or here is the state of something. CloudEvents is a CNCF specification that attempts to publish these in a standard way, and through that standardization make it easier to write apps that consume them. Because, you know, if they come at you in 25 different forms...

A: It can be very tough to write the code behind that, compared to if they were standardized, so the CNCF backs CloudEvents. And then there is an open-source project that is very popular, and growing in popularity, called Knative, that is in a position to consume these events. By the way, it can also emit events, but for purposes of this, the consumption side is probably more interesting. So Knative can consume these events, act on them, or forward them.

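For readers new to CloudEvents, here is a minimal sketch of what such an event looks like on the wire. The envelope attribute names (specversion, type, source, id, and so on) come from the CloudEvents spec; the specific values and the vSphere payload shown are illustrative assumptions, not verbatim VEBA output.

```yaml
# A CloudEvent as an event broker like VEBA might publish it for a vSphere
# event. JSON on the wire; shown here in YAML form. All values are made up.
specversion: "1.0"                        # CloudEvents spec version
type: com.vmware.event.router/event       # assumed producer-defined type
source: "https://vcenter.example.com/sdk" # hypothetical origin vCenter
id: 1f2e3d4c-0001                         # unique per event
subject: VmPoweredOffEvent                # underlying vSphere event name
time: "2022-05-05T19:00:00Z"
datacontenttype: application/json
data:
  FullFormattedMessage: "vm-01 on host esx-01 in DC-West is powered off"
```

Standardizing on this envelope is what lets a consumer such as Knative route or react to any producer's events with the same code.
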
A: You know, manipulate them into new events, et cetera. Michael is going to demo a way to monitor things like configuration changes, maybe health changes, of vSphere infrastructure, and react to them up at the Kubernetes level. Kubernetes has in place a system to utilize these zones, but it was originally designed for the major public clouds, which already implement pretty opinionated forms of availability zones.

A: With vSphere, one of the issues you've got going on is that it is very unopinionated. It's up to you, the user, to define what constitutes an availability zone in your specific data center, or whatever location you have this in. So perhaps, in a moderate-sized implementation, a zone could be defined as a rack; maybe in a bigger installation it could correlate to an aisle.

A: The goal of defining these zones is that you declare that this constitutes a zone because you think it has very few or no common points of failure between one zone and the next. Now, the other thing you have going on with vSphere infrastructure, in terms of flexibility, is that there is actually no requirement that things line up.

A: Your compute resource could have a collection of common or uncommon points of failure, and your storage could have the same thing going on, as well as your networking, and it might be convenient to align those one-to-one.

A: But the vSphere platform doesn't force you to do that, and there might be some reasons why you don't have that going on, either economic constraints or something else. What Michael is going to cover is monitoring these kinds of things, because they can go in on day one in one state and then something happens: either a failure of a hardware device, or somebody goes and changes a configuration, and that changes

A: what's going on at the infrastructure level, and you'd like to have this recognizable up at the Kubernetes level so that the scheduler does the right thing. So anyway, that's the coverage of that topic at KubeCon. Number two is, a few people are proposing that, for the people who are there in person, we have a physical meeting, probably over dinner if the size isn't too enormous. Michael Gasch proposed that it be Friday evening, simply because it won't conflict with other events going on at KubeCon.

A: You know, they have the standard receptions and things going on, but Friday would be the last day of the conference, and his thought was that most people probably stay that extra evening rather than fly back right away. But I don't want to leave anybody out, so if somebody does have to return before Friday evening, we'll entertain putting it on some other day and time. So, thoughts on that? I think before the official start of this, Kyle said that he doesn't think he can make it, but David, I think you'll be there, and Scott.

B: Yeah, I will be there. I know I can't do Friday night for religious reasons; I won't be near there. But yeah.

A: Another day then; I'm completely flexible, so I'm open to anything, and we don't necessarily have to decide it right here in this meeting. But if you want to get back to me through a DM with a list or something... because I sure would like to, you know; we haven't had the opportunity yet to meet in person, and I think Michael wanted to do that too. So Friday was just a suggestion, not a request.

A: Okay, I think maybe what I'll do is compose some kind of document or spreadsheet of the list of attendees, and we'll use that to solidify a specific time and date and place, and also allow people to specify their preferences or requirements. If we turn it into a dinner, you know, help us choose where; that might be awesome. Anybody else got any topics related to KubeCon? If so, chime in now.

A: Some people on the steering committee are making the observation that perhaps these user groups could be structured better, and that perhaps the scope should be broader than just Kubernetes alone, which in fact is the way we've been operating, I would contend. We're talking about all kinds of things, like load balancers, and Knative is going to be in this upcoming session. And the proposal is that perhaps the user group would be better placed under the CNCF.

A: You know, that's a different repo on GitHub; they technically have a different organization. The idea would be to broaden the scope. They're also wondering if breaking it down by the cloud provider you're running on was really the best way to have attempted to split this, just because vSphere appears to be the only one that actually formed a group.

A: During the meeting where they came up with this a few years ago, I think IBM thought that they would host a user group for IBM Cloud, and AWS thought they would, and I think in some cases they actually started the PR to form it but never followed through and completed it, so those groups never went into operation. And one of the things on the table was to perhaps compose user groups by use cases. Like, you know,

A: if somebody is a user and is predominantly engaged in machine learning, maybe there should be a machine learning user group. This is just a proposal, but I think there will be some discussion of this going forward as to how they would go about it.

A: Another thing that has come up: this is a user group that's essentially worldwide in scope, but before COVID hit, the Kubernetes project had a lot of regional groups operating as meetup groups under meetup.com, and from my observation, at least in the region of the world I live in, those have died off. But they would like to start operating

A: user groups at even local levels, because the thought is, I know in Los Angeles I would go to that Kubernetes user group, and a lot of this is even the social interaction. Just like going physically to a KubeCon and physically meeting people builds a little more camaraderie, the opportunity to have these regional groups where people are meeting face-to-face is desirable, and they don't really have a good structure to do that. Because, once again, they have these Kubernetes meetup groups, but in reality the CNCF landscape has expanded to the point where nobody really uses Kubernetes in isolation; what you're going to end up talking about on a practical level is the whole gamut of things, from the Kubernetes orchestrator

A: to, I don't know, load balancers, disaster recovery tools, the whole gamut of things where you might find projects hosted under the CNCF repo. So, I don't know, I've talked this out, but does anybody else have any thoughts on this? What I'm saying is, I'm trying to let you know this is going on, so that if later in the year there's some announcement that this got refactored, you won't be caught by surprise.

B: As you were saying, I think that the scope needs to be broadened, but I do think that there still needs to be some grouping, whether it's not vendor-specific and you call it an on-premise user group, or you call it an air-gap user group, or you call it a vSphere user group, or you call it whatever.

B: The actual breakdown is that, in the end, the differences between Google and Azure and AWS are much smaller than the differences between VMware vSphere on-premises and any of those, because all cloud providers in the end have a built-in load balancer. All cloud providers have availability zones and regions. All cloud providers have an API for the storage that you use, and things are very, let's say, standardized in that world up there. Down here in the data center, it's a bit different.

B: Now, that doesn't mean... I think it would make sense. I mean, there's Nutanix, there's VMware, there's a hundred different on-premise solutions, and having an on-prem user group, let's say, could be very interesting, right, having it kind of be targeted at that.

A: Yeah, I fully agree. There actually used to be an on-prem group in Kubernetes, but the organizer changed jobs and the meetings kind of ceased. I mean, there's work involved with doing these. But I used to really enjoy those meetings; they were done by somebody who worked for CoreOS. But okay,

A: I think you're on to something there. Really, if there were those big differences in Kubernetes across the cloud providers in the public clouds, Kubernetes screwed up, because the whole premise of it is that it abstracted those differences out. But yeah, when you manage your own, that's a different beast, and I kind of like opening it up a little bit broader than just, you know, one vendor's platform.

A: I know, having been in the IT industry long enough, that there are other models for user groups, going back to the old USENIX UNIX user groups, and in VMware they have the VMUGs and things. One of the goals is always that you want users to actually try to manage it, but for many users it's tough to justify to their employer spending the hours it takes to recruit speakers, actually manage meetings, go to conferences, and put on presentations and things.

A: So, in a perfect world, everything would be for users, by users, but in reality vendors are in a position where they'll contribute to somebody's salary for doing the work that is behind conducting actual meetings. You even see that with those meetup groups on a local level where, ultimately, if you have a meeting space, somebody's got to pay for that space, maybe bring some pizzas in, and that sort of thing, and it's generally going to be the vendors that have an economic incentive to do that.

A: So one of the arts of doing this is to provide a forum where vendors can justify that it's worth their time to keep the thing going, without turning it into just a wall-to-wall sales pitch, which nobody really wants. The users don't want it; if it's a sales pitch, you can't trust the things that are getting presented, because you need somebody to keep them honest.

B: People that are using Nutanix could learn a lot from the people that have the experience on vSphere, and the people on vSphere could learn a lot from the people running on Nutanix, or running on whatever else, Hyper-V or whatever anyone is running on, because in the end it's the same challenges, right? And...

A: I have to say, even working for a vendor, which I do, you really, as a business model, should keep tabs on everybody out there, because you don't want to be oblivious to somebody coming up with a new idea when your own implementation doesn't even have it. So I'm actually in favor of having these groups cross vendor boundaries, because I need to keep educated on this anyway. So...

B: Yes, I think that it does make sense to break it up, not by cloud provider but... let's say, I wouldn't necessarily say to split it up by use case, but yes, machine learning is an entire world in and of itself that has implications across the entire stack, right? So a machine learning one would make sense. But there are things like, okay, a load balancing user group I don't think makes sense, right? A storage user group doesn't make sense. But an on-prem one does make sense, an edge computing

B: one does make sense, a public cloud one does make sense. Something that affects the entire stack is, I think, kind of the breakup for one type of user group, as well as regional ones, and I think it makes sense to have both, because having the global ones, I think, is great: they're more focus-oriented, right, which is awesome, and they can be done on Zoom.

B: The regional ones are great for being, I would say, broader, just a CNCF user group of Israel, of Los Angeles, of whatever it is, and that's just a way to meet people in the ecosystem, which is great. There will be one meetup that you won't go to because it's on machine learning and you aren't interested, but the next meetup of that user group is going to be on Kubernetes availability zones, and you are interested.

A: Right, yeah, that makes a lot of sense. So the locals would be wide open in scope, probably; and really, having managed a local group, you need to be, because you're trying to get speakers in on all topics. And a lot of the reason people come to those local groups, I would contend, is not only to share experiences but, frankly, a lot of people use them for job leads and things like that too, or, going the other direction, for recruiting. So yeah.

A: So, but I am going to proactively pass some of these ideas along, and we'll see what happens. I think the time frame for this is, nothing's going to happen in the next month or two, but maybe something will have happened by KubeCon North America.

A: I think it's person-to-person. But you know, you brought up... I brought up the idea of edge. So there is something that I was involved with, and it technically wasn't a user group; it was called the Kubernetes IoT Edge working group. When it was originally formed, the reason it became a working group rather than a user group was that people thought this group would discuss changes to the Kubernetes code base itself to accommodate edge.

A: Yet this working group continued to hold meetings, and the reality of those meetings was that they were effectively edge user group meetings, but they weren't called that. A PR did get opened to discuss that group leaving the Kubernetes project, and it's looking very much like it will land over in the CNCF repo and still continue to operate there. And it was the same sort of thing: Kubernetes at the edge brings in a bunch of collateral projects; some of them are variants of Kubernetes custom-built for edge, but also collateral projects that would be used alongside Kubernetes at the edge. So that one does have published docs.

B: Because they can't technically be a user group under the rules of how a user group is defined in Kubernetes, right. But they have spin-offs of, you know, projects that came out of there: Cluster API nested, or virtual cluster, or hierarchical namespaces; they're projects that broke off from there. And it's just meetings where people come in: people came in and talked about Kyverno, and other people came in and talked about other products. Right, yeah.

A: You know, the body no longer has a beating heart, so let's just kill it. And I think there was one called big data that was in that category too, of being a de facto user group but not being able to sustain itself. But anyway, the broader issue is, maybe these are best hosted over at the CNCF, and it's not going to surprise me if that happens, but no immediate change in the short term.

A: Okay, so on the agenda, I wanted to discuss a little bit of a 101 on zones inside Kubernetes, just because we will have this deeper-dive application of them in the presentation at KubeCon Europe, but we're only allotted 35 minutes. If we were to do the zones 101 there, we would burn up all our runway and not have a whole lot of time to get into the meat of the presentation. So I'm going to start trying to do this here.

A: This might be too elementary for the people on this call, but just to say: Kubernetes is designed so that a single Kubernetes cluster can run across multiple failure zones, and typically, in a public cloud implementation, these zones fit together with another logical grouping called a region,

A: which is a geo region. But within a single geo region, say the United States West or something, a cloud provider might still have isolated areas that you could purchase VMs running on, implemented in such a way that they don't share common points of failure.

A: The best practice would be to run Kubernetes spread across at least three availability zones. Typically, in some public clouds, within a region you only have three, but some of them might have more, and this would be where you would purchase the VMs that host your Kubernetes.

A: If you buy a managed Kubernetes, this purchasing might automatically be done under the covers so that you don't have to be aware of it, but if you are standing up your own, you probably are aware of it. So that's what goes on in a public cloud. On vSphere, it's actually up to you to define what constitutes zones. vSphere is far less opinionated than these public clouds, which gives you both flexibility opportunities and also challenges, and it's up to you to define what these zones are.

A: It could be that for you, in a big data center, a zone is effectively an aisle or a collection of aisles, but in a smaller one it might be that you have independent racks, and these racks might have single points of failure within a rack but redundancy across racks.

A: So it is up to you to decide what constitutes a zone, and we all understand that there are budget constraints. In a public cloud they have independent networks, independent power; for your data center, which has a finite amount of spend available, you approach this as best you can. Maybe you have independent power, but you don't have an independent network, or you don't have fully independent storage. So the parameters are going to be keeping your compute up,

A: keeping your network up, and maybe having shared storage that is resilient and has redundancy built in. You may or may not have all of these; still, that doesn't mean give up all hope. It means try to do the best you can when hosting Kubernetes on this. And the way you map this to Kubernetes is zone labeling.

A: So what can get impacted in Kubernetes is, first, the hosts. The cluster nodes that host Kubernetes would typically have to tell Kubernetes, hey, what zone am I in, because Kubernetes isn't going to be in a position to automatically figure this out. It could be that your distribution of Kubernetes, whatever you choose to use, would attempt to automatically label these cluster nodes as part of the install process.

A: But if you're using pure upstream Kubernetes code, compiling it yourself and then installing it yourself, that labeling would be up to you.

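Concretely, the labeling being described uses Kubernetes' well-known topology labels. A minimal sketch, where the node name and the region/zone values are made-up examples of a rack-based zone scheme:

```yaml
# Sketch: a worker node carrying the standard topology labels so the
# scheduler knows its zone. Node name and label values are hypothetical;
# you could apply them by hand with, e.g.:
#   kubectl label node worker-01 topology.kubernetes.io/zone=rack-a
apiVersion: v1
kind: Node
metadata:
  name: worker-01
  labels:
    topology.kubernetes.io/region: dc-west   # your own region naming
    topology.kubernetes.io/zone: rack-a      # here, one rack = one zone
```
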
A: So you can do this with labels, and with what are called topology spread constraints, which are based on topology keys and parameters like the maximum skew. There's also an opportunity to ask for or define affinity and anti-affinity across zones, so that if you've got a modern app that has some form of resiliency, in the form of clustered pods behind a load balancer, you can make requests to the scheduler to implement affinity rules or anti-affinity rules, and these play into the zoning.

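As an illustration of those spread constraints, here is a minimal sketch of a Deployment that asks the scheduler to balance its pods across the zone label shown above; the app name, image, and replica count are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                    # hypothetical app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                                # zones may differ by at most one pod
        topologyKey: topology.kubernetes.io/zone  # spread over the zone label
        whenUnsatisfiable: DoNotSchedule          # hard rule; ScheduleAnyway is the soft variant
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: web
        image: nginx:1.21
```

With three zones and three replicas, this keeps one pod per zone, so the loss of any single zone leaves two replicas running.
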
A: Michael is going to be covering techniques for monitoring changes in health of ESXi cluster nodes, as well as changes in configuration, and communicating these up into the Kubernetes layer automatically, because obviously you don't want a person making a bunch of hand edits on any kind of... well, frankly, even in a home lab situation I wouldn't want to be making hand edits to these things to accommodate changes down at the infrastructure level, if there was a better way to do it. So, does anybody else have thoughts?

B: Yeah, I mean, zones are an interesting thing, especially on vSphere, I think, because I go back and forth on whether it's something that's worthwhile or not, especially since Cluster API came out, with the ability to create clusters so easily. There is a lot of added complexity, especially when you manage the infrastructure yourself, in doing a topology-aware Kubernetes cluster. Topology keys are not the most

B: friendly syntax to add into Kubernetes manifests, and that's an understatement, I think. And a lot of Helm charts you get in the community, things that exist out there, may not take topology into consideration, making using some upstream projects also difficult.

B: On the other hand, you can do things like the VMware sources for Knative to do automatic tagging of ESXi hosts. Because, no matter what level you define an availability zone at in vSphere, which is either a cluster or a DRS host group, you need to tag every single ESXi host with vCenter tags so that the cloud provider interface can utilize them.

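For context, a sketch of the zone plumbing being described, assuming the out-of-tree vSphere cloud provider's convention of naming the vCenter tag categories in its config; the category and tag names here are illustrative, not prescribed:

```yaml
# Zone-related fragment of a cloud-provider-vsphere config (sketch only).
# The tags themselves would be created and attached in vCenter, e.g. with
# govc:
#   govc tags.category.create k8s-zone
#   govc tags.create -c k8s-zone rack-a
#   govc tags.attach rack-a /dc-west/host/cluster-01
labels:
  region: k8s-region   # tag category holding region tags
  zone: k8s-zone       # tag category holding zone tags
```

The point of the remark above is that every host needs such a tag, which is exactly the kind of repetitive, drift-prone work that event-driven automation is good at.
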
B: So the amount of work that takes, if you don't have automation in place that's consistently event-driven, is very difficult to manage, I would say.

B: Which is why what I find very interesting in this world is things like KubeFed, Kubernetes Federation, which I think is in its fourth iteration now, and probably at the 15th or 20th iteration it'll actually be GA. But the idea of multi-cluster, having a single API that's backed by multiple clusters, removing the complexity of topology out of Kubernetes, and just having multiple small clusters with some thin layer above that can schedule across those separate clusters, is an interesting thing that the community is looking at, because even in the public clouds they're realizing it.

A: We might only have time to acknowledge that this issue exists but not be able to demo it. But in reality, what you've often got going on in an on-prem Kubernetes installation is that the Kubernetes apps or services are interacting with legacy things that are stood up as just VMs. And not that that's even a bad model, because, frankly, some of these things that might be labeled legacy, like a transactional SQL database server, really were never architected to take advantage of three or more nodes.

A: There are established, proven techniques for hosting them on vSphere as VMs, with resilient storage and replication, so that you just leave them where they are but have the newer apps interact with them. And putting together the pieces so that you can schedule those Kubernetes workloads in optimal ways, to deal with whatever might go on at the legacy level, could have a lot of utility to people. So, just throwing that out.

A: There is another thing where this whole VEBA event-driven approach could actually provide useful operational assistance, as health changes or configurations change down at the infrastructure level.

B: With VEBA, or even the VMware event sources for Knative, which is the new version of that, I guess, kind of in the Knative world: they're tied to a single vCenter. There are a lot of people that do topology across vCenters, so they'll have a Kubernetes cluster that spreads across multiple vCenters.

B: So if you have two, you need to make sure that you're syncing the two of them, because the system can't do that. And that's by design, that it's tied to a single vCenter, because otherwise you have terrible issues. But...

A: I'm kind of curious: is that something commonly used, or is that something that...

B: It's something that I have seen come up multiple times on the cloud provider GitHub, because that was something that was not implemented originally in CPI and CSI but was in-tree. So there were people complaining that it wasn't in the out-of-tree providers, and because of that they couldn't move out of tree; they wanted to use CSI but couldn't, because they needed to use the in-tree vSphere cloud provider, because only it supported this. And so it came up a lot.

A: Yeah. Another thing that would be like that, arguably, and for common use cases it strikes me as maybe biting off a real management challenge, but I suspect you could do it, is having what I will call cloud native apps, things that can spread themselves around, with redundancy, across multiple Kubernetes clusters. I think you could do it; whether it would be smart is another issue, but you could do it.

B: Listen, now we have a lot of really good multi-cluster service meshes that can actually deal with the networking layer; that was the biggest challenge, right? The deployment, in the end, we've automated, right, continuous delivery; there are a thousand tools that can do it for us. If we're doing GitOps, you can go with Flux, kapp-controller, Argo CD, whatever you want to use for the deployment part, and targeting one cluster or five clusters really isn't the challenge today.

B: I think that's basically a solved issue; it's just a question of what tooling you're going to use and what little bit of glue code you're going to write. But the networking issue was always the issue: service discovery across clusters. And whether you go with commercial offerings that exist out there, from Google, from VMware, from whatever, or you go with open source,

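On the deployment side just mentioned, a minimal sketch of targeting many clusters at once, assuming Argo CD's ApplicationSet with its cluster generator; the repo URL, path, and names are placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: web-everywhere
spec:
  generators:
  - clusters: {}                 # one Application per cluster registered in Argo CD
  template:
    metadata:
      name: 'web-{{name}}'       # {{name}} = registered cluster name
    spec:
      project: default
      source:
        repoURL: https://github.com/example/apps   # hypothetical Git repo
        targetRevision: main
        path: web
      destination:
        server: '{{server}}'     # {{server}} = that cluster's API endpoint
        namespace: web
      syncPolicy:
        automated: {}            # keep all clusters converged on Git
```
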
B: multi-cluster service meshes exist as well: things that have come out of HashiCorp, different additions on top of Istio that have come out there, which is now CNCF, or other ones like Kuma; they all have multi-cluster support. So really, that challenge, the requirement of a single cluster across vCenters, I think, is basically going away, because the main challenge that we had is going away as well, which is how do we get networking.

A: Arguably it gets reduced, and perhaps your model, then, is one vCenter, one Kubernetes cluster, and the app floats across multiple Kubernetes clusters at a higher level.

B: Even one vSphere cluster, one Kubernetes cluster, and don't traverse multiple vSphere clusters, right? You can break it down to even smaller levels. And CNIs like Antrea have just introduced, in the last version or the version before that, multi-cluster networking. So even without a service mesh, you can do multi-cluster networking with Antrea; you can already do that with Cilium.

B: Yes, there are complexities in setting up, obviously, the overlay networks between clusters, but it's possible to do, and the CNIs out there that are common are adding this support. Cilium has had it for a long time; Antrea just added it and is improving it constantly.

B: I don't remember if Calico has it, but basically the whole stack is solving this issue so that Kubernetes clusters can be as simple as possible, so that you can just deploy to a single AZ. Because I think in the public cloud, where the AZs are taken care of, we've learned that they're a pain to deal with for the developer, so we try and take that away now.

B: At the same point, none of this stuff is 100% ready yet, right? So we do still need availability zones, but I think they're a temporary solution until these other things solidify a bit more, around multi-cluster service meshes and multi-cluster CNIs, around how we deal with all of these things. Obviously, when you're dealing with multiple clusters, you need some higher level of management that can push down policies equally across all of the clusters.

B: There are, again, commercial offerings and open source offerings that can do that, but you need something to manage policy. When we look at a single cluster, you can just push policies into that cluster: a ClusterRoleBinding... you gave it a ClusterRoleBinding. Now you have seven clusters. Are you really going to create seven ClusterRoleBindings? Are you going to create seven namespaces, because you want to make sure the app can run in every single cluster?

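The object being multiplied in that example is ordinary RBAC. A minimal sketch of the kind of ClusterRoleBinding that would otherwise be copied into all seven clusters by hand; the binding and group names are made up:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: web-team-view            # hypothetical binding name
subjects:
- kind: Group
  name: web-team                 # hypothetical team group
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view                     # built-in read-only ClusterRole
  apiGroup: rbac.authorization.k8s.io
```
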
B: Yeah, and listen, there was a great one from VMworld; there was a meme that came out, I think it was like two VMworlds ago, where the CIO of a company goes to VMworld with the question of "Kuber-what?", and he couldn't figure out what it was, and he came out saying, oh great, I need a multi-cluster service mesh with defined SLOs. Like, at the end,

B: he comes back after listening to a session on Tanzu, right, and that's how he came out. In the end, if we looked three years ago at what the larger enterprises were wanting and doing in their Kubernetes environments, it's not what the small little companies are doing in their Kubernetes environments now, because the big companies learned the lessons from it.

A: A great observation. And part of making that work is just having published materials that describe: here's how I did it, and it actually worked. Right now, even looking at definitions of zones and stuff inside the Kubernetes documentation, it is pretty light. I glanced at it just before this meeting for my attempt at a zones 101, and I found two or three pages total, and none of it was... yeah.

A: You didn't find a lot of working examples walking you through the path of: do steps A, B, C, and D, then you're there and you're done, and we did it on these three platforms and it worked great. And if I couldn't find that kind of thing in the first couple pages of a Google search, probably... yeah, those smaller people aren't in a position to do that yet.

B: Exactly, exactly. So this is starting at the larger companies; they're going to do this, and then, if it's something that catches on at those levels, it'll move into the greater market, because you'll start to have blog posts out there, and you're going to start to have PRs to the kubernetes.io documentation, and all of a sudden you're going to have CNCF podcasts coming out on this, and webinars.

B: Exactly, and all of a sudden, if this becomes big enough... so the CPI may be able to do certain things for you, right? Why can't the cloud provider interface do certain things around this for you? If it becomes big enough, then maybe it would, who knows. There are all these different levels, but it's always the question of: do you put the engineering effort into something before you know that it's going to be adopted? But it's not going to be adopted until the engineering effort is there.

B: Which is why I think, actually, with the 1.24 release that just came out, Kubernetes seems to be making a change in its stance on that, which is very interesting. I'm not sure if people saw, but as of 1.24 and onwards, any new beta APIs in Kubernetes will not be enabled by default. Until now,

B: alpha APIs were not, and you had to enable them, but beta ones were. And now you need to enable them via feature flag as well, the beta flags, which is going to be very interesting, to see how many people are going to turn on those beta features, because you actually have to go in and enable them now.

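For reference, a sketch of what "going in and enabling them" looks like on a kubeadm-built control plane; the gate name SomeBetaFeature and the API group/version are placeholders, not real Kubernetes features:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  extraArgs:
    # opt in to a beta feature gate that is left off by default
    feature-gates: "SomeBetaFeature=true"
    # new beta API groups likewise need explicit enablement, e.g.:
    runtime-config: "somegroup.k8s.io/v1beta1=true"
```
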
A: Yeah, thanks for pointing that out. I guess I'd heard that but didn't take note of the importance of the change. I'm not sure I recollect ever hearing what the basis of that decision was, either.

B: Those APIs are there, and there was an issue with that, because a lot of people were hitting bugs in beta-quality APIs and beta-quality software that they were running in production, because someone saw that there was an object and they could create an Ingress, and, awesome! Well yeah, but it's beta. But it doesn't matter, because it's there. And it's that overall debate that's going on, I think, overall in the CNCF; I've seen this in a lot of projects: the API versus the tool versus the code itself.

B: I mean, Ingress was GA, but the API was beta, right? You could say that the functionality of Ingress was GA, but the API is beta or alpha or GA, and there's a separation between the two. And the issue was, people were constantly having issues with beta APIs in production clusters, which shouldn't happen, so they've decided to turn them off. Yeah.

A: You know, I could see a world where you could have a Kubernetes distribution that gets published in what's declared the standard form, where the beta APIs are hidden, and then another one offered in the old style of defaults, where they're on, and you just elect which of these paths you get by which distro you install. Somebody who publishes a distro could do it in the two forms, and maybe that simplifies things a bit, rather than forcing people to cherry-pick API by API.

B: Yeah, and I think the difficulty with that becomes: if Kubernetes is supposed to be such that if it runs on Kubernetes here, it should run on Kubernetes there, then once we start deviating on what that conformant Kubernetes is, or what that distribution is, it becomes very difficult. You're going to have a lot of, what? But it worked on my EKS cluster! Yeah...

B: It doesn't work on your EKS cluster, but they're the same Kubernetes version? Right: if you have to manually go and set that feature flag, you know that in the other cluster you need to set that feature flag, because you manually chose to do that. You didn't just follow some blog that said, here, run this kubeadm command, that happens to install the beta, you know, channel of Kubernetes.

A: Yeah, although those blogs, that's another thing I've found in the past too: the fact that these blogs don't have some automatic sell-by date and aren't removed from the shelf means you find ones that maybe were written when a feature was in beta, and it's no longer in beta, yet that isn't specifically called out, or you missed it and weren't aware of it. And it makes this whole act of trying to just get things to work on Kubernetes, as a user, treacherous.

B: The amount of examples out there that are still using older API groups, like of Ingress or of Deployments, of things that just don't work. You get a YAML manifest and the thing says, here, do a kubectl apply on this, and you go and do that, and no: sorry, we don't have an API anymore for Ingress v1beta1. And it's like, oh hey, what do I do?

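For the record, the fix for that particular error is usually just moving the manifest to the stable API group. A minimal sketch, with the host and service names as placeholders:

```yaml
# Old manifests used networking.k8s.io/v1beta1 (or extensions/v1beta1),
# both removed in Kubernetes 1.22; the stable group since 1.19 is:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: web.example.com          # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix           # required field in v1
        backend:
          service:                 # v1 replaces serviceName/servicePort
            name: web
            port:
              number: 80
```
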
A: Okay, I just noticed that in my time zone we hit 12 noon, so last call for any parting remarks, comments, or agenda items; but I think we're about at our time limit for the meeting today.

A: And I guess I'll take that silence as no additional things. So, at least for some of the people on the call, I look forward to meeting you in person at KubeCon Europe, and if I don't catch you there, we'll have another meeting in a month. So bye, everybody, and thanks for attending.