From YouTube: Kubernetes UG VMware 20201203
Description
December 3, 2020 meeting of the Kubernetes VMware User Group. An introduction to load balancers for on-prem Kubernetes, followed by discussion. An introduction to some work underway related to allowing Kubernetes apps on vSphere to utilize GPU resources, which will be continued in the next meeting.
A: Hi, welcome to the December 3rd meeting of the Kubernetes VMware User Group. In today's meeting we've got, last I looked, a fairly light agenda. Robert, I think at the tail end of the November meeting you brought up a discussion related to load balancers running on-prem for Kubernetes, and we didn't have time to cover it, so we're going to segue the tail end of last meeting into this one and talk a little bit about load balancers.

Now, this is a repeat for long-time members of the group; we did have some coverage of load balancers back in the May 2020 meeting. But, like all things Kubernetes, things change a lot, from the features of offerings to people discovering new best practices and deciding that there's a new way to do things that works better than the way people used to do it. So, given that that May one was six months old, I think it's appropriate to touch on the subject again, and with that said, let me get started.
I prepped a deck starting late last night, so there might be some rough edges on this. I haven't practiced it or even gone back to proofread it for typos, since I edited the last slide just about 10 minutes ago, so forgive me if there are some rough edges to this. Let me open the deck and share my screen, just a moment.
Okay, hopefully everybody can see a screen that says "What is a load balancer." Okay, great. So yeah, load balancing isn't new; these things became common in the 90s, way before Kubernetes existed. Maybe they came about before the 90s, but that's the first time I became aware of them, so they even predate VMs being used for services that were exposed on a network.
So why would you need one? Well, they're used to expose something to the outside world, and they can also distribute incoming requests, that perhaps are at high volume, across a number of nodes to allow improved capacity and maybe improved availability, such that if you have one instance of a service crash but have multiple, then as far as users are concerned, for the most part they're going to see minimal disruption. So let me move on to the next one. Load balancing is generic; you could use it for kind of any network-exposed service. But what does this mean in the context of Kubernetes? Well, it turns out that there are a couple of possibilities here.
One is to put the load balancer in front of the Kubernetes control plane itself, and this is actually a best practice for a large-scale deployment of Kubernetes in production.

This particular diagram shows three nodes for redundancy, but there could be some other number. It shows etcd colocated on those nodes, but there is an option to stage independent etcd nodes that I've sometimes seen used. Now, let me point out a few things that maybe aren't immediately obvious here, particularly for people new to Kubernetes.
Your objective in putting this load balancer here is presumably to allow for continued service if a single node fails or gets network-partitioned, but this is only going to improve things if the load balancer itself has inherent high availability. If the load balancer is a single point of failure, did it really add anything to go with this architecture, unless that load balancer is highly resilient?

Also, if you intend to use a software solution for your load balancer (load balancers can be hardware appliances, and these things have been out here for decades, but the first ones were hardware only and they can also be implemented in software), it might be initially attractive to consider the possibility of putting that software load balancer in a container and hosting it in Kubernetes itself. But there are going to be some challenges with hosting this on the very same Kubernetes cluster it's going to protect. You've got a chicken-and-egg scenario: what happens if you lose power to the whole cluster and it comes back up? The load balancer is how you talk to it, but if it's self-hosted, how is that going to work out for you?
So if you're on-prem and you're running on VMware infrastructure, running a software load balancer in a VM protected by vSphere features might be a really good solution. People before the age of Kubernetes deployed load balancers in VMs, and the VMs can allow for redundancy at the hypervisor layer, so that that load balancer stays available.
The second place for load balancing is the services you host on Kubernetes themselves: apps you write yourself and deploy to Kubernetes, or, as many people do, containerized software packages from third parties, either open source or not, that are capable of running in containers, pulled from Docker image registries and hosted on Kubernetes. If you discover that you would like to not only run these but expose them out to the outside world, a load balancer can be a great, perhaps even necessary, tool to do what would be termed north-south load balancing of your Kubernetes-hosted service.
Now, I like this diagram; it comes from the Cloudflare website, and it shows all the permutations you have when doing this. It turns out that you can actually do load balancing at multiple locations and layers, and for some use cases maybe you want to do them all. This was prepared by Cloudflare, so obviously they have a business model of selling load balancer as a service, hosted at their content delivery network sites around the world, and that would be kind of an initial point of contact for people, or running applications, trying to interact with a load-balanced service.
Having that externally hosted load balancer is an option, so it could be eliminated. But the next tier that's there would be a load balancer hosted by your cloud provider, and this diagram shows Google Cloud as well as Amazon's AWS cloud. Those public clouds, in my experience, always have a load balancer provided by the cloud provider, and you pretty much have to go through it; that one is not optional if you're running in that public cloud. And if you're using Kubernetes, what's called the cloud provider that plugs into Kubernetes has built-in interaction with the cloud provider's load balancer, and this is the means by which you consume public IPs exposed on the internet. Once again, this one probably can't be eliminated if you're running in a public cloud. If you're on-prem, on the other hand, this tier isn't externally managed; the load balancer might still exist, but it's on you to deploy it and on you to manage it.
As for the Kubernetes architecture with regard to these load balancers: you'll find in the Kubernetes API a resource called a load balancer. It probably should be amplified to the terminology "external load balancer," because Kubernetes itself was designed as an abstraction, realizing that these cloud providers will have an existing load balancer that is subject to use and configuration. So the Kubernetes resource called load balancer is a means to specify that configuration.
What happens is that an external piece of code called a controller, I believe, monitors the declaration of intent that you make through the Kubernetes API and goes and properly configures this external load balancer, so that it works in the way you've declared you want it to.
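For illustration only, a minimal sketch of that controller pattern using the Python Kubernetes client; the configure_external_lb helper is a made-up placeholder, not a real API:

```python
# Sketch of the controller loop described above: watch Service objects and,
# whenever one of type LoadBalancer appears or changes, reconcile an external
# load balancer to match. configure_external_lb() is a placeholder.
from kubernetes import client, config, watch

config.load_kube_config()              # or load_incluster_config() in a pod
v1 = client.CoreV1Api()

def configure_external_lb(svc):
    # Placeholder: push the Service's ports/selector to your load balancer.
    print(f"would configure LB for {svc.metadata.namespace}/{svc.metadata.name}")

w = watch.Watch()
for event in w.stream(v1.list_service_for_all_namespaces):
    svc = event["object"]
    if svc.spec.type == "LoadBalancer":
        configure_external_lb(svc)
```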
You have the option of using the same technique when you deploy Kubernetes on vSphere infrastructure using the vSphere cloud provider. But with the vSphere cloud provider, we intentionally did not build in a tight integration, because this would have required us to be a kingmaker and pick one particular load balancing solution to be deployed on-prem, and we just didn't think it appropriate to go there. We realized that there are some on-prem deployments that might be using something like an F5 hardware load balancer, and others that would use software-based load balancers, either hosted in VMs or bundled with a Kubernetes distribution.
And by not putting the load balancer in as a built-in, opinionated choice within the cloud provider, we left it open for individual users to make their own decision as to what load balancer technology they want to use. But in any event, even though this diagram doesn't show vSphere on-prem as one of the boxes, it's very similar to one of these public clouds, except that the load balancer choice is not an opinionated, hard-coded choice that's been made for you; it's up to you to go identify a load balancer.
You may not need a load balancer when you deploy Kubernetes on-prem, but a lot of this depends on the nature of your services. There's another thing built into Kubernetes called an ingress controller, and this does something similar to load balancing, but it presumes that you're up at layer 7, like HTTP. It goes into the actual network protocol itself and uses characteristics of the inbound URL to potentially multiplex one incoming IP to multiple web services, for example by looking at characteristics of the URL. But for the most part this is constrained to services that are HTTP, so if you have things using alternate protocols, those really aren't something that the ingress controller can examine and use for dispatching.
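As a sketch of that layer-7 dispatching, here is an Ingress that fans one inbound hostname out to two backend Services by URL path, created with a recent Python Kubernetes client; the hostname and Service names are invented for the example:

```python
# Sketch: one Ingress routing a single hostname to two Services by URL path.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

def path(p, svc):
    return client.V1HTTPIngressPath(
        path=p, path_type="Prefix",
        backend=client.V1IngressBackend(
            service=client.V1IngressServiceBackend(
                name=svc, port=client.V1ServiceBackendPort(number=80))))

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="shop"),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host="shop.example.com",
            http=client.V1HTTPIngressRuleValue(
                paths=[path("/api", "shop-api"), path("/", "shop-ui")]))]))

net.create_namespaced_ingress(namespace="default", body=ingress)
```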
So I sympathize with any of you who are users and newbies getting into this space, because when you Google search for this, you will find things that are self-described as ingress controllers that maybe have an option to use them as a load balancer, and it can become pretty confusing. I'm not convinced I've fully learned this whole space, and I'm also going to give a caveat: I'm the presenter here, but I'm not declaring myself to be an expert on this subject.
Really, this is a networking thing, and I think you never stop learning with networking, and I'm kind of trying to cover a lot of things in addition to networking. You're free to ask me questions, but I'm going to try to admit when I don't know something. There have been some people who've given presentations to this group in the past and who are on Slack, like Steve Sloka, who have deeper knowledge than I do.
So if you have tough questions on the details of these, feel free to ask them later in this user group's Slack channel; I think we've got people monitoring it who have deeper knowledge in this subject matter than I do. So, load balancer versus ingress. This slide is just a little history of it; I'm not going to read it to you. I think I kind of telegraphed this: like I said, it's the first time I'm giving this deck.
So there is this distinction of ingress being up at L7, whereas load balancers would typically use the addresses and ports that are found in the TCP packet header; L7 goes and delves into the actual traffic itself. And once again, with some of these solutions, I know there are things that nominally describe themselves as ingress controllers when you do the Google search that definitely have L4 capability, and there might well be L4 solutions that have L7 capability. So there isn't a black-and-white line here on some of these; some of them have a gray line between their functionality.
There is also this subject of east-west versus north-south load balancing. North-south is about exposing services to the world outside your Kubernetes cluster, but there is potentially load balancing going on within your Kubernetes cluster too: if you use a microservices model, you're likely to have blocks of code that are talking to other blocks of code, and that's beyond the scope of this presentation. Today, Kubernetes itself does rudimentary load balancing; it's built into Kubernetes, and you're allowed to put configuration information in your definitions for Services, things like type equals NodePort or type equals LoadBalancer, and some other options.
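For reference, a minimal sketch of that configuration through the Python Kubernetes client; the selector and ports are placeholders, and switching the type to NodePort gives the other behaviour mentioned:

```python
# Sketch: a Service of type LoadBalancer, i.e. the "type equals load balancer"
# configuration mentioned above. Selector and ports are example values.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-web"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",           # or "NodePort"
        selector={"app": "demo-web"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=svc)

# On-prem, something (MetalLB, kube-vip, a vendor controller, ...) has to
# notice this Service and assign the external IP; in a public cloud the
# provider's built-in load balancer integration does it for you.
```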
If you're dealing with an entire app hosted within Kubernetes itself, it's probably an anti-pattern to go drag in the external load balancer, because that's going to break your portability; the external one you have on-prem is not likely to be the same one you have in a public cloud, so you're just looking for trouble. Keep your east-west traffic using the standard, plain vanilla Kubernetes functionality, and I think you're going to have a better road forward.
C: So, on not the last slide but the one before, you had like an ingress-versus-load-balancer thing, and I just want to make it clear to everyone that it's not ingress versus load balancer. They have different purposes, but generally you use both together. So you use your load balancer, or something that provides Service type LoadBalancer, whether that's MetalLB or kube-vip or one of those kinds of solutions, to get traffic into the cluster, and then you also use an ingress inside the cluster.
A: Okay, yeah, I fully agree. Upon reflection, that's probably a bad title; I put "versus" just to explain what the differences are, because people often get them confused, and there are scenarios where you could live with just an ingress, particularly a full-featured one. So I know this is something where newbies often get really confused; in fact, even somebody working with Kubernetes for years gets confused on this.
So when I say they're loosely coupled and talk about the cloud provider, I'm talking about the open source cloud provider that is done for Kubernetes itself, the piece that comes into play when you run any Kubernetes on top of vSphere. That could be Red Hat OpenShift; it could be that you download the source code to Kubernetes from GitHub and build it yourself.
It could be that you use Rancher, whatever; as long as it's Kubernetes, that cloud provider comes into play. And that cloud provider, unlike the ones for the Google or AWS clouds, did not build in support for configuring a load balancer, because we wanted to leave the choice open. Now, there are commercial solutions where, if you buy a commercial Kubernetes package from a vendor (VMware is one of them), that might indeed bundle a load balancer solution, so that it takes care of managing this and has an opinionated choice of load balancer, because it included one with that commercial product. But we're proscribed from talking about commercial products here. They do exist; but, like I say, it's kind of a delicate area where, if you start delving into the details of one commercial product, it's not fair to omit the others, and this just isn't the venue for that.
B: Okay, let me rephrase it. So in cases where there is no strong interaction with the external load balancer, that means that the configuration of the external load balancer is going to be pretty static, and so I'm just wondering what that configuration would usually look like. What is it effectively load balancing, and it's really only doing it at layer 4 then, if I understand correctly?
C: So are you talking about, like, an appliance, say like an F5 or something like that, Robert?
Okay. Generally, like Steve said, to keep your app as portable as possible, you probably want to use the minimum feature set that you can from any of those kinds of solutions, and if it does provide something, maybe just responding to Service type LoadBalancer IP address allocation would probably be the best thing for it to do. Using an external hardware load balancer like that with a Kubernetes app, you're not really going to see any benefit from it.

It's not going to load balance across the nodes any better than any other solution that's just written in software and runs on top of Kubernetes would do, and it's going to add rigidity to the application that makes it specific to that environment. So my opinion, at least, would be that if you are going to use some kind of external hardware load balancer, whatever it is, you try to use the absolute minimum features that you need, and then do any of the intelligent TLS termination and HTTP header routing stuff inside Kubernetes itself.
D: Steve, can I add one thing here also?

C: Yes, please.

D: Yeah, so things like F5 and things of that sort: a lot of them do also offer a controller for Kubernetes that you can deploy, which then allows you to use Service type LoadBalancer. So just because it doesn't come as part of the cloud provider, that piece, the part that was taken out of the cloud provider compared with the public clouds, can still be added in.
A: Exactly. In fact, I think it's fair to say that any of them you'd want to use, if they're mainstream, probably are going to invest in allowing it to be controlled in an on-prem scenario, unless they've decided they don't want to sell to people using Kubernetes, which strikes me as a pretty dumb decision.
Okay, cool. And that being said, if there are any vendors of other external load balancing solutions out there: given that NSX went in there, that cloud provider is open source, and anybody can submit a PR on that project. So it does support NSX built in, but we've left open the option, should other vendors, you know, F5 or anybody else, want to commit resources to building something into the cloud provider as another option.
Well, I don't need to read you the whole list, but one thing I do want to point out for somebody going into production: if you're using the load balancer for availability, that's already telling me that you're really concerned with availability and things like security, and those are some of the big things to evaluate if you're still at the stage where you're choosing your solution. This is personal opinion, but I think this is a lot like choosing a CNI for Kubernetes (the CNI being the network plug-in): there are many choices, but one of the biggest factors relates to observability, if you want to maintain this in production. The world has devolved, unfortunately, into a place where ransomware attacks are rampant, and I think it's only a matter of time before you can be exposed to attack attempts from black-hat hackers. Having an ability built into your load balancer to do observability, particularly observability of traffic patterns related to Kubernetes objects, might be something that some solutions have and others don't.
You know, it's great to have overall macro stats: hey, my traffic level is X packets per hour, and suddenly it's quadrupled, so maybe somebody's doing a denial-of-service attack, or whatever. But being able to map those potentially to individual Kubernetes clusters, and to Kubernetes resources like pods or services, would be a great feature. If it can only tell you that this traffic is going to your Kubernetes API, or going to Kubernetes in general, that's not as good as being able to track it all the way down to individual services and individual ingress controllers, and I propose to you that there might be some differences there.
When you go look at these things, that could be far more important than, say, one being able to do 20% more traffic with a given number of CPU cores. I think your security features and your observability features, that would be how I would choose my solution, not necessarily just the bits-and-bytes efficiency level.
I would be interested in ideas from other users here in this meeting, though, because I realize that's just my personal opinion. The next and final slide is just calling out some load balancer options, and these are just a few of them.
I think there are at least twice as many that didn't make my list here, but I didn't want to go to two slides, and I think these, by my personal observation, might be the more popular ones. So, hardware: we already called out F5, there's a Barracuda solution, and there are other hardware solutions. Cloud-hosted: basically you're going to get the one that the cloud provider provides.
You could conceivably add a software-based load balancer up there as well, but even that software-based load balancer might have to be hooked to the cloud provider's load balancer so that you stage them back to back. I'm not an expert on that subject, but I do know that typically people just use the built-in one from the cloud provider. Then there are the software-based load balancers; these are solutions in no particular order.
The list doesn't mean to imply popularity. Seesaw is one that originated at Google, and it's the one that on-prem Anthos Kubernetes uses, or it's the one they support anyway, and the one they call out in their docs.
It isn't constrained there, though; I don't see why it couldn't be used with Kubernetes implementations other than Anthos, and it is open source. HAProxy is, I think, a very popular one; I've frequently seen blog posts that describe how to deploy it with Kubernetes on-prem, and I've done it a few times myself.
NGINX is often viewed as an ingress, and I'll caution you as a newbie: with NGINX there's a whole bunch of variants that are all slightly different and all have NGINX in the title, so make sure you're looking at the right thing. I think NGINX got out there as a long-time solution as a reverse proxy for web services, but it was open sourced, so in a way I would almost describe these as forks that have NGINX somewhere in the title, and if you go out there reading about it, just be really careful which one it is.
I know I got confused initially trying to learn about NGINX when I found two or three blog posts talking about how this works, and I didn't realize that they were talking about three different variants of NGINX; then, when I tried to combine all of those instructions together, it just didn't even come close to working. So that is something to watch out for. Then there's Traefik, and it's spelled with t-r-a-e-f, I assume, just so that you could Google search for it.
I've seen a number of presentation recordings on it where, I believe, it's pronounced "traffic," even though it isn't spelled like that. This is a load balancer that's commonly used with Rancher, so the Rancher docs would often lead you down a path of using it, but it is open source and you could use it in other scenarios.
MetalLB was one that was done by one person. I've heard some criticism that the originator of the MetalLB project announced that he was too busy and couldn't pursue it, and, like many open source projects, these are often dominated by one individual or a very small team and subject to community health issues should something happen to the originator.
I believe that came about with MetalLB too: there were some serious concerns about its potential future about a year ago, but then I've also heard recently that other people may have stepped in to pick up the ball, and it might be a perfectly viable solution as a load balancer for on-prem. I've used MetalLB myself, and it's actually really convenient and easy to use. I've done a number of talks where I needed a load balancer, discovered it, and found it very easy to deploy, and small enough in resources that I even managed to deploy it on a desktop hypervisor, literally running it on my laptop. So certainly for a learning experience I personally have been successful with it; whether that's suitable for a large-scale production thing is for you to evaluate.
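To give a feel for how little configuration MetalLB needs in its simplest layer-2 mode, here is a sketch of the ConfigMap it read in the releases current around this time, created with the Python Kubernetes client; the address range is made up, so substitute a block your network actually reserves for load balancer VIPs:

```python
# Sketch: the classic MetalLB layer-2 configuration (pre-CRD style), written
# as the ConfigMap MetalLB reads from the metallb-system namespace.
from kubernetes import client, config

METALLB_CONFIG = """\
address-pools:
- name: default
  protocol: layer2
  addresses:
  - 192.168.10.240-192.168.10.250
"""

config.load_kube_config()
v1 = client.CoreV1Api()
cm = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="config", namespace="metallb-system"),
    data={"config": METALLB_CONFIG},
)
v1.create_namespaced_config_map(namespace="metallb-system", body=cm)
```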
Then, finally, there are the commercial load balancing solutions: VMware's got a couple, Citrix has some, and I'm sure there are others out there as well.
If anybody on these calls wants to bring up one they're familiar with, go for it. One other thing I didn't cover (this is my last slide), but I want to point out that in the Kubernetes world there are a couple of things commonly used by people with Kubernetes, one of them being service mesh and another being serverless, like Knative, and these also have the ability, and maybe even the necessity, to use a load balancer. So if you deploy a service mesh, an Istio service mesh kind of expects to use a load balancer.
Well, I can see one reason why you'd do it: maybe you're using a commercial distribution that bundles a particular software load balancer, so you're not going to want to switch that out. But then, if you independently picked up a service mesh on your own, or from a second vendor, maybe it makes a different choice of load balancer, and you end up having two of them out there to maintain and feed. And then finally, like I said, you can use service mesh, and serverless is kind of orthogonal to that.
So I'm going to stop my share. I just finished the deck, so I didn't put a link in the agenda notes, but shortly after this meeting ends I'll upload the deck to a Google Drive with a share on it and put the link in the notes for this meeting. So that's it, and I'd like this, this whole meeting on a recurring basis, to be shared with users. So, does anybody else have thoughts or experience on using load balancers?
D: I definitely think that kube-vip is also an awesome solution for open source software load balancing, which can actually be used both for the API server load balancer as well as for Service type LoadBalancer, and it offers some really cool things, like being able to set different CIDRs, CIDR blocks, for the load balancer VIPs per namespace, so you could actually set up multi-tenancy and things like that. So it's another really cool option to look at in that sphere.
A: kube-vip, who's behind it?

D: I think Dan Finneran wrote it, who, I believe, used to work at VMware as well, formerly Heptio, but it's an open source project.
It's actually what's used in CAPV, Cluster API Provider vSphere, for the load balancer of the API server. It's based off of leader election, and it's a really lightweight, easy solution. It's adding support now for a lot of different options too, whether that be dynamic DNS support for load balancers and things like that, so it's really moving forward, with UPnP support and different networking topologies.
So it's one of the more advanced software load balancers. MetalLB is kind of limited to either just layer 2 ARPing or BGP configuration, which can get very complex in MetalLB, for those that have dealt with the BGP configurations, and when it comes to kube-vip, it makes things much easier and integrates better with the more complex networking topologies that you may have in your data center.
C: Completely open. It's really good for home labs as well, I'll add, because Dan recently added a feature where it supports claiming addresses from DHCP for Service type LoadBalancer.
So even if you just have a home lab (I run my Raspberry Pis on MetalLB at the minute, but I'm moving them over to kube-vip), the idea is that with MetalLB, like Scott was saying, you have to give it a static layer 2 CIDR, and honestly, for home lab stuff it's easier to just use DHCP for some workload things that get spun up and down. So kube-vip has DHCP support: it will claim an address from DHCP and then ARP for that address the whole time, and in addition it's added UPnP support.
A: That sounds great. One thing I failed to mention, but should have, on load balancing is that some of them have sort of what I would call almost an IPAM feature or integration, in that you hand them a block of IPs and they handle allocating and assigning IP addresses out of that. They may not go put things in DNS for you, though. But if kube-vip goes off in that direction, particularly the DHCP, I can see that being really attractive, particularly for home labs and people playing around trying to learn things.
D: So that's already happening; AWS have already done that. So what we have in vSphere now, this capability, it's not even a limitation, really, in my mind; it's a capability to choose what you want as the load balancer, and that's actually the way the cloud providers are moving as well.
A: Yeah, that's interesting. I was just watching some of the re:Invent sessions this week, and AWS has announced a version of EKS that can be hosted on-prem, so I can see that maybe another reason for them breaking that out is that, if they intend to convince the world that their cloud-hosted distribution is viable for on-prem, that step of divorcing the attachment to the load balancer would seem to be necessary.
I dropped it in the chat, and I threw it into the documentation as well, the overview doc that we have for this meeting. It gives you a blow-by-blow, feature-by-feature view of what each ingress controller supports, and, further to Steve's point, all of the different variations of ingress are in there as well, so you can tell exactly which one does what.
Okay, anybody else have experience with, or opinions or questions on, load balancers?
The next thing on the meeting: I haven't gone back to see if there were any late adds to the agenda; just a minute, let me look. But I think, Miles, maybe you can say a little bit about what you've been up to with regard to GPUs. I know that you were doing this, maybe aspiring to be able to talk about it today, but we're going to roll it into the next meeting.
But maybe you can give us a little teaser.

Yeah, so I've been building a stack, if you want, and a presentation for the VMUGs, as like a roadshow. I'm doing that with a guy on my team called Niels Hagoort and another guy called Johan van Amersfoort, who Robert works with, and we're basically showing what it takes to run an ML-type workload on Kubernetes on top of the VMware stack.
We've actually written all the code for this; it's all up on GitHub, and all of this stuff you can do today, it works with what we have today. We're using Bitfusion inside of our container to remotely mount GPUs over the network and dispatch ML image-processing jobs to them. The story is that there's a flower market and it's trying to automatically count flowers, and what we do with Kubernetes is use horizontal pod autoscaling: we set a desired processing rate, so say I want to process 400 flowers per second.
I set that in the horizontal pod autoscaler, and it'll scale out and keep allocating GPUs until it can actually meet that. So rather than having a workload defined as, okay, we've got this ML workload and we're going to give it a GPU for a slice of time, we wanted to focus on the outcome: we've got this workload, we would like to make it run at this rate, and then we just let the system figure out how much resource it needs to allocate to achieve that goal.
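The scaling behaviour being described here is essentially the standard horizontal pod autoscaler formula applied to a custom per-pod metric; a tiny sketch, with made-up numbers:

```python
# Sketch of the HPA core formula for an average per-pod custom metric
# (e.g. flowers processed per second per pod):
#   desired = ceil(current_replicas * current_avg / target_avg)
import math

def desired_replicas(current_replicas: int, current_avg: float, target_avg: float) -> int:
    return math.ceil(current_replicas * (current_avg / target_avg))

# 2 pods, each seeing 120 flowers/sec of demand, target 50 flowers/sec per pod:
print(desired_replicas(current_replicas=2, current_avg=120, target_avg=50))  # -> 5
```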
It isn't quite working just yet; like I said, it was deployed just at the start of this meeting, so it's not up and running right now, but we'll have a look at it next week, maybe next month.

I don't want to steal too much out of your next month, but maybe you can tell people not aware of Bitfusion what it's about. I think, if I'm correct, it goes and takes a physical GPU and potentially even carves it up into sub-allocated, call them virtual, GPUs.
This is why I like Bitfusion versus any of the other solutions that are out there today, because there's NVIDIA vGPU and there's a whole bunch of different ways that you can skin this particular cat, but Bitfusion to me aligns really nicely with the Kubernetes way of doing things, because it doesn't tie the GPUs into the nodes in the cluster. So if you deploy your Kubernetes cluster, you don't have a hard allocation of, you know, there's a GPU in each one of those nodes.
It's just a Debian package that you add into your Dockerfile. Obviously the Dockerfile is probably going to be based on Ubuntu, because that's where most machine learning stuff goes, but you install this Bitfusion client, it automatically discovers all of the servers that it's connected to, you pass it a CA cert file, and then it can do auto-discovery. So whenever it spins up, it'll automatically ask Bitfusion, hey...
It passes those batches of images to the Bitfusion server, which does the computation on the GPUs and passes back the result, and that works really well for ML-type stuff, because you don't need a high-bandwidth connection between the GPU and the client. All you want to do is pass the job over, get it processed, and get the result back. So that's what we're going to be looking at next time.
Okay, that sounds really cool. And I guess, go ahead, Scott.

Yeah, no, I think Bitfusion also adds a huge capability in the sense that it makes the setup of your Kubernetes cluster much easier in terms of drivers and things like that within the OS, for anyone who's dealt with NVIDIA drivers in their worker nodes and things like that.
It's not a fun task to deal with, so Bitfusion completely eliminates that.

Yeah, and it doesn't matter what your underlying node OS is either, which is critical, because again you don't install the NVIDIA driver on the node anymore; this Bitfusion shim, I guess you could call it, gets added to the container. It takes over those CUDA calls and the driver stuff and dispatches them to the Bitfusion server instead, so it actually keeps your Kubernetes clusters really quite tight.
When you went to things where the storage was dispersed on the compute nodes, it initially sounded like a great idea, but some of those solutions forced you to go in lockstep: if you had to expand compute capacity and keep the nodes homogeneous, that was problematic, because maybe you had to spend money on storage you didn't need, or, vice versa, to expand storage you had to buy extra compute nodes when you didn't need the compute capacity. Being able to break these GPUs out completely independent of the compute nodes really should offer you a lot of flexibility.

The other thing I have to observe is that the GPU space seems to be moving more rapidly than even the compute side. In terms of x86 CPUs, they're still moving along the Moore's law path, maybe not so much anymore, but GPUs from one generation to the next seem to be going up pretty dramatically, and realistically you, as a particular on-prem user, don't want to buy a bunch of those and then discover...
Maybe Bitfusion gives you this great opportunity to abstract out the differences, just like virtualization did for allowing non-homogeneous hardware to kind of act the same, and Kubernetes allows you to make different clouds act the same. Having an abstraction on top of the GPU chip seems like a fundamentally great idea.
So we've got two minutes to go, so time check, last call: does anybody have any other subjects they want to bring up, or, better yet, since the time is limited, suggestions for things we can recruit presentations on or discuss in the January meeting?
I guess I've waited quite long enough, so I posted something in the Slack channel about some issues that I'm seeing where, basically, after a NIC failure, which we've seen happen during DRS events, the kubelet doesn't reconnect.
So I think Scott replied; Scott, you replied to me earlier today. It's something that we have, we have thousands of clusters, and it's something we see pretty regularly, but it's not something we've been able to purposely reproduce yet. One of the comments on that Kubernetes issue is that the workaround for VMware is to disable DRS, which kind of defeats the point.
Yeah, I understand. I mean, on that GitHub issue they did mention that it's fixed now, or will be fixed, in 1.20, and then cherry-picked back to 1.19, but not earlier than that. So it seems it's a core Kubernetes issue, with the Go packages it's using, that's going to be fixed.
Okay, so we're trying to investigate what's causing some of those issues, but we're not encountering this on other hypervisors or other clouds. So right now we're only really encountering this on VMware, and it seems to relate back to when there's a DRS event, basically a DRS event, but not when we do DRS on purpose; when we try to make it happen, we haven't been able to forcefully recreate it.
Sorry, what was that, Miles?

On the other hypervisors, is there live migration enabled as well, just so it's an apples-to-apples comparison? Are they doing a stun, for example, doing a live migration and then unstunning the VM, or is that exclusive to the VMware platform?

I'll have to investigate exactly how they're doing it.
But the only thing that I can think of is, whenever a VM moves from one ESXi hypervisor to another in a live migration, the VM gets moved to the other side and then the vSwitch reverse-ARPs the address, or gratuitous-ARPs that address, to update the switches to say this IP address now lives on this MAC address, which would probably align with what Scott is saying:
that there's probably some logic missing inside the Go code to account for that kind of MAC address change, or something similar to that.

Yeah. Basically, though, when the kubelet doesn't connect, its connection dies, but the underlying HTTP/2 connection from the base Go code actually stays alive, not allowing the kubelet to come back up. So you have to run something like a cron job that basically checks for a certain log message in journalctl and, as a workaround, restarts the kubelet on the node.
That's the workaround before the next 1.19 cut is released. And we've implemented something like that, similar to what other people have posted as a workaround: first it tries to restart the kubelet; if restarting the kubelet doesn't work, then it'll restart Docker and the kubelet, and that has reduced the number of issues that we have coming out of this.
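A rough sketch of that kind of watchdog, assuming a systemd-managed kubelet and Docker; the log pattern is a placeholder (substitute the actual error signature you see), and you would run it from cron or a systemd timer:

```python
# Sketch of the workaround described above: look for the failure signature in
# the kubelet journal, restart kubelet, and if that wasn't enough restart
# docker and kubelet. LOG_PATTERN is a placeholder, not the verified message.
import subprocess, time

LOG_PATTERN = "use of closed network connection"   # placeholder signature

def recent_errors(since: str) -> bool:
    out = subprocess.run(
        ["journalctl", "-u", "kubelet", "--since", since, "--no-pager"],
        capture_output=True, text=True).stdout
    return LOG_PATTERN in out

def restart(unit: str) -> None:
    subprocess.run(["systemctl", "restart", unit], check=True)

if recent_errors("10 minutes ago"):
    restart("kubelet")
    time.sleep(60)
    if recent_errors("1 minute ago"):   # kubelet restart alone didn't help
        restart("docker")
        restart("kubelet")
```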
I don't know who we've got and in what roles on this meeting right now, but it'd be great from my perspective if we could link up Bryson, if he's got an ability to test any proposed solution. Maybe the 1.20 release is the solution, but I'd prefer to see it actually tested, to prove that this was the root cause and that it's been fixed.

Bryson, as an interim, and I know this isn't a fix, but to maybe make sure it doesn't happen as often as it does: if it is related to DRS, change the aggressiveness of the DRS algorithm down to the lowest it goes, so that it will only move stuff in the event that it needs more resources on a given node, because there will be no performance benefit to it anyway.
Just turn the DRS aggressiveness all the way down to try to limit the number of vMotion operations; that should at least, you know, stop this from occurring as much.

So we currently have affinity rules set up so that they only move if they really have to. Okay, so we have our masters separated onto different nodes and workers on different nodes, and then if that host is gone, it'll try to start them up somewhere else, but if it comes back, it'll move them back.
So my guess is it's preferred. It would be interesting to see also if this happens on seven as well, with the changes in DRS, I mean the stun time, in vSphere 7. I don't know where you are on your migration to seven, but the changes in vMotion and DRS, and the shorter stun time, should also make it less common, would be my guess, just based off of those changes.
It would be interesting to see if it's reproducible in a seven environment as well, just because of those underlying vSphere fixes that happen to make it better. That is, if it's related to stun time; if it is related to the MAC change, which it may be, then it would make no difference, because the fundamental fact is there would be a MAC change anyway. So if it is stun, then that would fix it, but if it is MAC address changes, any vMotion operation could cause that, right?
But no, considering that, I mean, if it was MAC address changes, I would think that you would see it more often in more environments. Meaning my guess is that it's the DRS stun time in those specific cases, because it's not reproducible in a regular case of, here, I just reproduced it, and that does deal with a MAC address change.
Maybe there's some issue where at some point it didn't, but it seems the stun time is the logical place that it could be starting from; but, you know, who knows.

So by not reproducible, are you telling me that you can force a DRS migration and it doesn't always happen? Because there should always be a MAC address change when that happens, right?
Yes, I'm saying we haven't been able to, like, literally, when we try to do it, it doesn't happen. One thing we've found is that most of these times it's happening during upgrades.

But when they're doing some of those upgrades, it's pretty often that it happens; like I said, sometimes when you're looking at it, trying to find it, you know they're going to do an upgrade on it, and it doesn't happen.
So it's not every single time. But if it is more common with upgrades, I would say that does lend evidence or credence to the stun time thing, because the stun time scales based on the bandwidth allocated to vMotion. So the more concurrent vMotions you have going on, the longer your stun time would be, which would make sense then, because if you're evacuating a host, you're evacuating all the VMs, which means you would have the longest possible stun time.
Which is why a possible workaround, I mean it's a manual one, but when they're moving things into maintenance mode, would be to migrate some of the VMs manually first: migrate your VMs off of the host in sections and then put it into maintenance mode when there are fewer VMs on it, so that the stun time of those VMs is shorter. So if you had 50 VMs on the host, move them 10, 10, 10, 10, and then let maintenance mode take the last 10 or 20 off of it automatically. That should shorten the list of vMotions happening and shorten the stun time accordingly as well.

Bryson, on the different hardware, another thing to check: you mentioned VxRails, and I believe they're all 10-gig NICs.
If you see it on 10-gig NICs and not on denser NICs, like 25-, 40-, or 100-gig NICs, that would also make sense, because then there's not as much bandwidth for vMotion to use, which would again result in longer stun times on a vMotion operation. So if it's older hardware you see it on, look at the NIC size of the affected hardware versus the hardware that you don't generally see it on.
Okay, maybe we can keep this going in the Slack channel then; we're 10 minutes over already. So let's declare this meeting over and look forward to seeing you next time. I did look, by the way: I was concerned about the meeting next month, since we're always on the first Thursday, and I checked to make sure it didn't fall into the New Year's holiday, but it looks like we're safely into the second week of January, so the meeting should be at the normally scheduled time. Hope to see all of you, and invite your friends.