From YouTube: Kubernetes UG VMware 20210701
Description
July 1, 2021 meeting of the Kubernetes VMware user group. This meeting was a discussion of load balancer options when running Kubernetes on VMware infrastructure.
A: Hi, welcome to the July 1st meeting of the Kubernetes VMware user group. The agenda today will be load balancers for running Kubernetes on top of VMware infrastructure, presumably in an on-prem scenario.
A: We had originally hoped to recruit Dan Finneran to speak today, to do kind of a deep dive on load balancers, and probably kube-vip in particular, but he was unable to attend.
A: We're going to try to queue that up for a future meeting, but we're still going to stick with the load balancer topic, just because some users expressed an interest at the end of the last meeting. And if we can get an intro down now, it should be a good segue to some real deep-dive coverage in the next meeting, or whatever meeting we manage to recruit Dan for. So with that said, I've got a quick intro deck, just in case.
A: Then I have an idea: I'd like to do an audience poll on what load balancers people are actually using, or have tried using in the past, just to kind of focus discussion, so that people are aware of what expertise people might be bringing to the table here. So let me start by sharing my screen.
A: Okay, so: load balancers for Kubernetes on-prem. I didn't think we had covered this for a while, but I actually found this deck from last year, so I've updated it a little bit. We'll start with: what is a load balancer? Well, it's a tool that you would commonly use for making the Kubernetes control plane highly available.
A: You know, a scenario might be that this is some test or learning infrastructure that's kind of a throwaway, but it could even be something running an app or a service where a single node is good enough, just because you can tolerate some downtime and maybe have technology in place to quickly rebuild a Kubernetes cluster from scratch if something bad happens. But if you need high availability, well, you need a load balancer.
A: You typically need more than just a load balancer. You also need a highly available datastore that holds the state of your Kubernetes cluster, and that would generally be something like an etcd cluster, with an odd number of nodes greater than one. You typically have at least three nodes for etcd, then, and common practice is to co-locate your control plane components with that datastore, though technically two control plane nodes would be enough.
A: You probably don't need the three, but if you're going to co-locate them, I think common practice is that you go with three etcd nodes and three control plane nodes, and you use the load balancer in front of those control plane nodes. Now, in case this isn't apparent: in order to achieve high availability, you not only need those control plane nodes, but the load balancer itself has to have inherent high availability; otherwise the load balancer is a single point of failure.
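To make the quorum arithmetic behind that sizing advice concrete, here is a minimal Python sketch (not tied to etcd's own tooling; the numbers are just illustrative): etcd commits a write only once a strict majority of members acknowledges it, which is why an even member count adds no fault tolerance.

```python
# Quorum arithmetic behind the "odd number of etcd nodes" advice.
# etcd needs a strict majority (quorum) of members to commit a write,
# so the failures a cluster can survive is members - quorum.

def failures_tolerated(members: int) -> int:
    quorum = members // 2 + 1   # strict majority
    return members - quorum

for members in (1, 2, 3, 4, 5):
    print(f"{members} members: quorum={members // 2 + 1}, "
          f"tolerates {failures_tolerated(members)} failure(s)")

# Output shows why three is the common minimum: 2 members tolerate 0
# failures (no better than 1), while 3 tolerate 1, and 4 still only 1.
```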
A: There's also a second kind of solution, such as kube-vip, which works by putting a well-known IP address on one of the control plane nodes and then moving it to another host if the original fails. Under the covers, these sorts of solutions utilize ARP or BGP, and they do have limitations if you can't utilize multiple IPs on a host, which is often the case if you're in a public cloud environment. And if you're in a public cloud environment, more than likely you're going to use that cloud's load balancer anyway.
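To make the ARP mechanism concrete: the node that owns the virtual IP broadcasts a gratuitous ARP so the LAN maps the VIP to its MAC, and on failover the new owner re-announces. Below is a rough Python sketch of just that announcement step. It is not kube-vip's code (kube-vip is written in Go); the interface, MAC, and address are placeholders, it assumes scapy is installed, and it needs root to send raw frames.

```python
# Conceptual sketch of the gratuitous ARP announcement that ARP-based
# VIP tools make when a node takes ownership of the virtual IP.
# Assumes `pip install scapy` and root privileges; values are examples.
from scapy.all import ARP, Ether, sendp

VIP = "192.168.1.40"             # well-known control plane address (example)
IFACE = "eth0"                   # interface that should own the VIP (example)
NODE_MAC = "52:54:00:12:34:56"   # this node's MAC address (example)

def announce_vip() -> None:
    """Broadcast a gratuitous ARP: 'VIP is-at NODE_MAC'."""
    frame = Ether(src=NODE_MAC, dst="ff:ff:ff:ff:ff:ff") / ARP(
        op=2,                    # ARP reply ("is-at")
        hwsrc=NODE_MAC,
        psrc=VIP,                # sender IP == target IP: gratuitous
        hwdst="ff:ff:ff:ff:ff:ff",
        pdst=VIP,
    )
    sendp(frame, iface=IFACE, verbose=False)

if __name__ == "__main__":
    # A real tool runs leader election first; only the elected node
    # announces, and it re-announces on failover so peers update their
    # ARP caches and traffic moves to the new owner.
    announce_vip()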
A: So what we're talking about today is probably most relevant for running Kubernetes on vSphere infrastructure.
A: If you're going to expose an app or a service, there are a couple of scenarios; we're probably talking about exposing it to the outside. The load balancer is like a general-purpose solution that potentially occupies multiple levels in the network stack, but if you're talking about HTTP or HTTPS, there is an option to use an ingress controller to accomplish that exposure too. Ingress would typically be up at L7, load balancing down at L4.
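As a concrete illustration of those two layers, here is a hedged sketch using the official Kubernetes Python client: an L4 Service of type LoadBalancer next to an L7 Ingress. The names, namespace, and hostname are invented for the example, and it assumes a cluster where a LoadBalancer provider and an ingress controller are already installed.

```python
# L4 vs. L7 exposure, sketched with the official `kubernetes` client.
# Assumes `pip install kubernetes`, a working kubeconfig, and a cluster
# that already runs a LoadBalancer provider and an ingress controller.
from kubernetes import client, config

config.load_kube_config()

# L4: a Service of type LoadBalancer forwards raw TCP to the pods.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1ServiceSpec(
        type="LoadBalancer",          # kube-vip/MetalLB/Avi etc. fulfill this
        selector={"app": "demo"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
client.CoreV1Api().create_namespaced_service("default", service)

# L7: an Ingress routes HTTP by host and path to a backing Service.
ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="demo-app"),
    spec=client.V1IngressSpec(
        rules=[client.V1IngressRule(
            host="demo.example.com",  # example hostname
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/", path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="demo-app",
                            port=client.V1ServiceBackendPort(number=80),
                        )
                    ),
                )
            ]),
        )]
    ),
)
client.NetworkingV1Api().create_namespaced_ingress("default", ingress)
```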
A: There's also the issue of load balancing to the outside world, outside your Kubernetes cluster, versus load balancing within the Kubernetes cluster. Within the cluster, Kubernetes has its own built-in solutions for that, so you wouldn't have to deploy something like kube-vip or HAProxy to load balance things consumed entirely within your Kubernetes cluster. Load balancers vary by features: some of them may have all of these, some of them just a few. I'll let you read the list; I'm not going to read the slide to you.

A: And here are some available load balancers for use on-prem. First of all, in the hardware category: you know, load balancers aren't new; I think they've been available since the 1990s, and you're likely to have needs to load balance even legacy things that are running on VMs or potentially bare metal. So in many organizations, if it's a big data center, things like an F5 are in common usage, and they can be, and often are, integrated with Kubernetes.
A: If you're on a cloud rather than on-prem, you typically use whatever that cloud provider's built-in solution is. When you get to a modern containerized solution and don't have something like an F5, there are software solutions. In no particular order, I've listed them here; I'll publish a link to this deck afterward in the agenda notes doc. The ones that you see in the green font are links to the repos for these different projects. Not that there aren't links for HAProxy and NGINX, but I was whipping this slide deck together just this morning and didn't quite have time to finish. And then there are some commercial ones, like Citrix NetScaler, and then some tied to VMware infrastructure.

A: It's not uncommon for VMware customers to utilize some of these paid, non-open-source solutions, like NSX and the Avi load balancer. By the rules of the Kubernetes project that sponsors this user group, we can't turn this into a sales pitch, so I'm going to have a pretty light touch: I'm going to mention that these VMware things exist, but not go into any details or promotion on them.
A: So what I'd like to do here as a next step is to have an audience poll on what load balancers you've actually tried. Let me see if I can get this poll posted in chat... or could somebody else post it? Well, let me see; it's taking up my whole screen, so I'm going to...
B: Oh, it appears I pasted in the admin link, not the poll itself.
A: So it looks like, unless somebody is still in the process of voting, it's all kube-vip so far, plus HAProxy. It's interesting, because I think we had a meeting talking about load balancers about a year ago, and MetalLB kind of won the popularity contest back then. Now, obviously this group is not representative of the whole world either; potentially it's kind of a selective polling, but it's still interesting.
A: You know, if I did vote in this poll, for example: my own use I'd characterize as almost home-lab-like, right? I mean, at most I might be publishing one of my own blogs, but I can't claim that I'm running production Kubernetes. But nonetheless, it's interesting that kube-vip, which I think is relatively new, seems to be getting an awful lot of traction, and I have to say I've found it pretty easy to use, and I'm a former MetalLB user.
C: Yeah, I mean, I use kube-vip mostly when it comes to home labbing, and also for production things a lot of the time when not using commercial offerings. It is just by far the most customizable in terms of the type of networking settings you can configure, whether it be UPnP or other more advanced technologies.
C: It's really easy to set up. Also, I used to use MetalLB, but MetalLB's community is struggling, to say the least; there are issue tickets that have basically been open since 2014. Kube-vip is much more active currently, so that's why I'm using it. It has supported newer Kubernetes versions as of a day after release, while MetalLB took a month and a half to support 1.21. It's just more maintained, which is why I'm using it, mostly.
A: Yeah, I was going to ask if anybody had experience using it with, like, the Istio service mesh, because I did use MetalLB for that a long time ago and found it to be somewhat difficult to set up. But I did eventually get it to work, and I haven't gone back to trying to do that with kube-vip.
B: That's nice. Kube-vip is really good for home lab stuff, and it runs on ARM as well, as does MetalLB, which is the first one that I was really exposed to, because I run a Raspberry Pi cluster at home. So MetalLB primarily, and then I moved on to kube-vip once they added UPnP support and the ability to pull addresses from DHCP.
C: I haven't used it there, but I've used it in kind, and I've used it in ESXi on ARM. So I'd say that if it works on a VM running on ESXi on a Raspberry Pi, and it runs in kind, and kind networking is pretty similar to when you're using Fusion and things like that, then I don't think there should be any issues. But I haven't actually tested that.
B: I answered HAProxy in mine, because most of my stuff just runs on the TKG service these days, which is fronted by HAProxy. So I don't think about what LB I'm using, because that kind of takes care of it for me. But if I were to spin one up myself, it would probably be kube-vip, or, if I have more advanced uses, then maybe Avi.
A: Yeah, I think most of the commercial Kubernetes distributions, in their installers at least, give you an option of spinning something up for you in the software-based load balancer category. But I think it's also true that most of them let you override that if you choose to. Does anybody else have any questions or things they want to bring up with regard to load balancers? We've still got like 30 minutes or more left here, and I didn't come with a lot of prepared places to go.
D: Hi there. I had a question for people who are doing this for production workloads: does anyone have experience pursuing support for, like, the Avi product? Right now we're using MetalLB with TKG multi-cloud, and that's been working great, but we've looked at some of the things we'd get with TKG multi-cloud as it becomes Tanzu Standard, where something like Avi would potentially be included. I just wonder if people have tried the support on that or not. Kube-vip at that level has been scary, since...
D: Well, you don't get it with TKGm; there are no support docs from VMware on it. And it's been broken in a number of the TKG releases; you have issues where it doesn't properly ARP for new IPs and stuff. So we've been a bit hesitant on that, but I'm glad to hear that it's working well for others. We might check that out more too.
C: I also use it on TKGm in other instances, and it works great as well. And kube-vip is used in any TKGm no matter what, just because it's used for the control plane load balancer, just not necessarily for service-type load balancing. But I've had both of them working in TKGm very well, and Avi just comes with additional capabilities that are pretty nice from a commercial-offering standpoint.
B: I think Avi can do, if not now then it's certainly something they've talked about doing, layer 7 load balancing on ingress as well. So rather than just being a dumb layer 4 LB, you can do layer 7 LB with it as well.
C: It does that. It does ingress; not in Tanzu Standard, but in Tanzu Advanced it does, and that's just a licensing thing for the commercial offering, but it works. And yeah, it's got layer 7 load balancing, layer 4 load balancing, global service load balancing, and it automatically does DNS, so it kind of makes external-dns unnecessary in a lot of cases, because it just creates DNS records for you automatically. It has a built-in DNS service that you can set up.
C: Yeah, there's more, and this isn't a commercial offering here, but feel free to reach out to me on Slack and whatnot. I don't want to cross the boundaries here of what's allowed or not in terms of commercial offerings, but you're more than welcome to reach out to me on the Kubernetes or VMware Slack, and I'm more than happy to explain those differences, after I spent about a month trying to figure it out myself.
A: And maybe some of us VMware people will take back the note that we could perhaps improve a little bit the training around what customers are encountering. So, Keith, I see you in the chat; I don't know if you're free to go on the microphone, but you appear to have had some experience with these and made a comment about them.
A: Yeah, it could be; a lot of times people join this with a cell phone, and they're kind of limited on participation.
A: Anybody else got questions on load balancing? Or even, since it sounds like there's commonly a need to combine L7 and L4 functionality, whether people have used any of these software-based solutions in a combo scenario, because I think at least a few of them claim they have the potential to be usable at multiple levels. But I can't say that I've tried that myself.
A: I didn't have time to put together one of those Poll Junkie polls, so maybe we can just try to do this by responding either in chat or audibly: what ingress solutions have you tried and liked, or tried and not liked?
B: I can kick off if no one else is going to. The number one that I usually use is actually Traefik. I used Traefik 1.7 for ages on the home lab stuff because it had built-in ACME support. So as you added stuff that was service type LoadBalancer and used an ingress, you know, like a route, it would automatically generate a cert for that, which was publicly signed, and then assign it to the service for you.
B: That was the main reason I used it. And then I got very annoyed, because they did a v2.0 and completely changed the structure of how you request those middlewares and all this kind of stuff, and basically just broke absolutely everything. I would have had to go through a lot of learning, so I gave up on that and just went with kube-vip and ingress-nginx.
A: So what was it like when it worked well? You just handed it some Let's Encrypt-suitable credentials, and it just did everything automatically?
B: Yeah. Because it was integrated with external DNS, it automatically added the DNS records for me, and then it would ask Let's Encrypt for the certificates, and it used Cloudflare to authenticate that I owned the domain. My domain is set up on Cloudflare, so it used my Cloudflare API key to put something into DNS to say: hey, this is a legit request. Let's Encrypt signed it and then sent me back the certificate, so it could be done completely offline.
B: I didn't actually have to open any ports inward for it to do the certificate authentication or domain authentication, so it worked really well. And it's still sitting there on 1.7, just because I haven't been bothered to upgrade it to 2.0, and it kind of just works at the moment. But yeah, it's becoming more and more flaky as time goes on. It's hard to find something that does all that stuff together, though, especially if you're new and trying to piece it together.
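For readers unfamiliar with why no inbound ports were needed: with an ACME DNS-01 challenge, the client proves domain ownership by publishing a TXT record, so all traffic is outbound. Below is a simplified, hedged Python sketch of just the record-publishing step against the Cloudflare v4 API. The token, zone ID, domain, and challenge values are placeholders; real clients such as Traefik or cert-manager derive the key authorization from the ACME account key, which is elided here.

```python
# Sketch of the DNS-01 proof step: publish a TXT record at
# _acme-challenge.<domain> so the CA can verify ownership via DNS,
# with no inbound connectivity to the cluster required.
# Assumes `pip install requests`; all credentials below are placeholders.
import base64
import hashlib

import requests

CF_TOKEN = "<api-token>"     # placeholder Cloudflare API token
ZONE_ID = "<zone-id>"        # placeholder zone identifier
DOMAIN = "demo.example.com"  # placeholder domain

def dns01_txt_value(token: str, account_thumbprint: str) -> str:
    """RFC 8555: base64url(SHA-256(token '.' thumbprint)), unpadded."""
    digest = hashlib.sha256(f"{token}.{account_thumbprint}".encode()).digest()
    return base64.urlsafe_b64encode(digest).decode().rstrip("=")

def publish_challenge(value: str) -> None:
    """Create the _acme-challenge TXT record via the Cloudflare v4 API."""
    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
        headers={"Authorization": f"Bearer {CF_TOKEN}"},
        json={"type": "TXT",
              "name": f"_acme-challenge.{DOMAIN}",
              "content": value},
        timeout=30,
    )
    resp.raise_for_status()

# Usage (with values the ACME client would supply):
# publish_challenge(dns01_txt_value("<acme-token>", "<account-thumbprint>"))
# Once the CA sees the record it issues the certificate, and the record
# can be cleaned up; nothing ever connects inward to the cluster.
```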
B: Do you have any idea as to...

B: No, it's got a huge amount of attention, but they've got a commercial offering as well, and they're nerfing the upstream offering...
A: I noticed in chat that, in terms of ingress solutions, Keith said he's putting the kids to bed, but he's used Contour, and I know I've used Contour myself.
C: Yeah, I've used Contour a lot, and for the ACME part I just use cert-manager, which integrates via annotation with Contour, both with Ingresses and HTTPProxies. But in my clusters I have both NGINX and Contour, because each one can do things that the other can't.
C: So I use both of them, and that's the nice thing versus load balancer providers, where you can only have one in a cluster: you can have multiple ingress controllers in a cluster. I've had three or four ingress controllers in the same cluster in the past. So that's nice, but Contour is kind of the standard that I've been going to recently.
C: Just because of how easy it is, and in terms of multi-team security it offers some capabilities in that regard that other ones don't have.
A: Yeah, I'm not claiming to be an ingress expert, I'm far from it, but I've heard there are potential differences there, even in supporting different namespaces and things, to get multi-tenancy.
C: Overall, I'm an Envoy fan, just because I'm also an Istio fan, so it's Envoy the whole way. I like using the same technology, and when service meshes and all that are coming in, I think it's nice to have a level playing field where the same tooling is being used across the different parts of the networking stack; it just kind of makes things easier to debug.
C: You only need to learn one type of tool to debug, really, which is why I use it. I haven't found any performance differences or anything in that case; it's more that, because other tooling is using it, I found it easier to implement.
A: Yeah, one issue I had, and I'm recollecting now what it was like when I first got into this a couple of years ago: one of the issues I had with NGINX comes about just because there are multiple things built using NGINX as a foundation, so that when you Google search for things, there are paid versions, open source versions, and then even within the open source I think it got forked once or twice, and you'll find a blog post or something purporting to describe how to use it that may not match the variant you're running.
C: Yeah, the hardest part, I think, with ingress is that everyone has annotations. NGINX especially is known for this, and AWS, for having ingresses where, for all of the advanced capabilities, you have to write JSON files and add them into annotations on the Ingress. So Jetstack, actually, who also created cert-manager, have an open source tool; I just sent the link. It's a website where you can actually build an Ingress through a GUI, and it creates the YAML file for you, including things like cert-manager integrations and external-dns integrations. It's very nice if you're using NGINX, because trying to know how to build those JSONs out for advanced use cases is a doctorate to understand how to do it.
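To illustrate the annotation sprawl being described, here is a short, hedged sketch of the metadata such an Ingress ends up carrying. The annotation keys are real ones used by ingress-nginx, cert-manager, and the AWS Load Balancer Controller; the values and names are invented for the example.

```python
# The annotation-driven configuration style under discussion: behavior
# lives in untyped string annotations rather than in the Ingress spec.
# Keys are real (ingress-nginx, cert-manager, AWS LB Controller);
# the values below are examples only.
from kubernetes import client

metadata = client.V1ObjectMeta(
    name="demo-app",
    annotations={
        "kubernetes.io/ingress.class": "nginx",
        "cert-manager.io/cluster-issuer": "letsencrypt-prod",  # example issuer
        "nginx.ingress.kubernetes.io/ssl-redirect": "true",
        "nginx.ingress.kubernetes.io/proxy-body-size": "10m",
        # AWS ALB-style config can grow much larger, embedding JSON, e.g.:
        # "alb.ingress.kubernetes.io/actions.forward": '{"type":"forward", ...}'
    },
)
# Attach this metadata to a V1Ingress as in the earlier sketch. None of
# these behaviors appear in the typed Ingress spec itself, which is the
# sprawl the Gateway API effort mentioned next aims to replace.
```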
B: Yeah, I'd seen some stuff to do with ingress v2, or is it, like, the services v2 API? Right, yeah. It's supposed to address a lot of this annotation sprawl that we have.
C: It's definitely a huge improvement, although what I can say is that TGIK (for those who don't know, TGIK is an amazing weekly show) found it was basically the only product that ever borked the people on TGIK: they couldn't figure out exactly how to grok it. It's not 100% there yet, but it definitely gets rid of annotations. It just makes it very difficult to understand how you actually create a basic example as well.
C: It's not stable yet; it's an alpha feature, and they're making changes. I think it's v1alpha2 now, or it is going to be in 1.22, and they're changing it around a lot right now, because they've gotten some interesting feedback, if you follow the GitHub on it.
A: It might be kind of a cold statement on the condition of the world, the fact that you need a tool to use a tool; that might indicate things are a little bit more complex than they need to be. Scott, what's your take on even using just the Istio service mesh instead? Does that help you avoid going down the rat hole of configuring these annotations and things for directly using an ingress, or are they just two different categories that don't intersect?
C: It can. There is the Istio gateway, which can do layer 7 load balancing and can be an ingress controller. I happen to like kind of a divide-and-conquer approach: I think that Istio is very good at what it does best, which is service mesh, and I think that there are other products that do ingress, in terms of traffic from external services, better than it does.
C: So I don't use Istio for that, although I have seen it used by people who are very happy with it. I happen to like separating the two out. Although it can do everything, it's just a less capable ingress controller than others in many cases, or in other cases it's just more complex to use than other ingress controllers: it uses a CRD for its management, like Contour does, but its CRD is much more cumbersome than Contour's HTTPProxy CRD.
A: So if we manage to get Dan Finneran on a future meeting here to do a deep dive, do people have ideas for questions they'd like to ask? What I'm thinking is that if we approach him and can give him a list of questions or topics in advance, it would help him give us a more focused presentation on what people are interested in, whether it be, you know, how you'd go about using it, or a wish list for feature enhancements. I don't know; if somebody's got ideas, let's discuss.
A: But if any group members have something they'd like to give Miles and me as a homework assignment to cover at the KubeCon event: whether you can go to KubeCon or not, the CNCF always publishes these eventually out on YouTube, so you'd still be able to catch it.
A: Yes, that's true. I'm on the program committee, and if you watch it live-streamed, the registration is comparable to what it has been for the COVID-era conferences over the last year, so relatively inexpensive; for physical attendance, it's back to that old world. Preliminary indications seem to be that a fair number of people are intending to attend physically too, so it'll be interesting to see what happens there.
A: You were lamenting some of the state of the world with Kubeflow, and maybe covering that, perhaps a little broader than just machine learning, bringing in Bitfusion and how to share GPUs when running Kubernetes on vSphere infrastructure, would be an interesting topic. Just to confirm that: do any of the members here in the meeting want to give us a thumbs up or a thumbs down on whether you think that would be a useful or interesting topic to cover?
B: I'm always happy to talk about Bitfusion; I'm just, you know, wary about the commercial-offering thing that we've got going on, because it is a commercial product, or it's part of vSphere, technically; you have to pay for it. So I don't know where that falls.
A: I think if it's inherently vSphere, that's okay; this is about running on vSphere, so that's true. What we can't do is pitch it, meaning giving, you know, ordering information, prices, that kind of thing. But if it's about using it, so long as you can use pure upstream Kubernetes, either anybody else's commercial distro or just the pure open source Kubernetes, layered on top of vSphere, that's fine for coverage under the rules.
B: I'm always happy to do ML stuff if people are interested in ML stuff. If not, that's also fine as well. I know that's a very niche subject that only applies to a limited subset of our users.
C: I know I find it very interesting, and especially on vSphere 7, with assignable hardware and Bitfusion, it becomes much easier to utilize within upstream Kubernetes, so I'm always a plus on that. But I know it is a niche subject that not many people necessarily are dealing with, although it is one of the harder points on vSphere, so it is something that makes sense in the VMware user group.
B: Yeah, we could do something like that, Steve. You know, "the best way to get value out of your GPU hardware on vSphere and Kubernetes," or something like that, and we can discuss the different options there are on vSphere for doing GPUs and machine learning, whether Bitfusion or assignable hardware, like Scott was saying, or, you know, any number of ways. Yeah.
A: Well, I think I've been going to KubeCons for three years now, and I caught some of the earliest Kubeflow talks, where the talks emphasized how easy it was. But then afterward I'd hear countless war stories from people who said, you know: I saw that thing on how easy it was, but then I tried to do it at home, and it wasn't easy at all.
A: But I think maybe, if we do that, we should try to come up with even some use case that we can use to emphasize it, and maybe something that people could do at home.
B: Well, I've actually got one; that's the reason I've been working on this for the last, I'd say, three weeks to a month now. We have an actual use case, which is: you have, say, a car park at an airport, right, and you have a camera pointed at the gate, and when cars roll up, if they've already prepaid for parking, it recognizes the number plate and it opens the barrier. On the way out, likewise, it says: okay, this is that car...
B: ...they owe this much money; put in your credit card; take the payment. So basically we built a model that does the license plate inferencing, and then we've packaged that up. It runs on top of Cloud Native Runtimes, or Knative, basically, and then it takes those snapshots, does the inferencing, sends the data back, that kind of thing. So we do have a real use case, and we're trying to integrate actual cameras and stuff into it as well. So maybe that would be an interesting one, because it's something tangible, you know, at least.
A: They really don't have to guess between a few potential candidates to spot that, hey, this car that just pulled into the pickup line very likely corresponds with this order that we have ready to go. Maybe an employee could carry the bag of food out to the person and not have them tie up a space in the line, and, you know, there's...
B: Pretty cool; no, that sounds interesting for sure. It's a bit of a rabbit hole once you get into it, though. I like all this stuff; you're like, how hard could it be? And then you start reading documentation, and you realize very quickly how hard it could be. Except there's, you know, like five layers of "how hard could it be" whenever you run it on top of Knative on Kubernetes on... and, you know, there's all these dependencies.
C: Yeah, I'm actually working on a pet project on this also, but around building ML around vSphere events. It's actually using Knative and the VMware Event Broker Appliance as a way to ingest the data of all the events from vSphere, which can then trigger, through Knative, functions that do ML on all that, to try and detect anomalies within a vSphere environment as well. So...
C: One month ago; it was the last session, yeah, exactly. It's now using Knative, and that comes with a bunch of capabilities that make this a bit easier, because it's now using the new CloudEvents, which are easier to ingest into machine learning systems, I would say.
B: Yeah, the only challenge I've had with the CloudEvents stuff in my particular use case is because it's image data in a cloud event. What I learned from some of the engineers working on Knative today is that cloud events theoretically have a size limitation of 64K, and, you know, I'm sending images that are upwards of a meg at this thing and wondering why sometimes it doesn't quite work the way it should, and they're like...
B: Right. So what we've actually been doing is: the images are in S3, and we're using TriggerMesh's AWS S3 trigger source for Knative eventing, so anytime something gets uploaded to an S3 bucket, it sends an event with the link. So that's exactly what we're doing: it gets the link, it downloads the image, but then it has to base64-encode it and send it to the inference server, and that in and of itself is an event which is, again, a meg in size.
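That link-then-fetch approach is essentially the "claim check" pattern. Here is a hedged Python sketch of the middle hop using the CloudEvents SDK and requests. The inference URL, event types, and the `url` field in the incoming event data are placeholders; the pipeline described above uses TriggerMesh's S3 source and a Knative service rather than this standalone function.

```python
# "Claim check" pattern: the event carries a link to the image in S3,
# not the image itself, keeping events under CloudEvents' practical
# size limits. Assumes `pip install cloudevents requests`; the URL,
# event types, and data fields below are placeholders.
import base64

import requests
from cloudevents.http import CloudEvent, to_structured

INFERENCE_URL = "http://plate-inference.example.internal/v1/infer"  # placeholder

def handle_upload_event(event: CloudEvent) -> None:
    """React to an 'object created' event by fetching and forwarding the image."""
    image_url = event.data["url"]      # link to the S3 object (assumed field name)
    image_bytes = requests.get(image_url, timeout=30).content

    # Wrap the base64-encoded image in a new event for the inference server.
    out = CloudEvent(
        {"type": "demo.plate.inference.request",      # example type
         "source": "urn:demo:claim-check-forwarder"}, # example source
        {"image_b64": base64.b64encode(image_bytes).decode()},
    )
    headers, body = to_structured(out)
    requests.post(INFERENCE_URL, headers=headers, data=body, timeout=60)
    # Note: this forwarded event is roughly as large as the image, which
    # is the ~1 MB concern raised above; passing the link all the way to
    # the inference step would keep every hop small.
```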
A: I'm wondering; you know, I'm trying to learn the whole CloudEvents thing myself, but with regard to using the vSphere event integration with machine learning, it might eventually run into something related to that same size limitation.
A: I can see that you could use the machine learning to give you early warning on problems, and that would certainly be valuable; but then you might want to take it to root cause analysis, where you'd potentially want log files and things that could also be very large and open-ended. So if you were to use that pattern of providing a link to a bigger source as something that could be consumed upstream, that could be just a generically useful pattern, if that's workable.
C: Yeah, and it definitely is workable, because you can also ingest into a similar, or, I know it would be a different model, but you could ingest into it, for example, from a syslog that collects all of your logs from vCenter; and then, because of the timestamps between the events and the logs, you could also correlate accordingly to understand...
C: ...you know, anomalies between the event and the logs that it triggers. If there's any anomaly there: even though the same event came off, what actually happened may have been a bit different, because the events, you know, aren't necessarily always a hundred percent "the exact same thing happened." The end goal is the same, but maybe it had a jitter in the middle and had to do a retry, and there's a built-in retry.
C: You could get that through the logs, but not through the event, and then that could possibly be something that you would learn through ML: that there's a precondition to a failure that's about to occur. When all of a sudden the logs have to do two retries, then three retries, you know that in ten minutes your environment is going to go down, or whatever it is. Things like that.
C: You definitely could, by doing a cross-correlation between two data sources, but that's a not-so-simple model to build, I would say; but it definitely is within the realm of possibility.
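A hedged sketch of what that cross-correlation could look like with pandas: join each vSphere event to the syslog lines in a preceding time window and flag rising retry counts. The column names, event type, window, and threshold are all invented for illustration; a real model would learn these rather than hard-code them.

```python
# Toy cross-correlation of two data sources: vSphere events vs. vCenter
# syslog, joined on timestamp windows to spot retry bursts that precede
# failures. Assumes `pip install pandas`; schema and threshold are made up.
import pandas as pd

events = pd.DataFrame({
    "ts": pd.to_datetime(["2021-07-01 10:00:05", "2021-07-01 10:07:30"]),
    "event": ["VmPoweredOnEvent", "VmPoweredOnEvent"],
})
logs = pd.DataFrame({
    "ts": pd.to_datetime(["2021-07-01 10:07:10", "2021-07-01 10:07:20",
                          "2021-07-01 10:07:25"]),
    "message": ["retrying op", "retrying op", "retrying op"],
}).sort_values("ts")

WINDOW = pd.Timedelta("60s")   # look-back window per event (arbitrary)

def retries_before(ts: pd.Timestamp) -> int:
    """Count retry log lines in the window leading up to an event."""
    window = logs[(logs.ts > ts - WINDOW) & (logs.ts <= ts)]
    return int(window.message.str.contains("retry").sum())

events["retries_before"] = events.ts.map(retries_before)
events["anomalous"] = events.retries_before >= 2   # arbitrary threshold
print(events)
# The same event type looks different in the logs: the second power-on
# needed several retries, the kind of precondition an ML model might learn.
```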
A: All right; okay, I'll take that silence as nothing to add to the agenda, and I'll thank everybody for coming. Miles and I are going to get a proposal out for KubeCon; we'll let you know what we came up with at the next meeting. And we're going to try to recruit Dan to talk about kube-vip; I can't promise that recruiting effort will be successful, but we're going to try.
A: So once again, thanks everybody for joining and contributing to this. I'll go and patch up some of the links that came through in the chat here and append them to the meeting agenda notes document, as well as a link to that deck that I presented. So...