From YouTube: TGI Kubernetes 188: vCluster
Description
Join Daniel Esponda and Derrik Campau to learn about running multi-tenant virtual clusters with vCluster!
A: All right, awesome, I think we're live, so welcome everyone to TGI Kubernetes episode 188. Feel free to introduce yourselves in the chat; let us know who you are and where you are joining us from today. We're going to talk a little bit about vCluster, which is something that's dear to my and Derek's heart, as we are part of the VMware SaaS platform team, which is where a lot of the different services that VMware runs in a SaaS way are hosted.
A: So we deal a lot with having multiple tenants running in a single cluster, managing their permissions, and really just separating their resources and making it easy for them to run their services on our platform. So that's where we are. I'm a staff software engineer here at VMware; our team name is VDP, and it's basically running the VMware SaaS platform. Derek, did you want to introduce yourself?
B: Sure, I'm Derek. I'm a staff SRE on the same team as Daniel, so two sides of the same coin. We run the platform, keep everything running, add features, and have a blast doing it.
A: Yeah, awesome. Welcome, Martin, joining us from the Netherlands; that's pretty cool, all the way from across the ocean. And someone from Cupertino, California. I think Derek's also in California, and I'm over here in Dallas, Texas.
A: Hey, Rich from Portland, Oregon — okay, hey, welcome; vCluster right there, yeah. And then Juka — I'm probably going to mispronounce the last name, so I won't even try, but welcome from Finland. That's pretty awesome. So, Derek, I know we have some things we want to share first about Kubernetes and what has been going on in the Kubernetes world this week, so if you want to...
B: Go ahead and share that. Yes, we've got a couple of things here, so kind of a quick review. It's been one year since Kubernetes changed to three releases a year, and there's a form in the show notes to give some feedback on that change. Also, if you are involved in submitting code changes and whatnot to upstream Kubernetes, code freeze is next week, so please be aware of that date.
A: Yeah, a lot of testing, especially.
B: Especially when APIs get removed right after being deprecated, and we don't control a lot of what the deployment teams use — so exactly, we're kind of at their mercy to guide them towards getting the change done in a timely fashion.
A: Yeah, that's really one of the hardest things as well, right — getting all of the tenants (again, we're running a multi-tenant cluster here), number one, informed about the changes; and then a lot of times they come and say: hey, we don't have the capacity or the time to make these changes to upgrade to the new Kubernetes API version. So that delays a lot of things.
A: Now, when I looked at vCluster, I thought, you know, that's maybe one of the things that would help, and we'll explore that a little bit here. But one of the things with vCluster is that you still have to keep the whole cluster — at the very least, from what I have seen so far — on the same version. So it's not like you're escaping having to maintain those Kubernetes API versions and getting your tenants to all upgrade at the same time as well.
A: Good, all right, let me share my screen here. Hey, everyone else that has joined — hi from Chicago; Engin, Rich, Jeremy, yep, welcome, welcome. All right, let me share my screen here and we'll go to my screen one.
A: So yeah, let's talk a little bit about virtual clusters and what they are. vClusters are virtual clusters — fully working Kubernetes clusters that run on top of other Kubernetes clusters. One of the things that we often see as platform owners is that tenants want to deploy certain CRDs, or they want more permissions — more administration permissions — in the cluster, and that's really challenging, because it requires a lot of manual work on a day-to-day basis: them having to request those permissions, us having to approve them, and then manually making changes to the cluster.
A: So let's talk a little bit about an overview of what vCluster does, and what it is and what it isn't. vCluster is an application that runs on Kubernetes. It will run in a particular namespace that you tell it to — so basically, when you create a vCluster, it's going to run in a particular namespace.
A: You have a syncer, which is one of the core components of vCluster. It syncs the objects that you're creating in that virtual cluster down to the underlying actual cluster where it lives, and you'll see here that the people who are managing the actual, real cluster are still able to see all of the resources that those tenants are creating.
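For instance, a minimal sketch of what that looks like from the host cluster's side (the namespace name is just the one used later in this demo, and the exact naming of synced pods varies by vCluster version):

```bash
# On the host (real) cluster: the vCluster "cluster-a" lives in its own namespace,
# and tenant workloads created inside the virtual cluster are synced into it.
kubectl get pods -n cluster-a
# Typical output includes the vCluster control-plane pod plus the synced tenant pods,
# whose names are rewritten by the syncer to avoid collisions.
```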
A: So, in a way, as an administrator managing a cluster, you can still manage and apply a lot of the security policies that you would run on that cluster. It's just that the tenants are only going to have very limited visibility: inside that virtual cluster they're not going to be able to see other tenants' workloads or other resources they're not supposed to. And I think, for example, the way we do that today is — yeah, what is that, Derek? What is that service today that we're using to prevent tenants from seeing other cluster-wide resources?
A: Yeah, Capsule. So that's another interesting project, right — via Capsule we're able to sort of intercept some of those requests for cluster-wide resources and limit them to only the resources that the tenants should have. But this provides, I feel, possibly an easier way to manage that.
B: I think, for me, one of the most interesting things — if we have time today to go over it, that would be cool — is understanding how it handles CRDs, because that's one big pain point we have: tenant A wants this CRD, tenant B wants that CRD, and keeping those updated and not having them collide, right — yeah, it's hard.
A: Yeah, absolutely. So we're going to go through here and explore some of this. I already installed vCluster on my local machine, and if you're going to be trying out vCluster, I definitely recommend you try it out, because it's a really cool tool — go out there and start playing around with some of this.
B: Is the screen size okay for everyone? Yeah?
A: All right, yeah, so I already set a few things up here. One of the things that I want to show is interacting with and creating a vCluster, and how easy it is. So here I'm connected to one of the clusters that we manage. If we look right now — we're in the default namespace, there we go — we have nginx running in this cluster. So let's go ahead and create a vCluster. I already downloaded the vCluster tool, so we have this vcluster CLI.
A: So we'll do `vcluster create`, and let's just say we'll create cluster-a — let's just leave it that way. And actually, there's also a flag that the developers recently introduced to create it in an isolated mode, and I think that's going to be really important, because again, one of the things we want to make sure of when we're creating virtual clusters is that tenants still maintain that isolation, and that pods are not able to talk to each other across the different clusters unless they really want to be able to.
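A minimal sketch of that command, assuming the vcluster CLI is installed and your kubeconfig points at the host cluster (the namespace name and the --isolate flag here are what the demo uses; check `vcluster create --help` for your version):

```bash
# Create a virtual cluster named "cluster-a" in its own host namespace,
# with vCluster's isolated mode (network policy, resource quota, pod security defaults).
vcluster create cluster-a --namespace cluster-a --isolate
```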
A: So there are a few different ways of managing the configuration for the isolation mode. Usually, the way that you would create a vCluster in a production environment would more than likely be via a Helm chart. Here, because we're really just exploring the basic functionality of vCluster, I'm using the CLI that's available, but in a production environment you would probably be doing this via the Helm chart. But let's go ahead and create this cluster via the CLI — all right, what did I do here?
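For reference, a hedged sketch of what that Helm-based install might look like — the chart repo and chart name are from the vCluster docs, but treat the exact values file contents as your own:

```bash
# Add the loft-sh chart repo and install a vCluster into its own namespace,
# driving isolation and other settings from a values file instead of CLI flags.
helm repo add loft-sh https://charts.loft.sh
helm upgrade --install cluster-a loft-sh/vcluster \
  --namespace cluster-a --create-namespace \
  --values vcluster-values.yaml
```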
B: Rich, we actually put up a PR yesterday — Daniel did it, and it looks like it's been approved already — to add another CIDR range to the default network policy. At least for our clusters we're using — I forget the RFC for it — the CGNAT CIDR range, basically, for our cluster networking, and that was still allowing traffic with isolate enabled until we made that fix.
A: Yep. And so over here we can see what it does: it actually installs a Helm chart in the background to get all the components up. So if we now do `k get pods` in the namespace cluster-a, no resources are there yet, but it actually created that namespace. We didn't have that namespace existing before, but now that we ran the command `vcluster create cluster-a --isolate`, it created that namespace for us, and now we can run a command to access that virtual cluster.
A: Great. So what this does is create a proxy on my local machine that is going to proxy requests over to that virtual kube API server that is now running in my actual Kubernetes cluster.
A: The way I would imagine this running in a production environment is that this virtual kube API server would be exposed via a load balancer, and we would generate these kubeconfig files for tenants to use. So while that's running, I'm going to open a new terminal — and I'm going to make this bigger, all right. So `cd cluster-a`, and now we have the kubeconfig that was created by that command right there, and let's do `k get pods -A`.
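A small sketch of that connect-and-use flow, assuming CLI defaults (writing a kubeconfig into the current directory is the behavior shown in the demo; the exact file name may differ by version):

```bash
# Start a local proxy to the virtual cluster's API server and write a kubeconfig for it.
vcluster connect cluster-a --namespace cluster-a &

# In another terminal, point kubectl at the generated kubeconfig and use the vCluster
# exactly like a normal cluster.
export KUBECONFIG=./kubeconfig.yaml
kubectl get pods --all-namespaces
```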
A: And there we go — so now we are actually running kubectl commands in that virtual cluster that we just created, and that's how simple it was to create that cluster. You can see that here it's running just the CoreDNS pod.
A: Now let's try launching something for the first time here. So, for example, I'm going to do `k create deployment` — actually, let's do this: I'm going to try to run a privileged deployment, right, so nginx with the nginx image. This image is going to try to run as root.
A: All right, so why is it not able to run? Because it says the container has runAsNonRoot and the image will run as root. So that's one of the things: when you specify isolated mode, it actually puts a policy in there that prevents containers from running as root. That's one of the cool features you already have turned on by default — running in isolated mode will prevent pods from running as root.
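As an illustration of the fix applied next, a hedged sketch using an unprivileged nginx image and an explicit securityContext (the image name and UID here are assumptions for the example, not what the demo typed):

```bash
# Inside the virtual cluster: run nginx as a non-root user so the isolated-mode
# policy (runAsNonRoot) admits the pod.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels: {app: nginx}
  template:
    metadata:
      labels: {app: nginx}
    spec:
      containers:
      - name: nginx
        image: nginxinc/nginx-unprivileged:latest  # listens on 8080, runs as non-root
        securityContext:
          runAsNonRoot: true
          runAsUser: 101
EOF
```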
A: All right, all right, and now it's running — so that's cool, right. We saw that just by changing the container to one that's not running as root, we are able to run this nginx container. Now what I want to do is go back and try to create a second vCluster, so that we can see the interconnectivity and whether we are able to actually isolate the network. So I'll open up a new tab here and set my kubeconfig to...
B: Are the pods not able to pull because of Docker Hub limits?
A: Yep — and that's one important thing to note: it needs to be in a different namespace, which makes sense, actually.
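In other words, each vCluster gets its own host namespace; a sketch of creating the second one (names mirror the demo, and the flags are the same assumptions as before):

```bash
# Create a second, separately isolated virtual cluster in its own host namespace.
vcluster create cluster-b --namespace cluster-b --isolate
```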
A: All right, cool, so there we go — a little bit of issues there, but now we can see that we have vCluster A here, which a tenant has, and in there that tenant created a deployment of nginx and they are able to see it. So when they do `k get pods -A` — getting the pods in all of the namespaces — they see their pod. But when a tenant in cluster B — too many tabs here, they just keep multiplying...
B: So I think that would have to be, you know, the same CNI on the cluster, of course. Storage — I mean, I believe it would also pull in whatever storage classes are available on that host.
B: I think that's a good question — whether you can create custom storage classes unique inside each vCluster. It sounds like, based on reading the documentation, the answer would be yes, but we of course need to test that, because it's basically mocking the Kubernetes API. So I think all those commands are there, except anything it needs to forward on to the real server.
A: All right, so what we see here is I did `k get pods -o wide -A` on cluster B, and you can see I can only see CoreDNS, and I can see the IPs of those pods. Now, what's interesting to note here is that the IP address attached to this pod is still within that same IP space of the host, because what it's doing, again, is syncing that pod to the underlying actual cluster.
A: ImagePullBackOff — that's the Docker Hub stuff, but they can still hit it, and the reason is, again, the unpatched network policy. So let's take a quick look at that. Let me open yet another new tab here — all right, so we'll export my kubeconfig back to my original Kubernetes cluster.
A: Yep, and I'm going to use k9s, which is one of my favorite tools, to actually look at a Kubernetes cluster that has a bunch of different stuff in it. So when you run a vCluster in isolated mode, what it does is create a network policy to isolate those workloads. So let's take a look at it — let's look at the cluster-a workloads network policy.
A: You know, for vClusters to isolate that traffic, we actually need to include the IP address space that we use for our Kubernetes cluster, which is 100.64.0.0 — is that right? Yeah.
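To make that concrete, here is a hedged sketch of the kind of egress rule involved — this is an illustrative NetworkPolicy, not the exact one the vCluster chart generates, with the CGNAT range they patched in shown in the except list:

```bash
# Host cluster: an egress policy on the vCluster's workloads that allows internet
# traffic but blocks private / cluster-internal ranges, including the CGNAT block.
kubectl apply -n cluster-a -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: cluster-a-workloads-egress
spec:
  podSelector: {}          # all synced workload pods in this namespace
  policyTypes: ["Egress"]
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
        - 100.64.0.0/10    # CGNAT range used for this cluster's pod networking
EOF
```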
B: So there's another question: can you control who can use what resources, as in a policy that denies or allows RAM and CPU usage? In the show notes I linked to the vCluster isolation page, which also has kind of a dump of the values example, and there is a resource quota section.
A: Yeah, I think — again, in a real production setting — you would probably be running this stuff off of a Helm chart, where you would have more customization and more control over how that network policy gets applied. So you would probably need to put a little bit more thought into that network policy and how to isolate your workloads, but I think it's still completely doable, right, yeah. Okay, so with that done, let's create a second pod, all right.
A: Yeah, I think we're going to run into that docker.io issue again, which is something with my setup here. But what I wanted to show, if this would actually pull from Docker, is that making a curl from this pod to this pod would actually work now, because this pod would have the label that it's being managed by vCluster A. The reason this isn't pulling has nothing to do with vCluster.
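The test being described would look roughly like this — the pod IPs are placeholders for whatever `kubectl get pods -o wide` shows, and the curlimages/curl image is just an unprivileged image chosen for the example:

```bash
# From inside vCluster A: reach another pod in the SAME vCluster by pod IP -> allowed.
kubectl run curl-a --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s --max-time 5 http://100.64.12.34:8080

# From inside vCluster A: reach a pod that belongs to vCluster B by its pod IP -> the
# isolated-mode network policy should block it (the request times out).
kubectl run curl-b --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s --max-time 5 http://100.64.56.78:8080 || echo "blocked"
```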
B: Yeah, I think that's about all I have to say on that, sorry. Yeah, I think vCluster definitely seems easier to get going with, right?
A: Yeah, vCluster has been super easy to show and super easy to use. I mean, you saw me — I just created, what, two vClusters in a really short time period, while using Cluster API is a little bit more involved.
B: The default values have the resource quota enabled: true. Ephemeral storage looks like it's 160 gig, and I think it's adding three gig per pod as a default request — maybe eight gig as a default limit. It seems like a miscalculation, though, because with only three pods in there, right...
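A hedged sketch of what those chart values might look like in a vcluster-values.yaml — the key names and numbers follow the isolation docs as best I can recall, so verify them against the chart's actual values before relying on them:

```bash
# Illustrative values file for the vCluster Helm chart's isolated mode.
cat > vcluster-values.yaml <<'EOF'
isolation:
  enabled: true                 # network policy + pod security + quotas
  resourceQuota:
    enabled: true
    quota:
      requests.cpu: "10"
      requests.memory: 20Gi
      requests.ephemeral-storage: 60Gi
      limits.ephemeral-storage: 160Gi
  limitRange:
    enabled: true
    default:
      ephemeral-storage: 8Gi    # per-container default limit
    defaultRequest:
      ephemeral-storage: 3Gi    # per-container default request
EOF
```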
A: Then, let's see — there we go, no connectivity. So now we've shown that, again, it's using that network policy to prevent connectivity between two different vClusters talking to each other.
A: Yeah, so I mean, I think just from this alone there are already a lot of good use cases you could use vCluster for, especially in a development and CI environment. I really think this is a good way to continue investigating, to actually use in a production-type setting in the future, but some of the challenges that you might run into are things like...
A: Now again, the pods are actually being synced to the real host themselves; it's not like the pods are running in some kind of imaginary land, right? So you could use kiam at the actual, real cluster level, but I think you lose the ability for a tenant to manage which IAM roles can be used on a per-namespace level inside the vCluster, because those namespaces don't actually exist in the host cluster — they're just virtual namespaces.
A: Yeah. Now, one of the things that I'm sure wouldn't be too hard to do is to expose the namespace that they're running in on the vCluster as labels, and then you could probably use labels somehow to say: hey, the pods that have this label — this virtual namespace — can only assume X, Y and Z IAM roles, and pods with a different virtual namespace can only use A, B, C IAM roles. I think that would be a possibility.
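As an example of the kind of labels the syncer puts on pods in the host namespace — the exact label key below is an assumption based on the vCluster docs, so check what your synced pods actually carry:

```bash
# On the host cluster: inspect the labels the syncer attaches to tenant pods.
kubectl get pods -n cluster-a --show-labels

# Select workloads by their originating virtual namespace (label key is an assumption):
kubectl get pods -n cluster-a -l 'vcluster.loft.sh/namespace=team-frontend'
```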
A: Performance, network latency — I don't think there would really be a performance or network hit, because, again, it's using the same networking layer as the host network. I mean, if you looked at the calls that I made here, it's not like there's a virtual CNI running on top of this cluster; it's using the same CNI.
B: And I know the docs talk a little bit, too, about API performance, and that vCluster in larger clusters can actually help with that, because it's reducing the amount of load and calls on the true underlying API server. Rich added a comment about OPA still being able to do admission control because it's still getting scheduled on the underlying hosts — I guess that's a good point.
A: Yeah, exactly — there's no multi-CNI, it's literally still using the same CNI as below. And I think that's a good thing, because it means the tenants don't have to worry about managing the CNI or all of the other Kubernetes administration tasks. They still get a vCluster that they can manage and play with, but they don't have to handle the day-to-day operational activity that a platform team like us would have to.
A: And a caveat there, though: the reason I had to use an AWS cluster is that a lot of the local clusters, like kind or the Docker Desktop cluster, don't — I don't think — use the Calico CNI, so those CNIs don't enforce network policies. So if your CNI doesn't allow or doesn't enforce network policies, then you're not going to have this isolated environment like you would in a production environment.
A: Awesome. Any other questions from the chat?
A: Yeah, yeah — I remember back in the days when I used to work with EKS, having an EKS cluster spin up takes — I don't know if it's gotten better, but back when I was trying to demo it, it would take up to maybe 20 or 30 minutes for an EKS cluster to get into ready status before you could even use it.
A: Another question is: could you connect a physical node dedicated to a vCluster? I guess that would come down to: can you dedicate nodes to a single namespace — because a vCluster is basically running in a single namespace. So if you have a node group that's dedicated to a single namespace, then I would see that being easily achievable.
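One hedged way to get that namespace-to-node pinning on the host cluster is the PodNodeSelector admission plugin's namespace annotation — this assumes the plugin is enabled on the API server and that the node label exists; it isn't something vCluster sets up for you:

```bash
# Label the dedicated nodes, then pin everything scheduled into the vCluster's
# host namespace onto them via the PodNodeSelector namespace annotation.
kubectl label node ip-10-0-1-23.ec2.internal tenant=cluster-a
kubectl annotate namespace cluster-a \
  scheduler.alpha.kubernetes.io/node-selector=tenant=cluster-a
```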
B: I'm sure many people are aware of it, but PSPs have been deprecated, right — in 1.21 — and it looks like they're supposed to go away in 1.25, and they've been replaced with Pod Security admission, which is beta in 1.23.
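For anyone who hasn't looked at it yet, Pod Security admission is driven by namespace labels; a minimal example (the namespace name is just illustrative):

```bash
# Enforce the "restricted" Pod Security Standard on a namespace, and warn/audit
# against the same level so violations are visible before they block anything.
kubectl label namespace cluster-a \
  pod-security.kubernetes.io/enforce=restricted \
  pod-security.kubernetes.io/warn=restricted \
  pod-security.kubernetes.io/audit=restricted
```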
B: So, you know, anyone that's looking into that — I'd encourage you to look at Pod Security admission. It looks like a bit more flexible a model than PSPs were. I know we'll have to do a lot of work on our side to migrate over to it, but I think, in the end, it'll be a good thing for the community.
B: Yeah — Jerry, you made a comment about big clusters abusing API servers. We actually had, for the first time at least on this team that I've seen, a case where our control plane was actually undersized. We were killing the API servers — it was a system kill, because there was too much load on it for the number of nodes and resources on the cluster. That was our first time kind of seeing that for what we do, so we ended up bumping up the control plane nodes.
A: Yeah, that's a good point that Rich just brought up — having an API server per namespace, that's kind of the idea here, right. So I think, yeah, an API server per namespace could also help distribute the load of the API, because if you don't have tenants all just clobbering the main cluster's kube API server, that would spread the load between the different API servers. Because, again, the way the kube API server works is that it selects a leader, and so it's a lot harder to horizontally scale.
A: Yeah, I don't know how much I can talk about it, but one of the projects that we are working on is doing federation of different clusters. So I think this could tie neatly into that — for example, making sure that all of your clusters have the same OPA policies. This might actually make that simpler, because you wouldn't have to replicate your OPA policy into many different actual clusters; the OPA policy would still live there in the host cluster, and the tenant cluster...
B: Yeah, go ahead — yes: "maybe off topic, but I'm working on offloading network and security functions as container pods onto DPUs — data processing units. Do you have any pointers or thoughts on whether Pod Security admission may or may not work in that context, and then federated security agents slash kubelets?" I don't, yeah — I don't really have any pointers or thoughts on that. I don't think we've really gotten that far ahead with looking at Pod Security admission yet.
A: One of the use cases, yep, yeah. I don't think we actually deal with many machine learning workloads today, even in our platforms, do we?
A: Yeah, I mean, I could definitely see us using this, just because it's so easy — I think that's the main thing. A lot of the other virtual cluster tools that I have seen out there are nowhere near as easy to use, because they try to do more heavyweight Kubernetes — like actually running a Kubernetes cluster with all the different components.
B: Rich asked: do you guys spin up a lot of vClusters, like in a short period of time? I don't know if that was directed at us or not. I think we are just demoing vCluster for some internal stuff; we're not currently making use of it in any pipelines.
A: Yeah, right now I think we'd like to use it, I guess, especially for CI and dev environments, but we need to think a little bit more — within our team specifically — about the security certifications that we have to maintain and how we would apply those different security standards.
A: You know, like HIPAA and PCI, to something like vClusters. And I'm sure you can do it — because it's open source, and it's so flexible with that syncer while still being able to apply the host policies, it's totally doable. But it needs a lot of research before we can get to a phase where we can say: hey, yeah, we can totally run this in a PCI environment or a HIPAA environment.
A: Yeah, Rich, I think you would be a good person to ask: what's the best way — or have you guys thought about how — to run kiam on a vCluster? Is the recommended way to do it to actually install kiam on the virtual cluster, or would you just stick with running kiam on the host cluster itself?
A: Figuring out how to run — TGI, sorry — vClusters in CI with kiam, I think that's one of the next things that I need to figure out how to do.
A: Yeah, that'd be awesome. Just all around, figuring out how to administer vClusters is really what we're going to be focusing on next and diving deeper into — that could even be a next TGI Kubernetes presentation once we get deeper into those topics. But this is really just an introductory "what is vCluster?"
A: Docker.io is limiting a lot of the pulls, so you need to make sure you either have a dedicated repository for your images or have an enterprise version of Docker that lets you hit it a little more frequently. Then the other one was the quota we ran into — I think there we didn't run out of ephemeral space; there was some kind of quota that was hit there.
A: Awesome. Well, thank you, everyone, for joining us, and yeah — we'll definitely keep exploring this, and if we get to a point where we're running this, probably more like a staging or production environment with kiam running on top, I think we'll come back, and maybe we'll even have Rich.
B: Good — let's go ahead and close it up. Thanks, everyone, for spending an hour of your Friday afternoon or evening or morning, or whatever part of the world you are in. We love it, and thanks for — yeah, thanks for spending time with us.