From YouTube: 20201201 Kubernetes Working Group for Multi-tenancy
Description
- [christopherhein] New Provider Repo (slowly moving VirtualCluster to here)
-- https://sigs.k8s.io/cluster-api-provider-nested
- Kiosk, Loft, and wg-multitenancy
-- Discuss projects from Loft and its OSS components and how they compare with wg-multitenancy efforts.
- [Daniel Sover] Gauge interest on operator multitenancy update
A
Cool — hey everybody, and welcome to the Kubernetes Working Group for Multi-tenancy. This is our regularly scheduled Tuesday, December 1st, 2020 meeting, and our agenda today: Chris is going to go over the new provider repo, and then Adrian is going to lead a conversation about kiosk, loft, and some of the multi-tenancy working group projects, just to understand how we may or may not align there, and then Daniel was going to chat about an operator multi-tenancy update. Cool, so I'll hand it over to Chris — and Chris, I will make you a presenter.
B
Cool — you'll just get to look at me. I don't actually have anything to share, other than: you can click on that link in the Google doc. I have started to kind of pivot a little bit of the work that we're doing. We're still going back and forth between this repo and the incubator VirtualCluster project, but long term the goals are to move most of that code base — or a lot of that code base — out of the multi-tenancy repo, and we're re-platforming the actual API.
B
I talked about this a little bit ago — at least a couple months ago — that we were looking into doing this, but the idea now is that we have a cluster-api-provider-nested repo, and we're rebuilding the base API layer to do provisioning of control planes that are nested within clusters. The end goal being what we get out of VirtualCluster, with syncing resources between, in CAPI terms, a management cluster and the actual tenant control planes. And yeah.
B
C
What kind of — who's — so it's going to be SIG Cluster Lifecycle that's the long-term owner of this now, is that correct?
B
Yeah — so where CAPI is right now, it hasn't gone through all of those. Every CAPI provider right now is under a v1alpha1-through-v1alpha4 API; we're in the process of redefining, or defining, what v1alpha4 is right now. Long-term plans are for API reviews and security reviews — we're going to be doing that on the way to beta, which will go across all the providers at that point. So for right now, there hasn't been one of those.
B
We have a new design doc that we're getting the folks from CAPI involved in reviewing, which brings up a new way of deploying these. If you're familiar with what VirtualCluster was doing around the ClusterVersion: the idea now is that each individual component has its own CR and can individually be mutated — so for etcd, the controller manager, and the kube-apiserver there are individual CRs — and then you have an orchestration CR that goes and deploys all of those resources.
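As a rough sketch of what that per-component split could look like — the kind names follow the cluster-api-provider-nested alpha CRDs, but treat the exact group, version, and fields here as illustrative assumptions rather than the final API:

```yaml
# Orchestration CR: references the individual component CRs below
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: NestedControlPlane
metadata:
  name: tenant-a
spec:
  etcdRef:
    name: tenant-a-etcd
  apiServerRef:
    name: tenant-a-apiserver
  controllerManagerRef:
    name: tenant-a-controller-manager
---
# Each component is its own CR and can be mutated independently
apiVersion: controlplane.cluster.x-k8s.io/v1alpha4
kind: NestedEtcd
metadata:
  name: tenant-a-etcd
spec:
  replicas: 1
```

The point of the split is that a tenant control plane's etcd, API server, and controller manager can each be scaled or reconfigured on their own, while the orchestration CR owns their overall lifecycle.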
D
B
C
Cool — and is this going to cause a lot of disruption for, like, your customers, your users? I remember that you got called out by Apple during their QCon keynote — they're using virtual clusters, right?
E
B
Yeah, we did get a call-out, yeah. We didn't get to change that to say CAPN before the actual presentation was recorded and all that, but yeah. In essence, the idea is: we're currently working on VirtualCluster internally, and we're going to be moving over and pivoting to using all of the new CAPN stuff as we kind of move forward. Because it's an alpha API, we kind of expect that there's going to be quite a lot of changes over time.
B
There's more people getting involved — and this is kind of, this is exactly that: another call-out of, hey, if you're interested and you want to help push this forward in new and exciting ways, please get involved. We have a couple folks that have been talking to us from IBM, and a couple folks from Red Hat as well, that have reached out with a couple questions.
E
A
C
Yeah, so I've invited — is it Lucas or Lukash? How do I pronounce your name? — It's Lucas. — Hey Lucas! And I see that Fabian's here as well. So yes, thank you to both of you for joining us. Sorry, my video's not working — I don't know if you missed that little part at the beginning. Yeah, so I was looking —
C
I've been asked a couple of times about what's the difference between the hierarchy controller and virtual clusters and loft, and I couldn't answer that question, although I had a couple of ideas. So I thought I'd invite you folks on to talk about what you are up to at loft with kiosk, and how that relates to, I think, all of the projects — like DevSpace.
C
I don't fully understand how all of your projects fit together. Can we have a chat about what you're doing, and then talk about whether it makes sense for us to join forces in some way — either share lessons, or even just start sharing some infrastructure? Because, as you heard, the working group version of virtual clusters is now going into the Cluster Lifecycle thing, so that might be something you could take advantage of, if you wanted to. So yeah, do you have a presentation or something that you want to offer?
A
Also, just as a quick note, since this has come up on a previous call: please do note that this is an open source meeting, this meeting's being recorded, and we do post the recording to YouTube. So please don't deliver any state secrets during this call with regard to product.
F
Makes sense, yeah. I put together a couple of slides — let me share my screen. — Thank you, that would be great.
F
All right, perfect. First of all, thank you so much for reaching out, Adrian, and for inviting Fabian and me to the call. Happy to talk a little bit about what we're doing at loft, and, you know, try to find out what the differences might be to what virtual clusters and the multi-tenancy group in general is working on — with loft being a commercial product.
F
What we're essentially trying to do is offer kind of a basic building block for companies to build their internal Kubernetes platform on, rather than having to build everything from scratch. So we're trying to be a solution that you can adopt in days rather than months or years — one that is extendable, but doesn't require everybody to write, like, 50 operators before they can even give developers any access to Kubernetes.
F
This is kind of a high-level architecture of where loft sits. So essentially, loft is an API gateway that connects to external authentication providers for authentication — that can be GitLab or GitHub, but it can also be LDAP or any other kind of enterprise authentication system; we're essentially just integrating with Dex here to make authentication possible. And then users that are authenticated with loft get access to connected clusters.
F
Users have accounts in these connected clusters, which essentially define their permissions — what they can do in the clusters — because loft should be, you know, a self-service system for users to create namespaces (and I think that's kind of a touch point with the hierarchical namespace controller), and then also to spin up virtual clusters in case they need more power than a pure, basic namespace in Kubernetes.
F
If they want to work with their own CRDs and things like that, which might not be possible in a single namespace — in that case, you can spin up a virtual cluster. We do provide a UI and a CLI, but they're essentially just wrappers for kubectl, so everything basically goes to that API server gateway.
C
Sorry, if you just go back one — when you say connect to cluster A and B, are those real clusters, virtual clusters, or either?
F
Those are real clusters, but you can also connect virtual clusters. Technically, you can also create virtual clusters in virtual clusters, and those kinds of things — you can get as crazy as you want to — but typically a connected cluster is, like, a GKE cluster or an EKS cluster, or something like that.
C
F
Right — I mean, generally, with the CLI, when you're running the loft create virtual cluster command, it spins up that virtual cluster and defines a kube context on your local machine for that particular virtual cluster.
F
We do have a GitHub repository with instructions on how to deploy a virtual cluster outside of loft and set up the kube context manually, so that would also be possible.
C
F
Right — so the easiest setup: if you're, like, a 10-person startup just starting out, not a large organization, the easiest setup is you install loft to a single cluster, and then you connect that same cluster.
G
F
Right, exactly — so we have our own proprietary implementation for virtual clusters.
F
I think it is, I would say, closer to what Darren Shepherd did about one and a half years ago with this suggestion of k3s-based virtual clusters — so it's a bit closer to that in terms of the implementation. But the goal is essentially the same: to spin up virtual clusters that run inside, essentially, the namespaces of a cluster.
G
B
All right — does it use the same mechanisms there, in terms of what the k3v project was doing, where all resources end up in a single namespace?
F
Exactly — I think that is the biggest architectural difference between what the multi-tenancy SIG currently has in the repo and what we do internally. That was just a decision for us because of the way we built loft. So essentially, loft in the beginning — our first, like, private beta product — was essentially just provisioning namespaces, and then the request came up:
F
how can we provide users with the capabilities to work on controllers, install CRDs, install something from — now Artifact Hub, previously Helm Hub — that ships CRDs, right? You want to have these capabilities and you want to give them to your users. But we had a self-service namespace provisioning system, and the idea was: how can we extend that in a way that is very lightweight and allows users to deploy those virtual clusters themselves, within those namespaces?
F
So it was important to give users that power to spin up a virtual cluster, rather than having to make changes to the underlying cluster. That's essentially where we started working on our own implementation.
C
F
Yes, exactly — yeah, okay, all right. And if we go one level deeper in this diagram: essentially, if you have a connected cluster — let's start on the very right-hand side, in that connected cluster, and that's how the open source projects that we work on fit together — the first thing that loft does when you connect the cluster is it installs kiosk into that cluster, and then kiosk ships with a couple of basic resources.
F
I think that is closest to the tenant controller, if I'm not mistaken, in the multi-tenancy repository — where we essentially add CRDs. We name them a little bit differently, so we don't call it a tenant;
F
for example, we call it an account, an account quota, and those kinds of things. But essentially that's what kiosk extends that connected cluster with, so that we have the basics for multi-tenancy in that particular connected cluster. And then the loft API server — so the loft instance in the management cluster, which may be a connected cluster as well — has CRDs of its own; for example, connected cluster is one of these resources.
F
So each connected cluster is a CRD itself, and that lives in the management cluster; and then there's user and team, for example — those are all loft CRDs that live in this management cluster. And then we have an extension API server for certain things that kind of helps you with the list problem, because obviously you want users to be able to
F
list their namespaces, right? But that is hard to reflect with RBAC, so we essentially have a resource called space, which abstracts a little bit from namespaces — but Fabian can talk about that in just a couple of seconds.
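To make the idea concrete, a kiosk-style space is a cluster-scoped virtual object that maps one-to-one to a namespace. A minimal sketch — field names here follow the kiosk project's tenancy.kiosk.sh group, but treat the exact schema as an illustrative assumption:

```yaml
# A space is a virtual, cluster-scoped view onto a namespace.
# Listing spaces returns only the namespaces this user may access,
# which plain RBAC on the built-in Namespace kind cannot express.
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Space
metadata:
  name: team-a-dev          # the backing namespace gets the same name
spec:
  account: team-a           # the kiosk account that owns this space
```

The extension API server answers `list` requests for spaces by filtering down to what the requesting user is allowed to see, rather than granting cluster-wide list permission on namespaces.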
C
That's similar to what OpenShift does, right? Basically, you have to maintain the reverse mapping.
F
Same idea as, like, OpenShift has projects — so space is very comparable to a project. — Okay, thanks. — All right. And because everything is essentially a CRD, it's pretty easy to do GitOps with this as well. We do provide a CLI, just for convenience reasons — if you don't want to build your own CLI, if you're a smaller team, or you want to integrate our CLI into whatever CLI you're working on, it should be pretty straightforward.
F
We do provide a UI as well, which is basically just an abstraction for kubectl commands. So when you list clusters, it essentially runs kubectl get clusters — clusters being a CRD of loft. And, as you can see here in the screenshot, if we dive into one of these clusters, you can also see which Helm releases are installed — like nginx here, or cert-manager, or whatever is installed in your cluster — and then kiosk, being one of these Helm charts that by default is installed into each connected
F
cluster, is obviously also visible for each cluster. And I think Fabian is going to talk in a little bit more detail about what kiosk is, because — as you mentioned before, this is an open source working group — kiosk is really entirely open source, and usable apart from loft. So you can just install it into any cluster with Helm and get those multi-tenancy
F
basic resources without even using our commercial product. Yeah — so, I mean, Lucas already mentioned it: kiosk is basically some sort of extension for Kubernetes in terms of multi-tenancy. When we initially built kiosk, we already knew that loft would be a product on top of it, but we wanted to have the basic core open source. And we looked at what you and the multi-tenancy group are doing, and we really liked the ideas, and we needed something like the tenant controller.
F
But we had certain requirements which we felt were not so easily doable with the tenant controller, which is why we ultimately decided to build our own open source project, which we could then build loft upon. Basically, kiosk has this basic idea of an account — similar to how the tenant controller has a tenant,
F
as Lucas already mentioned. And there are basically two big components which kiosk ultimately is implemented on. The first thing is that kiosk starts its own Kubernetes API server for certain resources, which are all virtual — in this case, you can see them at the top:
F
that's the tenancy.kiosk.sh API group, which mostly contains, basically, views that reflect other resources and work around the list problem — kind of like the project in OpenShift. And these are completely optional: you don't have to use them, but it just makes it easier for users to query certain resources they have access to. And then we have space, which is one of the basic building blocks,
F
I would say, that kiosk builds upon — because space is a very central resource, which is basically a one-to-one relation to a namespace. But the actual API server does a little bit more in the background when you create a space: when you create a space through the kiosk API server, it also creates, based on the account definition that you have, certain resources in that space as well — for example, a role binding, or certain pod network policies, and so on.
F
These resources are basically defined in so-called templates. So the underlying idea is a little bit similar to what hierarchical namespaces are doing, but it's ultimately a different approach — we try to accomplish the same goal, but we do it in a somewhat different way — where you can define, in the template,
F
basically resources; and then you have a template instance, which instantiates these resources — a little bit like a Helm chart and a Helm release. In this case, you could even define a Helm chart in the template, and a template instance would deploy that Helm chart into a namespace upon space creation. Then you define an account. An account is basically bound to a user or a group in Kubernetes, and within the account you define limits for this account as well.
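A rough sketch of that account shape — the kind and fields are modeled on the kiosk config.kiosk.sh resources, but treat the exact schema and the example identity as illustrative assumptions:

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: team-a
spec:
  # Which authenticated identities own this account (plain RBAC subjects)
  subjects:
  - kind: User
    apiGroup: rbac.authorization.k8s.io
    name: lucas@example.com        # hypothetical user
  space:
    limit: 10                      # at most 10 spaces for this account
    templateInstances:             # templates stamped into every new space
    - spec:
        template: default-network-policy
```

The subjects list ties the account to users, groups, or service accounts, while the space section carries both the creation limit and the templates that get instantiated in each new space.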
F
So, for example, you say the user can only create 10 spaces, and then the API server makes sure that the user does not exceed these spaces when it is contacted through this API server. It also creates the templates that are specified in the account automatically upon space creation, and makes sure the user only gets access to the namespace
F
after all these templates are deployed into the namespace itself. Another thing we needed for loft was account quotas — that's very comparable to what OpenShift has, which is called, I think, ClusterResourceQuota.
F
So here you basically say that a certain account can use a certain amount of resources — like pods, for example — across all of these namespaces together. So let's say a user, or an account, has three namespaces, in each of which two pods are running, and you define an account quota which limits this user to six pods: when you then try to create a pod in any of these namespaces, the request would get rejected, because it would exceed the quota limit.
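That aggregated limit could be sketched like this — again assuming the kiosk config.kiosk.sh group, with field names treated as illustrative:

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: team-a-quota
spec:
  account: team-a        # quota is summed across ALL of this account's spaces
  quota:
    hard:
      pods: "6"          # 3 namespaces x 2 pods each already hits the limit
      limits.cpu: "4"
```

Unlike a per-namespace ResourceQuota, the hard limits here apply to the sum of usage over every namespace the account owns.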
F
So that's basically the idea of an account quota. And there's also account quota set, which is not implemented yet, but the idea was that we could automatically deploy account quotas for accounts matching a certain label selector.
F
Could you switch the slides? Thanks. So here you see the relationship between those resources in a little bit more detail. Basically, all these resources are just building blocks you can build upon, but you don't have to use them — it's kind of non-invasive, and kiosk integrates with already-existing namespaces and already-existing resource quotas and so on. They're just optional things that we thought are very handy when it comes to implementing multi-tenancy within a Kubernetes cluster.
F
One quick sentence regarding this diagram: essentially, we're looking at two different roles, and that's what I tried to outline with this diagram. Typically, you have the cluster admin, and then you have an account user. The cluster admin is the person that essentially has admin permission on the connected clusters — that's the person that configures the accounts, sets the account
F
quotas, in the future will also manage account quota sets (though, as Fabian mentioned, that's not implemented yet), and also defines the templates. So let's say you have a template for a network policy, or for certain resource quotas, or things like that:
F
you can also enforce these templates inside the account — so inside the account you can essentially list which templates should be automatically instantiated in each space that a user creates. And then you have the user role, which is essentially a user, a Kubernetes group, or a service account which is now using this account to authenticate with the cluster, and gets the permission to list, create, and delete spaces — which are essentially just a representation, an abstraction, of a namespace — which they then work on with regular
F
pods, deployments, etc. So it's really just about creating and managing namespaces, and creating that abstraction to build a self-service system for
F
them. All right — if there are no questions regarding kiosk, I have a couple more slides regarding virtual cluster and DevSpace, just to outline what the differences are and how it all fits together.
C
I've got a question or two on this before we go back to the virtual cluster — so, thanks so much, this is super useful. Can an account user — like a user or group or service account — be a member of multiple kiosk accounts at the same time, or is it one-to-many, basically?
F
C
Label — okay. And I think the only other question I had, then, was about templates. So what does a template look like? I take it it's a Kubernetes object that basically just has free-form YAML inside it — is that roughly correct?
F
Yeah, exactly. So in the template you have two options: you can either define, like, a Helm chart, which has specific Helm-related options such as chart name, chart repository, and so on; or you have this kind of free-text YAML part, where you can define an array of YAMLs that specifies other Kubernetes resources that should be deployed inside this namespace.
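A sketch of those two template styles side by side — assuming the kiosk config.kiosk.sh group, with field names treated as illustrative:

```yaml
# Option 1: the template wraps a Helm chart
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: redis-chart
resources:
  helm:
    chart:
      name: redis
      repoUrl: https://charts.bitnami.com/bitnami
---
# Option 2: the template carries an array of raw manifests
apiVersion: config.kiosk.sh/v1alpha1
kind: Template
metadata:
  name: default-network-policy
resources:
  manifests:
  - apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: deny-all-ingress
    spec:
      podSelector: {}
      policyTypes: ["Ingress"]
```

Either way, a template instance then stamps these resources into a concrete namespace, similar to how a Helm release instantiates a chart.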
H
Yes, okay — so this is all open source, so I can just go look at this? — Yeah, yeah, sure. This is completely open; all of these things you can just look up.
I
F
I
Oh okay — so, I think in this figure you say users can operate inside the namespace. So can a user actually access the namespace, or are they not even allowed to access the namespace?
F
Yeah, they can actually also access the underlying namespace. But — so the main difference, the reason why users do not directly create namespaces: I mean, they directly access namespaces, but they do not directly create namespaces; they create spaces. Because during a space creation, we have the possibility to do extra things,
F
while we cannot intercept a namespace creation the user does directly — and that's basically the reasoning behind this. Because then we can deploy the templates, make sure the user can only access the namespace after all the templates were deployed into the namespace and everything is set up correctly, and only then give the user the rights to actually access this namespace.
I
Oh, I see — so people still can access the namespace through the API, but at least the namespace list operation is forbidden, right? So —
F
The API extension server will basically check, through RBAC, which namespaces you are allowed to view or get, and then just return those — instead of the usual Kubernetes way of asking "am I allowed to list all namespaces, or none?" So we kind of work around this problem by implementing our own list operation for spaces.
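The per-namespace permission that check relies on can be granted with plain RBAC. For example, here is a sketch of a role binding that lets one (hypothetical) user `get` a single namespace — a spaces-style list endpoint can then filter by running this same RBAC check per namespace, instead of requiring cluster-wide "list namespaces" permission:

```yaml
# Grants lucas@example.com "get" on the namespace team-a-dev only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: space-viewer
  namespace: team-a-dev
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: space-viewer
  namespace: team-a-dev
subjects:
- kind: User
  apiGroup: rbac.authorization.k8s.io
  name: lucas@example.com
roleRef:
  kind: Role
  apiGroup: rbac.authorization.k8s.io
  name: space-viewer
```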
I
F
Just to add another example regarding this, because I think that's probably the most complicated part of connecting kiosk with the question that Adrian asked, regarding how this fits together with the accounts. So essentially it works like this: the user already owns an account — it's already connected to an account — and the account essentially has, just like in RBAC, a subject list.
F
So if my authenticated user — whoever I might be; I'm authenticated with a token towards the API server, as a service account or a user — accesses the Kubernetes API server, and I'm listed in this account, I'm part of this account. And if I now want to create a namespace, and afterwards deploy something to that namespace and do whatever I want within that namespace,
F
I would actually create the space resource. So I would not run kubectl create namespace; I would run create space — or, to be more accurate, kubectl apply. This space resource then has a field, account, which references which account I want to be using, and then we can essentially check: is the user that runs this HTTP request — this API server request — part of this account, and do we allow this space to be created? The space itself is not persisted; it's a completely virtual object.
F
What we essentially do is we then create this namespace. We enforce the template instances which are defined for this account — to limit this account. So, for example, the network policy that has been defined in a template and referenced in this account as a mandatory template: that would be instantiated first, inside this namespace, and then we would also add RBAC to that namespace, etc.,
F
to essentially give this user — or all the subjects that are part of this account — access to this namespace, to then create deployments, services, whatever the admin has configured. And then — this was what Fabian mentioned earlier — since you defined the account in this virtual resource —
F
D
F
One small correction: finding out which account can access which namespace is completely done through RBAC. So you basically check — since we have all the roles and role bindings and cluster roles and cluster role bindings in the cache anyway —
F
we basically just do the Kubernetes check of whether I can get the namespace directly. Usually, for example, when you have a role and a role binding in the namespace that allows you to get namespaces, you essentially get the right to get this specific namespace, and that's what we are checking there: whether you are allowed to directly get the namespace, regardless of the label. So the label doesn't really matter for that — it's just pure role-based access control.
I
F
Yes, exactly — the account quota selects the namespaces by the label here.
I
Oh okay — so there's no actual dividing of the quota across multiple namespaces? Or do you have that functionality, or do you just, every time, check that the sum of the resources of all the namespaces cannot go over the quota — but there is no detailed division of the quota across multiple namespaces? Am I correct?
F
Sorry, I'm not sure —
I
F
So if you want to have different quotas in each namespace, you create a resource quota in that namespace. The account quota is for the use case where you want to limit resources as a sum over all the namespaces the account owns.
F
I
F
Kind of, yeah — but you don't need the resource quota, and it will not create one; you can use the account quota without a resource quota. The account quota does not create resource quotas or anything in the background.
H
Exactly, yeah — you check the sum of resources across the namespaces that belong to an account. Yes.
I
F
All right — are there any more questions regarding this diagram, or kiosk in general?
G
F
Then I'll proceed with virtual cluster real quick. So essentially, this is what we're trying to do — and this is an actual hotel in Amsterdam, by the way, which consists of 70 stacked houses; it's pretty crazy. But I guess you're already familiar with virtual clusters — we're all trying to work on this crazy mission here. It's essentially about creating the single-tenant experience, but inside, or on top of, a multi-tenant Kubernetes cluster.
F
That's essentially what we're trying to do: give everybody their own little house where they can do whatever they want, but then have that in one combined multi-tenant architecture. And in loft, that essentially works like this — and, as I said, this is probably the biggest difference here: we take a Kubernetes cluster and have a controller running inside that Kubernetes cluster.
F
Rather than having to do that on the cluster level, users create their virtual clusters inside their namespaces, and the virtual cluster controller instantiates them — so it spins up the API server, etc. And since we're actually using the k3s API server here — which, as you know, is a certified Kubernetes distribution — it behaves like any other Kubernetes cluster as well.
F
But, as I said before, you can also set it up differently — essentially just deploy the virtual cluster directly, without going through the API server, but that is more manual work to do. And then, again, the API is capable of doing this for multiple underlying clusters. So, as a user, I get a kube context for each namespace and for each virtual cluster that I create, so it doesn't really matter to me what the underlying cluster infrastructure is.
F
I just work with my namespaces and with my virtual clusters; I switch my context and then get access to a new virtual cluster or a new namespace, depending on the default namespace configured in that kube context, and then the API gateway essentially routes requests to the connected clusters in loft. And then, last but not least — since you also asked about DevSpace, Adrian —
F
I just put another slide in here regarding DevSpace. So DevSpace was essentially the tool that we started out with in 2018 — I think it was, like, developer tool number three or so created for Kubernetes; by now we're probably at, like, 50 or something like that open source tools that you can use for Kubernetes-based development. The idea was essentially to replace docker-compose and make the switch from Docker-based development towards more Kubernetes-based development.
F
In that regard, it is similar to the two projects that were already in the market when we got started. When we started in 2018, there were two projects out there already: that was Draft, which I think spun out of the Azure team at Microsoft
F
somehow, and then Skaffold, which is, I think, connected with Google. Both tools had essentially a similar idea that we're following as well: the developer is assumed to have access to a Kubernetes cluster — whether that be minikube or a GKE cluster or whatever — and then can use a CLI-only tool, so no server-side component required, to kick off something like a CI/CD pipeline which deploys an application to a namespace in Kubernetes,
F
and then starts port forwarding and log streaming to essentially get hold of that application while it runs in Kubernetes. So that was essentially the state of the art when we got started with DevSpace. The thing that bothered us when we saw that was that the idea, at least in 2018, was: whenever there's a file change, or whenever you want to iterate on a change to your application,
F
you had to rerun this whole local pipeline. So I think we were the first tool that shipped this bottom part here, which was essentially an idea of how to solve this kind of hot-reloading experience. Because in your local development environment, on your local laptop, you're just used to re-running your applications without having to rebuild images, tag them, push them to a registry, et cetera — that's a lot of steps involved to instantiate something in Kubernetes.
F
So we wanted to have it closer to the local experience, where you just change a file and run your go run command again, or where you change a file and you have something like a hot reloader — like nodemon for Node.js, or, I think, Spring Boot and others in Java have hot reloaders as well. So we thought: how can we achieve this? And the way we did that with DevSpace is essentially in this pipeline:
F
in your DevSpace configuration file, you can make certain overrides — for example, to the Dockerfile. You can say: please append those five instructions to the Dockerfile; please only build to the build stage, rather than to my production stage, which is only a binary in a distroless image —
F
What am I supposed to do there as a developer? So those kinds of things you can configure in DevSpace, so that it only builds to your build stage and then installs a hot reloader, or installs a remote debugger, and those kinds of things. And now you have an application running inside a namespace, with a container that is more powerful than, you know,
F
maybe your production image, which is very stripped down to the essential application. And then we establish a file synchronization between the local file system and the container file system.
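The Dockerfile override described above can be sketched roughly as a devspace.yaml fragment. This is an illustrative sketch only: the image name is a placeholder, and the exact field names should be checked against the DevSpace configuration reference for the version in use.

```yaml
# Illustrative devspace.yaml fragment (field names approximate the
# DevSpace config schema; verify against the DevSpace docs).
# Builds only up to the Dockerfile's intermediate "build" stage for
# development, instead of the stripped-down production stage.
images:
  app:
    image: registry.example.com/team/app   # placeholder image name
    build:
      docker:
        options:
          target: build   # stop at the multi-stage "build" stage
```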
F
We essentially do that by injecting a little binary inside the container, and with that binary we establish a connection between the local file system and the remote file system. So we're watching both file systems, essentially with file watchers; we also run over the tree of files for the particular folders that you configure. And whenever we see a file change, depending on your configuration, we sync the file changes from your local file system up to the container file system, and vice versa,
F
If
you,
if
you
like
to
configure
this,
and
because
we
do
that
directly
inside
the
container,
with
this
binary
that
we're
injecting
it
doesn't
really
matter
to
us,
if
you're
underlying
you
know,
folder
that
you're
synchronizing
to
is
in
is
ephemeral,
storage
or,
if
you're,
using
a
persistent
volume
that
you
mounted
inside
a
container,
it
doesn't
really
matter
to
death
space.
As
long
as
we
can
inject
that
binary,
so
we're
essentially
having
you
know,
read,
write
permissions
on
on
the
file
system,
which
you
can
pretty
easily.
F
You can pretty easily change that by changing the pipeline, adding things to your Dockerfile, so that everything that typically security people would not want you to do in production, like, you know, running your containers as root and things like that, right, makes it easier for developers; but it's something you probably don't want in production. With that, you can have a hot reloader running inside your container that essentially restarts your application, and then you really get this great experience where you change a line of code in your local IDE.
F
You have DevSpace running in the terminal of your IDE, right? And then you see the container restarting, you see the log output, and you refresh your browser or whatever you're doing, and you essentially get a really great experience of how to work inside a Kubernetes cluster, with the capability to connect to services that may run in other namespaces.
F
Like a shared Kafka message queue or things like that, or a shared large database, or anything that you can have in GKE or AWS which may be connected to your Kubernetes cluster. And then Loft essentially connects to DevSpace in a way that Loft may be the provider for you to, you know, spin up your namespaces and spin up your virtual clusters, which you then use to do your development work with DevSpace. So that's essentially how it fits together.
C
I know that we've got one more thing on the agenda. I have more questions to ask; I'd love to go on and talk a little bit more about how we could collaborate in the future, but we might be out of time for that. Daniel, how much time do you need for your item?
J
Yeah, I just need a minute or two. You can definitely just go ahead with additional questions.
C
Okay, sure. I'll cut it off in eight minutes, so that you can have the last word. So, yeah, I think my question would be: Fei and I were actually just talking about the Tenant CRD which, as you said, is kind of the most direct analog for kiosk, and so, yeah, I agree.
C
Those are kind of two alternatives, and I can see the differences between them. HNC is at a lower level, and I feel like in some ways it could operate at a lower level than kiosk; I could imagine kiosk being rebased on top of hierarchical namespaces. In another way, kiosk has a fairly separate model, like this whole system of accounts that is separate from namespaces.
C
I have actually considered adding something very similar to what you call templates. I have considered adding those to HNC, and even wrote a very quick design doc on it once upon a time, but at the moment we don't have anybody who really wants to go implement that, and we weren't rushing to do it ourselves.
C
I think the main thing is that if you were to rebase this on top of hierarchical namespaces, there would be a bunch of changes to how your customers use it, and that might not be worth the change for them right at this point. Is that how you're thinking as well, or have you thought about it?
F
We just briefly talked about that last week after you reached out. And I think, as you said correctly, hierarchical namespaces are closest to what we have in terms of templates and template instances, because it essentially allows you, you know, to have a parent namespace with certain network policies and restrictions that you want to apply to the child namespaces.
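For reference, creating such a child namespace with HNC looks roughly like the following; the group/version shown is the v1alpha2 API and may differ in other HNC releases.

```yaml
# HNC sketch: anchor a child namespace under a parent so that
# propagated objects (e.g. Roles, RoleBindings, and, depending on
# configuration, NetworkPolicies) flow down to it.
apiVersion: hnc.x-k8s.io/v1alpha2
kind: SubnamespaceAnchor
metadata:
  name: team-a-dev     # name of the child namespace to create
  namespace: team-a    # parent namespace the anchor lives in
```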
F
So what we are thinking about is essentially: we could extend kiosk in a way that the account gives you the option to either specify a template or specify a parent namespace, essentially for all spaces that this account creates. That would not be a breaking change for us in kiosk, but it would allow us to extend it to essentially incorporate hierarchical namespaces as well.
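A rough sketch of what such an extended Account might look like. Note that `parentNamespace` is a hypothetical field illustrating the idea discussed here, not part of kiosk's actual API, and the group/version may differ between kiosk releases.

```yaml
apiVersion: tenancy.kiosk.sh/v1alpha1
kind: Account
metadata:
  name: team-a
spec:
  subjects:                        # who owns the account
    - kind: User
      name: dev@example.com
      apiGroup: rbac.authorization.k8s.io
  # Hypothetical field: all spaces created by this account would
  # become HNC children of this parent namespace instead of being
  # stamped out from a template.
  parentNamespace: team-a
```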
C
And that also gives you a nice place to put that account quota, which right now is cluster-level and therefore quite difficult to control access to via RBAC. If you can put that account quota into a namespace itself, then it becomes much more feasible to limit access to it.
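As context for this point, a kiosk AccountQuota today is a cluster-scoped object along these lines (group/version and fields may differ between kiosk releases), which is what makes per-tenant RBAC on it awkward:

```yaml
apiVersion: config.kiosk.sh/v1alpha1
kind: AccountQuota
metadata:
  name: team-a-quota   # cluster-scoped: no namespace field
spec:
  account: team-a      # account whose spaces the quota aggregates over
  quota:
    hard:
      limits.cpu: "4"
      limits.memory: 8Gi
```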
F
We haven't discussed that yet, but that definitely sounds interesting. That might also solve the issue, because, as I said earlier, an account quota set is something we've been thinking about: essentially, how to create account quotas automatically once you're creating an account. We thought about an implementation for an AccountQuotaSet, similar to, essentially, you know, a ReplicaSet.
F
Essentially that was the analogy, right? But we're not fully happy with it yet, so that might be something to explore with hierarchical namespaces as well. That is definitely very interesting to look at.
C
Yeah, I think I'd love to continue this conversation afterwards. And as for VirtualCluster, I'm not too involved in that, but maybe Chris and Fei have thoughts. I don't know, how are you guys feeling? Chris and Fei are the leads; Fei has been my go-to for this.
C
I think Chris is pretty involved now as well. Any thoughts about whether it makes sense to have any kind of collaboration? Or, maybe another way to put it is, for the Loft folks: was the working group's VirtualCluster concept not ready when you started, when you needed one? Is that why you didn't use that one?
F
Yeah, I think that is pretty similar to hierarchical namespaces as well. When we started building Loft, that was essentially December last year, so that's like over a year ago, and I think both projects have probably gained a lot of maturity compared to when we first looked at them. And since we didn't know how much they would; I mean, we're a small startup, right? So we need a lot of flexibility. You know, a customer complains: hey,
F
this is not working, so we need to make a change right now. And that was essentially the thing where we said: okay, let's take a look at the option to create those projects ourselves, where we can essentially make changes very, very quickly, and then later on see if we can somehow merge that back, especially since kiosk is fully open source, and it was always clear that it would be open source. I mean, we even said that, you know, at some point,
F
if people like it and want us to put it in the CNCF or something like that, we'd love to do that as well. So we're really open to all of these things, but that was mainly the decision: both projects were very early when we looked at them, actually.
F
The same accounts for the tenant controller, right? So that was the main decision for us. But going back now, taking a closer look at all the projects again, and seeing where we can join forces on certain things, that is definitely something we would love to do.
I
Yeah, my current thought is: I think many of kiosk's component concepts are very similar to our tenant controller design. You know, the account is similar to the Tenant CRD, and the space is very close to the TenantNamespace CRD, and in addition to that they have the account quota stuff, which is definitely supplemental to the current tenant design. So I feel kiosk has done, I think, a better job than what the current tenant operator has been doing; that's my feeling. In terms of the virtual cluster, I see the differences now, and partly because you use k3s, I fully understand your choice; and indeed our project didn't come into motion until early this year. So for that, I feel maybe what Chris and I were trying to do may not fit; it depends on whether you think it fits your requirements. Because for your tenant master you use k3s, you probably don't have this tenant component management problem. And in terms of the syncing logic, I think it may depend on the use case whether people really use that or not, but our design was very much about trying to keep the Kubernetes API semantics as intact as possible. So my guess is that you guys will probably face some compatibility issues if you put all resources in one namespace. But if you have some thoughts about that, or you'd like to talk about it, or you think there is something in the upstream virtual cluster that can be enhanced for a certain use case, that's certainly welcome.
F
I think that discussion would probably require a couple more hours, but I'm definitely looking forward to having a follow-up call on that.
C
Why don't we follow up on Slack, and then maybe we can schedule some one-off meetings if we want to keep talking live? I just want to make sure there's enough time for Daniel. But thanks so much, Fabian and Lukas, this has been very informative; thanks for taking the time to come and present.
C
Thanks. Daniel, do you want to go ahead?
J
Yes, hi everyone. I just had a quick update. As you may recall, back in March I gave a presentation to the multi-tenancy working group around some of the work that is going on around multi-tenant operators.
J
I'm going to drop a link in the chat here to that presentation. We've been working a lot internally on how to grapple with some of the complexity around multi-tenant operators, and we've made some pretty big internal decisions around the path forward, and I was thinking about maybe coming back to the group
J
in light of some of these changes and doing a new presentation. I think the reason that multi-tenant operators are relevant is that multi-tenancy is desired since it lets you run different workloads, and since operators are some of the most complex types of workloads, they're relevant. So, yeah, I was just wondering if people are interested, and then I can put together some slides for a presentation outlining the change in strategy.
J
Interested? Great, so.
A
Well, I think that this is really relevant to some stuff we're all doing.
J
Yeah, I think it is, and it's interesting for us to gauge you guys' reactions based on what we're doing, because I think it actually integrates.
J
It may actually integrate better with some of the projects like the Loft project that was presented, and the virtual cluster stuff, versus what we had before with sort of operator groups, looking at it from a single-cluster perspective. But yeah, in that case, great, I will try to put together a presentation. I know Evan's on the call, so he and I will look at it and come back to you guys with something, maybe in the next couple of meetings.
A
Awesome, sounds really good. Thanks. Okay, thank you, everybody; this was a really great presentation. If you have topics for next week, or for two weeks from now, definitely put them on the agenda. It feels like there might be more to talk about with Loft and everything, so we definitely appreciate it. If people have questions and stuff, let's keep the conversation going. But cool, chat with y'all later.