From YouTube: 20200714 Kubernetes Working Group for Multi-Tenancy
Description
- Special announcement from the Kubernetes Steering Committee
- Discussion of Arktos, part 2
A
Okay, hi everybody, welcome to our regularly scheduled multi-tenancy working group. Today we have a pretty packed agenda. Lachie is here from the Steering Committee to give us an update on some changes, I think, to how they are overseeing the various SIGs and working groups. Then we are going to have a curated conversation about CRDs and multi-tenancy, and then I think we have the team on from our last meeting to take any questions about their multi-tenant Kubernetes solution. So, Lachie, if you'd like to take it away.
B
Wonderful. Thank you, Tasha. Hello, everybody in the multi-tenancy working group. My name is Lachlan Evensen and I'm representing Kubernetes Steering today. This conversation is mainly for the leads of this working group, but everybody can be involved in it. As you can see on the agenda, there was a mail sent out about a week ago introducing the concept of annual reports, and it's Steering's goal, as Kubernetes gets larger and larger, to create scalable ways to have a reporting process.
B
So we've introduced this concept of annual reports because previously all working groups and SIGs were required to go and provide quarterly updates at the community meeting. We're trying to figure out new ways to make this process more scalable and, as such, we've introduced annual reports. This is a way for the community to see how working groups and SIGs are doing, and for Steering to keep a finger on the pulse of where we can help assist these different teams and processes within the Kubernetes community.
B
The main point I wanted to get across today is: there is a link there to the mailing list. If you take a look at that link and go down to the bottom, you'll see the process as defined. All this to say: there's a template that you need to go through, and we're asking each working group to take a lead on getting that done within the next 30 to 60 days. We're just starting with working groups.
B
So I just wanted to initiate the conversation with this working group. The other thing is, I'm actually the liaison on behalf of Steering to this working group, so I'm here to help you through the process. I don't want to take too much time in your meeting, because you do have a packed agenda, but I wanted to introduce that concept and who needs to be involved in the annual reports process. We're asking the leads of the working group, obviously, but we don't want to limit it to that.
B
If I may, I added one more agenda item, and then I'll give you back the rest of the time. We're also going to send out a mail: in an effort to take a more active role in creating an inclusive community, we're asking the leads to attend the Linux Foundation unconscious bias training. We're asking every lead in the Kubernetes community, whether they are technical leads or leads of things like working groups or SIGs, to participate in that, so I've added a link there.
B
If you have already taken it, feel free to skip it. I know it's Sanjeev and Tasha who lead this working group; please go and attend that, because we want to make sure we're educating folks on how to use inclusive language with all their community members. So with that I'll take a moment to answer any questions, but I see this as an ongoing conversation, and if you need anything from Steering, feel free to ping me: I'm @lachie83 on Kubernetes Slack.
B
Absolutely. I did it recently as well, as part of being on Steering. It's fantastic: it takes about 30 minutes, it's completely online and digital, and it's a course designed around, even if you're speaking at events, how to use inclusive language and even confront your own unconscious bias. And I wouldn't limit it: anybody's welcome to take it. It's free; we're just requiring that the leads of the communities take it within 30 days.
A
Okay, so did anyone want to kick off that conversation? I think it was Rodolfo, but I don't see Rodolfo on the call right now. So if you want, we can just talk about Arktos, or I know a bunch of other people were chatting about multi-tenancy and CRDs on the mailing list. So if anyone wants to jump in with some thoughts, we can do that as well.
A
Yeah, it doesn't look like he's online, but I pinged him, so if I hear back we can always switch. Cool. So, let's see, the only other topic we had was: if people had questions about our presentation last week about Arktos. I know that we didn't actually finish the entire Arktos deck, so if people have any questions, or the Arktos team wants to finish their presentation, we've got time, if that's something you'd like to do.
F
So Daniel is asking in the chat: could anyone do an overview of the proposed multi-tenancy architecture project? So, Daniel, there isn't a single multi-tenancy architecture. We've got a few different projects that are sort of supported by this group, and the wikis have more details on these. It's not just one unified multi-tenancy architecture; there's two or three different projects.
E
Our multi-tenancy model is a virtual cluster model, which provides strong isolation among tenants, but the difference from the previous virtual cluster solutions you've seen in this working group is that our implementation has only one control plane. We do not deploy a separate control plane for each tenant, and for each tenant it's pretty transparent: they are still using the existing Kubernetes APIs, like in ordinary Kubernetes, to interact with and share this one physical cluster, and they can also manage their tenant users using the existing APIs.
E
For example, if a tenant admin wants to do some authorization work, he can still use ClusterRole, ClusterRoleBinding, Role, RoleBinding, or quota — all these existing, familiar API objects. But all these API objects are transparently scoped to the tenant space.
E
The first and key design principle in our design is to provide strong isolation and autonomy, which means a tenant admin can manage his own tenant without going to the cluster admin every time. There's also backward compatibility: all the APIs are backward compatible. The last one is manageability: we provide some cross-tenant mechanisms, like a CRD-sharing mechanism, to improve manageability. This is the overview.
E
The tenant object has a one-to-one mapping to the actual tenant space. When you create this object, the tenant controller watches it, initializes the tenant space, and creates all the default objects in the tenant space. When this object is deleted, the whole tenant space will also be deleted.
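The lifecycle just described — a tenant object whose creation initializes a tenant space with default objects, and whose deletion tears the whole space down — can be sketched as a reconcile loop. This is a minimal illustration, not Arktos code; the store, object names, and defaults are all assumptions:

```python
# Minimal sketch of a tenant controller reconcile loop, illustrating the
# one-to-one mapping between a tenant object and its tenant space.
# NOT Arktos code: names and default objects are illustrative only.

class FakeStore:
    """Stands in for the API server's storage of tenant-scoped objects."""
    def __init__(self):
        self.spaces = {}  # tenant name -> dict of default objects

def reconcile_tenant(store, tenant_name, exists):
    """Create the tenant space (with assumed defaults) when the tenant
    object appears; cascade-delete the space when the object is removed."""
    if exists and tenant_name not in store.spaces:
        store.spaces[tenant_name] = {
            "namespaces": ["default", "kube-system"],   # assumed defaults
            "rolebindings": ["tenant-admin"],
        }
    elif not exists and tenant_name in store.spaces:
        del store.spaces[tenant_name]  # whole space goes away with the tenant

store = FakeStore()
reconcile_tenant(store, "team-a", exists=True)   # tenant created
reconcile_tenant(store, "team-a", exists=False)  # tenant deleted -> space gone
```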
E
Since we introduced the tenant space, the internal URL of an object has changed. If we exposed this URL directly to customers, it would break backward compatibility, right? So we introduced a new feature called the short path. The idea is that a customer can still use the same relative URL to access a resource, but we transparently resolve it and change it to the internal full path. So here you can see there are two users from two different tenants.
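The short-path idea can be sketched in a few lines: a legacy-style request URL stays unchanged on the wire, and the server rewrites it to an internal full path that carries the tenant. The path layout below is an assumption for illustration, not the actual Arktos layout:

```python
# Sketch of "short path" resolution: a tenant-relative request URL is kept
# identical to today's Kubernetes URLs, and the server transparently rewrites
# it to an internal full path that includes the tenant segment.
# The exact path layout here is illustrative only.

def resolve_full_path(short_path, tenant):
    """Insert the tenant segment into a legacy-style API path."""
    assert short_path.startswith("/api/") or short_path.startswith("/apis/")
    _, prefix, rest = short_path.split("/", 2)  # "", "api"/"apis", remainder
    return f"/{prefix}/tenants/{tenant}/{rest}"

# Two users from two different tenants issue the same relative URL,
# but it resolves to two isolated internal paths.
p1 = resolve_full_path("/api/v1/namespaces/default/pods", "tenant-a")
p2 = resolve_full_path("/api/v1/namespaces/default/pods", "tenant-b")
```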
E
For access control, we extended the current authentication mechanism. We still support all the authenticators; it's just that for each authenticator we add a new field, called tenant, to the corresponding user info or token. So after the authentication step, for the incoming request we have the tenant information in addition to the original username and group information. For authorization, we also support all the existing authorizers, and actually there isn't much change: we add several rules to enforce the tenant.
E
That's the kind of tenant isolation rule that we added into the authorization. After this step, it's all the original authorization: the authorizer checks, like the RBAC configuration, do the original authorization check inside the tenant space. So it's not a big change; it's just some extra checks in front of the current authorization check.
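The two steps described — authentication attaching a tenant to the user info, then an isolation check running in front of the ordinary (e.g. RBAC) authorization — can be sketched like this. The field names and the "system" convention are assumptions, not the actual Arktos API:

```python
# Sketch of tenant-aware authorization: a tenant-isolation check runs before
# the original authorizer chain (e.g. RBAC), rejecting any request whose
# target tenant differs from the requester's tenant.
# Field names and the "system" tenant convention are assumptions.

def tenant_isolation_check(user, request):
    """Allow system users everywhere; confine regular users to their tenant."""
    if user.get("tenant") == "system":
        return True
    return user.get("tenant") == request.get("tenant")

def authorize(user, request, rbac_allows):
    """Tenant check first; the original RBAC decision applies inside the space."""
    if not tenant_isolation_check(user, request):
        return False
    return rbac_allows(user, request)

# Alice of tenant-a is allowed inside her own tenant space only,
# even when RBAC itself would say yes.
alice = {"name": "alice", "tenant": "tenant-a"}
ok = authorize(alice, {"tenant": "tenant-a"}, lambda u, r: True)
denied = authorize(alice, {"tenant": "tenant-b"}, lambda u, r: True)
```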
E
We preserve all the capability for auditing, because before we convert the short path to the full path, we have all the original information: where the request came from, the user identity and credential associated with the request, and the original URL. All the information is still there; nothing is diminished.
E
This is a very good question. In the beginning of the project we also thought about implementing the whole solution using some decoupled approaches, like CRDs, but after some investigation we found it's very difficult to provide complete, strong isolation with only CRDs. Let me give you some examples.
E
Let's say, in order to support backward compatibility, we have changes in the URL resolution code in the API server. That part is very difficult to implement as a CRD, because it basically changes internal logic. And also, for convenience, we want to have this tenant field as part of the metadata, so it's more consistent and it's easy at any time to recognize which object belongs to which tenant space. This is also very difficult to achieve through CRDs. Of course, we could use an annotation or something, but from a design perspective that doesn't look like a very good option.
E
First, Arktos itself is an ongoing project: we will continue our work to add new features and offer it to some customers, which requires a multi-tenant-capable operating system. Meanwhile, as I mentioned earlier, I'd be happy to work with the multi-tenancy working group to upstream parts of it, if the community is willing to accept these kinds of changes, but I guess this would be a kind of long-term effort, right?
C
Is there any reason why you called the top-level division tenants and not spaces? The reason I ask is that there are many different definitions of the word tenancy — you can have tenants, and producers and consumers, and access contexts — and so the term tenant, I think, has often caused a lot of confusion.
E
I also briefly mentioned this in my last talk: it was purely historical. Okay, well, I would change it to space to make it clear, because we want a clear differentiation between the user subject and the resource object. If we call it tenancy, it's confusing, but at the time, for historical reasons, we didn't realize it. So, yes.
C
Sorry
I
apologize
if
I
asked
this
two
weeks
ago,
but
my
memories
I'll
write
down
the
answer
this
time.
But
what
is
the?
What
are
the
main
reasons
that
you
see
people
wanting
to
use
a
space?
Is
it
is
it
to
create
their
own
CR
DS,
and
if
so,
why
are
they?
Are
they
developing?
You
see
ideas
or
they
just
installing
operators,
and
then
the
other
question
I
had
is:
if
it's
not
seer
DS.
C
If I can just clarify: I understand what the features are — basically anything that's currently cluster-scoped is now space-scoped, and so it can be duplicated, so not just CRDs but also namespaces and everything else. So I understand what can now be divided by tenant, but what I'm asking is: in your experience, what have been the main drivers for people wanting to use this feature? Not what can be done — what is done.
E
First, our current deployment is pretty limited: it's only shared among several small teams. We haven't had any large-scale deployments in real production yet. So, based on this limited experience, my observation is that the driving force is the clean isolation and management.
I
Can I add a few words here? Actually, if you look at our design doc, you can see that Arktos is a project where we are trying to make Kubernetes available as a public platform with very good scalability. So once we think about Kubernetes as a public platform, then we would like to have strong isolation between different tenants, because between different tenants there is no trust. This is one scenario that we are considering.
C
Okay, that makes sense. And then — virtual clusters: just to compare and contrast, the difference is that virtual clusters is going to use somewhat more resources, is that correct, to run the additional copies of the API server? Is that the main difference between the two? Because it sounds like they're kind of addressing a similar problem space.
C
In the other scenario, you want to allow some amount of sharing between the two, and so in that case it's more likely that this isn't really Kubernetes-as-a-service. This is probably Kubernetes being run within a single organization, but the different teams just don't want to deal with each other unless they opt in to dealing with each other — they may want multiple namespaces, possibly. So it's not like multiple namespaces was the key driver.
E
We haven't done that part yet. For now, a regular tenant can only see the node name in some pod object properties, but he cannot actually list or get the node objects. But we do see some requests from one scenario — the edge computing scenario — where they probably want to enable some per-tenant nodes. That's something we don't have details on yet, but we see some feedback from teams asking for it.
E
CNI — okay, I see: CNIs. No, it's not a space-level thing; it's a network-level thing.
E
We introduced a new object, network, to encapsulate and abstract an isolated network. The CNI is associated with the type of this network object. Let's say you create a network object and the type is something like a VPC; then you need to deploy the corresponding VPC CNI. If the network type is Neutron — backed by OpenStack Neutron — then you need to deploy a Neutron CNI plugin. So the CNI is associated with the network type, not with a space.
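The association just described — a network object whose type selects which CNI backs it — can be sketched as a simple lookup. The object shape, type names, and plugin names are assumptions for illustration, not the Arktos API:

```python
# Sketch of the network object idea: each isolated tenant network is an
# object whose "type" selects the CNI that backs it (e.g. a VPC-style CNI,
# or OpenStack Neutron). All names here are illustrative assumptions.

NETWORK_CNI = {
    "vpc": "vpc-cni",          # assumed: VPC-backed network type
    "neutron": "neutron-cni",  # assumed: OpenStack Neutron-backed type
}

def cni_for_network(network):
    """Pick the CNI plugin to deploy for a given network object."""
    try:
        return NETWORK_CNI[network["type"]]
    except KeyError:
        raise ValueError(f"unsupported network type: {network['type']}")

# Creating a Neutron-type network implies deploying the Neutron CNI.
plugin = cni_for_network({"name": "net-1", "type": "neutron"})
```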
E
As I mentioned, the public cloud scenario is a very important scenario we started with, so in our design those networks are totally isolated: you cannot have cross-network traffic unless you use a public IP. I think that's also how VPCs behave in public clouds: unless you set up VPC peering, you cannot do inter-VPC communication.
J
One object I've been working with recently is the PriorityClass object. It's cluster-wide and influences the scheduling of pods, and you can imagine a scenario where one tenant creates a pod in his space with an extremely high priority class — above the priority of, you know, the kube-system pods — and then the node that those pods are running on, you know, turns over the scheduling.
E
That's a kind of feature we plan: tenant-level performance isolation, and rate limiting, to isolate the performance of multiple tenants and prevent any tenant from abusing the shared control plane. Also, for the scheduler, we have a tenancy-fairness scheduling plan. By the way, we haven't had time to work out that part yet.
E
Operators — sorry, you mean CRDs, the kind of... the CRD operators, the controllers? I'm not sure if this slide answers your question. We basically support each tenant installing their own CRDs and operating them independently, so one tenant's CRDs and operators don't impact others. But we also support sharing the CRDs.
C
Right
but
general,
what
do
you
mean
when,
when
he
said
like
different
operators,
have
different
tenancy
models?
Do
you
mean
like
different
instances
of
the
same
operator,
but
in
smaller
scopes,
or
do
you
mean
operators
that
could
somehow
conflict
with
each
other
or
just
like
completely
orthogonal
operators,
yeah.
J
I
mean
like
the
first
thing
you
said,
for
example,
like
some
operators
want
to
watch
all
namespaces.
Some
operators
want
to
watch
one
namespace
like
conceptually
this
thing.
This
is
a
picture
of
users
installing
operators
in
their
space,
which
makes
sense-
and
you
know,
if
you're
working
on
a
system
that
is
installing
operators
for
a
user
on
behalf
of
the
users.
Then
you
think
about
the
operators
themselves,
as
potentially
being.
E
Yeah, on the left side you can see, let's say, three tenants that each install different CRD operators. This is the operator: you can just rerun the existing operator binary with no code change. And then, in your example, say operator A wants to list all namespaces and operator B only lists some partial set of namespaces; but as they are totally independent — they see different namespaces — they're not interacting.
J
One of the drivers is actually relevant to the Crossplane presentation they had. These operators are designed to do different things on cloud providers outside of the cluster, and people kind of want to run multiple versions of the same operator for different teams — like different IAM accounts or whatever for each operator. That's a good classic scenario where it's like: I want multiple copies of the same thing, for that reason.
C
Yes, I would agree with that. I guess what I meant — my thought about this proposal, and I think I may have mentioned this last week — is that there's a lot of complexity, and it results in a very large change: for example, the attack surface of the API server. Or, sorry, not that — there's kind of an implicit promise that those spaces are fully isolated.
C
That promise has never really been there before, and so that's going to be challenging, and it does affect the behavior of a very, very large number of components. So there's going to be a lot of complexity if we want to upstream this, and so what I'm trying to understand — and this is behind all of my questions — is: what's the magnitude of the gains that we're going to get as a result of this?
C
If it's going to be, well, some operators who would previously have had to think about how one CRD is shared between multiple instances no longer need to think about that — that might not be very motivating, given that, for example, the authors of every cluster-level component in Kubernetes now need to think about this concept. You've shifted a lot of that work to a group of different people who might not be willing to accept it.
J
I just want to say that that question makes a lot of sense. One thing is that you have operators that depend on other operators, and that's where it probably gets more complex, because a lot of operators require certain APIs or CRDs that other operators are providing. That's where you get a lot of the complexity — you get a jumbled web of these, basically — and then upgrading becomes much more complicated.
J
I mean, if you update a CRD and you're not doing it safely, you can basically break the cluster, in theory. Yes, they can be backwards compatible, but a lot of operator authors — well, the best-written operators are backwards compatible in theory, but not always in practice.
E
I want to quickly add something regarding the question: what was the driver for this kind of change? The CRD piece is just a small part. Our vision is that we want to allow multiple organizations to safely and easily share the same physical cluster, and there can be different scenarios. One scenario is a cloud provider that can provide a virtual Kubernetes cluster as a service to the public cloud users — that's the public cloud scenario.
E
The
Nelson
error
is
a
private
cloud
within
one
company
you
have
different
teams
and
that
they
don't
want
to
deploy
different
of
physical
kubernetes
cluster
and
operate
maintained.
There's
just
one
whose
share
a
larger,
critical
crust
first
to
to
share
the
illogical
uber
result
poor
instead
of
aesthetically
partitioning
the
resource
and
the
second
one
was
have
just
the
wrong
kind
of
one
group
to
maintain
to
operate
the
infrastructure
instead
of
each
team,
how
to
maintain
and
operate
of
their
own
infrastructure.
K
So the first question is: you were mentioning that there is some cross-tenant collaboration that can happen?
E
One of the cases that we already discussed is that we want to label a shared CRD, right? It's common to have some system-level CRD resources — for example, some storage solutions like Rook, or some networking solutions, are also implemented with CRDs — and for all the tenants on our cluster it's actually very likely they all need that CRD. So the first use case is: we want to share CRDs among tenants for some special system CRDs. And the second is something we are currently thinking about.
E
It's kind of: first, the cluster operator deploys its CRD in the system space and tests it; after it's tested and known to be working fine, it can apply a tag — a special annotation — and the CRD will kind of be published. By published I mean it's virtual: in fact, it will appear in every tenant space immediately. Okay.
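The publish step just described — a system-space CRD becoming visible in every tenant space once a special annotation is applied — can be sketched as a filter over the system CRDs. The annotation key and object shape are hypothetical, not the Arktos API:

```python
# Sketch of the CRD-sharing flow: a cluster operator installs a CRD in the
# system space; once a special annotation is applied, the CRD virtually
# appears in every tenant space. The annotation key is a hypothetical name.

SHARE_ANNOTATION = "arktos.example.com/share"  # assumed annotation key

def visible_shared_crds(system_crds):
    """A tenant's view of system CRDs = only those marked as shared."""
    return [c["name"] for c in system_crds
            if c.get("annotations", {}).get(SHARE_ANNOTATION) == "true"]

system_crds = [
    {"name": "volumes.rook.io", "annotations": {SHARE_ANNOTATION: "true"}},
    {"name": "internal.example.com", "annotations": {}},  # not published
]
tenant_view = visible_shared_crds(system_crds)
```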
E
Currently we are leaning toward one solution where the cluster operator publishes, in some way, the supported node labels or selectors that you can use; but as a regular user you don't see the actual nodes. It's like when you visit a cluster provider's website or documentation: it tells you what kind of labels — like the GPU label we just mentioned — are supported on this cluster, and you can use them in your pod's node selector, but you don't need to query the actual node objects.
E
Yes — we haven't decided when we can upstream these changes to the community. As you see, we made some kind of intrusive changes; we will tell the community first and then wait. But Arktos definitely will keep moving forward as a long-term project, regardless of whether the Kubernetes community accepts these changes or not. In fact, Arktos itself has a much broader vision than multi-tenancy alone.
E
We had to update some other components, like the scheduler and the controllers, but all the changes are actually very similar — it's more like copy-paste changes, because for all these list/watch calls we just need to add a new parameter, the tenant. We already changed something like 15 controllers; basically, one person can change one or two controllers, maybe three controllers, a week — it's very fast. And for client-go...
E
We
need
to
make
some
changes,
but
we
are
based
on
the
wrong
I
sing
along
15
RC
Wersching,
so
the
clinical
stir
done.
The
supporters
is
explicit.
The
concept
parameter
with
the
new
version
of
clinical.
It
already
has
I
think
it
already.
Has
these
contacts
parameter.
It
would
be
much
easier
faster
to
change.
The
content
it
should
be
mature
should
be
much
reduced.
If
we
have
that
I
explicit
the
context
permit.
K
I worry more about, like, all the metadata — like etcd. Yes, yes, yeah — that is my biggest worry.
F
I think you wouldn't upgrade an existing cluster to this. You would port all the apps over, and they should work transparently. There isn't a use case to take an existing Kubernetes cluster and transform it into this one, but you could have all the applications on an existing cluster pretty much transparently move over to this kind of cluster.
K
That's really the more important thing, yeah — but that's the assumption that people would accept the option to recreate everything. In a lot of cases I see, people just say: don't change anything, don't take my pods down, do everything transparently; otherwise, don't do it.
E
No, no — that's what I'm trying to clarify. Let's say we do some live migration: all the stuff currently in etcd will be treated as the system space. We can even have an optional space prefix in the keys, so it's compatible — you don't need to recreate anything. After the migration, it's basically a cluster with only one space, the system space. Then you can create new tenants.
I
Let me add some words here, because I'm the one who made the change to the etcd data. First, you don't need to worry about the old format of the etcd data: we can still understand it. If we see the old format, we know it's something belonging to the system space, and later, with the tenant information inserted in the etcd key, we can also understand it. So what I would say is: all the etcd data is backwards compatible, so it's okay.
I
Yeah, because we can treat the single system space as the good old classic Kubernetes cluster, and then for multi-tenancy we just add some new spaces for the different new tenants. So for all the existing data there is no migration needed — just the metadata, and Arktos knows how to read it.
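The backward-compatible key layout just described can be sketched in a few lines: legacy keys without a tenant segment are read as belonging to the system space, while new keys carry the tenant name. The key layout below is illustrative, not the actual Arktos etcd schema:

```python
# Sketch of the backward-compatible etcd key handling: old-format keys
# (no tenant segment) are attributed to the system space, while new-format
# keys carry an explicit tenant segment. Key layout is illustrative only.

def parse_key(key):
    """Return (tenant, resource, namespace, name) for either key format."""
    parts = key.strip("/").split("/")
    if parts[1] == "tenants":                 # new format: tenant segment present
        _, _, tenant, resource, ns, name = parts
    else:                                     # legacy format -> system space
        _, resource, ns, name = parts
        tenant = "system"
    return tenant, resource, ns, name

old = parse_key("/registry/pods/default/web-0")                    # legacy key
new = parse_key("/registry/tenants/tenant-a/pods/default/web-0")   # tenant key
```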
E
It's already implemented. I know it's another topic, not related to the multi-tenancy topic, but I can quickly take two minutes to explain it. Basically, in cloud data centers, more and more containers are running on bare metal, so operators have to run a second orchestration stack: one is Kubernetes-based, for containers.
E
The
other
uses,
the
older
like
the
VMware
or
the
OpenStack
for
VMs,
so
our
our
reaching
here
they
want
to
expand
the
kubernetes
to
have
a
single
expression
stack
to
handle
both
containers
and
VMs
and
in
the
way
we
did
use
different.
Is
that
the
current
some
approaches
based
I,
don't
based
approaches?
We.
E
We extended the pod definition. You can see this is the original pod definition: it can contain multiple containers. Now we also have VMs, and because we extended this most fundamental object, the pod, we can reuse everything on top of that — like ReplicaSet, Deployment, Jobs — and the scheduler. It's one API, one scheduling, one orchestration stack. We already implemented part of the functionality, but the VM lifecycle is complicated: it's not only starting, stopping, and rebooting a VM; you have things like suspension.
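The extension just described — a pod spec that can carry either containers or a VM, so everything built on top of pods keeps working — can be sketched as follows. The field names are illustrative assumptions, not the actual Arktos API:

```python
# Sketch of the extended pod definition: the pod spec carries either
# containers or a VM, so ReplicaSets, Deployments, Jobs and the scheduler
# can work on both unchanged. Field names are illustrative assumptions.

def workload_kind(pod_spec):
    """Classify a pod spec as a container pod or a VM pod."""
    if pod_spec.get("virtualMachine"):
        return "vm"
    if pod_spec.get("containers"):
        return "container"
    raise ValueError("pod must define containers or a virtual machine")

container_pod = {"containers": [{"name": "web", "image": "nginx"}]}
vm_pod = {"virtualMachine": {"image": "ubuntu-cloud.img", "cpus": 2}}
```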
E
We are targeting the use case, or the scenario, of a cloud provider — essentially a large-scale cloud provider — because when we look at the current open-source projects, there isn't actually an open-source project qualified for large scale, multi-tenancy, and this unified VM/container infrastructure for large-scale cloud providers.