From YouTube: WG Multi-Tenancy Bi-Weekly Meeting for 20201006
A
Hi everybody, this is Adrian Ludwin hosting the meeting, sitting in for Tasha, who couldn't be here today. This is the bi-weekly meeting of the Multi-Tenancy Working Group, although we actually ended up skipping all of September due to either a lack of agenda or short work weeks, so we have a light agenda today.
A
The only item is an update from each of the projects that are being actively worked on by the Multi-Tenancy Working Group. Those three projects are HNC, which is an update that I will give; then we will have Jim talking about the multi-tenancy benchmarks, and Fei talking about virtual clusters. After that — I can see a few more people popping on.
A
If there are any open topics, we can open this up as a round table, and failing that, we can go back and do more work. So with that said, why don't I give a quick update for HNC, the hierarchical namespace controller. We have released HNC 0.5.3, which I am hoping will be the final release on the 0.5 branch. 0.5 has been basically the first real release of HNC.
A
It's the first one where we felt we had a decent feature set, a relatively stable API, and reasonably good performance and behavior. 0.5 first came out in, I believe, June, and ever since then we have been giving it a patch update about once a month — usually not changing behavior that much, although we have actually changed some fairly significant pieces of behavior.
A
Let me just bring up the list of changes that we made over the last little while. As I said, I'm hoping that this will be the last update, because Yiqi — who many of you know — has been working very hard on the upgraded API. We started, as many projects do, with v1alpha1.
A
We got a bunch of reviews on that to make it more Kubernetes-style compliant, so we've been working hard on implementing that, and we're hoping to release it by the end of this month as the 0.6 release. 0.6 will not be exactly backwards compatible with 0.5 — you'll have to update your client tools. However, we are building features that will automatically take all of your existing objects on your cluster from the old API and immediately upgrade them to the new API for you, without any intervention.
A
So what we've done in the last release is we've added support for Krew. Krew is a standard way to download and install plugins for your kubectl command line, and the hns hierarchical namespace plugin, which is part of HNC, is now supported by Krew. We have also made a couple of bug fixes — for example, you weren't able to delete namespaces if there were propagated objects in them.
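As a rough illustration of the Krew flow described here — assuming the plugin stays published under the name hns, as mentioned above, and with the subcommand syntax quoted from memory of the plugin's help text rather than confirmed in this meeting:

```sh
# Install the Krew plugin manager itself first (see the Krew docs),
# then pull the HNC plugin from the central index:
kubectl krew install hns

# The plugin then runs as a kubectl subcommand:
kubectl hns --help

# e.g. make namespace "team-a" a child of namespace "org"
# ("team-a"/"org" are made-up names for illustration):
kubectl hns set team-a --parent org
```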
A
It took us a surprisingly long time to notice this, but it was a very quick and easy fix once we discovered it. Probably the most significant change in this release was around conflicting objects: if there were conflicting objects between parents and children, they were propagated inconsistently. What we've done now is added the rule that if there is a conflicting object in a parent and a child, the parent will overwrite the child.
A
However, we've also added a bunch of webhooks that prevent you from accidentally creating a conflicting object in a parent, so that nothing ever gets overwritten. As long as the webhooks are always working, you can never overwrite your own data. But if somehow you managed to bypass the webhooks — let's say you installed a bunch of stuff while HNC was disabled and then turned it back on again —
A
— this will actually overwrite the existing objects, because we felt that having guarantees on the consistency of policy application was more important than preventing data loss in this case. This was all discussed on the Slack channel and through design docs. Let's see — we also added support for Kubernetes 1.19 and handled a couple of other minor corner cases.
A
So yeah, as I said, I am hoping that this will be the last release on the 0.5 branch, and I'm hoping to see a release candidate of 0.6 out in about two weeks, with the final version of 0.6 coming out soon after that. As I said, the only major change we're expecting in 0.6 is the new API.
A
We may also have a sort of experimental, or possibly even alpha, release of a new feature called exceptions. Instead of objects being propagated to all descendants, as they are today, you can limit which namespaces in the subtree they go to by using label selectors. So, for example, you can say: I would like this resource quota policy to be propagated to all child namespaces except that one — and then you can exclude that one and add your own resource quota that has, let's say, a higher limit.
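A minimal sketch of what that could look like. The feature was still being designed at the time of this meeting, so the annotation key and the "!" exclusion syntax below are assumptions based on the design being described, not a confirmed API:

```sh
# Hypothetical: propagate this ResourceQuota to every descendant
# namespace of "org" EXCEPT "team-b" (annotation key is illustrative).
kubectl annotate resourcequota default-quota -n org \
    propagate.hnc.x-k8s.io/treeSelect='!team-b'

# team-b can then define its own, higher quota locally:
kubectl create quota team-b-quota -n team-b --hard=pods=200
```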
A
So this gives administrators more control over how they create their policies and how they propagate them throughout their cluster. Jenny — who some of you may have seen if you're watching the pull requests — has been contributing to HNC all summer, and she's working on that feature right now. So yeah, that is what is going on, plus the regular kind of stability and testing enhancements that come along with any growing project. So, before we go on: any questions about HNC right now?
B
So anyway, a few things. First, you mentioned Krew — is that... yeah? That is K-R-E-W?
A
Haven't tried it? It's very cool. Krew is developed by one of the SIGs — the CLI SIG, I imagine — and as soon as the blog post on HNC went out, which got something like 14,000 reads, which was pretty nice (this was in August), the maintainer of Krew contacted me and said: hey, you should be able to download this through Krew. So it's very nice: it auto-detects whether you are on Linux or Mac and installs the correct plugin.
A
We haven't built a Windows plugin yet, but we will if there's demand. So yeah, Krew will download and install it for you, and you can uninstall it as well. It only supports one version at a time — you can't ship multiple versions on Krew at once.
A
So basically, as soon as we release 0.6 with the new API, you'll only be able to use Krew to get the latest plugin. But the old way of doing it — just downloading the binary and installing it on your path — is still available as well, and we've updated our build process to automatically generate all of the artifacts required by Krew.
C
For Krew, where do you store your plugin binary — actually in GitHub, or, sorry, in Docker?
A
So the binaries are just stored on GitHub, as they always are — they're just artifacts of the release. For example, if you go to the release — [shares screen] —
A
Can everybody see the 0.5.3 release? Okay. So if you scroll right down to the bottom, you can see we've got quite a lot of instructions up here, and you get all the artifacts. This is the YAML file that you can use to install it on your cluster. This right here is the tar file that contains both the Mac and Linux plugins, as well as the license file, and this is what is used by Krew. Krew itself distributes plugins through a different repo.
A
So basically, I have checked in a YAML file that points to this file in their repo, and whenever you say "krew install" or "krew update", the first thing Krew does is clone that repo locally, download this tar file, and then extract the correct files for your platform, which it auto-detects.
A
Okay — but if that's not working, you can feel free to download these files, rename them to remove the suffix, and as long as they're anywhere in your path, that will work as well. But Krew is very nice because it puts them in a hidden directory, and it can install them, uninstall them, and update them as you wish.
C
I see, I see — yeah, that's good.
B
Yeah, because that's related to VC: if you've seen one of the issues, we are trying to move vcctl to a kubectl plugin — something more formal. That is actually our plan.
A
Yeah, I can certainly share with you some of the work that we've done there. It's not actually part of the 0.5 build process, but on the trunk — whoops, that's not what I meant to do —
A
— there's the YAML file expected by Krew. Here you can see it's part of our HNC hack directory, and this is the YAML file that gets modified to have the correct values — for example, the correct image, the SHA-256, the link to the correct download repo — and this is what drives all of Krew. Our new build process actually fills this all in, and we'll include it as one of our artifacts; then your only role is to download that file and submit it as a PR to the krew-index.
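For readers following along, a heavily abbreviated sketch of what such a Krew plugin manifest looks like. The field set is trimmed from Krew's v1alpha2 plugin format, and every concrete value below is a placeholder standing in for what the HNC build process fills in:

```sh
# Placeholder values throughout; the real manifest lives in the HNC
# hack/ directory and is generated by the build as described above.
cat > krew-hns.yaml <<'EOF'
apiVersion: krew.googlecontainertools.github.com/v1alpha2
kind: Plugin
metadata:
  name: hns
spec:
  version: v0.5.3
  shortDescription: Manipulate hierarchical namespaces (HNC)
  platforms:
  - selector:
      matchLabels:
        os: linux
    uri: https://github.com/<org>/<repo>/releases/download/v0.5.3/kubectl-hns.tar.gz
    sha256: <filled-in-by-build>
    files:
    - from: kubectl-hns_linux_amd64
      to: kubectl-hns
    bin: kubectl-hns
EOF
```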
B
Great. So another thing: at this stage, I think for both projects — seeing that HNC and VC are pretty much, if not 100%, mature — what is the progress on moving to another, independent repo?
A
So for us, the main blocker there is the API — implementing the new API — and also we're getting a code review and a security review from, I forget exactly which group it is, but basically Mike Danese and a few other folks are giving us a review. Once we pass the code review, and once we pass the API review, that is, then we'll be in a better position to get our own repo.
A
So really we've been focusing on the API, because that's the long pole. I've gotten some early code review comments from Mike, and I think we're going to save those until 0.7 at this point — not because they're hard, but, you know, moving files around and breaking things is always a little bit annoying.
A
I don't think so, just because we have limited bandwidth and I want to get 0.6 out the door; if we waited a bit longer, we could. So in 0.6 we're going to finish the API, and then 0.7 will probably be the point at which we're ready to look for our own repo.
A
Okay, thanks, Fei. Anything else for me before we go on? Given that Fei has already been talking, maybe we can go to VC next — but before that, any other questions on hierarchical namespaces?
A
Hearing none, why don't we go on to Fei. So, Fei, do you want to give us an update on VC?
B
Yeah. On the VC side — in the upstream version of VC we have two major parts: one is the syncer part, and the second is the tenant master lifecycle management, which is the vc-manager. I think the syncer part is pretty much complete. There is one enhancement for the syncer which I think is very important: we now support fair sharing. It's not exactly code complete — it's about 90 percent complete.
B
The idea is: even though each tenant is given a dedicated apiserver, the syncer is the central hub, so the syncer worker queue still accepts requests from all the tenant masters. Because the worker queue is a FIFO queue, there is potential contention at the queue level. So, just in the past few weeks, we implemented fair sharing in the syncer worker queue.
B
It now supports round robin — actually, it's a weighted round-robin algorithm, so by default each tenant has the same weight. We don't play with the weights right now, but the algorithm supports different weights. So now, when we serve the reconcile requests from the tenant masters, they go in round-robin order, to avoid any starvation or priority-inversion kind of thing, which I think is a major enhancement for the syncer — to support
B
SLA kinds of guarantees. Once we finish this picture, you can see that each tenant master has an isolated environment, in terms of both the control plane and also performance.
B
They have a kind of guarantee that even if you have multiple tenants actively sending requests that really stress the tenant masters — most likely writes, either creations or updates — those operations will be propagated in a fairer way. That is the functionality side, and that is a major upgrade for the VC syncer. The second —
A
Those requests — was that feature based on your own internal usage?
B
Okay — no, we don't have real usage driving this, but in theory you have the problem for sure. I can make up a micro-benchmark to illustrate the problem: I have one tenant master that keeps generating, you know, 5K QPS to the syncer — just keeps updating — and another one that generates just one request. So you can easily create a case where you have starvation: that one request needs to wait for another five thousand in the queue to get finished until it gets
B
synced. So micro-benchmark-wise we can make up the case; a real use case, not really — not until you hit the tenant master really hard, and even our production isn't that hard at this moment. But in theory we just fixed a potential problem, where we could get attacked by, you know, anybody saying: you still have a weak point in your design — yes, I'll admit that at this meeting.
B
Yeah. Another part is the vc-manager. We are also thinking about how to move the VC project up to its own repo; we probably have to go through a different route than you guys do. We are thinking maybe we should do it another way, where we enhance the tenant master management using the Cluster API style.
B
So now we are working with Chris — maybe via the SIG — and we are trying to make that happen, to get buy-in from the Cluster API folks. Because Cluster API has all the differentiating things: nowadays Cluster API focuses on provisioning the entire cluster, including node resources — there are quite a bunch of things on, you know, provisioning machines.
B
So that's probably a new thing: we are currently trying to completely revise the way that we create tenant masters. That is our plan, and we hope, after designing that, to present it to SIG Cluster API and get their feedback. Eventually — I mean, I wish — well, the thing is, the project will look a little bit weird, in the sense that, okay, it's a Cluster API project, but that component is only useful in multi-tenancy contexts.
A
That's interesting. It's really worth having the discussion with them and seeing where that goes, yeah. So it is a type of multi-cluster.
B
Yeah, a type of multi-cluster. Without nodes it looks a little bit weird, so we need to, you know, clarify all the concepts. In reality, the actual use case for VC is the best use case — I mean, other than that, maybe nobody else wants to create so many tenant masters without nodes, right?
A
So yeah, it's interesting. Certainly you could use the kind of "Kubernetes as a control plane" idea — it would be nice.
B
Yeah. I think one of the other use cases is where you create many, many control planes and you install virtual kubelet — that's one other way I can think of. Other than that, VC is definitely the best fit; it's a slightly narrow scope. I hope, you know, the Cluster API guys will like this. I think Chris has had some initial conversations with them, and we are just working on some design docs to push that forward.
A
Sounds good. I had a question or two, if you're finished.
A
Okay, yeah. The question I have is: I've been seeing that there's going to be more activity heating up in the community around multi-tenancy, and some of it is kind of in the sort of CRD/HNC area, like Capsule and Loft — and what's the latest one I saw... Angel was here to talk about it a couple of weeks ago; the name escapes me right now.
A
— K8Spin, thank you. K8Spin. So I've been talking to Angel about possibly joining forces on some of that stuff, and I've also seen that there's at least one other kind of implementation of the virtual cluster idea. Fei, have you reached out to them? Have you looked into those other projects?
A
Do you know what the differences are? Is it the Loft one? It may be Loft, yeah — Loft was doing a couple of these things.
B
The idea is similar — I mean, the idea can be as simple as: give each tenant a dedicated set of control-plane master components.
B
The Rancher guys created k3v — I think Rajit mentioned it last year. Basically, last year we proposed VirtualCluster, and after a few months Rancher had the k3v implementation. So idea-wise it's the same thing, but the implementations are different. I can give some key differences here, but since I didn't see their code, I don't know the exact conceptual differences. First, they don't use a centralized syncer;
B
they use, you know, an individual syncer for each tenant. That's number one. Number two: they don't do exactly what we did. We have a one-to-one mapping between the tenant namespace and the super master namespace; they group all the tenant objects into one super master namespace. So namespace-wise there is no one-to-one mapping between tenant and super master, which I think is a problem.
A
Compatibility — so I think where I was going with this is: does it make sense for you to reach out to the Loft folks and say, hey, should we join forces? Because I know that they're selling a lot more than just a virtual cluster — just like K8Spin is selling a lot more than just hierarchical namespaces. Does it make sense for them to adopt our VC, and maybe contribute to it when they need a feature?
B
I'm not — so this part I actually don't quite know, because they are a commercial company; they sell it. So I don't know if — I mean, I don't have any problem with them using VC, but I just don't know. Maybe I'm not that familiar with the process: if a commercial company is selling a project, is it okay to ask them to use upstream, or...
A
I thought that at least some of their stuff was open sourced. If that's incorrect, then maybe that doesn't make a difference, but I thought they had a GitHub repo.
A
Yeah, so I think that's the thing: we don't care if they're commercial or not — I mean, most of us are. All we care about, I believe, is whether the code is Apache 2 licensed, right? So, given that, if they are, it might be worth reaching out.
A
So, given that they have something that kind of intersects with both of our projects, maybe, say, you and I should reach out to them at some point together and say: hey, you seem to be doing a lot of the same stuff that we're doing — doesn't it make sense to join forces? Because what they then get is that they can claim, oh yeah, they're standards compliant, plus they get the benefit of all of our work.
A
What we get is their user base and any additional contributions that they are willing to make, or that they need to make for their customers. So I think it could work well, as long as the models are sufficiently compatible. Obviously I don't know if they are until we look at it, but does that sound [reasonable], Fei? Maybe you and I can coordinate on that.
B
Yeah. So basically, as I mentioned, the biggest difference is that they don't do the one-to-one mapping for the namespaces. I don't know the theory behind that — whether there are other reasons they have to do it that way. Besides that, I would assume it's the same thing, with no difference.
A
Yeah, exactly. And if they're not wedded to that idea — if that's just what they started with — perhaps they're willing to change it, especially if there's a good reason not to do it that way which they hadn't thought of.
A
I don't even — but they do have something: they have a concept of self-serve namespaces that overlaps with what HNC does. So I think it would be really useful to get them to start joining this group and talking about it; if there is some reason why the projects we are building are not suitable for their customers, that would be useful feedback for us both. Okay.
F
Fei, I did have one question. You mentioned something about network policies; I didn't quite understand — you said there was, like, I guess, as you were talking about...
B
I just made up a use case. For example, the tenant can create a network policy in the tenant master. But in a declarative network policy you have to specify the namespace, okay? So if you keep the one-to-one mapping between the tenant namespace and the super master namespace, then when the syncer syncs the tenant's
B
network policy object to the super master, you can still guarantee that the network policy only works on that specific namespace. Although the name has been changed, logically it's the same namespace — the ownership path is not changed, right? It only applies to the pods below this namespace, so network policies still work, basically. But if you get rid of the namespace mapping and you only have one namespace in the super master,
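A small sketch of the point being made here: a NetworkPolicy is namespace-scoped, so if the syncer copies it into a super-master namespace that still corresponds one-to-one to the tenant namespace (even under a different name), its scoping survives. All names below, including the "tenant1" context and the hypothetical super-master namespace naming, are invented for illustration:

```sh
# Tenant view: a deny-all-ingress policy in the tenant namespace "web".
# Super-master view (hypothetical): the syncer would write the same
# policy into a dedicated namespace such as "vc-tenant1-web", so
# "every pod in this namespace" still means only this tenant's pods.
cat <<'EOF' | kubectl --context tenant1 apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
  namespace: web
spec:
  podSelector: {}           # selects every pod in the namespace
  policyTypes: ["Ingress"]  # no ingress rules listed => deny all ingress
EOF
```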
B
it is — I don't think — I think something like K8Spin is more like a namespace-based virtualization, because they are not actually doing exactly a virtual cluster, but only — because, if you...
B
But the API that they provide to the customer is a full cluster; it's not a namespace, okay? So they are going with virtual-cluster-based virtualization. The only problem is — I mean, interface-wise, our approach and their project are the same, because they give people a virtual cluster. What I worry about is the implementation side: they have a syncer per tenant, we have a shared syncer, so the syncers may behave differently. The biggest difference I see is that they don't directly sync the namespace. I don't know.
E
I mean, I'm looking at the architecture right now, and to me it seems like they more or less have a Tiller-type thing that is just a federated CLI that you authenticate with, with its own RBAC, and they separate it — it's kind of like a Tiller, the way OpenShift does it, is kind of what the architecture looks like, and then a nice UI on top of it.
E
So that's why they do a single namespace per [cluster] — their nested namespacing seems to still revolve around kind of a namespace per cluster, and their virtual cluster is a namespace, from their docs, it looks like.
E
And the self-service namespace provisioning — at least on their docs, it says it's from their Loft CLI. Looking at that, and at the Loft extension API server they have, it seems to me like you just authenticate with the CLI and then it'll basically federate out to create you a namespace, if you're authorized — is kind of what it seems like.
F
So if you go into the multi-tenancy benchmarks for our project and click on the link for running the conformance tests with kubectl-mtb, you go into a sub-page which describes, in quite a bit of detail, the process of downloading and running the benchmarks. So, where we are right now — and I'll go back and show the list of benchmarks — this is fairly straightforward to run on any namespace, the way it has been designed, and everything's working.
F
You basically can create a role within a namespace and run these benchmarks. Initially you'll get a bunch of failures, and then, if you install Gatekeeper/OPA or Kyverno policies that give you the right level of pod security — as you can see, most of these tests are related to pod security —
F
then you will start getting your tests to pass, right? And there are other things, like quotas, etc. — you know, all of the best practices you need for multi-tenancy. So this tool works pretty well, and in fact we did a demo to SIG Auth a few weeks ago, and there was interest in perhaps reusing this also for the replacement of PSPs — you know, the standardized pod security profiles.
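For anyone who wants to try it, the flow described above looks roughly like this — the subcommands are quoted from memory of the kubectl-mtb README and may have changed, and the tenant names are made up:

```sh
# List the available multi-tenancy benchmarks and their profile levels.
kubectl mtb get benchmarks

# Run the benchmarks against one tenant namespace, impersonating that
# tenant's user ("tenant-a" / "tenant-a-admin" are placeholders).
kubectl mtb run benchmarks -n tenant-a --as tenant-a-admin
```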
F
Right — no, but as a way of benchmarking, or testing, or measuring for a level of conformance to different pod security profiles. Cool.
F
No, no — this is, I think it's called the standard profile levels for pod security. This is being proposed within SIG Auth as a possible replacement, or as a way of — I think in 1.22 PSPs will be deprecated, and at that point it will be up to tools like Kyverno or Gatekeeper/OPA to measure and report on pod security.
F
But potentially the benchmarking that we're using for MTB could also be an in-cluster tool which anybody can run to say: are they conformant to a level of pod security or not? Because a lot of the checks we're doing here — and these are runtime checks — are very much related to the different pod security levels as well.
F
Yeah. So, a few other things that we need to complete — going back to the main page, where the list of profile level 1 checks is: like I mentioned, everything that's complete is more or less related to the security context, and there are also some single-namespace checks for quotas, limits, things like that.
F
The one area where we need a little bit more discussion, and perhaps some feedback on direction, is how we handle behavioral checks across namespaces. The problem there is, you know, with each different variation of how multi-tenancy is handled, there are different ways of creating these namespaces — with tools like HNC, or even, as we're discussing, with Capsule or Loft, or even if you're just using Kyverno and creating namespaces. Each one has a slight variation.
F
So we need some general mechanism, or perhaps a plug-in mechanism, to allow for these variations when we do tests where we're actually checking whether two namespaces can talk to each other, or creating namespaces for different tenants. But that's a little bit longer term; the immediate plan — the next thing we're going to wrap up — is tests on the storage side.
F
There it's more like PVC checks within a single namespace. And then there are a few others that came up in discussions with other folks, like the "cap drop all" check, which is again checking pod configurations — those we will add. But as it is right now, this is fairly usable, and it's pretty simple. If you haven't tried it out, definitely just go through it, you know.
F
If you click on this "running" link, you'll see there's a video as well as step-by-step, detailed instructions on running this on any namespace. We did test this with HNC, and with some of the bug fixes that came in the prior release, this works fine with HNC — everything looks good. We haven't tested it yet with virtual clusters, but I don't expect any issues there, because the namespace they create is just, you know, a regular namespace. But that would be a good kind of milestone as well.
A
Yeah. I think the most valuable thing this can provide is really ensuring isolation, because what HNC does, in a way, is break down barriers between namespaces: namespaces are the way you isolate the control plane in Kubernetes anyway, but they're too limiting — you don't want them completely isolated.
A
Whereas between things like HNC and Capsule, there is no agreed-upon standard, right? And perhaps it's premature, but especially if we start talking to the Loft folks and the K8Spin folks, eventually we might start to say: well, let's standardize on HNC, which is at a lower level than either of those features, and then we can start saying, okay, that is the primitive —
A
— do Capsule and K8Spin use it in the correct way?
F
You know, leveraging that as a standard. The other option we're thinking of with MTB is just to maybe allow a plug-in model where, based on the tool that you're using, the user can provide how exactly a namespace gets created. As long as there's some kubectl or some scripting available to create that namespace, then MTB can call out to that and perform checks across multiple namespaces.
A
And that will work as long as the concepts are similar enough, right? Yes — it doesn't matter if the command is different; that's pretty easy. If there's some kind of conceptual difference, that's a little bit harder — maybe not impossible, but harder.
F
[We'd] automatically create those namespaces as needed for the various checks. So if you can say, tenant A, here's a namespace, and tenant B, here's a namespace, now we can validate that — oh.
A
I see — you can validate that you can't have two applications talking to each other or referencing each other directly, right? Yeah, you could say, okay, here's this — or you could just have a setup script. The setup script would say: the namespaces are called A and B, and in this scenario they're part of the same tenant, and in this scenario they're not — show me what is and isn't accessible. Because that will depend on all kinds of stuff; for example, depending on how you set up your network policies, maybe you want them to talk, and maybe you don't.
F
Well, the best practice that's recommended — and I'll have to pull up some references to it — is to drop everything and then add back the capabilities that are required.
F
So if a container does require a few privileged capabilities, then you can add those in explicitly. The problem is, if you don't drop everything, each container runs with a fairly large set of privileges by default, right?
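A minimal sketch of that drop-all-then-add-back pattern in a pod's securityContext; NET_BIND_SERVICE is just an example of one capability you might add back, and the image/name are placeholders:

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: drop-caps-demo
spec:
  containers:
  - name: app
    image: nginx            # placeholder image
    securityContext:
      capabilities:
        drop: ["ALL"]               # start from zero privileges
        add: ["NET_BIND_SERVICE"]   # re-add only what's required
EOF
```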
B
Right — there are about twenty-some [default capabilities], yeah. I know, because Libra brought up these questions recently. If you are aware, there is a CVE where the NET_RAW capability enables a security attack. So we are thinking about the solution — how do you work around that? I know one solution is to drop all and add them back one by one, but...
F
— you know, concerns like this. So this is one of the places where they recommend it, and it's fairly [standard] — even in the Docker community, this is recommended when you're building secure containers. I don't recall offhand if it's a CIS benchmark necessarily, but certainly for pod security it's recommended.
B
Yeah, I understand. Now my question is — I see the reason — so actually, I don't quite see the reason to put these in MTB, because MTB is meant to address multi-tenancy security issues. If you...
F
Obviously you want pods to have a certain level of security — like, you don't want them to access host resources, things like that — but then there are other security concerns which may not directly be related to multi-tenancy, but which are best practices, right? Now, you could sort of argue that if you don't secure your pods, a malicious user, or somebody who accesses the pods, could potentially get access to other cluster resources, and that's sufficient reason to secure your pods for multi-tenancy as well. But you're right, it's not directly [related].
F
At that point we could just say: for multi-tenancy, you're required to have, let's say, the restricted profile, which is the highest level of pod security in their definition.
F
So yes, "required" would mean that we would check for it. Now, how we score it — whether we say that that's, you know — again, we have to decide, and get some feedback from the folks who are working on some of these pod security profiles about what they recommend. But I think this was one of the discussions I had — I think it was with Rory, who works on some of the security standards — and he had recommended adding this in.
F
So yeah, if you're running the MTB benchmarks, what would happen is this would show up as an error, or as a failure, for that particular test. Now, the other kind of nuance on this is that we have to decide, for each one of these tests — just like with CIS, there are some benchmarks they recommend but don't score. So they're saying, yeah, we highly recommend you do that, but they're not going to count it.
B
You're right — I think those things could be optional; we shouldn't, you know, require 100 percent of people to do this. Because, in my opinion, where it's about isolation, for sure you need to block it — so if you didn't block it, you fail, all right. But for the others, which are not directly correlated to isolation — like general security ones — those should be just a recommendation, kind of, right?
F
I guess somebody could break into different tenants if they're running pods in this manner, but it doesn't, one way or another, directly impact multi-tenancy.
B
Yeah, okay. There's one more thing coming to mind — sorry, random things. It's about the "block use of host networking" check. For this kind of thing, are you checking whether the pod is using the same IP as the host network, or are you checking whether the pod can ping the host?
F
Yeah — so this is the one you're referring to, right? Host networking and host ports? Yes — in this particular check — and when you click on any of these, the actual checks and everything are in here —
F
if I recall correctly, we're checking for configuration options on the pod spec: we're making sure that the pod is not configured with hostNetwork set to true, and that the host ports are also set to nil, right? And the way this works is we actually run the check, and of course, if we are able to run a pod with that configuration, the test will fail.
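In other words, a pod spec like the following — names and namespace are invented — is exactly what the check expects the cluster's policies (e.g. Gatekeeper/OPA or Kyverno, as discussed earlier) to reject for a tenant:

```sh
cat <<'EOF' | kubectl apply -n tenant-a -f -   # should be DENIED by policy
apiVersion: v1
kind: Pod
metadata:
  name: host-net-probe
spec:
  hostNetwork: true          # shares the node's network namespace
  containers:
  - name: probe
    image: busybox           # placeholder image
    ports:
    - containerPort: 80
      hostPort: 80           # binds directly to the node's port 80
EOF
```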
B
Okay, yeah — because I thought you were checking — you probably would do the most strict check: like, you know, you do an actual ping test to see whether the IP is configured correctly, so that it's configured in a way that your pod can never access the host.
F
Sure — if you have admin permission to configure the CNI, then of course you can do whatever you wish, right? But here we're checking more at the user level. Most of these checks — yes, some will do things at the configuration level, but others will be behavioral, so we will try runtime access, like we were discussing across namespaces, things like that.
F
If you have some ideas on what exactly those tests would look like, let's discuss, because those would be interesting. We do want to add those tests across namespaces, like we were just talking about, but if there's also some way to check for well-known host resources, things like that, we can —
B
— add it, yeah. Because that's the main thing I wanted to talk about: at the moment, if I glance over all the tests, they seem mostly configuration-based. What I'm trying to point out is that the runtime-based ones are still very important.
F
Right, right — so some of those, yeah: if your cluster is not configured [correctly], or if there are some other issues there, then yes. So we are adding some runtime checks, and I think there were a couple here which are purely runtime — this one will actually, you know, bring up a pod and try to access different things in the cluster. But most of these are more configuration checks, checking at the pod level for various security best practices.
B
Yeah, okay. One more long-term thing: if this becomes a standard, you know, I'd like us to be able to say we have positive runtime tests — which, in my opinion, is much more important than the config checks. Because at the production level, if you want to talk with the higher-level guys, they don't look at your detailed implementation; they just say: show me the benchmark — that you have the runtime checks.
A
Okay — well, thanks, everybody, for that update. That was a little bit more involved and interesting than I thought it was going to be. So yeah, Fei, you and I will talk over the next couple of weeks.
A
Okay, thanks, everybody. This meeting will be posted onto YouTube, as always, and feel free to reach out on the mailing list or on Slack if there's anything we didn't cover here. Have a good week, everyone. All right — yeah, bye.