From YouTube: Kubernetes SIG Auth 20170125
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20170125
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A: Work for 1.6 is continuing. So, kubeadm: Andrew — I forget his last name — a new contributor, did a lot of work to get kubeadm able to set up an RBAC cluster, grant the right roles to the initial credential users, and things like that. So that was great. kops has the ability to turn on RBAC; I don't think it's doing anything else special to grant permissions to the various control plane components, but it's a good start. kube-up on GCE will enable RBAC by default. I am working on enabling the legacy ABAC policy if you use this to upgrade: if you have a 1.5 cluster that was set up using ABAC and you upgrade it, then in addition to turning on RBAC we would want to include the previous ABAC policy, so that your existing control plane components keep working. And I know some of the GKE folks are working on getting that set up as well. Yeah.
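The upgrade path described above — RBAC turned on with the pre-existing ABAC policy kept as a fallback — corresponds to running the apiserver with both authorizers in a union. A minimal sketch as a static pod manifest; the image tag and policy-file path are examples, and the flag names are from the kube-apiserver documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: gcr.io/google-containers/kube-apiserver:v1.6.0   # example tag
    command:
    - kube-apiserver
    # Authorizers are tried in order; a request allowed by either passes.
    - --authorization-mode=RBAC,ABAC
    # The 1.5-era ABAC policy file, carried over so existing control
    # plane components keep their permissions during the transition.
    - --authorization-policy-file=/etc/kubernetes/abac-policy.jsonl
```

Dropping the ABAC flags later completes the migration to pure RBAC.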
C: I'd encourage people not to do that. I just see, you know, the small set of users with the workflow of "create a new cluster, do some stuff" — this change will potentially break them. And if we give them a nice path — hey, this is the old way, keep doing it for as long as you want, but come do things the right way when you're ready...
A: We wanted an option to create a permissive cluster. I would rather see it be permissive using RBAC, just so that they could tighten it back up dynamically. I mean, that's what we will be documenting for RBAC users, right? You start up an RBAC cluster and you have your initial user, which is a superuser, and then you can delegate permissions out however you want — you can slice it up per namespace.

A: You can give some users power over the whole cluster. Starting with one powerful user and then delegating is the normal way you set up a cluster; that's the right way to do it. And if the way they want to delegate is to say, "you know what, I don't care, delegate all powers to everyone," then that's their call. That doesn't actually seem wrong to me — to document doing that with RBAC, or to give them an option to do that with RBAC. So, yeah.
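The delegation flow described above — start with one superuser, then hand out permissions per namespace — uses stock RBAC objects; the user and namespace names below are examples:

```yaml
# The initial superuser grants "alice" admin rights scoped to the
# "team-a" namespace, using the built-in "admin" ClusterRole.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admin
  namespace: team-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: alice
```

Alice can then create further role bindings inside team-a, but RBAC's escalation prevention keeps her from granting anything beyond what she holds.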
C: Maybe it's a specific GKE, or managed-offering, type of thing. I know that's harder for us to undo programmatically. So if we are creating a cluster for you, and you want to run it the old way until you're ready to flip it to the new way: if we create a bunch of RBAC bindings for you, and then you say, "alright, I'm ready, lock me down" — we don't know which ones we can delete automatically. And maybe it's fine for us to just say: alright, here are the RBAC policies we created; go clean them up.
A: Okay, I hadn't thought of that. So if they say "I want permissive RBAC," you can create roles and role bindings, label them as GKE permissive bindings, and then reap them later if you want to. Okay. I didn't have a lot of other things on RBAC. There are new roles being added: as we roll this out and find components that need permissions, we are creating roles for them.

A: Then, given a nicely secured cluster, how you would go about granting access to people — we're working on that. There's an issue about writing 1.6 documentation before the 1.6 branch is cut in the docs repo, so we're trying to work through that, but I expect that documentation to land somewhere probably within the next week.
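The labeling-and-reaping idea might look like the sketch below; the label key is invented for illustration, not an actual GKE convention:

```yaml
# A deliberately permissive binding: all authenticated users get
# cluster-admin. It carries a label so the provisioner can later find
# and delete ("reap") everything it created in permissive mode.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: permissive-binding
  labels:
    example.io/permissive-binding: "true"   # hypothetical label key
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```

Reaping is then a single label-selected delete, e.g. `kubectl delete clusterrolebindings -l example.io/permissive-binding=true`.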
A: I think it was — was it Mike? Are you on the call? Do you want to talk about the per-namespace DNS stuff?

B: Sure.
B: So, my memory of the existing RBAC may be not entirely correct, but it's relevant. Anyway, the DNS stuff is just one toe in the bigger waters of the multi-tenancy discussion we've been having. I'm, you know, busily trying to use the existing abilities of Kubernetes, which allow a certain kind of multi-tenancy to be imposed, and to work out how to do what I want to do — whether I can do what I want to do — and to contribute to the multi-tenancy discussion.
B: So it's not just namespaces, and groups, and namespaces. The vision is this: I'm talking to developers who work for organizations that are in the software-as-a-service business. Such an organization has multiple SaaS offerings and multiple customers, and those are cross-cutting: a given customer of this organization may buy multiple services from the organization, and for a given service...
B: ...of course, there are multiple customers. So it might be natural to say: oh well, we'll first do a hierarchy — say this organization has services, and the services have customers. That loses the fact that it's the same customer, even if it's a customer of different services. Cut it the other way, and you lose the fact that it's the same service that serves multiple customers. So you really want the two-dimensional nature to be up front and a priori — not to try to, you know, hack it.
B
It
pretend
that
it
too
little
hierarchy
is
a
2-dimensional
thing.
It's
not
so
the
trick
is
that
the
kind
of
access
is
that
they
need
to
allow
they
need
to
allow
in
one
such
organization.
They
need
to
control
that
can
allow
so
think
of
one
customer
of
one
service
as
having
one
namespace
and
also
for
a
given
service,
offering
they
typically
have
a
central
namespace
that
have
stuff
that's
central,
not
specific
to
one
customer.
This
is
stuff
that
needs
control
over
some
customer
stuff
and
also
has
central.
B: ...you know, services — which means network access to and/or from the customer namespaces. Similarly, there's also some stuff that's specific to a customer but not to a service. But it's still a two-dimensional matrix, and they need access between particular cells in this matrix — and they need to control it in terms of the cells of the matrix.
B
I
made
was
kind
of
a
baby
step
in
this
direction,
and
the
review
said
well,
let's
read
about
the
baby
step,
let's
just
go
or
we
want
to
go
so
that's
why
I
brought
up
my
understanding
of
where
my
my
users
want
to
go
so,
and
this
applies
not
just
it
access
in
terms
of
who
can
look
up
one
in
DNS.
It's
also
access
in
terms
of
who
can
access
one
and
the
API
who
can
connect
to
life
in
the
network.
E
Mike
I
was
gonna:
ask
do
you
think
that
in
most
cases,
services
DNS
and
network
are
correlated
heavily
in
tenancy?
Yes,
because
I
know
like
just
knowing
the
DNS
implementation
top
to
bottom?
This
is
one
of
the
original
things
they
punted
on
because
of
various
like
performance
with
a
challenges
but
like
even
in
the
very
beginning,
with
service.
E
As
we
said,
the
idea,
like
we
run
very
heavily
multi-tenant
clusters
today,
and
we
know
that
there's
a
fixed
upper
limit
for
services
and
the
fan-out
to
each
node
for
very
heavy
multi-tenancy
for
services
is
already
a
problem.
So
we
had
kind
of
assumed
that
some
point
this
segregation
would
have
to
happen
for
services,
namespace
and
DNS
for
sure.
E
You
talking
about,
though,
to
distribute
all
services
and
all
changes
to
all
services
to
all
nodes
is
quadratic
right
and
if
you
are
running
at
high
multi-tenancy
so
like
in
some
of
our
cases
were
running.
Ten
thousand
ten
minutes
on
the
same
cluster
for
ten
thousand
individual
services
is
actually
very
high
like
and
so.
B
First,
off
I
think
we
in
that
fan
up
from
you
bringing
in
a
dimension
that
I
wasn't
even
trying
to
talk
about,
which
is,
they
know,
is
hosting
stuff
from
a
variety
of
I,
get
a
little
nervous
about
the
word
tenant
because
it
tends
to
make
people
thinking.
One
dimensionally
and
I
really
want
to
keep
the
two-dimensional
or
a
Multi,
multi
dimensional
view
central,
let's
say
a
node
and
stuff
for
multiple
namespaces,
but
it
does
be
given.
A: That is, I think, some of the Service Catalog stuff. Right now the system doesn't actually know which services a particular pod is going to use — the pod just looks a service up and gets directed there. So having something explicit indicating that this pod uses this service — Service Catalog is interested in doing that, so that the platform can provide additional assets: things like the credentials you might need to use that service, or configuration you might need to use that service.
A
But
having
that
explicit
indication
that
this
pod
is
going
to
be
consuming
this,
this
other
service
could
be
used
by
q
proxy
to
limit
what
it's
watching.
So
we
don't
have
a
good
way
for
q
proxy
to
sort
of
cherry-pick
I
want
to
watch
these
thousand
services
scattered
across
these
500
namespaces.
We
don't
have
an
effective
way
to
do
that.
Right
now,.
B: Look, I mean, you can imagine a lot — I suppose I'm just going to represent what I'm hearing from my potential users, and I want to get back to the question I was asked: someone asked me something about large scale versus small scale. So let me reiterate: I'm dealing with organizations — I work for IBM, right, and we have some big organizations that are in the SaaS business. They sell, you know, pretty serious services, and today they typically sell to big customers, right?
B
They
want
to
get
in
the
business
of
selling,
more
smokes
and
more
small
customers.
So
such
an
organization
has
a
typically
a
handful
of
services.
You
know
as
many
as
they
can
manage
the
third
develop
and
maintain
with
their
development
team.
So
you
know
I'm
hearing
numbers
on
the
order
of
maybe
dozens,
something
like
that
say
dozens
of
services
in
such
an
organization
currently
with
maybe
hundreds
to
thousands
of
customers
made
me
more
and
and
hopefully
when
we
will
get
to
a
lot
more
ok,.
E
Because,
ok
and
I
bring
this
like
from
the
open
ship,
Red
Hat
perspective
like
openshift,
it
does
with
kind
of
two
tenancy
montt.
Well,
three
tendency
models:
cluster
per
tenant,
commonly
that's
kind
of
more
like
big
classic
I
run
I
own
this
entire
cluster
and
I
get
to
do
what
I
want,
but
would
potentially
like
weaker
subdivision.
So
within
that
you
know,
I
might
have
many
different
teams,
but
they
all
basically
report
to
the
same
management
structure.
So
somebody
really
goes
outside
the
rules.
E
You
just
fire
them,
and
you
know
the
security
boundaries
are
a
little
bit
more
classic
enterprise,
IT
security
boundaries
and
then
there's
the
extreme
multi-tenancy
which
would
be
you
know.
Tens
of
thousands
of
you
know
one
namespace
per
tenant
where
a
tenant
is
like
some
guy
off
the
internet
using
containers.
So
that's
like
the
open,
shipped
online
use
case
in
tens
of
thousands
of
individual
users
were
very
small
amount
of
resources
per
and
then
in
the
middle.
E
We
kind
of
also
see
that
organizational
tenancy,
we're
anywhere
from
you,
know,
six
to
hundreds
of
organizations
using
the
same
cluster
where
the
organizational
admin
might
have
a
big
chunk
of
control
over
the
namespaces
under
them,
or
a
set
of
resources
with
some
segregation
of
security
or
all
that,
but
that
fine-grained
role
of
it's
not
just
one
a
cluster
and
it's
not
just
one
per
namespace,
but
it's
in
between.
So
we
see
the
same
desires
from
these
cases.
E: And at that point, if DNS is being answered from the node and services are being answered from the node: once you're solving the service tenancy problem — "connections from X are allowed to talk to Y" — you've also solved, on that node, "connections from X are allowed to ask questions about Y." That's why I see those as similar. Okay.
B
Right
right,
so
blonde
right,
okay,
you've
got
an
invitation
that
ties
the
two
together
I
think.
Maybe
the
question
again:
I
tend
to
organize
things
as
first
the
API
and
then
the
implementation.
So
maybe
the
question
is
here:
do
we
want
to
tie
these
things
together?
In
the
API
I
mean
just
speaking
from
the
use
cases
that
I
understand
that
I've
been
hearing
about
yeah
the
DNS
is,
you
know,
being
used
in
order
to
look
up
services
that
are
then
going
to
be
accessed
in
the
network,
so
there's
a
correspondence
in
what
should
be
accessible.
A: Network policy is pod-selecting, and it's ingress-only. So, in the abstract, it seems like something like network policy — or an extension of network policy that also addresses egress — could be connected to the stuff the Service Catalog is talking about, which declares "I consume these services." If you declare that you consume these services, then you can describe a network policy that says: because you consume these services, you may egress to those network addresses.
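NetworkPolicy had no egress rules at the time of this discussion; egress support was added to the API later. A sketch of the kind of policy being proposed, with invented namespace and label names:

```yaml
# Hypothetical: pods labeled as consumers of a service offering are
# allowed egress only to that offering's central namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-to-consumed-service
  namespace: customer-a            # invented consumer namespace
spec:
  podSelector:
    matchLabels:
      consumes: billing-api        # invented "I consume this service" label
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          service-offering: billing   # invented label on the central namespace
    ports:
    - protocol: TCP
      port: 443
```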
E: Egress is enforced from the top down, because it's how you control access to the rest of the cluster. Ingress might be something the tenant themselves can control. And service injection, in a sense, is a desire: when you say "I want to consume a service," the thing that goes and provisions the service for you is the one that knows what you're allowed to talk to — or has been granted that, or through some other process gets approval — to inject and open up that network flow into that service.
B
On
this
topic,
we
have
debated
in
the
network
context
the
duality
between
ingress
and
egress,
and
there
is
someone
in
the
claim.
There's
a
duality,
and
it's
not
exactly
this
neat
suppose
you
might
think,
but
on
this
access
business
in
general
I
mean
if
you
focus
only
on
intra
cluster
accesses
its
gets
easier
to
talk
about
a
duality
and
it
I
think
I've
been
discussing
this
in
terms
of
what
I've
tried
to
suggest
in
the
PR
and
approach
based
on
namespace,
selectors
and
fine-grained
delegation
in
our
back.
B
So
the
idea
being
that
a
user
that
says,
if
a
has
access
to
namespaces,
suppose
we
can
express
this
accessibility
in
terms
of
namespace
selectors.
So
user
might
have
the
privilege
to
acts
the
namespaces
that
match
a
particular
selector
and
such
a
user
could
then
grant
or
another
user
access
to
namespaces
that
match
any
subset
of
that
selector.
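Nothing like this exists in the RBAC API; purely as illustration, a selector-scoped grant might be imagined along these lines (the kind and fields are invented):

```yaml
# Invented API object, not real Kubernetes: a role binding whose scope
# is a namespace selector. Per the delegation idea above, the holder
# could re-grant only to selectors matching a subset of these labels.
kind: SelectorScopedRoleBinding      # hypothetical kind
metadata:
  name: org-acme-admin
spec:
  namespaceSelector:                 # hypothetical field
    matchLabels:
      org: acme
  roleRef:
    kind: ClusterRole
    name: admin
  subjects:
  - kind: User
    name: alice
```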
B: ...versus the namespace-based enforcement we have today, right. This is, I mean, an expansion of RBAC so that it could do more fine-grained enforcement and delegation. The point I was trying to make was that, if we did it, I think we'd have a choice of directionality — because, ultimately, the control that needs to be expressed is between cells of the matrix I'm talking about. One way to think about it is: for a new cell, what cells can it reach out to? The other way is: for a new cell, what cells can reach in to it?
F: We already sort of have something in RBAC where you can say this particular user can see a service, for instance. So it seems like a logical extension that if you can see a service, you can hit it. But I don't know how well that falls into RBAC, because RBAC talks about users, versus, you know, an origin or something.
E
Say
that
a
service
account
is
intended
to
represent
the
concept
of
what
a
plot
can
do
so
in
other
contexts
we
extended
this
to
say
the
service
account
is
the
one
who
has
to
be
able
to
use
a
service
and
service
accounts.
There's
different
opinions
on
where
we
want
service
accounts
to
go
as
the
concept
of
it's
the
machine
entity.
That's
running,
regardless
of
whether
we
eventually
add
something
to
service
accounts
like
delegation,
which
is
this
service
account
of
on
my
behalf.
E
It
was
the
original
intent
of
the
circus.
Account
would
represent
that
identity
for
the
purposes
of
our
back.
We
started
down
that
path,
a
little
bit
with
secrets
and
then
backed
off
Wow
other
stuff
came
to
fruition,
but
for
pod
security
policy,
for
instance,
the
service
account
is
the
one
that
has
to
ultimately
have
the
permission
to
has
to
be
bound
to
a
policy
that
allows
it
to
run
so.
E: So David and I have been doing some planning around a couple of ideas recently, and we've talked about organizational multi-tenancy. In OpenShift we'd like to introduce organizations. We already have cluster resource quota — which I think will eventually make it into kube — which is a quota that applies to namespaces by selector. So you can say this organization gets access to, say, 700 cores, while individual namespaces might have their own limits, for instance.
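The OpenShift object being described looks roughly like this (shape per OpenShift's ClusterResourceQuota API; the name and labels are examples):

```yaml
apiVersion: quota.openshift.io/v1
kind: ClusterResourceQuota
metadata:
  name: org-acme
spec:
  selector:
    labels:
      matchLabels:
        org: acme           # applies to every namespace labeled org=acme
  quota:
    hard:
      requests.cpu: "700"   # shared budget across all selected namespaces
```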
E
If
you
wanted
to
swap
quota
across
700
cores
of
700
cores
across
15
namespaces,
that's
a
use
case
that
we
hit
pretty
early,
and
so
we
started
there
rather
than
starting
from
an
organization
working
backwards.
We
started
from
building
up
tools
that
allow
those
higher-level
concepts
to
be
expressed
so
with
organizational
multi-tenancy.
We
had
kind
of
already
been
naturally
thinking
in
the
idea
of
namespaces
being
label
selected
and
for
just
for
grouping
like
from
I
guess
an
administrator.
E
If
I
was
an
administrator
looking
at
a
cluster
and
there's
700,
namespaces
and
I
wanted
to
say
a
hundred
of
these
belongs
to
an
organization,
probably
the
natural
way
for
me
to
do.
That
would
be
to
label,
select
them
right,
that's
what
label
selectors
are
for
that
gets
into
a
couple
of
things
like
who's
allowed
to
change
those
label
selectors,
which
we
really
haven't
dealt
with.
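Grouping namespaces this way needs no new machinery — just a label convention; the key and values here are invented:

```yaml
# A namespace tagged with its owning organization. All of "acme"'s
# namespaces can then be selected as a group, e.g. with
#   kubectl get namespaces -l org=acme
apiVersion: v1
kind: Namespace
metadata:
  name: acme-billing
  labels:
    org: acme
```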
E
Although
they've
been
a
number
of
proposals
over
the
last
couple
years
on
that
we
do
have
sparser
multi-tenancy
cases,
which
is
kind
of
like
what
I
was
talking
about
before,
where
an
individual
might
have
one
or
two
or
three
namespaces
those
kind
of
tend
to
be
granted
individually.
So
that's
like
a
self-service
namespace
model
where,
instead
of
letting
in
any
user,
create
a
namespace,
you
come
up
with
some
sort
of
self-service
portal
or
some
sort
of
flow
that
allows
someone
to
say:
I
want
a
namespace
and
we
give
you
one.
E
It
has
a
kora,
and
then
it
works.
Okay
for
a
budget
for
the
first
year
and
a
half
there's
different
patterns
there
right,
like
you,
can
do
it
with
a
self-service
portal.
You
can
call
the
API
as
a
higher
privileged
user
or
you
could
come
up
with
a
quota
system
that
says
hey.
Every
user
is
allowed
to
create
a
namespace,
but
then
you
have
to
get
into
ownership
of
namespaces,
which
requires
some
sort
of
metadata
to
track
that,
and
so
we've
never
really
talked
about
quota
of
cluster
resources
on
that
vein.
E: That's a higher-level thing that goes and creates namespaces — programming down. And with both of those, there comes this case where we start talking about scoped integrations. A scoped integration would be: I want to go build an ingress controller and use it on a tenanted system. Well, the system administrator is not going to give me read access to secrets across all namespaces — and, for instance, an ingress controller today requires read access to all secrets.
E
Those
are
our
two
scopes
we
support
and
we
can't
efficiently
like
we
would
need
to
efficiently
solve
some
variant
in
between
label
selectors
on
namespaces
or
label
selectors
on
on
resources
themselves
at
the
cluster
scope
is
one
option
so
when
we
were
talking
about
this,
if
you
think
about
a
if
I'm
an
ingress
controller
and
I
need
to
access
certain
secrets,
I
need
a
permission.
That
scales
out
right,
like
every
individual
user,
needs
to
grant
me
permission
me
being
the
ingress
controller.
It's
account
permission
to
go
access,
a
specific
thread.
E
So
one
way
to
do
that
is
to
create
one
one
auerbach
rule
and
one
binding
for
every
single
secret
that
wants
to
be
exposed
and
there's
some
scaling
problems
with
that.
Another
option
would
be:
the
user
can
opt
in
to
exposing
that
secret
by
giving
that
secret
a
label
selector
that
the
ingress
controller
has
access
to,
and
so
we've
been
kind
of
playing
around
with
some.
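The first option above — one role and one binding per exposed secret — can be written with stock RBAC using resourceNames; all names here are examples:

```yaml
# Grants the ingress controller's service account read access to
# exactly one named secret in one namespace. At scale this means one
# Role + RoleBinding pair per exposed secret, hence the concern above.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: read-frontend-tls
  namespace: customer-a
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["frontend-tls"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-read-frontend-tls
  namespace: customer-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: read-frontend-tls
subjects:
- kind: ServiceAccount
  name: ingress-controller
  namespace: ingress-system
```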
E
This
idea
that
I,
don't
think
I
know
where
these
baits,
but
thinking
about
this,
as
what
is
the,
what
is
the
level
between
cluster
scoped,
an
individual
name
and
namespace
so
basically
like
that
the
idea
might
be
label
selection
of
some
form
or
calculating
indexes
of
what
permissions
you
have
on
individual
namespaces
in
either
case.
The
rest
of
the
issue
is
really
a
discussion
of
what's
there
but
I.
E: But I think, to your point: when you come at this from the other angle, programming down works great until you want to make API calls against the master that efficiently span multiple namespaces, and then everything falls over in our current model. So, in the short term, one of the questions I want to ask this group — from an engineering perspective, to get ideas flowing — is: if we ever want to have an integration that doesn't talk to the whole cluster...
E: ...we probably need a set of common mechanisms in place to enable that — whether that's RBAC, or efficiency in watching, or efficiency in listing. It's just going to have to happen, and so I'd like to get at least the discussion moving, so we can talk about it over the next — well...
E: I think it's solvable. David and I tossed around some ideas; there are some hard problems in there. Various people have experience with this — from enterprise access control systems they worked on in the past. These aren't impossible problems; the question is whether we think it's important enough to invest in in the short term. Probably. And I'd be interested in knowing who else has this problem today — Mike, I know you do.
E: I threw the secret-integration one in there just because it's kind of a weird one: I want to let users opt in to exposing their secrets, and we can do that through RBAC with many, many different RBAC roles, but that one's almost a specific set of resources within a namespace. But you're right — it's basically between namespace and cluster.
A: Just because it'll take multiple releases to get those mechanisms through the API and storage layers. I think some of the storage proposals were talking about richer indexes on top of the data, and then there are just the mechanics — authorization aside — of how you would even express "I want to watch this subset, sliced in these two dimensions." Yeah.
E
Yeah,
this
secret
thing
is
really
blowing
up
right
now
for
us,
because
people
are
finally
realizing.
The
secrets
are
secret
and
so
I
think
like
there
have
been
other
discussions
about
secrets
like
people
want
to
integrate
them
from
the
outside.
In
with
something
like
fault
service
account
tokens
being
commingled
within
user
secrets,
is
something
that
there's
been
various
discussions
about,
trying
to
tease
apart
I
when
we
talk
about
some
of
those
solutions.
I
think
it'd
be
useful.
D: Yeah — so Sarah asked for SIGs to give updates at the community meeting, and we have the contributor experience working group meeting tomorrow. I was wondering if SIG Auth would be willing to prepare a brief summary of the things you plan to work on during 1.6, for tomorrow's meeting? — Yeah.
A: One other thing to mention with RBAC: kubemark just started enabling it, so kubemark is now running the scale tests with RBAC enabled. There was a small memory bump — I think it was something like 1 to 1.5 percent — but the scale tests are running with it on, so that's good. Scale testing was one of the big things on the way to v1. So.
A: And yeah, for the isolation thing: API machinery — getting them involved early on the mechanisms — and then SIG Network and SIG Auth are kind of the API and network dimensions. I think it would be worthwhile, once we have specific questions, to set up a meeting where we pull in people from all of those and talk through things together. But yeah, we can group around the poll that you had, Mike, or the namespace-organization one that you opened, Clayton.
E: Actually, one point on encryption at rest: I do think we're gonna try and push that. Daniel had asked for reviewers and nobody spoke up, but I think I'm going to put myself on that reviewer list, just because it's coming up more and more for us. So if there's anybody else in this SIG who cares about encryption at rest, I would recommend getting involved in the proposal.
A: And I'll put in links to that. All right — that's all I had. I appreciate everyone's time. See you in two weeks, everyone.