From YouTube: Kubernetes SIG Auth 20170503
Description
Kubernetes Auth Special-Interest-Group (SIG) Meeting 20170503
Meeting Notes/Agenda: https://docs.google.com/document/d/1woLGRoONE3EBVx-wTb4pvp4CI7tmLZ6lS26VTbosLKM/view#
Find out more about SIG Auth here: https://github.com/kubernetes/community/tree/master/sig-auth
A
...and what they're trying to access, and then Jordan is going to talk about the scoped kubelet API access. I don't know of any other design of note. If someone has one, or a pull request that we need to talk about, you can go ahead and add it to the agenda while we're talking, and we will circle back to it. For now, is [inaudible] here?
B
So you can jump over to the GitHub PR if you want to follow along, but basically we are taking the access that nodes currently have and trying to partition the mutations that they're allowed to do to nodes and pods, so that they only get to mutate themselves and the pods that are scheduled to them. And then we also want to limit their ability to read secrets and config maps, in particular, to just the ones required by pods that are scheduled to them.
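The per-node read restriction described above can be sketched as a simple check: a node may read a secret only if some pod bound to that node references it. The Python below is an illustrative toy, not the real Kubernetes authorizer; the data shapes and names are assumptions.

```python
# Toy sketch of the node-authorizer idea: a node may read a secret only if
# a pod scheduled to that node references it. Data shapes are illustrative.

def node_can_read_secret(node_name, secret_name, pods):
    """pods: list of dicts with 'node' (binding) and 'secrets' (references)."""
    return any(
        pod["node"] == node_name and secret_name in pod["secrets"]
        for pod in pods
    )

pods = [
    {"name": "web-1", "node": "node-a", "secrets": ["web-tls"]},
    {"name": "db-1", "node": "node-b", "secrets": ["db-password"]},
]

print(node_can_read_secret("node-a", "web-tls", pods))      # True
print(node_can_read_secret("node-a", "db-password", pods))  # False
```

The real implementation has to track pod-to-node bindings as they change; this only shows the shape of the decision.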
B
So it's limiting the impact of a compromised node to itself and the workloads that were running on that node. I kind of laid out the two stages where we would accomplish this subdivision, for those who aren't familiar, and I'm actually going through now and making updates to this in response to some comments. For those who aren't familiar: we have an authorization phase, which all API requests go through.
B
The authorizer can say: yes, you are allowed to create a pod. But it can't decide whether you are allowed to create this particular pod based on some attribute, because that's inside the request body. And so you see in this proposal we're enforcing some things at the authorization point, and then later enforcing some things with an admission plugin. At that point we have parsed the request and we have the full API object, and so we can look at particular fields and make decisions based on those fields.
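The two phases split as described: the authorizer sees only request attributes, while admission sees the parsed object. A minimal sketch, with illustrative names and the `system:node:<name>` username convention assumed:

```python
# Illustrative two-phase check: the authorizer sees only (user, verb,
# resource); the admission plugin sees the full parsed object's fields.

def authorize(user, verb, resource):
    # Coarse-grained: is this node user allowed to create pods at all?
    return (verb, resource) == ("create", "pods") and user.startswith("system:node:")

def admit(user, pod):
    # Fine-grained: is this *particular* pod bound to the requesting node?
    node = user[len("system:node:"):]
    return pod.get("nodeName") == node

user = "system:node:node-a"
pod = {"name": "mirror-pod", "nodeName": "node-a"}
print(authorize(user, "create", "pods") and admit(user, pod))  # True
```

The point of the split: `authorize` runs before the body is parsed, so field-level decisions like `admit` must wait for the admission phase.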
B
So that's why this is divided into two enforcement phases. I laid out, for the node authorizer, what we would enforce there, and then, for the node admission plugin, what we would enforce there. I'm also putting together a future-work section for future improvements at the end. There were some questions along the lines of: well, nodes look at all sorts of things. They look at persistent volumes and persistent volume claims and other resources like that.
B
There
should
not
be
sensitive
information
in
those
API
objects.
To
my
knowledge
there
is
not
sensitive
information,
API
objects
and
so
that
the
important
the
most
important
things
to
do
first
are
protecting
config,
Maps
and
secrets
and
protecting
the
mutations
that
a
node
can
make.
That
would
affect
other
nodes,
so
those
in
pods.
So
if
we
want
to
expand
this
in
the
future,
that
is
certainly
something
that
we
can.
We
can
look
at,
but
we
want
to
get
this
initial
initial
version
in
to
protect
the
issues
we
know
exist.
This.
E
I have a follow-up question. I think that's one example of stuff that could be part of this, or that is currently planned not to be part of it. It would be nice to include in the proposal what we know we could potentially do as part of, you know, a later phase in the future. So, for example: should a kubelet be able to see all pods in the system, or only see pods that are scheduled to it? Same thing with the persistent volumes and claims. What about all of those other objects?
B
That's a good question, actually. So right now static pods are not allowed to reference secrets, and never have been, and it was actually my understanding that they were not supposed to reference any API objects. They were supposed to be statically defined, because they are able to run on a node that does not have a connection to an API. I don't think other types of API references are actually enforced, so I think you could set up a static pod that references a config map or a persistent volume claim, for purposes of this subdivision.
E
Yeah, and I think, to be clear, no one's proposing that this must all be done in one big chunk to ship. I think breaking it up into the authorizer and admission parts makes sense to me. It would just be nice to know how this fits in within the larger scope of planned work to get us there.
F
Alright, so anyway, one idea here, and this would actually move it to the authorizer: if there was a special namespace called, like, you know, kube-static or something like that, static pods would only show up there, and then the scheduler would know not to look at that thing. Then, essentially, that's a write-only sort of place, and that might be a way to move this earlier. But that's a more disruptive change, obviously, yeah.
B
So I would consider that like a strict mode. So if you try to do anything, and we say: oh, we see you're a node, but we can't actually figure out which node you are. Right now we're saying: well, you can fall back to the broad access, because we have to for compatibility. But I would like a strict mode that says: if we think you're a node, but we don't know which node, sorry, you have to tell us, so rejected. But that would not be the default.
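The strict-versus-compatibility fallback can be sketched as below. The `system:node:<name>` username convention and `system:nodes` group are how node identity is conventionally carried; the function itself is a toy, not the real authorizer.

```python
# Sketch of "strict mode": if the user is in the nodes group but we cannot
# derive which node it is from the username, reject instead of falling back
# to broad access. Username convention assumed: system:node:<name>.

def node_identity(username, groups, strict=False):
    if "system:nodes" not in groups:
        return None  # not a node; this authorizer has no opinion
    if username.startswith("system:node:"):
        return username[len("system:node:"):]
    if strict:
        raise PermissionError("node user without identifiable node name")
    return "*"  # legacy fallback: broad access for compatibility

print(node_identity("system:node:node-a", ["system:nodes"]))  # node-a
print(node_identity("kubelet", ["system:nodes"]))             # *
```

In strict mode the same legacy username raises instead of returning the wildcard, which is exactly the "you have to tell us" behavior described.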
B
Well, so RBAC is interesting. If we're going to switch to a per-node authorizer, RBAC will stop having opinions about nodes; this will be a special authorizer for nodes, just like we have a special authorizer for the API server talking back to itself. Maybe we want to avoid special cases, but I'm comfortable saying that every useful Kubernetes cluster will have nodes. So if you're going to have a bespoke authorizer, having it for nodes, or for something that has really complex transitive relationships, doesn't seem an awful thing to do.
B
If you haven't segmented out your nodes, then enabling it and having it ignore them is probably not great. All right, let me think about whether we want to do strict mode by default, where the way you would opt out is just to give your nodes permission some other way, which you must already be doing, and not use the admission plugin. Yeah, because it is a configurable chain. That's true. All right, I'll think about that and update the proposal.
F
I'm just thinking of, like, on AWS: the cloud provider in the kubelet munges the node name to make it fully qualified, so that DNS works for that host name, and it's not clear how that munging is going to work out if you're creating the certificates before you launch the kubelet.
C
I think the idea was also that we would pair this with all the work that's been going into TLS bootstrapping, because that will give your kubelets the exact correct certificate that they need to act as that node. So hopefully, as that work progresses and as more distributions switch over to TLS bootstrapping, then, I mean...
C
This is more just one of the gaps in the current TLS bootstrapping: the node won't request a serving certificate, it will only request a TLS client cert. The reason I linked the PR is because I think it explores a little bit of the current approval process for nodes going through TLS bootstrapping.
C
You know, if I'm using a bootstrapping token to request a certificate, maybe I do some initial check, but there is a different one if a kubelet has an established client cert and wants to request a paired serving cert. So I guess the conversation here is the possibility of expanding that approver built into the controller manager, to start being a little bit more smart than just: hey, you're part of this group, therefore we approve you.
B
Yeah, there were always multiple types of approvals that we were going to want to have; the flag was just sort of a quick-and-dirty, experimental (it's in the name) type of flag. So the types of approvals we have are: the bootstrap approval, where you can request a client cert for any node (that's where you're bootstrapping a bunch of nodes from one shared credential), and then there's the renew-my-client-certificate type of approval, where you ask for a client certificate that matches your current user.
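The two approval types just described can be sketched as a categorization step over a CSR. The field names here are illustrative stand-ins, not the real CSR API; the `system:bootstrappers` group name mirrors common bootstrap-token setups but is an assumption.

```python
# Toy categorization of CSRs into the approval types discussed:
# bootstrap (shared credential requesting a client cert for some node) vs.
# self-renewal (requested subject matches the requesting user).

def categorize_csr(requestor, groups, subject_cn):
    if "system:bootstrappers" in groups and subject_cn.startswith("system:node:"):
        return "node-bootstrap"
    if requestor == subject_cn:
        return "self-renewal"
    return "unknown"

print(categorize_csr("kubelet-bootstrap", ["system:bootstrappers"],
                     "system:node:node-a"))      # node-bootstrap
print(categorize_csr("system:node:node-a", ["system:nodes"],
                     "system:node:node-a"))      # self-renewal
```

As the discussion later notes, this categorization is environment-independent; what varies by environment is deciding whether a categorized request should actually be approved.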
A
We were trying to manage those with different controllers, as I recall; the controllers managed the approvals. Exactly. The intent was that we would not forever have an approve-all flag, and that approval would be managed separately, by some sort of check that could actually confirm that the kubelet had actually asked. Right, right.
B
So the different levels are: once you identify that a particular CSR falls into one of those categories, you can either just approve that type of request, to say any node can ask for a new client cert for itself, or you could go all the way to attestation or TPM checks against the node, kind of out-of-band. I would love to see those; I haven't seen those getting built. I'm not sure if Mike has, but yeah, you have a variety of options.
D
There are a couple of ideas that we've thrown around. One is to create a specific authorizer just for JWTs that we would receive from the metadata server, or the identity document that we receive from the AWS metadata server: just pass that as the initial bearer token, and we can then authorize that with a specific node name and a group that is allowed, according to a subject access review.
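The shape of that idea, treating a cloud identity document as the initial bearer credential and mapping it to a node username plus group, might look like the following. Everything here is hypothetical: signature verification of the document is elided, and the field name `instance_id` is an illustrative stand-in.

```python
# Rough sketch: map a (signature-verified) instance identity document to a
# node identity. Verification is elided; field names are illustrative.

def authenticate_identity_document(doc):
    # 'doc' stands in for an identity document whose signature has already
    # been checked against the cloud provider's public key.
    if "instance_id" not in doc:
        return None
    return {
        "username": "system:node:" + doc["instance_id"],
        "groups": ["system:nodes"],
    }

info = authenticate_identity_document({"instance_id": "i-0abc123"})
print(info["username"])  # system:node:i-0abc123
```

As the next speaker points out, the document alone is weak proof, since it can leak and be replayed, so a real design would treat it as one signal among several.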
F
So it's very easy for that to leak and move around. It's not something that is sort of a one-time-use or signed-payload type of thing, unfortunately. So I think, in my mind, that's best used as extra data to help, you know, verify this, but I wouldn't consider it good enough proof in and of itself.
F
I think the question is not so much what needs to be breached, because if somebody gets access to the metadata server, they own the machine regardless, to some degree, from an IAM-role point of view. But there are situations where people will take that identity document and hand it out to others, and not realize that they're actually handing out something that can then be used to impersonate that machine. I don't think many people use it in a mode where they consider it private, sensitive information.
F
The machine comes up, it bootstraps, and then, you know... it's worth digging into, and so I think it's worth considering a model where, whatever this authorizer is, instead of actually having one piece of information, we have a way that we can create some sort of score of how much we trust this, based on a plurality of pieces of data that we can get. Yes.
F
Anything that you get from the metadata server right now at Amazon: people tend to pass that around to third parties, and then those third parties could turn around and use it, right? There's no sort of challenge-response type of mechanism that you can use the metadata server for right now, so you can't create a unique token for one use.
E
The thing is, what you have proof of is something with access to the metadata server, or something that was passed something by something with access to the metadata server. Is that sufficient to guarantee that it is exactly the kubelet? The answer may or may not be yes. You combine it with things like, you know, being able to listen on a privileged system port, for example, or lots of other things. I think the meta point is...
B
I think it'd also be interesting to make sure we're dividing up the approval process well. So, like you say, there are going to be some environment-dependent aspects, right? Like, if you're running at Google versus Amazon versus bare metal versus whatever, "did this request come from this node" can be verified a lot of different ways, and that's very environment-specific.
B
The categorization that we were talking about earlier, like "is this a request by a node for a renewed client cert, or by a node for a serving cert", can be done independent of environment. So trying to avoid duplicating that categorization work among all the cloud providers, and letting them focus just on "given this CSR, and maybe some other data associated with it, thumbs up or thumbs down: did it come from this node", making sure we divide that up well, would be helpful in reducing work. Yeah.
B
I don't know, maybe that's a new condition. Like, wait, right now we have the approved and rejected conditions, I guess. Is there another condition that says: did someone verify that this request actually came from this node? Maybe that's a separate condition, where both the approved condition and the verified condition have to be present. I'm...
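The idea of a second, independent condition gate can be sketched in a few lines. The condition names `Approved` and `Verified` are taken from the discussion; treating them as entries in a condition list mirrors the CSR status shape, but the code is illustrative.

```python
# Sketch: require both an Approved condition and a separate Verified
# ("came from this node") condition before a CSR is eligible for signing.

def ready_to_sign(conditions):
    kinds = {c["type"] for c in conditions}
    return "Approved" in kinds and "Verified" in kinds

print(ready_to_sign([{"type": "Approved"}]))                        # False
print(ready_to_sign([{"type": "Approved"}, {"type": "Verified"}]))  # True
```

Splitting approval from verification this way lets different controllers own each condition, which matches the separate-controllers design mentioned later.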
E
Not
sure
so
I
think
what
you're
describing
is
a
lot
of
implementation
detail
and
you
have
to
step
back
and
say
what
are
the
attack
scenarios
I
care
about
right?
Why
am
I
rotating
these
node
service
and
if
somebody
has
leaked
one
or
compromised
it,
allowing
someone
who
is
compromised
it
just
arbitrarily
renew?
E
It
is,
of
course
not
going
to
help
your
security
properties,
so
just
because
I
had
a
search
for
the
node
doesn't
mean
I
should
always
forevermore
and
therefore
I
think
you
have
to
have
some
more
claims
as
part
of
that,
like
the
verification
process
to
expire
after
1224
hours,
it
has
to
be
revalidated
that
it
really
is
the
cubelet.
On
that
note
not
a
container
on
the
note,
not
somebody
who
sneaked
it
in
exfiltrate
jism.
A
When the API was originally designed, we had discussed how approval would actually happen, and we had discussed it in terms of having separate controllers that could be managed or created by anyone with the right API access, to provide the signature and the result. And so the API was based around: when you create it, this is the assertion of who submitted the request to the server; use that however you will, to do whatever verification you wish to do, to determine...
F
I think there are two pieces of this puzzle. There's the: how do we actually create a way for the kubelet to add extra identifying information to a CSR, which may be domain-specific or environment-specific? And then there's, on the server side: how do we have a pluggable model for something to evaluate that extra proof, and use that to make decisions in terms of whether it approves the CSR or not?
D
Yes, yes, I can try to extend that; I was unclear. Did you mean that the group approver in its current state is not expected to stick around long term? Or do you think that, even with the extension of subject access reviews for specific kubelet uses, if we had, like, a node-specific approver that did some subject access reviews, would you see that sticking around for longer? Or, yeah, yeah.
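A node-specific approver that defers the policy question to the authorization layer might look like the sketch below. The `allowed` callable stands in for posting a real SubjectAccessReview to the API server; the subresource-style permission names mirror Kubernetes conventions but should be read as illustrative.

```python
# Sketch of an approver that categorizes a CSR, then asks the authorization
# layer (stand-in for a SubjectAccessReview) whether the requestor holds the
# permission matching that category.

def approve(csr, allowed):
    """allowed: callable(user, verb, resource) standing in for a SAR."""
    resource_by_category = {
        "self-renewal": "certificatesigningrequests/selfnodeclient",
        "node-bootstrap": "certificatesigningrequests/nodeclient",
    }
    resource = resource_by_category.get(csr["category"])
    return resource is not None and allowed(csr["requestor"], "create", resource)

grants = {("system:node:node-a", "create",
           "certificatesigningrequests/selfnodeclient")}
allowed = lambda u, v, r: (u, v, r) in grants
print(approve({"category": "self-renewal",
               "requestor": "system:node:node-a"}, allowed))  # True
```

The design point is separation of concerns: the approver only categorizes; who may exercise each category is expressed as ordinary permissions.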
B
The signing controller, you just give it a signing cert and key and say: sign anything that was approved, and that's broadly applicable. A controller that does permission checks might work for some clusters and might not work for other clusters. I would feel weird telling someone: oh, you've got to run this controller and it'll sign, and maybe it'll do permission-based checks, but you can disable that. It's much easier, much cleaner, to just tell someone: run the signing controller if you want to sign; run the permission-based approving controller if you want permission-based approval.
F
I think, on the kubelet side, and this is something to actually look at with the sum totality of this feature: is there a way where we could have, like, a dot-d directory where you drop in a program, similar to CNI, where it says: OK, as we actually are asking for a cert, we're going to run the programs in this directory and give them a chance to modify the CSR in some format before it gets sent forward, right?
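The CNI-style dot-d idea could be sketched as below: run every executable in a configuration directory in order, piping the CSR in on stdin and reading the possibly enriched CSR back on stdout. The directory path and the stdin/stdout protocol are made up for illustration; no such mechanism exists in the kubelet.

```python
# Hypothetical ".d directory" plugin runner: each executable in the directory
# gets the CSR bytes on stdin and emits (possibly modified) CSR bytes on
# stdout, CNI-style. Path and protocol are illustrative assumptions.

import os
import subprocess

def run_csr_plugins(csr_pem, plugin_dir="/etc/kubernetes/csr-plugins.d"):
    data = csr_pem
    if not os.path.isdir(plugin_dir):
        return data  # no plugins configured; pass the CSR through unchanged
    for name in sorted(os.listdir(plugin_dir)):  # deterministic order
        path = os.path.join(plugin_dir, name)
        if os.access(path, os.X_OK):
            result = subprocess.run([path], input=data,
                                    capture_output=True, check=True)
            data = result.stdout  # each plugin may add identifying info
    return data
```

Running plugins in sorted order and chaining stdout to the next plugin's stdin keeps the contract simple, at the cost of the distribution and latency concerns raised in the next remark.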
D
Yeah, this quarter, I know Jacob, Beacham, and Chris, and Jacobsen, have all been looking at that, specifically for GCP, and we have discussed exec-based plugins. There's going to be a lot of variability in what is required: some are going to be binaries that run in, like, you know, less than a second, and then some are going to need to start up web servers, depending on, I guess, the approval process. And then there's the distribution of these exec plugins, which was discussed and kind of adds a bit more complexity.
F
We've been talking about this in the context of Kubernetes, but it's actually a more generic problem than that. This is something that I've been talking about forever, and there's a group of people, some of them sort of in the Kubernetes orbit and some outside of the Kubernetes orbit, that are starting to get more and more serious about this. The idea is: can we take all of that identity control plane stuff (who has access to what certificates, how do we serve them up, how do we rotate them)...
F
How do we verify a node and then use that to bootstrap other types of trust? Can we break that out into a system that actually runs beside, outside of, underneath, and works with Kubernetes? And so, if we had something like that, then the kubelet would defer to SPIFFE for getting its identity and its certificates, and then that same API or mechanism for how the kubelet gets its identity could then be used for workloads running inside of Kubernetes, so that they could then use that identity to talk to each other.
F
So it sort of breaks down into three problems. Number one is: how do we encode identity into a certificate, and how do we verify that certificate? By encoding the identity, it means, you know, coming up with some URN-type thing that we can put in the common name, or the subject alternative names, that everybody can agree upon; something that is a global namespace that everybody can come down to. How do we verify that?
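A URN-style identity of the kind described, in the shape SPIFFE later standardized, is a URI naming a trust domain plus a workload path, carried in a certificate's subject alternative names. A minimal sketch of encoding and parsing such an identifier (the exact path layout is illustrative):

```python
# Sketch of a SPIFFE-style URI identity: a trust domain plus a workload path,
# of the kind that would be carried in a certificate's SAN extension.

def make_spiffe_id(trust_domain, *path):
    return "spiffe://" + trust_domain + "/" + "/".join(path)

def parse_spiffe_id(uri):
    if not uri.startswith("spiffe://"):
        raise ValueError("not a SPIFFE ID")
    rest = uri[len("spiffe://"):]
    trust_domain, _, path = rest.partition("/")
    return trust_domain, path

sid = make_spiffe_id("example.org", "ns", "default", "sa", "web")
print(sid)                   # spiffe://example.org/ns/default/sa/web
print(parse_spiffe_id(sid))  # ('example.org', 'ns/default/sa/web')
```

The trust-domain component is what ties an identity to a particular root of trust, which connects directly to the "which roots apply to which parts of the namespace" question below.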
F
So
the
problem
is
not
just
distributing
client
certificates
or
server
certificates.
Projects
certificates
sign
certificate,
but
it
also
roots
right
because
right
now
we
either
pass
in
a
bundle
of
routes
to
everybody
and
now
there's
a
separate
problem
of
how
do
you
actually
update
that
bundle
of
routes
that
everybody's
relying.
F
And then there's the question of which roots apply to which parts of the namespace. So it's: how do we encode this stuff into certificates, how do we deliver those certificates and those identities to the workload, and then a reference implementation of this that sits outside of any one orchestrator or system like Kubernetes. And then...
F
Finally, how does this work with Kubernetes, as sort of an outside controller, such that you could, you know, by convention, as part of, like, maybe an admission controller, or built into the service accounts, say we map a service account to a SPIFFE ID. Every pod that's launched with access to that service...
F
...account then gets a certificate that lets it act as that service account, not just to the Kubernetes API, like with the JWTs that we have now, but also between user services that are running within a cluster. Because right now, service accounts are useful in a star topology, for everybody talking in to the API server; they are not useful for workload A talking to workload B.
F
It's been floating around for a while; there's a lot of discussions. I think one of the goals out of this is that it's not just within a particular cluster, but between clusters, between different orchestrators, between orchestrated containerized workloads and non-containerized workloads, and then also, you know, given the right saturation, this could be used across the internet for talking to web services. So I just wanted to, you know... this is, in some ways...
F
This is inspired by a system at Google called LOAS, and I know that, you know, some of the Istio folks have also been looking at this, in terms of: if the workload itself doesn't want to consume and use those certificates, can this be done in a sidecar proxy? So I just wanted to get the word out there, because this is starting to be an area that a lot of folks are starting to look at and invest in, I think, within Kubernetes.
F
It has a handful of flags, which are TLS-this and TLS-that, and then you have to configure both sides of it, and I've seen that customers have an incredibly hard time creating, managing, and specifying the right certs across, you know, dozens of flags. It's probably not dozens yet, but it'll get there if we continue down this path. So how can we actually reduce this down to the point where things run in a SPIFFE-enabled environment, and then you just say so-and-so is allowed to talk to so-and-so?
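The end state described, "so-and-so is allowed to talk to so-and-so", reduces configuration to a policy over identities rather than piles of TLS flags. A toy sketch, with made-up identities and an in-memory policy standing in for a real policy store:

```python
# Toy policy over SPIFFE-style identities: instead of configuring cert flags
# on both sides, each workload presents its identity and a single policy
# table says who may talk to whom. Identities and store are illustrative.

ALLOWED = {
    ("spiffe://cluster.local/ns/default/sa/web",
     "spiffe://cluster.local/ns/default/sa/db"),
}

def may_talk(client_id, server_id):
    return (client_id, server_id) in ALLOWED

print(may_talk("spiffe://cluster.local/ns/default/sa/web",
               "spiffe://cluster.local/ns/default/sa/db"))  # True
```

In a real deployment the identities would be proven by mutually authenticated TLS, with the policy check happening after certificate verification.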
G
This definitely piques my interest. So, I'm Greg, and I'm at Google on the Kubernetes team. One of the use cases we've kind of been running into is basically secrets, and interacting with external secret stores, and so having some sort of identity that we can use, you know, to talk to services like Google KMS, Amazon KMS, Vault, etc. So some sort of externally recognized identity is, you know, a thing we're bumping into right now, and we've got some kind of secrets...
F
One of the realizations that actually came out of this is: I looked at HashiCorp Vault, and then, from my experience at Google, I'm like, there's really nothing that maps to Vault. At least back then there were, you know, some systems that were explicitly not supposed to be used for production certificates, and then the question was: why did Google not have to invent something like HashiCorp Vault?
F
And my takeaway from that is that, because LOAS existed, there was very little need to store credentials for your MySQL database, right? Because, you know, BigTable knew how to consume LOAS credentials natively. And so the realization there is that a large percentage of the use of secret stores is to really act as an identity translation service, right? So people put a secret in there...
F
They present themselves to the secret store with one identity so that they can obtain proof of a different identity, and so it's kind of like a Kerberos ticket-vending system in some ways, but in a more generic way. And so the root of the problem is that we don't have a sort of lingua franca around identity. So can we start establishing that? Secret stores aren't going to disappear anytime soon, but we can at least start driving some commonality around: how do we...
G
I think it actually unlocks a lot of doors. So once you have that, I mean, internal to Google it's basically a dial tone for the kind of stuff that you run internally, right? So you have this identity; it's just there and transparent, ready to use. And I think we'd like to have something like that for Kubernetes, where, you know, your identity is just there, there's not a huge amount of effort, and you can use it to talk to external things.
F
The beautiful thing is that, from the developer's point of view, they get a secure-by-default environment without actually knowing that they're getting it. It's really a pretty beautiful thing. So that's kind of the hope out of this. I just wanted to get the word out there; I think, you know, folks are starting to light up around this stuff. So feel free to reach out to me, and we can have some conversations around that.