From YouTube: Kubernetes sig-aws 20180629
A
Hello everybody, it is Friday, June 29th. This is our bi-weekly SIG-AWS meeting. I am your host, or co-host today, Justin Santa Barbara at Google, and we have a fairly full agenda. Please, if you do have things you want to discuss, make sure they are on the agenda. Otherwise, let's just dive in. Our first item is about pod IPs with NLBs, and Micah, I presume you added that; do you want to take the floor for a minute? Oh, you are muted, Micah.
B
There we go. Okay, great, two different mute buttons. So yeah, thanks.
So one of the things that I had heard people asking about is basically the ability to use a network load balancer in combination with the VPC networking with our CNI provider. It's on the VPC network, the same network as the nodes. So why not directly route to pods instead of nodes? That's something I wanted to sort of bring up for discussion.
B
I put up some follow-up thoughts last night, kind of thinking that it could be very similar to the current NLB implementation. But one difference is that, I think, the current cloud provider gets called whenever there's a node change, but not when there's an endpoint change on a service.
B
If the service itself has changed, like the port has changed or an annotation has changed, the cloud provider will call UpdateLoadBalancer or EnsureLoadBalancer for the service, but I don't believe that happens for endpoint changes, so it might need to be an out-of-tree thing. I don't know if you had any input on that, Justin.
A
I think the big gotcha with these sorts of things in the past has been: if the cloud provider, AWS or GCE or whatever your backend is, can't keep up, or has latency, the behavior could be awkward. But I presume with NLB it's okay, and so I think it would be great to do it out of tree, at least initially, as a prototype; whether it goes into the cloud controller manager or in tree, I don't know.
A
I would also suggest it'd be great to get it in some way that's generic, right? I don't know whether that's practical, but it feels like the Lyft CNI provider has sort of similar functionality, and so if there's some way we can make it something that could be used generically, I think that'd be great. I don't know whether all services would use it; presumably not, presumably some annotation. And I think there is today an annotation for, what is it, node-local services, where they open up...
A
I think it's on GCE primarily where, with a load balancer, you can open up a second health check port, and the traffic will just skip nodes that don't have any pods running on them. So it avoids some of the hops inside the cluster. That sort of approach might be nice in terms of how the annotation is expressed. But if you can do that, I'd love to see it; I think it can probably live out of tree, outside of kubernetes/kubernetes certainly.
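(For reference, the node-local behavior described here is what Kubernetes exposes as externalTrafficPolicy: Local plus a health check node port. A minimal sketch in Go; the service name and selector are illustrative, not from the meeting:)

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A LoadBalancer service with externalTrafficPolicy: Local. The cloud
	// load balancer health-checks a dedicated node port and skips nodes
	// that have no local endpoints, avoiding the extra in-cluster hop.
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "my-app"},
		Spec: v1.ServiceSpec{
			Type:                  v1.ServiceTypeLoadBalancer,
			ExternalTrafficPolicy: v1.ServiceExternalTrafficPolicyTypeLocal,
			Selector:              map[string]string{"app": "my-app"},
			Ports:                 []v1.ServicePort{{Port: 80}},
		},
	}
	fmt.Println(svc.Name)
}
```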
B
Sure, I think I'll probably do a prototype out of tree just initially, because of needing to update on endpoints, not just node updates. And then, the point of this is also to bypass kube-proxy entirely, so not use either Local or Cluster. It might just be, I don't know, a CRD, and I don't know, more discovery, I guess, to actually get that correct.
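(A minimal sketch of what such an out-of-tree prototype might look like: a controller that watches Endpoints, which the in-tree cloud provider does not do, and reconciles load balancer targets on every change. syncTargetGroup is a hypothetical placeholder for the AWS RegisterTargets/DeregisterTargets calls:)

```go
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	epInformer := factory.Core().V1().Endpoints().Informer()

	epInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(oldObj, newObj interface{}) {
			ep := newObj.(*v1.Endpoints)
			// Collect the pod IPs behind this service.
			var ips []string
			for _, subset := range ep.Subsets {
				for _, addr := range subset.Addresses {
					ips = append(ips, addr.IP)
				}
			}
			// Hypothetical helper: point the NLB target group at the
			// pod IPs directly (IP target type), bypassing node ports.
			syncTargetGroup(ep.Namespace, ep.Name, ips)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {}
}

func syncTargetGroup(ns, name string, ips []string) {
	fmt.Printf("would sync %s/%s -> %v\n", ns, name, ips)
}
```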
A
As you check it out, there's definitely some... I mean, we're starting to get towards Ingress and a sort of similar feel there, where we have these Ingresses turning into one big API object, and we're thinking about how to split it up or how to provide all these different implementations. So your experimentation on CRDs or not will be very interesting, and I think, personally, I'd probably do it as a start by keeping it as a service of type ClusterIP and then have an annotation.
A
I may not be the best person to explain, but my understanding is it's sort of a trick to avoid having to reprogram the load balancers all the time: you create a secondary port and there's a health check, because health checks are much faster than reprogramming that sort of control plane. I don't know if anyone else has more experience. Okay.
B
Hi, yeah, I thought this was built into NLB. Actually, ELB doesn't have this support, but NLB does, where you can set Local and the NLB health check won't hit the service port; it'll hit a health check port and only route traffic to that node if there's a pod for that service running there. Yeah.
A
I like the pod approach. We could probably implement Local in a different way than a health check, if you have a backend that can reprogram fast enough and supports weights, but otherwise the pod route seems like a nice one to investigate. It seems like it's the next step on the CNI; a logical next step for that ENI approach that you guys are working on.
E
This would specifically work for CNI providers where you use the VPC-native IP addresses, right? So for overlays, you have to use the existing behavior, because overlays are technically not part of the VPC network. Those IPs are not routable, which is why it's going through the NodePort. I want to wager that kubenet might work, but it's still TBD to figure out whether this change is okay with the kubenet approach. But...
F
This would also work kind of generically. Basically, the requirement is that your load balancer can address pod IPs directly, which is true of this CNI plugin in AWS, but is also true of, like, bare-metal Kubernetes with Calico in layer 3 mode. There are lots of ways you can achieve that. So agreed, yeah.
D
So I think I can introduce this, but I think if Paul is on the line, he could do most of the talking; I'm not sure if he's here. But basically, we're trying to solve the problem of getting IAM credentials to pods. We talked about this a little bit in the previous SIG-AWS meeting, but I'll just sort of go over some of the basic approaches that various people in the community have thought of. One of them is using EC2 metadata, and there are two approaches there.
D
So the proposal, the KEP that was just written, is sort of an iteration of a project called kiam, and it's one of the EC2 metadata approaches. There's an agent, which in the original work ran as a DaemonSet, and then there's a server. The pod requests credentials from the agent, I believe, and then the agent forwards the request to the server.
D
The validation is done on pod IP; there's mutual TLS from the agent to the server, and the server actually makes the AssumeRole request and then returns the credentials back to the agent, I believe. Correct me if I'm wrong, if anyone on the call has a better understanding, and if anyone wants to dive in and explain that a little bit better... I was just going to raise some concerns I specifically had with the KEP, but right now I'll just open up the floor for anyone else to add to that.
G
I have concerns over the caching that goes on, but I think at a high level I agree with your summary. If we get into the details of the caching of roles, there are some non-repudiation-type use cases in CloudTrail that concern me a little bit, but I think that's lower-level than where we are right now, right?
D
The KEP, instead of using a DaemonSet, has a sidecar and an init container, and so my concerns are that injecting that into every single pod is a bit of a burden on the user, more from an operational standpoint. And, you know, I don't know if, for example from EKS's perspective, we want to be injecting a bunch of containers into every single pod; from a user's perspective, that would just confuse everybody...
D
...anyone that's trying to just be a customer of the cluster, right? Like, we've got, I don't know, a few hundred engineers that are trying to use Kubernetes; they expect to get out what they put into it, and if you start mutating what they're doing, it confuses them. Totally, yeah. So there are some... maybe, Micah, you can talk to some of the benefits of that.
G
I agree. I don't like the fact that I'm now forcing a sidecar container in there; I don't like that. The challenge I'm having is how you actually get unique identification capabilities, because if I don't do something to get the service account, right now I'm struggling to get a unique pod ID. So let's say you have hundreds of users deploying thousands of containers, and a node gets compromised; if anything happens, that actually starts to go down the line.
F
Is it a problem to allow the node to attest the pod? Like, if you had the proxy running on the node, do we not consider that, if the node attests "I'm making this request on behalf of this pod, and I know that that pod is running on that node," is that not good enough? So...
G
That puts us running on something next to the kubelet, like the ECS agent, right, in that model. No, I think at the end of the day I expected EKS to end up there, and so I'd be happy if we ended up there before EKS did, actually. I just didn't know who wanted that implementation, because now either Amazon's going to take that back upstream, or we're going to be in a little bit of a forked version there.
A
So, to my mind, they actually aren't that different. In other words, I think that most, like 90 percent say, of the code is going to be the same, and then you could have two separate configurations, one of which runs as a sidecar and one of which runs as a DaemonSet, I think. And there's a little bit of lookup, like: how do I get the service account, right?
A
Do I just pull it from my local file, or do I look up the IP and do that sort of thing, right? So I don't want to rathole on that too much. If we think there are two viable configurations, one of which is that we don't love the sidecar pod, and the other of which is that maybe some people don't want to run a DaemonSet because maybe it's less secure, then we can maybe support both, right? I don't see them being that fundamentally different.
F
I actually have some security concerns about the sidecar. The sidecar approach relies on rules that live in a pod's network namespace in order to do that interception of the metadata API, so any pod that runs with the NET_ADMIN capability can override that and then send its request, presumably, to the real instance metadata API, and then assume the credentials for the node, which is bad.
F
The other thing is that what you want to do is intercept just the credentials and allow other kinds of things to go through, because a pod may have a legitimate reason to know what availability zone it's running in. And so, as soon as you're saying "okay, we're going to allow particular HTTP requests to go through and other ones to be blocked," you're running a proxy on the node, right? That's just the situation that you're in.
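(A minimal sketch of the node-level proxy being described: intercept only the credential paths of the instance metadata API and pass everything else, like the availability zone, through to the real endpoint. credentialsForPod is a hypothetical stand-in for the agent's pod-IP-to-role lookup and AssumeRole call:)

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

func main() {
	// The real instance metadata service; non-credential paths pass through.
	target, _ := url.Parse("http://169.254.169.254")
	proxy := httputil.NewSingleHostReverseProxy(target)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.URL.Path, "/latest/meta-data/iam/") {
			// Hypothetical lookup: map the caller's pod IP to its
			// annotated role and return per-pod credentials instead
			// of the node's credentials.
			w.Write([]byte(credentialsForPod(r.RemoteAddr)))
			return
		}
		// Anything else (e.g. placement/availability-zone) passes through.
		proxy.ServeHTTP(w, r)
	})
	log.Fatal(http.ListenAndServe(":8181", nil))
}

func credentialsForPod(remoteAddr string) string {
	return `{"Code":"Success"}` // placeholder for assumed-role credentials
}
```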
G
I agree. A quick question: do you have any interest in separating the concepts? ECS separated the concept of that metadata URL, and there are actually two real URLs. There could actually be a container metadata URL that provides some of that information, and there might still be an actual node metadata URL.
G
We blended them together in ECS, because Amazon gave us a nice solution where it worked, right? But now, looking at it as a Kubernetes operator, where I have to operate these nodes, I feel like there's actually a unique use case for both of them that you're really bringing to light there. Yes.
E
I mean, it has to be secure by default, right? The pod really cannot be reaching into the instance metadata URL, because, I mean, you could have two different endpoints to talk to, but if the pod has access to the instance metadata URL, it could get the credentials, right? The whole idea being you don't want this pod to get the instance credentials. Which means that you would have to go the route of blocking the instance metadata, in addition to providing the secondary endpoint that you're providing to the pod.
I
Cool, yeah. There's a section called "Intercepting AWS APIs" where we actually do say that what we will present is ECS-style metadata, not really instance metadata. Is it helpful? Should we actually talk formally in that KEP about issues around non-repudiation and things like that? Because I think those are going to be where you might make that decision around using the DaemonSet versus something else. And maybe if we just document those choices clearly, then... oh yeah.
G
There's another option for getting AWS credentials: Vault has the STS API in it as well, and so, if we chose not to run kiam, we could run Vault outside the cluster, and I could configure that agent, instead of talking to the kiam server, to authenticate to the Vault cluster using its Kubernetes auth method, and have Vault do the AssumeRole. There are some advantages, some people might argue, in there being...
G
...the audit trail is now synced to Vault's, and if I'm a large corporation running Vault, I have one audit trail and I know what that audit trail is like: it was encrypted and protected all in the same fashion I decided. But it would require additional configuration for us to do it, and it would require the service account token, whereas kiam could work without the service account token. So it's definitely not free.
F
Ideally, we would have a solution that doesn't suffer from that drawback, like mutual TLS or something like that. But, I mean, you know, your point is that, well, Vault uses those, and so if you want to integrate with Vault, then you need to do that; but there are some downsides to using those tokens.
F
Well, you could have the node authenticate itself to the server, and then it could just make an attestation saying "this is on behalf of this container," and then the server is watching the Kubernetes API and knows that that container really is running on that node. And then we could call that sort of good enough. It's basically all we can do, because ultimately every credential that you put into a pod is put there by the node in the first place.
B
If you could use the service account token, and this is a lot of "ifs," but if you could use a service account token, and if Kubernetes was an OIDC provider itself, you could use AWS IAM federation, because IAM supports OIDC federation. And if you could use that service account token, hypothetically, as the federation token against the STS API, then essentially a service account token becomes an IAM identity in itself, or can become one.
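(A sketch of what that federation flow could look like with aws-sdk-go, assuming a Kubernetes-issued service account token that IAM trusts as an OIDC web identity token; the role ARN and token path are illustrative assumptions, not from the meeting:)

```go
package main

import (
	"fmt"
	"io/ioutil"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// Projected service account token; conventional in-pod mount path.
	token, err := ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/token")
	if err != nil {
		panic(err)
	}

	// AssumeRoleWithWebIdentity is an unsigned STS call, so anonymous
	// credentials are enough: the token itself carries the identity.
	svc := sts.New(session.Must(session.NewSession()),
		&aws.Config{Credentials: credentials.AnonymousCredentials})
	out, err := svc.AssumeRoleWithWebIdentity(&sts.AssumeRoleWithWebIdentityInput{
		RoleArn:          aws.String("arn:aws:iam::123456789012:role/pod-role"), // example
		RoleSessionName:  aws.String("my-pod"),
		WebIdentityToken: aws.String(string(token)),
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(*out.Credentials.AccessKeyId)
}
```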
F
That means that we're no longer proxying the metadata service, and instead the pods just talk directly to STS. So that's a much bigger change to the SDKs. I wonder if anyone from Amazon wants to weigh in on the feasibility of that. I guess we already saw at least one problem where projects were built with old versions of the SDKs and then deployed in Kubernetes and didn't work. Is that actually a viable route to take?
D
It's not that it's not possible. I did have a conversation with a member of the SDK team, specifically about using refreshable credentials, and it wasn't a non-starter, but they were resistant to it. So we would definitely have to have a conversation with them and push for it. It could be a long process, but...
B
There is precedent already in Amazon, as you can see with ECS having a second metadata endpoint, so I do think that it's possible. But I think if there was support from Kubernetes, and it was seen as a good idea and it had been vetted and everything, I think that could happen. But, like you said, it would take time: you'd have to have all the SDKs updated, and applications would have to use the newer SDKs that have that. So that is one of the risks of this.
F
That has some nice properties, because there is work in the Kubernetes pod identity working group to make those tokens that are issued to pods more useful: make it so that they actually have an expiration date, that they have an audience, things like that. But it's going to take a while for Kubernetes to actually support that, and so I don't know what kind of time frame people have to get something working and blessed here.
G
We also liked the OIDC solution; that was one that we liked. We went away from it after some conversations where we thought IAM was going to kind of rule the auth pattern here. I kind of shared that with Nick yesterday, when I met with him, and I'll say it again here: if we were to go down the OIDC path as a long-term solution, I think we would be interested in figuring out where that was going.
A
I think there's a great KEP; I'm very happy to see that, thank you for doing that, and let's continue the conversation on the KEP. Yeah. The next item, I think, is not entirely unrelated: it looks at the authenticator plans. So the IAM authentication layer, which is different from what we just discussed, which was getting IAM credentials into the pods; this is, I think, using credentials to access the API. So, as I understand it... who is...
J
So
I
put
this
proposal
in
and
I
kind
of
want.
I
want
feedback
on
the
design
before
I,
actually
dive
into
the
code.
I
started
building
out
a
proof
of
concept
of
it.
Couple
iterations
of
it,
but
the
idea
here
is
the
current
structure
of
the
I
am
Authenticator
uses
a
config
map
that
mounts
a
file
directly
into
the
root
of
the
the
the
back-end
webhook
server.
And
what
ends
up
happening
is
every
time
you
do
an
update
to
its
in
the
open
source
project.
J
You
have
to
go
and
actually
re
reload
the
daemon
set
to
actually
make
that
apply
across
the
whole
cluster.
So
you
won't
have
rolls
applied
to
the
author
allowed
to
authenticate
unless
you
go
and
restart
everything
that
goes
on
on
the
API
server,
which
obviously
creates
blips
in
the
authentication
for
your
cluster.
J
So
this
proposal
is
pretty
much
just
using
I
am
identity
as
a
CR
D
and
sets
the
the
webhook
Authenticator
to
basically
watch
that
resource
Andry
update
its
internal
cache
every
time,
there's
a
new
role
added
because
it
uses
the
Ciardi
design
we'd,
be
able
to
also
put
restrictions
similar
to
what
we
can
do
with
config
maps
and
tagging
down
to
a
specific
resource
on
who
could
mutate
this
and
create
create
new
roles
and
yeah?
That's
about
it.
Yeah.
D
I
think
it's
a
really
solid
idea:
I,
like
the
sort
of
validation
that
we
can
do
so
eks
as
the
stopgap
had
gone
with
a
config
map
where,
in
the
role
bindings
are
laid
out
and
just
in
that
config
back
key
values,
but
I
think
I'd
prefer
the
the
CRD
proposal.
The
only
issue
that
I
had
with
it
is
I'm
not
completely
sold
on
the
pattern
that
you
have
of
the
two
different
custom
resources,
I,
updated.
J
Right, here it basically just has a spec key, and underneath that it has the same keys that were defined in the ConfigMap, so the role ARN, the username, and then the groups associated with that user. I think the resource would be called an IAMIdentity; right now I have it documented under an authenticator AWS group, /v1alpha. There's not much else to it; it's really simple.
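(A sketch of what that custom resource's Go types might look like, based on the fields listed here; the type and field names are guesses at the proposal, not confirmed by the meeting:)

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// IAMIdentitySpec mirrors the keys from the authenticator's ConfigMap:
// the IAM role ARN, the Kubernetes username to map it to, and the groups.
type IAMIdentitySpec struct {
	ARN      string   `json:"arn"`
	Username string   `json:"username"`
	Groups   []string `json:"groups"`
}

// IAMIdentity is the proposed custom resource the webhook authenticator
// would watch, updating its internal cache on add/update/delete instead
// of requiring a DaemonSet restart on every mapping change.
type IAMIdentity struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec IAMIdentitySpec `json:"spec"`
}
```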
A
...that that happened, but yes, CRDs are definitely the way forward in general, and we're even preferring CRDs to, you know, the Go-defined types, the more formal types. CRDs are getting validation and versioning; they have some validation now, and they're getting versioning and all these sorts of things. So yes, CRDs...
A
These
are
great
and
to
the
extent
that
they
have
shortcomings
today,
that
will
that
will
the
gap
will
get
smaller
over
time,
but
certainly
they're
better
than
config
maps
in
terms
of
already
today,
in
terms
of
structure
and
working
with
the
tooling
and
all
that
so
yeah
I
think
this
is
a
great
a
great
suggestion.
As
you
say.
Yes,
there
is
a
there's,
a
migration
issue,
but
it's
a
relatively
new
projects,
I
think
it's!
It's
ok,
yeah.
J
Yeah, and Matt pointed out in there, and someone pointed out in the actual issue for it, that the Prometheus Operator did a very similar thing. They basically provided a CLI tool that you can run to regenerate all your CRDs based on the ConfigMap. That could be a potential way of helping anybody that's already using EKS or this in open source. Just...
D
Let me bring up two other things about the authenticator. So right now we have, in the mapping, for the username, and I think you can do it in groups too, these sort of templatization variables that you can use, like session name, EC2 private DNS name, account ID. And since we're already looking up, for example, the EC2 private DNS name: when we get the GetCallerIdentity response back, we get the instance ID from there, and then, you know, in EKS...
D
...we assume a role to look this up, because it's the customer's account; if you're just running it in one account, you would just do the lookup. One thing that I was wondering is that right now you can only specify groups in the mapping itself, but AWS does have a concept of groups, which are just collections of users.
D
The authenticator returns... so the authenticator actually gets an STS GetCallerIdentity request that was signed but not sent. It sends it against the STS API, which returns the identity, which is the role ARN essentially; it could be a user ARN or a role ARN. And then it looks up in the mapping the username and groups associated with that identity and returns that to the API server.
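(A sketch of the client half of that flow with aws-sdk-go: the client presigns an sts:GetCallerIdentity request without sending it, and the webhook server later replays the presigned URL against STS to learn the caller's ARN without ever holding the caller's keys:)

```go
package main

import (
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	svc := sts.New(session.Must(session.NewSession()))

	// Build (but do not send) a GetCallerIdentity request, then presign it.
	req, _ := svc.GetCallerIdentityRequest(&sts.GetCallerIdentityInput{})
	url, err := req.Presign(60 * time.Second)
	if err != nil {
		panic(err)
	}

	// The client hands this URL to the API server as a token; the webhook
	// server GETs it against STS, and the response identifies the caller's
	// user or role ARN, which the mapping turns into a username and groups.
	fmt.Println(url)
}
```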
F
I guess what I'm asking is: when Kubernetes receives a request, the groups that it receives today, before any of these proposals, where do those groups come from? Are they just configured in the authenticator, or is there some other way that you can put users into groups, in Kubernetes itself?
A
Yeah, I mean, I think that's super important: to have some way to build our code in a way that we can trust. I know that the main kubernetes/kubernetes repo itself has recently switched to add some tooling. There was a tool called anago, which was basically a shell script, and I think it's recently been made to work on GCB, the Google Container Builder, which is sort of, like, you know...
A
The
Google's
hosted,
build
service
and
I
think
it'd
be
great
to
have
amazon's
one
as
an
option
as
well.
For
you
know,
all
our
projects,
III
do
think
it'll
be
great
to
have
a
world
where
we
have
multiple
people
building
the
code,
ideally
in
an
automated
fashion,
and
they
produce
the
exact
same
shop
right
so
that
we
know
that
if
you
want
to,
if
you
want
to
like
compromise
the
code,
you
have
to
like
break
Google
and
Amazon,
which
is
a
lot
harder
than
just
breaking
one.
A
Yes, I mean, I think there are sort of two things. One of which is, you want to have a bunch of accounts which do the builds; they don't necessarily have to sign them, but at least produce the builds, and then we can look at the various builds and make sure they are the same across the builds. And then the other one is...
A
We
need
some
sort
of
trusted
party
that
goes
and
like
holds
the
signing
key
or
holds
the
credentials
to
push
it
to
the
official
GC,
our
repo
or
the
official
EC,
our
repo
or
the
official
like
buckets
or
whatever.
It
is,
and
that
one
needs
to
be
a
lot
more
secure.
Obviously
so
I
I
feel
like
yeah.
The
first
step
is
to
get
it
working
in
GCB
and
getting
it
working
in
code
build
and
getting
work
and
other
things,
and
then
we
can.
A
We
can
go
from
there
and
there
was
a
basil
bug
where,
or
there
is
a
basil
buck
where
the
builds
are
not
reproducible
right
now,
so
the
real
problem
is
I
think
getting
reproducible
one
of
the.
If
we
like
this
approach,
one
of
the
real
problems
is
getting.
Reproducible
builds
to
happen
at
all.
That's
sort
of
pretty
non-trivial
and
basil
makes
it
easier.
There
is
a
bug
right
now,
with
the
go
build
with
seed
builds
that
they
are
not.
A
Reproducible
I
actually
tracked
that
down
last
night,
so
hopefully
that
will
get
fixed
in
the
next
version
of
the
basil.
Rules
go
but
yeah
I
want
to
write
up
this
sort
of
idea
of
how
to
do
the
builds,
but
it's
definitely
an
interesting
topic
that
I
don't
think
one
has
any
input
on
how
we
should
do
build
in
the
project,
but
we
want
I
think
we
need
to
get
away.
My
cops
is
built
like
on
you
know,
a
machine
that
I
spin
up
temporarily
write
an
ephemeral
machine
and
it's
nice.
A
I can't remember if I did it or not; I'm guessing, given you asked the question, that I didn't do it. It's not hard to do, but we could do another video where we do it together and we talk about how the system works, if you want; that could be a fun one. It's not a terribly hard thing to do, though. But yes, we should definitely get it under testing, and yeah, we should get it under testing; I'll reach out to you about that too. Awesome.
L
I put this document up a while ago and I'm looking for a reviewer, especially from Amazon; I need somebody who understands the volumes, and especially the attachment, because the API that Amazon provides for attachment is quite different from anything else that Kubernetes supports. The thing is that Kubernetes, or the CSI driver, has to choose the device name, and to do that reliably, resilient to errors and restarts and everything, is quite difficult to do right. The current code we have for Amazon does it wrong.
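(For context, EC2's AttachVolume API requires the caller to supply the device name, e.g. /dev/xvdba, so the attacher has to track which names are in use per node. A toy sketch of that allocation problem; the two-letter /dev/xvdb[a-z] through /dev/xvdc[z] range is illustrative:)

```go
package main

import (
	"errors"
	"fmt"
)

// nextDeviceName picks a free EC2 device name for an AttachVolume call.
// The hard part described above is keeping inUse accurate across errors
// and restarts: a name must stay reserved while a volume is still
// attaching or detaching, or attachments can collide or be misplaced.
func nextDeviceName(inUse map[string]bool) (string, error) {
	for c1 := 'b'; c1 <= 'c'; c1++ {
		for c2 := 'a'; c2 <= 'z'; c2++ {
			name := fmt.Sprintf("/dev/xvd%c%c", c1, c2)
			if !inUse[name] {
				return name, nil
			}
		}
	}
	return "", errors.New("no free device names on this node")
}

func main() {
	used := map[string]bool{"/dev/xvdba": true}
	name, _ := nextDeviceName(used)
	fmt.Println(name) // /dev/xvdbb
}
```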
A
There are also the new NVMe volumes that don't have this requirement, right? And I don't know if anyone is prepared to comment on whether that is the future, or whether we will still need this code in, say, two years; whether every instance type will have NVMe or whether we will still need this code in future.
A
Anyway, yes. The allocator initially dates from when the allocation was done on each node, so it would not terribly surprise me if there were issues once it was moved to the controller manager; it even had issues even then. So thank you for the help on fixing those issues. But it will be easier with the NVMe code, in that you don't have to assign a device name or device mount point.