From YouTube: Kubernetes WG K8s Infra - 2019-08-21
Description
A: Okay, so let me give some context. I have been building this tool called rget, which uses a technology called certificate transparency. The big idea is that when software gets released, you need some way of ensuring that the released software never gets tampered with, and this has been an ongoing issue in the software community.
A: The thing that's interesting about it is that it also downloads this sha256sums file from GitHub, or wherever the thing has been uploaded to, and then it validates that sha-256 sums file and the contents of the download against a specially crafted domain name. That specially crafted domain name ends up in the certificate transparency logs, which is an append-only log that is run by a number of different organizations and records any certificate that's issued by a CA.
A
So
the
big
advantage
of
this
is
that
essentially,
you
can
use
various
tools
or
like
get
an
RSS
feed
of
any
changes
against,
like
you
know,
star
github,
kubernetes,
and
if
so
many
reports
that
are
gets
not
downloading
or
downloading.
Because
of
an
error.
We
know
that
somebody
tampered
with
the
file
and
didn't
update
the
Shaka
systems.
If
somebody
is
watching
RSS
feed
and
sees
that
you
know
release
1.10
of
kubernetes
got
updated.
A: And today that's a little weird, because we haven't had a release of Kubernetes 1.10 in a while, so we can go in and investigate and find out if somebody's GitHub credentials are compromised. Otherwise we really don't have a good way of doing that, and certainly not a way that has a cryptographic foundation behind it. So that's the tool. The requirement for Kubernetes to start using it is essentially just to publish a sha-256 sums file alongside the binaries that go up on GitHub, and that's it.
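The sums file described here is just the format that coreutils' `sha256sum` emits. As a rough sketch (the file names and contents below are invented for illustration), generating such a file and checking a download against it looks like:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Hex digest, as printed by `sha256sum`."""
    return hashlib.sha256(data).hexdigest()

def make_sums(files: dict) -> str:
    """Build a SHA256SUMS-style file: one '<digest>  <name>' line per artifact."""
    return "".join(f"{sha256_hex(data)}  {name}\n"
                   for name, data in sorted(files.items()))

def verify(sums: str, name: str, data: bytes) -> bool:
    """Check one downloaded artifact against the published sums file."""
    for line in sums.splitlines():
        digest, _, fname = line.partition("  ")
        if fname == name:
            return digest == sha256_hex(data)
    return False
```

A release process would publish the output of `make_sums` next to the binaries; a downloader recomputes the digest of what it fetched and compares.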
A: The UX workflow after that is that you run a command called submit, and that just adds the sha-256 sums to the database of the service behind rget that issues the certs, so it's aware of them, and then it'll start to issue certificates the next time somebody runs the actual rget command.
A: So yeah, the project is in alpha, but it works today; I have etcd and a couple of beta releases in the tool. It's super low friction for projects to try it out, because you just publish a sums file, which could be generated by, you know, coreutils or whatever. So yeah, that's the tool. Okay, let me show you on GitHub where it's at, or I'll just put in the link; it's actually up on...
A: GitHub. I also wrote a blog post essentially explaining, in plain English, similar to what I've done here, why the tool exists. In my opinion, I kind of see this as an alternative to code signing for most projects, particularly community-developed projects, because key custody is just such an impossible thing for a community project to do correctly. You end up with lots of people having custody, or running some server somewhere with an HSM, and then who gets SSH access to that, and is there an audit log, and who runs the audit log?
A: Key custody is just really hard. So that's the tool; I'll take any questions.

So, I apologize if you mentioned this earlier and I missed it: the URIs for releases of large open source projects like etcd or Kubernetes are pretty guessable. How do we know that the sums that are there for a given URI have been put there by an authorized person?

Yes, so it bootstraps off of the authentication and authorization of GitHub.
A: And say we hooked up the certificate transparency database's RSS feed to an email that goes to the Kubernetes releases list. In that case we'd notice: well, that's kind of weird, we didn't do a release today, and not only that, it's a release that we made two weeks ago. Why is something getting modified? So those are the two cases.
A: I just scoped the project to be super simple, just to try to get the concept done. Over time I may get rid of the requirement to publish sha-256 sums and just make this a straight URL-based tool; luckily the Go community has actually built a bunch of tooling to make that super easy as well, so I may just get rid of that requirement altogether.
A: The only reason I have that requirement in the tool right now is that I can very deterministically tell if somebody's trying to DDoS the service: the sha-256 sums should be plaintext, they have a certain format, and I won't download more than about a kilobyte of sums, because the files shouldn't be that big.
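That format check and size cap can be sketched as follows. The 1 KB limit and the exact line pattern here are assumptions for illustration, not the tool's actual rules:

```python
import re

MAX_SUMS_BYTES = 1024  # assumed cap: refuse to fetch more than ~1 KB of sums

# One '<64 hex chars>  <filename>' entry per line, sha256sum style
SUMS_LINE = re.compile(r"^[0-9a-f]{64}  \S+$")

def looks_like_sums_file(body: bytes) -> bool:
    """Cheap pre-validation before doing any real work on a fetched sums file."""
    if len(body) > MAX_SUMS_BYTES:
        return False
    try:
        text = body.decode("ascii")
    except UnicodeDecodeError:
        return False
    lines = [line for line in text.splitlines() if line]
    return bool(lines) and all(SUMS_LINE.match(line) for line in lines)
```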
A: Yeah, so I just ran rget on etcd again. Does the highlight show up? Do you see me highlighting? Yes? Yeah, so that little thingy there is the Merkle root of the sha256sums file. Essentially, what we'd have to do is write a little bit of tooling that ensures that everyone has the same Merkle root for the same release; you know, make sure that Google and Amazon and Microsoft are all presenting the same Merkle root for a given mirror URL.
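To make the idea concrete, here is a simplified Merkle-root sketch. It uses a naive pairing scheme where an odd node is carried up unchanged; the real certificate transparency tree (RFC 6962) splits subtrees differently, so this is illustrative only. The point is that two parties holding identical sums-file contents compute identical roots:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> str:
    """Root over a list of byte-string leaves (e.g. lines of a sums file)."""
    if not leaves:
        return _h(b"").hex()
    # Domain-separate leaves from interior nodes, as CT does
    level = [_h(b"\x00" + leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = [_h(b"\x01" + level[i] + level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2:          # odd node carries up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0].hex()
```

Auditing mirrors then reduces to comparing one short hex string per release across Google, Amazon, Microsoft, and so on.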
A: So, because I'm using that, essentially I'm hacking this on top of the certificate transparency logs. I have a service; the domain is merklecounty.com, and that's just because I'm issuing Let's Encrypt certs in order to pin to the log. So essentially you're trusting this service to be a reliable recorder, but there's no incentive for this service to lie to you either, because then the download fails again.
A: So what I think, after talking to the Go team (they're trying to run a similar service), is that really what you want is one service that's doing the recording and then lots of auditors over the recording. Ideally there's only one of these services in existence doing all the recording, just because more than one doesn't add any additional security, and it complicates things for people who want to actually audit the log, since they'd then have multiple sources to audit from. Okay.
A: So there's a couple of GitHub issues that I filed. Essentially there's two approaches: if I continue down the path of the sha-256 sums thing, you just put the docker image sha-256 in the file, and then we would need to write either an admission controller or an auditing tool that would run on-cluster.
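A minimal sketch of what such an auditing or admission check might do, assuming a hypothetical set of digests recovered from the transparency log. Only digest-pinned image references can be compared at all, so tag-only references are rejected outright:

```python
# Hypothetical digests that were found recorded in the transparency log
RECORDED_DIGESTS = {
    "sha256:" + "ab" * 32,
}

def image_allowed(image_ref: str) -> bool:
    """Admit only images pinned by a digest that appears in the log."""
    if "@sha256:" not in image_ref:
        return False  # a mutable tag cannot be audited
    digest = image_ref.split("@", 1)[1]
    return digest in RECORDED_DIGESTS
```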
A: So the reason I've had people tell me about this tool in the past is in the context of discussions around Debian and RPM packages for Kubernetes, which currently are published to Google's repos, signed using Google's key. This has been put forward as possibly something that could get us out of that situation, as an alternative to having a bunch of people get together and have a key signing party so that we have a community-owned key used to sign packages, the way that Apache and OpenStack do things. I'm curious.
A: I think my ideal end state would be creating a tool that reads the audit log, and if it sees something in the audit log, it just signs it. Then it's kind of automated and closed off; nobody really touches it, it's just a standalone computer blindly signing stuff that ends up in the log. There's just all sorts of problems with public key cryptography for these sorts of software releases and community projects, and I just don't know how to solve them.
A: Okay, yeah, that makes sense. My concern with the automated key signing of the audit log is, like you said, it sounded like this is predicated upon having a bunch of people actually monitor the audit log, and we might want some way to ensure that humans have the ability to catch problems before the machines act on them. Yeah, I mean, it's kind of tricky, right? Like the problem with key custody...
A
Is
so
many
seals
the
key,
and
then
you
actually
no
idea
whether
the
key
was
stolen
and
who's
cycling.
What
like
nothing's
ever
in
a
hog
log,
at
least,
if
you
invert
it
like
because
audit
log
and
then
signing
you,
know
deterministically
what
what
was
signed.
Okay,
yeah,
it's
I
mean
it's
just.
It's
really
tricky
yeah.
F: I just want to say I think this is cool. I think you made a fairly clear problem statement previously, which is interesting: you said the problem with GPG-style signing is that there's no audit log, and that this gives a log. What if we bridged the two worlds, right? What if we had GPG signing that was handled by something which put it into an audit log? Then we wouldn't have to reinvent too much. Yeah.
A: I mean, that's sort of what certificate transparency is about, except they do the inverse: you trust the certificate authorities, which are just public key signers at the end of the day, and then their signature isn't trusted until it ends up in the audit log. But that's just an adapter to how their business model works. You can do it the other way too, which is only sign stuff that you found in the audit log.
A
F
A
A
A
Actually
have
the
key
and
then
the
FDA
itself,
threshold
of
signatures
to
say
we're
gonna
sign
it
Peter
like
goes
off
and
actually
decides
it.
It's
like
a
group,
did
you
put
it
into
a
log
or
not?
It
doesn't
because,
like
you
know,
opt
doesn't
actually
check
the
log.
You
know
right.
I
think
that
was
maybe
where
my
question
was
aimed
is
I've
heard
from
cluster
lifecycle,
folks
that,
like
you,
know,
working
with
rpm
and
working
with
at
his
table
stakes
for
their
world
and
so
to
me
as
awesome
as
this
tool.
A: ...is, it's unclear to me how it interoperates with that world and solves the problem of key custody that we have today for those artifacts. Yeah, that's fair. I'll write up an issue on this whole question of how we bridge the GPG signing world, like writing a robot tool that signs based on things found in the audit log. I think that's a reasonable idea.
F: Are you talking about the delta, or are you saying we don't know the purpose or the cause of the delta yet, but we can see a new line item? Yes, there is an increase in spend, and that's because I started staging the kops binary artifacts, so it's taking transfer. There are also actually some problems, because I actually promoted some to prod as well, temporarily, as a test of the promoter there. Okay, looks like someone else. Oh.
A: Alright, so I don't know that we're really going to have time to walk through the entire board today. For those folks who are interested in the board and what's on it, I'm posting a link in the chat. But I think this kind of answers my question from earlier about billing, which is: it looks like, yes, we can break down storage by bucket, but you can't break it down by path inside the bucket. Oh yeah, that's right.
F: The one thing I would say is: I propose the way we attack the breakdowns is that when the numbers get big, then we break them down. So when it's $2 or $3, we don't need to worry about the storage costs per path; but when it becomes $100 of transfer a week, we do want to know the transfer costs per path. I feel like it's pretty obviously not worth everyone's time to do a breakdown on $5. Well...
D: Wait, wait. That is the case today, but before we commit to that as a path, because that has management overhead too, I think we should proceed with the single bucket that we're talking about now, or the single primary bucket, and see what we can do before we commit to having a hundred of what we'd call the prod storage repos, right? Okay.
F: I'm saying I'm relatively confident we can write a billing report that can break it down by path for storage; I was saying let's wait until the numbers are big enough that we care. I think the main point is confidence that we can. We can, because we just list all the files and add up the sizes for storage. That won't show us transfer costs, though. Correct, transfer is separate. Okay.
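The "list all the files and add up the sizes" report could be sketched like this. The object names below are invented, and a real report would iterate over an actual bucket listing rather than a hard-coded list:

```python
def storage_by_prefix(objects, depth=1):
    """Sum object sizes grouped by the first `depth` path components.

    objects: iterable of (object_name, size_bytes) pairs,
             e.g. from a GCS bucket listing.
    """
    totals = {}
    for name, size in objects:
        prefix = "/".join(name.split("/")[:depth])
        totals[prefix] = totals.get(prefix, 0) + size
    return totals
```

This covers storage only; transfer (egress) costs come from billing data, not from a listing, which is why they have to be tracked separately.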
G: We can either merge and iterate, or, if you want to sit down sometime this week, Tim, and actually put time on the calendar, we could put 20 minutes on the calendar and just walk through it, so that you have the same confidence that I do in what's in there. We could also do that. Yeah.
D: It's just such a big step forward to turn this thing on that I'd like to make sure I understand it. I just don't know Terraform, and I want to understand the operating model. So, Christoph, I might very well take you up on that offer: we can do shoulder surfing, where you explain to me what's going on, or you can actually watch me try it and we can validate together.
G: Okay, sounds good. The other thing I was going to ask for is, and I don't know where the right place to put it is, but I don't have permissions to whatever prod project we're going to put this in. So I'm going to dig that up, and if nobody has any objections, I'm going to PR myself into wherever I need to be to get permissions for the prod project.
A: Well, this to me sounds like one of those gates we could use to get us into a merged state faster. I feel like Tim's concern is that once this merges, it's going to be pointed at prod. We could merge it where it's pointed at something that's not prod, with notes in the README indicating that this is not yet turned on. It isn't the end state of the entire thing, but because it has landed, it gives people a chance to look at it and experiment with it.
G: What it's pointing at right now is the dev project that we're going to blow away. The other kind of design decision that I've made, as far as Terraform is concerned, is that the bigger your Terraform config, the bigger your potential fallout is if something breaks. So you could have one massive Terraform config that does all of our clusters in all of our projects, but that, in my opinion, is bad. So what I've set up right now is:
G
The
folder
structure
is
in
Kate's,
dot,
IO,
there's
a
folder
called
clusters
and
announced
in
underneath
clusters.
There
is
a
folder
for
each
project
that
we're
gonna
set
up
a
cluster
in
so
then
that
folder
contains
the
terraform
configs
just
for
that
project.
If
we
need
to
like
if
we
want
to
at
a
later
date
like
dry
out
and
configuration
between
multiple
things
we
can,
but
this
basically
gives
us
like
hey.
If
that
project
goes
away,
we
can
just
hit
apply
again,
and
the
cluster
is
back
up.
D: If we want to get it merged pointing at the experimental project, I have no objections at all to that. I still will go through it and try to comprehend it and make sure that I know what it's doing, but I remove my objections to merging if we aren't talking about actually doing it for real.
A: Thank you, everybody, for moving forward. I still think the idea of walking through it would be a great one, possibly even recording it, because I'm sure there are a number of people who might be interested in this, so I can chat with you about how to do that offline. And then I feel like the "do we officially turn it on" decision, which we can make within the next two weeks, would be gated on the PR that would, you know, switch it to the prod project.
G: Yeah, if you have a specific suggestion for the code, leave a comment on the existing PR. We might merge today and then open up a new PR to iterate, but if you have a comment, leave it on the current PR, because then we know where to go looking for improvements once it gets merged. Perfect, sounds good. Thanks.
A: So, to sort of close the loop on the prod cluster thing: I'm inclined to suggest that we as a group are blocked until we can get comfortable with Terraform as our primary mechanism for standing up clusters. The counter-argument to that would be that we shouldn't be blocked; we should allow people to just stand up a cluster however they want, say with gcloud container clusters, to get moving.
G: What I would suggest: we've been blocked on this "how do we stand up clusters" thing for a very long time, and I really hope that we are in the homestretch. This is basically last call: if you have concerns or suggestions and want to get involved, please do, because I would hope that by the end of two weeks, when we meet next...
G: ...we are completely unblocked on how to do this, and if we want to stand up a new cluster for prow or whatever, then we should be able to do that within an hour. What would be helpful, and can be done alongside in the next two weeks, is defining the shape of the cluster. So if we have a specific need for a specific cluster, what does that cluster look like? How many node pools? How big are the node pools?
G: What kind of machine types? Those are the key things that we need to decide as far as how the cluster is shaped, and that is something that we can definitely collect in the next two weeks. And then, when we have a configuration for Terraform that we are comfortable with and are going to move forward with, we just plunk in the data points that we need, hit terraform apply, and it stands up a new cluster.
A: I'm saying I've heard this every two weeks for the past couple of months, but I hear you. I think that ask comes from, I think, fejta, who is getting particularly cranky that we're leaning on the existing prow cluster as the trusted cluster, and would really like to see a trusted cluster for trusted jobs, like the container image promoter, live in this project, managed by this group of people. I hear and appreciate that; I just feel like we're still not quite ready to take that on. And if we're in the same situation...
A: ...I can guess at what he was concerned about, but I don't want to necessarily move forward with that being just a guess. My guess is that in the current system the trusted jobs are probably running on the same cluster as the control plane, wow, so I'm guessing he wants a separate trusted cluster that can run the actual jobs, different from the control plane cluster, which is, yeah.
F: ...have an implied PR. I think there was a good discussion going on where I put a PR up about a disaster recovery thing, and Tim was like, "you can just use gsutil rsync, thank you," and I was like, damn, he's right. But there's some good discussion there about whether we put a bucket retention policy on and whether we lock it, so I would just refer people to that.
H: ...image builds to our staging bucket. I know we had an outstanding issue where we've talked through the potential ways that we could do this, but I wanted to see if we can start formalizing a process for how you set this up. I know we need to have a set of service credentials with access to the bucket that can be configured as a prow secret, and ideally those credentials would not be the same credentials that I use to manually upload to the bucket. So we need a way to request those.
H: So, in this case, the build would be running from prow itself; we would want to configure it as a postsubmit job, so that, say, a PR gets merged to master, we can update the latest tag in the staging bucket so that other developers can, you know, consume it.
I: So I was looking for precedent here, and the one that I found was cAdvisor. Currently we do push images to Docker Hub: there is a Docker user and Docker password that's in prow as a secret, that gets pulled by a Python and shell script, and then the image gets built and pushed. So all the pieces are there for Docker Hub right now, but do we want to start with that?
A
A
suspicion-
this
is
where
the
previous
ask
is
coming
from
to
move
the
container
image
promoter
chops
into
they're
in
the
cluster,
because
the
story
for
secret
management
for
Crowell
today
is
virtual
sneakernet,
it's
not
great,
and
what
Jason
is
proposing
very
rightly
is
about
like
adding
more
secrets
and
getting
a
little
more
strict
about
it.
But
I.
Don't
think
that
the
team
of
people
who
supports
all
the
thousands
of
jobs
that
run
on
kubernetes
wants
to
continue
to
take
on
an
increased
burden
of
like
hey.
Can
you
have
this
secret?
Hey?
F: Thank you. I was just going to add: GKE recently started supporting managed pod identity, and it is pretty nice in that, if a prow job can run with a Kubernetes service account, we can now sort of transparently and easily (well, with Terraform, transparently and easily) bind it to a Google GCP service account. So that could work, and it manages the secrets for you, other than creating the Kubernetes service account, which I can't imagine they would object to. So that solves a lot of it, and the certificates rotate.
I: Okay, so the other thing that I wanted to point out is that it's not just a secret; the scripts for building these images are going to be different based on the different projects. For example, for the test images that are there in k/k, we have a different set of parameters and different scripts to actually build those. I'm sure Cluster API has its own Makefile target or whatever, and you know...
A: The proposal I have for that is the model that we use for a variety of images in test-infra, the cross-link to which I'm dropping in chat here; the thing that actually does the building is here, and then there are a series of jobs. The TL;DR is: we use Google Cloud Build, and every image inside of that directory has a variants file that describes, if we need to build multiple images (most notably for kubekins, where we need to build a different image for each branch of Kubernetes)...
A: ...how to do that. And then we have a bunch of postsubmit jobs configured so that anytime one of these image directories changes, a postsubmit job kicks off the cloud build, and the things get pushed magically. Then we have something else that goes and scans for the latest images in all the registries and bumps the tags in all of our job files or deployment manifests or what have you. So it is relatively automated. It does not today support the cross-cloud, or, sorry...
A: So yeah, I'm not clear whether or not manifests are supported here, but the person to talk to about this would be Katharine Berry. This is an example where the tool lives inside of this repo and the images also live inside of this repo, but it can be used in external repos as well. I think the slack-infra repo in kubernetes-sigs also has an images directory that this tool builds, and I think if you go searching for the slack-infra jobs...
A: I super hear you. I think Christoph started to explain why going with an approach like mounting the Docker socket is a little scary, and, just for what it's worth, a number of these image builds are like: here's a Dockerfile, run docker build, or here's a Makefile, run make. So there's not a whole lot of magic going on; what I can run locally, generally speaking, matches what runs in Google Cloud Build. But I hear your concern, and I am open to solutions that alleviate our security concerns.
D: Just to be clear, I don't think we should shy away from using a service to get work done if it makes sense, but we should make sure the bar is held really high for transparency and accessibility. I've never used Cloud Build firsthand, so I can't say for sure if it gives us enough of that, but if it does, we should not move away from it just because it's a managed service. I don't know that we should turn prow into a full CI/CD system.
G: So, actually, thinking about this, there's one use case I'd like to explore, and I don't want to be held to this as the best solution, but something that may be worth exploring is GitHub Actions. GitHub Actions has in-repo secret management, so if you have admin access to the repo, you can add a secret; you can't ever pull it back out, but you can add it in. And in GitHub Actions, it's an isolated VM.
G: So if we had a flow where we, the infra team, set up a service account for it, like a process of: set up the service account, install the credentials in your repo, and then you just go, then even you don't see what the credentials are; you just write a GitHub Action, and that GitHub Action consumes the secret and uploads the thing. I love it.
D: My thought on that is the same as it was before: we should not shy away from hosted services if they serve our purpose and meet the bar, and GitHub Actions is subject to the same criteria that anything else would be. So if it has the transparency and the integration that we want, then let's explore it.