From YouTube: Kubernetes WG K8s Infra - 2021-05-12
A
Hi everybody, today is Wednesday, May 12th, 2021. This is the WG K8s Infra — Kubernetes Infrastructure Working Group — bi-weekly meeting. It is being recorded and will be posted to YouTube publicly later. During this meeting we are expected to adhere to the Kubernetes Code of Conduct, which basically means: be our very best selves to each other. If you have any problems with the conduct of this meeting, please reach out to conduct@kubernetes.io, or you are also free to reach out to the working group leads at wg-k8s-infra-leads@kubernetes.io. I have no idea what's on today's agenda, so I will just start with that. Did somebody have something they wanted to launch right into, or do you want to give me a minute to go find it?
B
I am — I'm sharing my screen. If you want, I'll roll that for you.
A
Okay, I see Rihanna was about to run the meeting — that's cool, okay. So, first off, let me welcome any new members. I feel like I see a couple of new faces here. Would anybody like to introduce themselves and tell us a little bit about yourself and what you're interested in?
A
Or we can move right along. I posted a link to the meeting notes in the Zoom chat if you want to sign yourself in as an attendee, and if somebody wants to be awesome and take notes along the way — I find I cannot talk and type at the same time.
A
So the other thing we typically do at the beginning of each meeting is we go through and review our billing report. Tim Hockin tends to also take a look at the actual GCP billing console to see if things line up. I feel like we're — we are basically at the point where I feel comfortable not needing to do that sanity check, or that comparison, anymore.
D
I'm with you: we've been within spitting distance for the last, what, 12-plus months? So I'm pretty confident that the billing report is not egregiously wrong.
A
So if I think of our annual burn rate — if I were to pretend that our budget started on January 1st, and I thought of our budget as three million in GCP credits — we are about 20% of the way through that, and we are, whatever 5 over 12 is, like 45%-ish of the way through the year. That seems okay.
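The back-of-the-envelope check above — credits spent versus the fraction of the year elapsed — can be sketched in a few lines of Python. The $3M budget and the ~20% figure are the numbers quoted in the meeting; the helper name is ours:

```python
def burn_rate_check(spent_fraction: float, months_elapsed: int, months_total: int = 12) -> bool:
    """Return True if spend is at or below the pro-rated budget for the year so far."""
    elapsed_fraction = months_elapsed / months_total
    return spent_fraction <= elapsed_fraction

# Numbers quoted in the meeting: ~20% of $3M spent, ~5 of 12 months elapsed.
budget = 3_000_000          # GCP credits, USD
spent = 0.20 * budget       # ~$600,000
print(burn_rate_check(spent / budget, 5))  # 0.20 <= 5/12 ≈ 0.417 → True
```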
A
Yes, I agree. So I'll stop sharing that, if there aren't any specific questions. Yeah — and maybe we can come back to that, because I'm sure some of the ii folks have some questions about billing data; I've seen a lot of questions about, sort of, access to how that report is generated. I think — okay, are there any action items from the last meeting that we need to follow up on?
A
Looking back in the meeting notes, the last meeting was April 28th. Let's see — there was discussion about triaging issues for 1.22.
A
Maybe we can wait for Arno to show up for vulnerability scanning — billing for vulnerability scanning. So, I looked into this issue a little bit; I will see if I can find the issue that Tim opened. My read is that it's about 2.5 to 1.7 percent of our spend in total, so I'm not, like, completely flipping out over the amount of money we're spending on it.
A
The reason I don't have that happening by default, all the time, everywhere, is that there are a number of services that are active in some of our, like, production-y type projects, and I don't want to surprise somebody by shutting down something that turned out to be extremely critical — it just wasn't documented at all. So I'm working on that, and I'm using this as an excuse to iterate a little bit.
A
But if we get too close to the deadline — it's a soft rollout of billing, so after, I think it's like May 19th or something, we start to be billed at 50%, and then sometime in July we start to be billed at 100% for it. So if we get too close to that, I'll just write a script to blow through everything and disable it everywhere. But the TL;DR is: the CI images project is the vast majority of our spend.
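The "script to blow through everything and disable it everywhere" could be as simple as iterating over projects and turning off the Container Scanning API (the service billed for vulnerability scanning). This is a sketch of that idea, not the script the speaker describes; it only builds the `gcloud` command strings (a dry run), and the project IDs are placeholders:

```python
# Sketch: build `gcloud services disable` commands for the Container Scanning API
# across a list of projects, without executing anything.
SERVICE = "containerscanning.googleapis.com"

def disable_commands(projects):
    """Return one `gcloud services disable` command per project (strings only)."""
    return [f"gcloud services disable {SERVICE} --project={p}" for p in projects]

# Placeholder project IDs — the real list would come from `gcloud projects list`.
for cmd in disable_commands(["k8s-example-staging", "k8s-example-prod"]):
    print(cmd)
```

Printing instead of executing makes it easy to eyeball the plan before pointing it at real projects.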
A
It's not clear to me — yeah, we can maybe think about usage patterns later. It seems like we shouldn't have continuous security scanning of every image of Kubernetes that's produced for every commit of Kubernetes.
A
Let's see, are there any other actions? Somebody was going to reach out to SIGs to get —

A
— whether they, whether we have all staging repositories promoted to k8s.gcr.io. I don't know if anybody's done that; I didn't actually see an issue about doing that. That sounds like a great thing that release engineering could help us out with — I feel like maybe they already have a tool.
A
And then there was apparently a working session on April 29th — I missed that. Okay, so I'll stop there. Any questions about action items, or does anybody know of anything that we should have addressed by today?
A
Sure. I don't suppose that was — you know what, I'll follow up with Arno offline; I don't know if it was, like, documented anywhere. Okay, so Rihanna, you want to talk about PII and data discovery?
D
I think that's a good starting point. I mean, generally, IP addresses are considered PII by anybody who's doing a sane PII policy, right? So not publishing individual IP addresses is obvious; not publishing ASNs is maybe a little less obvious, but maybe not by a whole lot — it still seems pretty questionable. Or, I feel like it's interesting information, I'm just not sure it's necessary information, and there's the chance of exposing something where, you know, somebody made a mistake, and we want to be respectful of that.
D
So I guess I agree with what you're saying. I'm also questioning whether we want a human to filter anything we release first, or whether we want to automatically release. Like, let's say, for example, the top number-one image was aws-something-something, right, and for historical reasons it was one of the images that we're pushing.
D
So maybe we shouldn't feel too bad about it, but I do — and I would prefer that, if we were to produce a report and we saw "oh, that's strange," that we might human-filter that and actually do a reach-out to the AWS folks and say, "hey, did you know that it seems like a large number of your clusters — perhaps your managed service — are pulling a particular image? Maybe you should look at that."
A
No, I'm not sure. I guess I don't even — I'm not sure I want to try and solutioneer our policy on the fly here. I think I want to kind of step back and suggest that, like, it's the very first time we're going to talk about something that's potentially sensitive. Like, pretend we are about to discuss all of the CVEs that all of our infrastructure has right now that are unaddressed.
A
Is this the sort of thing that we would want in a publicly recorded meeting posted to YouTube? I feel like we would want to try a couple of trial runs and then review whether everything's okay — which I can do, since I typically review these videos before I post the recordings. So if there's anything that's sensitive, I'm just going to cut it, and I expect everybody who's attending here to use their best judgment.
A
I think if it comes to committing stuff to the repo, that needs to happen someplace — we need to talk about how to do that privately before checking it in publicly. Does that sound — just that, as a metapolicy — does that make sense to y'all?
F
Okay, yeah, I think that makes sense. I think one of the other questions is: is this the meeting to discuss those things, or do we have a closed system?
A
No, no, no — I said, like, we can go forward. I just wanted to try and, like, establish the ground rules here. And I feel like I personally still haven't actually looked through the full legal document that CNCF has sent, but I think we owe the community some kind of docs and policies around how we're going to operate with this. Like, y'all are the guinea pigs.

A
So I'm sorry about that, but you know, we like to merge and iterate on everything else — so why not this? Okay.
F
One thing that we did in the interim is to create a private repository that right now is sitting at ii, and the only people I knew would have to have access would be the chairs of this SIG.

A
Yes, I think that's — I think that's a great place to collaborate. Okay, we can talk more about that. Beautiful.
B
And also — and that's part of the reason why I put it in there — because I really want to be respectful of what we show and not show and get the buy-in. And also, it would not be wise of me to get all the data ready and say "these are the main ones" or whatever, and have something wrong with my data. Yeah.
B
I could show — let me — if you would like me to just show the highlights. I did black out all the sensitive data, so let me share that screen. Let's do this: share screen, and I'm going to share that screen there.
B
All right, you should be able to see my screen now. So, first up — which I think should not be sensitive data, and which we did discuss in the chat and in Slack — we don't have a way to link this back to specific images yet, but it's interesting to see the highest-hitting image got 7,700,000 downloads. It's only 16 megabytes — the size of the image — and total GBs is 128.
B
So there's some interesting information here. And then the one down from it is the highlight — the number of downloads for the top 10 images; the other one is the top 10 by data. So that's the difference there.
B
Exactly — what it is, is basically April, but I defaulted to today.
B
We're working on making this a running data set and automating pulling in the data to keep it fresh — live. So this is, as I said, the amount of data downloaded per image, and then that is the volume of downloads — the count of downloads. So the most popular image got 85 million downloads.
B
Then, looking at the second page, we've got the IPs related to this. You'll see the highest-hitting IP has been downloading 23 different images — 9,000 GBs of data and 1,800,000 downloads. Then top 10 IPs by total volume of downloads — the biggest — and it's a bit of a copy of the one at the top for the first half, and in the rest some changes start to appear.
B
So there's a single IP address that downloaded 1.8 million — just one IP address that downloaded that. Then, looking at ASNs — so here I've got them split by ASN. One of the problems we're having — we've got a data dump that we're going to look at again today — is we're struggling a little to get all IPs linked to ASNs.
B
So I've got 450 million IPs that did not have a specific ASN assigned, which we hope to resolve soon. Then, for the next one in the queue: if we look at the price at which the downloads go, according to the billing report it's 0.08 USD per GB. So, taking into account the first IP that I got an ASN for — that I could actually marry to a data set — that's 158,000 GBs, 11,000 dollars' worth of downloads, and it's 24 million downloads for that IP, or for that ASN.
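The arithmetic behind that estimate — GBs downloaded times the billed egress rate — is straightforward. A sketch, using the 0.08 USD/GB rate quoted from the billing report (the dollar figure given in the meeting was approximate):

```python
EGRESS_RATE_USD_PER_GB = 0.08  # rate quoted from the billing report

def egress_cost(gb_downloaded: float, rate: float = EGRESS_RATE_USD_PER_GB) -> float:
    """Estimated egress bill in USD for a given download volume."""
    return gb_downloaded * rate

# The top ASN in the meeting: ~158,000 GB downloaded.
print(round(egress_cost(158_000)))  # ≈ $12,640 at the listed rate
```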
C
And do we recognize how many of these — the ones in this top 20?
B
In the top 20, the first five or six are very well known — well-known companies in the community. For the first five, six; thereafter it becomes all sorts of different companies that I don't bump into regularly in the community.
B
Yes, yes — that's the next thing I want to say. I think number three and five, if I remember correctly, are the same company but different ASNs. So we're now trying to marry the ASN name — the name of the network — to specific companies, so we can actually have company ABC and that company's specific ASNs. So we want to try and lump, say, an ASN belonging to Amazon — not Packet, or Google — and get all their ASNs together.
A
One: is somebody being abusive? If so, let's figure out how to whittle that down. Two: I think the goal that CNCF cares about is how much money we could save if we didn't have to pay egress to external cloud providers. So, theoretically, I could see you needing to know who is capable of hosting their own mirror, such that we don't have to pay egress for them to get container images. I would try to boil it down, though, to that question rather than to the specific companies.
A
Yes, that's a great discussion to have amongst CNCF leadership — and I think the K8s Infra leads can be involved in that — but I really don't know that it's worth getting into the long tail of, like, every company and so on and so forth. And even then, right, that's like a max potential cost savings.
F
I don't think we're trying to do it for every company in the world, but we're definitely trying to get that ASN map, and some companies in this larger list aren't contributing the list of their ASNs.

F
Identify this larger list and publish those results.
A
It's just the sort of thing where, like, I feel like — in the abstract, to help us guide some sort of ballpark decision making — we're going to want to know the really abusive outliers, and then we're kind of going to want to know... you know, I don't know if it's like 10/90 or 20/80, but there's going to be — like, imagine we could just magically take the top N of these and make that cost disappear.
B
— analysis, which is basically saying: normally, statistically, 20% of your hitters would cause 80% of your cost. In most data sets you have something like that — that 20 percent of the consumers is responsible for 80 percent of whatever is going on. So we're looking to catch: what is that 20 percent? And, again, what value can we derive out of cooperating with that 20 percent, just in structuring the traffic differently with those specific companies?
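The 80/20 split described here is easy to compute from a per-consumer cost list: sort heaviest-first and count how many consumers it takes to cover 80% of total cost. A minimal sketch with made-up numbers:

```python
def consumers_for_cost_share(costs, share=0.80):
    """Return the fraction of consumers (heaviest first) needed to cover `share` of total cost."""
    ordered = sorted(costs, reverse=True)
    total = sum(ordered)
    running, count = 0.0, 0
    for c in ordered:
        running += c
        count += 1
        if running >= share * total:
            break
    return count / len(ordered)

# Synthetic per-ASN costs: one heavy hitter plus a long tail.
costs = [800, 50, 40, 30, 25, 20, 15, 10, 5, 5]
print(consumers_for_cost_share(costs))  # 0.1 — one consumer of ten covers 80%
```

On the real data, `costs` would be the per-ASN dollar figures from the billing analysis.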
B
So that is why we're aiming towards knowing exactly who's doing what. It's not to point anybody out — and that's why we don't want to show any names now, okay? — because we're not solid on the data, and also we want to privately discuss the data first. And I think there's a very good path to try and cut cost by engaging those people once we identify them.
D
And let's also be clear: we can do two different analyses, right? We can do the one that says, "Look, we know that 86.2 percent of the traffic is coming from these three ASNs, which are linked to these three companies," and we can go out to those companies with the CNCF hat on and say, "We really think you need to run a mirror, because here's the data" — right, fine-grained data: "here's what you're doing; go do that." That's not for public consumption.
D
I think — because we just don't want to name and shame, and there's who-knows-what information in there. And then there's the more anonymized statistical stuff, which I think should be for public consumption; we should produce a report of the, you know, sufficiently anonymized and de-labeled information.
F
I think part of that — it is important to know who those players are, because each of them probably has a strong opinion of what that would look like for them. And I think identifying those top three — because I know we're talking about something that's cross-cloud, everywhere — but I think we're going to get some pushback from some of these vendors on deploying something that's not what's already there.
F
I don't know — let's say their flagship offering, or something that we maybe promote. Okay, it's one of those things where we could say, "Well, if you want to run it, here's the solution you need to run," and I feel that's maybe being a bit prescriptive about their solution, versus saying, "You need some solution that accepts our 302 redirects from our distribution engine, and here's the interface to that."
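The interface described here — "some solution that accepts our 302 redirects" — amounts to an HTTP endpoint that answers artifact requests with a redirect to a vendor-hosted mirror. A toy sketch of the routing decision on the serving side; the mirror URLs and network names are placeholders, and the real routing rules would come from the design doc:

```python
# Toy 302 redirector routing: map a requesting network (e.g. derived from its ASN)
# to its own mirror, falling back to the canonical origin. All URLs are made up.
MIRRORS = {
    "example-cloud-a": "https://mirror.cloud-a.example.com",
    "example-cloud-b": "https://mirror.cloud-b.example.com",
}
DEFAULT_ORIGIN = "https://artifacts.example.k8s.io"

def redirect_target(network: str, path: str) -> str:
    """Pick the base URL a 302 response should point at for this caller."""
    base = MIRRORS.get(network, DEFAULT_ORIGIN)
    return f"{base}{path}"

# A request from cloud A gets redirected to cloud A's own mirror:
print(redirect_target("example-cloud-a", "/binaries/v1.21.0/kubectl"))
```

The vendor's side of the contract is then just "serve whatever path the redirect hands you," which is the less prescriptive framing F is arguing for.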
A
I view this as, like, a risk-mitigation thing. Then I guess I would assume — and I apologize, I'm not, like, in the weeds with your design docs — I would assume you've got a design doc about this.
A
So then you can sort of proof-of-concept what those things look like. Or it could be that you need this analysis to inform which of those implementations are worth exploring and, like, proving out. But I feel like you could probably guess, off the top of your head, which hosted services — I'll put it this way: I would guess I would want to look at something that works in Amazon, something that works in Microsoft — those are two other clouds that come to mind — and then something that works for a bare-metal environment.
A
And then I would want to start thinking about, you know, what is it — build versus buy, and how we're routing traffic, and all that stuff. So then I'd use the analysis to kind of prune that tree of which branches I need to explore, and how deeply. But I still feel like you can probably crawl that tree for quite a while in parallel to the —
F
I think it's going to be more than one or two that want to use their own thing. And so, if I were to look at what I see in the design document, we're going to need something that 302's to in-house for the bigger vendors. I think part of that is there's a reluctance to deploy a service.
F
The redirect — then that whole process of where our artifacts live, and the mapping of those SHAs, is almost like an input, in a way, to the redirect. Because we've got this stuff all living in GCS buckets now, and then how do they — basically, have a mirror that is up to date at the moment when we do our CIP promotion process? It's almost as if promotion from those staging buckets into production happens —
F
— at the same time, for all of those providers. If everybody's running their own fancy solution, or a built-in-house solution — and for those that don't, then there's probably a mirroring or caching one that they would hit by default — but they still deploy one, and we can still set it up to go to them, rather than us pushing it out there. Because we don't want it to get more complex every time we add another mirror, pushing to them when we do a promote.
A
— for thoughts, but yeah. So I don't want to drag us on, because I do want to be respectful of the other things we have on our agenda.
A
I just — I guess, with everything you're saying — and again, this is me, like, kind of seagulling in with an opinion, which I know is an anti-pattern, so just take it with the appropriately sized grain of salt — but I want to make sure: it just feels an awful lot like I'm seeing y'all frame a sufficient analysis of GCS access logs as a blocker to trialling implementations of a promotion workflow and a redirector that could work with a couple of best guesses.
A
So — this sounds cool; pull back — this is interesting data, and I would love to continue to see aggregate analysis. But, hear everything I just said: I think I want to understand — it will be helpful for me to understand — how this data is helping you act and decide.
A
I have lost too much of my life to looking at interesting data for interesting's sake — and it's really fun — but I know we have a purpose here. So, any other questions on that, or shall we move on? Fair enough.
E
I think mine is next. So, I think the ticket is still open for exporting — to just get the Terraform for the existing things that we have. I still get a permission error on that; the ticket is still open for it, but I think that's okay. I want to understand, or kind of get feedback: is that still in the weeds?
E
That's not particularly pertaining to image moving; we're more trying to understand specifics on the infrastructure, and the bucket, and the things that currently exist, so that we can set up for moving.
A
I think there's an umbrella issue out there somewhere about how our audit script takes way too long to run and we should find ways to improve it, and it also talks about a bunch of ideas — like, wouldn't it be great if the output in which we dumped our things was really close to the input that we used to create our things? And so sometimes that leads us to, like, "oh crap, let's just use Terraform; we'll write all the Terraform things and then we'll just diff them against the Terraform input things, and boom."
A
So then, there may be some other gcloud commands from the Cloud Asset Inventory service that could dump most everything — not everything, but most everything — really, really quickly. That would take us down from — like, when somebody runs a script after a PR merges, it'd be cool to find out within, like, 10 to 30 minutes what they did, as opposed to, like, six hours after the fact.
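The "output close to the input" idea boils down to diffing two mappings: the resources declared in config versus the resources a dump (a Terraform export, or a Cloud Asset Inventory listing) says actually exist. A minimal, tooling-agnostic sketch of that comparison — the resource keys here are made up:

```python
def diff_resources(declared: dict, live: dict):
    """Compare declared config against a live dump; returns (missing, unexpected, changed)."""
    missing = sorted(set(declared) - set(live))       # declared but not found live
    unexpected = sorted(set(live) - set(declared))    # live but never declared
    changed = sorted(
        k for k in set(declared) & set(live) if declared[k] != live[k]
    )
    return missing, unexpected, changed

# Hypothetical resource keys and attributes, just to show the shape:
declared = {"bucket/artifacts": {"location": "US"}, "sa/auditor": {"role": "viewer"}}
live = {"bucket/artifacts": {"location": "EU"}, "sa/mystery": {"role": "admin"}}
print(diff_resources(declared, live))
```

Anything in `unexpected` or `changed` is exactly the kind of drift the audit script is meant to surface.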
A
There's also this crazy stuff: Cloud Asset Inventory can actually send notifications to a Pub/Sub channel about everything that's happening within your organization or the projects we create.

A
That's super-duper advanced, but I think it's ultimately about making it less likely for infrastructure and whatnot to fall through the cracks.
A
I think you all have a great opportunity to iterate on, like, a smaller proof of concept, because as you're playing around with stuff in your sandbox, you don't have a bunch of pre-built infrastructure that you have to maintain as-is and take through a migration process to something else. So if you want to find ways to experiment where the input and the output are basically the same — that makes, like, diffing and auditing and all sorts of other policy-enforcement stuff way easier. That sounds awesome.
A
There are pages in Google Cloud's documentation that specifically enumerate all of the permissions for a given service, and there's another page that, like, says "this role gives you all of these permissions." And then, I think, using that information you can take an educated guess and open up a pull request against some of the custom role specifications that we have.
A
The intent is: the auditor role that is assigned to a couple of people within the working group, and a couple of people within ii, is supposed to give, like, view access or read access to everything except secret or sensitive data. So the example could be: you can list that there are secrets within a project; you just can't see the contents of those secrets. Now, most such permissions have the word "get" or "list" in them — but maybe not all of them do.
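The "most permissions have get or list in them" heuristic for building a read-only auditor role can be sketched directly — with the caveat the speaker gives, that not every read-type permission matches the pattern, so the result is an educated guess to put in a PR, not a final role. The permission strings below follow GCP's `service.resource.verb` naming:

```python
READ_VERBS = ("get", "list")  # heuristic only; some read permissions use other verbs

def guess_read_only(permissions):
    """Keep permissions whose verb segment looks like a read (get*/list*)."""
    keep = []
    for p in permissions:
        verb = p.rsplit(".", 1)[-1]  # "secretmanager.versions.access" -> "access"
        if verb.startswith(READ_VERBS):
            keep.append(p)
    return keep

perms = [
    "secretmanager.secrets.list",     # listing that secrets exist: fine for an auditor
    "secretmanager.versions.access",  # reading secret payloads: filtered out
    "storage.buckets.get",
]
print(guess_read_only(perms))
```

Anything the heuristic drops (or wrongly keeps) is exactly what the human review on the pull request is for.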
E
Yeah, good feedback — I can dig in more. I did look at the specific things, but I'm not sure about the boundary of, well, "this is pedantic, you don't need to tell me these things" versus "this will be really helpful." And this gives me a — I'll actually get the PR up with the things that you want, and we're happy to help look at that. But, yeah, "I'm not going to search for your permissions for you" — that's —
A
Good, that's — yeah, that's helpful! Like, I have been trying to go through and document why we need the permissions that we need, because even in the early days of this working group, people just sort of assigned some roles and created some things without documenting why they're needed. So I'm just trying to converge us to that. I think having that kind of conversation on pull requests is super helpful. I wish I was more familiar with the tools that you're using, but I'm not.
A
Typically, when you're asking questions, I would have to do the same googling and reading of documents that you would, but I'm happy to try and take educated guesses when I have the benefit.
E
I appreciate direct feedback, especially for these kinds of things. I'm more intimidated by asking for something where you're going to sit behind the scenes and figure it out, rather than tell me, "hey, this is a way to make it simple for me." So this kind of feedback helps a lot — like, I can just work a lot faster.
E
If there are places where you can give me the things that I need, I'm happy to engage there. So keep this coming — like, please, as you see things, if you direct me in this direction, that helps me. And I know your bandwidth is stretched to coach as well, but that helped.
H
Yeah — basically, for Branno: I made a comment in the issue you opened about the ID you use. Did you just change that and try it again? You had the same issue?
H
Okay, we can look at that later, privately. Also, I put in the chat the two links about, basically, the GCP roles you need to understand: basically, there's a list of all of the permissions for GCP, and there is a repo someone created that basically lists all the permissions for any role in GCP, so you can basically have a full understanding of everything we use in the repo.
H
Okay. Also, one thing I want to add to what Aaron said: I feel like basically converting the current k8s infra structure to Terraform is not really a priority, and I think we should focus on that next year — because, basically, this year is almost over: after KubeCon, and after October, the year is over for everyone, because you have the holidays.
H
You have Black Friday, for everyone. So I feel like switching the current resources from Bash to Terraform is a very complicated subject — it's very tricky, because it's about configuration management for all the projects we have — and I feel like we basically cannot do that right now. We should focus on — yeah.
F
So I should probably just roll back to just GCP for now — the kubernetes-public one, okay — and yeah.
A
Where is it, like? It's the best writing.
A
So, if you are interested in doing this, I would encourage you to take a look at the resources that are provided by Terraform, by the gcloud resource-config bulk-export command, and by Google Cloud Asset Inventory. It'd be great if one of them had everything, or if some combination of them had everything.
A
Okay, so I'm going to move on a little bit more quickly. So, Caleb, you wanted to talk about a GKE cluster with Prow and a GCS bucket and stuff. Before I ask you why — but I will ask you why — I will mention that Arno is trying to work on Prow running in the kubernetes-public project, on the "aaa" cluster, and getting that working as kind of just a sort of prototype-y, stage-y instance of Prow that could one day replace prow.k8s.io. And I have zero problems with job configs being put inside of the kubernetes/test-infra repo, inside of the wg-k8s-infra directory, to do WG K8s Infra-specific stuff.
J
Appreciate the info on what Arno is doing — I think that's cool. Yeah, so I've been doing a bit of research trying to learn Prow recently — that was very cool — and, yeah, trying to get stuff set up in the sandbox for how we might end up doing things. This is to do with getting Prow set up for, like, syncing of container registries and stuff like that, and trying to get an environment up that is similar to what we might be using. Yeah.
J
It's a notification — it's quite ready to play around with.
F
One of the reasons we brought up the ii project was to put it to work on the solutions, and so part of that has been us setting up clusters using GKE, and using it in the same way that we do in the public Prow — using Terraform.
A
Look, I apologize — I really apologize if I'm coming across as a jerk here, because y'all are awesome, and Prow is awesome, and it's really great to explore this. But I just — I question the value of iterating on, like, yet another Prow deployment. In my head, I think of the fact that you probably have a Prow instance for the CNCF.
A
We have a Prow instance — and I guess I should talk about, like — you have the service cluster: the thing that actually gets webhooks, and, you know, your slash-commands go to it, and stuff, right? So you've got one of those running over in CNCF —
A
— land, you've got one running as prow.k8s.io, and Arno is working on getting another one kind of up and running as an intended replacement for prow.k8s.io. Then, underneath that, there are also the build clusters, which are where the pods themselves run, and then there's, like, you know, service accounts and other pieces of infrastructure you can deploy on the build clusters — like the GitHub cache, or, like, service accounts that are bound via Workload Identity to actual GCP service accounts, things like that.
A
I would think of it like: if you need something to have permissions to do something someplace else, then we should be talking about, like, "we need a GCP service account, we need to give that GCP service account these permissions, and then we'll bind that service account via Workload Identity to a Prow build cluster." And it could be, like, the trusted Prow cluster that, you know, we currently use to run all of the staging GCP build jobs; it could be the CNCF's Prow build cluster.
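The Workload Identity binding described here follows a fixed `gcloud` pattern: grant the in-cluster Kubernetes service account the `roles/iam.workloadIdentityUser` role on the GCP service account, using the `PROJECT.svc.id.goog[NAMESPACE/KSA]` member syntax. A sketch that only assembles the command — the project, namespace, and account names below are placeholders, not real WG resources:

```python
def workload_identity_binding(project: str, gsa: str, namespace: str, ksa: str) -> str:
    """Build the gcloud command that lets a K8s SA impersonate a GCP SA via Workload Identity."""
    member = f"serviceAccount:{project}.svc.id.goog[{namespace}/{ksa}]"
    return (
        f"gcloud iam service-accounts add-iam-policy-binding {gsa} "
        f"--project={project} --role=roles/iam.workloadIdentityUser "
        f'--member="{member}"'
    )

# Placeholder names, for illustration only:
print(workload_identity_binding(
    "k8s-infra-example",
    "pusher@k8s-infra-example.iam.gserviceaccount.com",
    "test-pods",
    "prow-job-sa",
))
```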
E
Can I give a little — so, I think where this discussion started was the meeting where, I know, a fair amount came up about Prow jobs that will be moved. And our Prow cluster that we stood up last year was very — it had one job that we wanted to ensure, well, was still up at four o'clock in the morning; everybody made sure it was documented with the things. But then it came to "let's start iterating on what we already have and add a little bit more" — what Dave is working on.
E
It became apparent that a Prow stood up as a dev environment — where I can quickly test things and break things and move on to the next — is not a readily available thing at this point. So our discussion went: oh, we can put it into our cluster that we're running currently. And then that evolved into: if we're going to do a Prow cluster for CNCF permanently, can we have this as a place where we can develop full Prow, understand what we're doing, and move up into production Prow?
E
— do the things where you have all the control. And I think — so, if this is going in a wrong direction — having Prow as a thing that we could hold in our hand, easier in terms of digging faster on it, was kind of the starting point for hardening what we have for CNCF. As this lives in our project here, we can add an expander if we want to add Caleb to this. Now, as it is, that's okay, because this is a cluster that's not talking to anything else real.
E
So I think we did a little bit of "we want a sandbox to play in." And if this is going down a rabbit hole in the end — while we're talking about images, that needs to probably stay the priority; leave the Prow jobs for things that are specific to that, and we have places where that's okay. But that's the thinking of how we got to this.
H
I'm going to quickly answer this, because we're two minutes from the end of the meeting and I want Eddie to have — basically, two minutes to talk about this issue and close it out. I'm sorry, you may need to skip the issue you posted.
H
Can we talk about this in Slack? Excuse me — okay. So, basically, I think you don't need to use the current Prow cluster used by prow.k8s.io. You can basically create a new GKE cluster in the sandbox project and use it for Prow and for a build cluster. To begin, you can look at, basically, the Terraform configuration we made for the Prow build cluster in k8s-infra, and you can start from that.
E
My understanding — sorry — so, kind of: what Caleb is working on, bearing with you more, and making sure that the GKE cluster we'll stand up is in the sandbox, kind of aligns with what you're busy doing standing up the second Prow? Or is this two things? Sorry.
H
Okay, let's talk after this meeting; I will walk through the process.
A
Yeah — and I think, I think I'm in alignment on this, like: okay, yeah, you get your own build cluster in the sandbox, and you would, like, copy-paste something-something which has the build cluster. And then we can talk about — you were also like, "but I just want to be able to schedule arbitrary Prow jobs for testing," because the Prow development and job-testing experience is terrible, and boy...
A
If you can just talk to the Prow control plane and use the command called mkpj — where you're just like, "make this Prow job" — on the Prow control plane, then it will schedule it to your build cluster, which you can then also, like, kubectl — look at the pods and look at the logs and, like, poke at them and whatever else. Like, let's do it — that sounds great. But maybe the Prow control plane where you have mkpj access to — I don't know.
E
And to iterate on what we said earlier: I think a fair amount of what I start with is fear of the things I don't know and the things I might break. So pulling us back to "hey, we've got things that work, and this is how we unblock staying in those things" is great. My default is "can I do this on the side, so you can then come and get it," and it sounds like that is less productive.
A
This is why I'm trying to respond to each of y'all's questions with: what specific problem are you trying to solve? You know, like: where are you blocked? What are you trying to do? And maybe we'll even get to, like, why are you trying to do that? What is it you're trying to — anyway, sorry. Arno, go ahead.
K
Yeah, so just real quick: I saw the Slack message in the channel. Putting on my employer hat for a minute — I do work for Amazon Web Services; they do pay me to work on Kubernetes. So I don't know who owns any of those accounts, but I can poke bears, or badgers, or I can help out however.
A
— we've got to do the same thing as for our GCP projects. The thing is, the people who actually have the keys to go, like, use Amazon's console to look at everything and configure everything are those three people in the AWS admins group — Justin Santa Barbara... and Tim Hockin.
A
Justin is responsive, but also can disappear for large periods of time, and Ihor will say yes to anything — which is great, but I would only use that as a tool to unblock ourselves. I would want somebody who, like, knows what they're doing with AWS and is conscious of the costs and implications of doing things with our AWS account.
A
— stuff that we have with the GCP stuff. And Justin has some open PRs, I think, to help with some of that. But, like, the TL;DR is: if we can't get Justin, then I don't know — do you want to be the person who has the keys? Or who would you suggest that we find to get the keys? Right, so — yeah, the conversation needs to go that way.
K
I definitely don't have any of those answers about cost, or whether this is the right thing to do, and I'm happy to help how I can, and poke people, and maybe find someone on the Amazon side who wants to do it — so, sure.
H
I feel like, basically, we should first reach out to Justin, because I think he's the one closest to unblocking this issue, and later talk about how we can expand the people.
H
So yeah, Eddie, if you can basically ping Justin — talk with Justin and see if he can unblock people, I think, from kOps, and, plus, just the lifecycle — it would be great. And later, maybe, as an Amazon employee, see how we can basically expand the number of people able to access this profile — that would be great.
A
Thank you, yeah. I'm sorry, I'm sorry it's like this, but we really appreciate you helping us to make it better. Okay, we're three minutes over time — do we want to call it?