From YouTube: Kubernetes WG K8s Infra Bi-Weekly Meeting 20200 29
A: Hi everyone, my name is Bart Smith, and we are starting our bi-weekly community call for k8s infra. I would like to remind all of you about our code of conduct, which we can summarize as "be excellent to each other." At the beginning I would like to ask for two people to help: one for taking notes, and a second one to be our action items manager.
D: Sorry, I was having some audio problems. I heard my name, but I couldn't hear what the question was.

A: The question was actually about the second one. Okay, so let's start with our... actually, no: maybe first, is there anybody new who would like to introduce themselves?
E: I'm somewhat new, but I already introduced myself a couple of months ago. I took a rest from joining these kinds of meetings, but I'm planning to continue joining them and helping the team. I'm Milton, by the way; I'm coming from IKEA, on the DevOps team.
A: We actually did a billing review last time, so we are skipping that right now. Let's jump into the action items review, and the first action item. Actually, I didn't do a lot; I was focusing mostly on moving the projects to the new infrastructure, where we have actually had a lot of successes.
C: The reading I was doing says that you can't transfer Data Studio reports between domains like that; it's going to have to be created from scratch, unfortunately. The other snag I hit was when I tried doing what you did, Bart, which was to copy the existing Data Studio report and monkey with it.
D: I'm happy to poke him. What is the specific ask — "what the hell are you doing?", or "turn this over", or "can we just clone his report and reverse engineer it?"
A: Great. Next: getting OctoDNS working without a service account key. I started the process: I created a list of the steps, which I started working through in one of the issues.
A: The first step is that we will have to store our own image of OctoDNS, and I created a pull request in preparation for that. My question is — I didn't actually note the number of the PR; it is PR number 805 — I would like to ask if everybody is okay with this repository being called infra-tools. That is, I think, my only question, and if everybody is okay with that, I would love to see somebody approve that pull request.
C: Sorry, could you repeat — are we talking about OctoDNS? You need a repo?
A: For OctoDNS, yeah. Because we are using a Docker image which we are building using the Makefile, and then we will use this image in the prow job, it would be good to, you know, store this image under a registry managed by us. So I thought that we could create a registry called infra-tools.
D: Anybody — sorry, I'm jumping between windows. That seems fine; I don't love the name, but I don't have anything better. In fact, I already LGTM'd it — I was just looking to see what happened to my LGTM — and I pinged Justin, so he'll get back in touch with you.
A: About your LGTM: I pushed some changes, because there were some needed improvements.
A: Okay, so let's say we agree on that name at this point, and if anybody has strong opinions later, we can revisit it.
A: The next item is the namespace access issues — can you give updates on that?
D: Basically, at some point we changed the default for groups from "members can view members" to "only managers can view members," and that broke the security groups mechanism, which was documented, but not very boldly. So it took some extra eyeballs, and explaining it to the rubber duck, in order to figure out what was going on. Once we fixed that, the problem was resolved.
A: It's been running without problems so far. I think we decided that today, if there aren't any problems, we will be able to delete the development cluster.
D: I didn't change anything. All right — you know what, it's not that exciting of a screen; I'll just go ahead and do it. Where is it... development-2? Oh, it's warning that the node version is outdated. Anyway, all right: goodbye, development-2. Fixed the glitch.
A: The next topic: as of today they are working on our proper domains, and they are also working on the aaa cluster. We moved them today with James, and so far so good.
C: Next: the Tempelis image.

C: So, the tool that's responsible for taking configuration in the community repo and then making it happen in Slack — like creating channels and figuring out who's a member and stuff like that — is called Tempelis. It also comes from the repo called slack-infra, but it's not deployed as a service; it's run as a cron job. The image for that is currently hosted in some google.com project. It would be great to have it hosted in the same place as all those other images.
A: Is everybody okay with the name slack-tools? Why not slack-infra? Because we already have k8s-staging-slack-infra, and I didn't want to duplicate it. But I'm okay with changing it to slack-infra — so k8s-staging-slack-infra for staging.
C: Then why does that project already exist? Which project? Sorry, I thought — okay. So, for consistency, I think it would be helpful if the registry into which we promote images is named after the subproject that's the home of those images, right? So in this case the subproject is slack-infra, so I would assume they have a k8s-staging-slack-infra project and associated buckets and GCR repos and stuff like that.
A: Okay — I'm actually not sure right now about it, but I'm okay with that, so I will change it to slack-infra.
A: Sounds good. Okay, the next topic: managing our secrets.
A: Yeah, okay — you suggested that maybe we can also discuss the suggestion to use something other than git-crypt. Let's get to that later.
A: So: switching to the prod GCS bucket for kind, continuing the pattern used for CNI. Can you give us an update?
A: I tried to understand how Stackdriver works and whether we can make some dashboards and charts, etc., but I'm not sure whether I'm missing permissions which I should have — because we've added the permissions, I feel like two or four weeks ago, but I still see the error "failed to load," without any explanation, so I can't play with it at all.
A: My first topic is getting consensus on the way of adding permissions for service accounts. So far we were doing it by directly assigning them via gcloud commands in our scripts, and Aaron created a pull request where he just added the secret service account to the Google group, which I think I like, but I'm not sure. What do you think — which approach?
C: So in this case, it seemed like there was a pretty clear mapping: there was a kubernetes.io group that was designed just for people who had audit permission, and I wanted a service account to be able to do the exact same thing that those people could do, so I felt it was more appropriate to add it to that Google group. I think there are other times when we want service accounts to be able to do things that are not so cleanly described by human groups.
D: Yeah, I think it's interesting; we should figure out what we think the guiding principles for this are. There are a couple of things to think about. One: we didn't really use GCP roles very well — I didn't; I'll take the responsibility — and so I have a lot of places where I just bind various service account names or group names to individual pre-established roles.
D: What we probably should do is, for each of the groups that we define in Google Groups, define a role, bind the group to the role, and then add permissions to the role, so we don't have to do it multiple times. We can even do it at the org level, so we don't have to do it multiple times in multiple projects — but that's a bigger change that needs some careful thinking if we're going to make it. I love the idea of adding service accounts to groups.
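The chain described here — define a role once, bind the Google Group to it, then manage access purely through group membership — might look roughly like this in gcloud. Everything below (org ID, group address, service account) is a hypothetical placeholder, and the commands are echoed rather than executed:

```shell
# Dry-run sketch of "group -> role -> permissions"; replace the echo in run()
# with "$@" to actually execute. All names and IDs are hypothetical placeholders.
set -eu

ORG_ID="000000000000"                                      # placeholder org ID
GROUP="k8s-infra-dns-admins@kubernetes.io"                 # placeholder group
SA="dns-updater@example-project.iam.gserviceaccount.com"   # placeholder SA

run() { CMDS="${CMDS-} $*"; echo "would run: $*"; }

# 1. Define a custom role once (here at the org level).
run gcloud iam roles create dns_admin --organization="$ORG_ID" \
  --permissions=dns.changes.create,dns.resourceRecordSets.update

# 2. Bind the group to the role, also once.
run gcloud organizations add-iam-policy-binding "$ORG_ID" \
  --member="group:$GROUP" --role="organizations/$ORG_ID/roles/dns_admin"

# 3. From then on, granting access is just group membership -- and that works
#    for service accounts as well as humans, which is the point under discussion.
run gcloud identity groups memberships add \
  --group-email="$GROUP" --member-email="$SA"
```

Note that step 3 needs only group-manager rights, not the ability to edit IAM bindings — the delegation property discussed in this exchange.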
D: I'm trying to think through whether there are any real downsides to it, especially if we're adding something like a remote service account, because you don't need to have special IAM permissions in order to do that, right? Anyone in this group can do that. It's essentially delegating permission to link to established permissions, as opposed to being able to set IAM bindings on GCP, right?
A: I asked mostly because I want to move forward with the DNS updates automation. I think I have it all planned, and I'm not sure if I should, because...
A: ...a service account for that purpose, to give the prow job the information about which service account to use. So I think I will start by trying to just add the service account to the group for DNS admins.
D: Yeah. So, hand-waving: if I could do it all over, what I probably would have done is define a dns-admin role — there's actually a predefined role for that, but let's assume we wanted to add one extra permission — or take the auditor role: define an auditor role with all the permissions we want, then bind the group to the role, and then anybody we want to give that role, we just add to the group. I think that's a nice chain of control.
D: The only caveat I can think of here is that I believe the groups feature is still technically beta, so I don't know if we worry about that. I'm not particularly worried — I don't expect they're going to abandon it or anything — but otherwise it seems okay to me.
D: And actually, I just lied: it's the GKE integration that is beta; the GCP integration is GA, so no worries.
A: Sounds good to me. Okay, so I know what to do with that. The next topic.
C: Cool, okay. So, I spent some time over the past couple of days hacking together my own personal Prow instance at crowdupbashfire.dev, and then I stood up a build cluster inside of kubernetes-public, and I connected this instance of Prow to that build cluster. I got as far as confirming that I could run these jobs — which it seems the UI is hiding, but you know — running end-to-end tests for kubernetes, running end-to-end tests for node. And I think, if I refresh here... this was the auditing thing I was talking about earlier.
C: Okay. So I feel like that's given me enough info about what's involved in migrating Prow build clusters over. I've put together this document — it's linked from the issue — that's the Prow migration plan, where I propose that we focus first and foremost on standing up build clusters and e2e project pools in the CNCF org, because this is where we expect the bulk of the cost to come from.
C: I think we can set up a trusted cluster in the CNCF org to be able to run trusted jobs, like pushing images and stuff, and this will also empower members of the community to troubleshoot why certain projects or jobs are unable to run. This is in direct contrast to the situation last week, I think, where SIG Node couldn't figure out why a google.com project was breaking all node e2e tests.
C: So, given the time that I have, I'm going to assume some familiarity, but please feel free to ask me questions. Basically — first off, the way that Prow does things today: we have one 160-node build cluster that we schedule pretty much every single job to, and then we schedule trusted jobs to the cluster that runs Prow itself.
C: What I'm proposing instead is that we have two build clusters that live over in the kubernetes-public project: one for all the untrusted jobs, one for trusted jobs. I feel like the untrusted cluster doesn't necessarily have to start at 160 nodes, but we need to anticipate it could grow to that size. And our 160...
C: ...n1s — okay, wow, that's a lot of cores. Jeez, yeah; it's mostly because the Go builds and the Bazel builds eat memory like candy, for kubernetes specifically. So, let's see here. I'm trying to think of this from a billing perspective: right now, I don't know how to say that a given job costs us this much money per month, and it'd be nice to be able to get near that. So right now, the cost of a job...
C: ...that's what I call the e2e project, where the job is going to create a kubernetes cluster: it's going to launch a couple of VMs, it's going to install kubernetes on them, it might create some GCS buckets to store artifacts, things like that. All of that together equals the cost of one run of a job.
C: So that's what is done today, and I'm proposing we basically start with that. But as we start to feel like costs are unexplained or rising, one thing we can do instead is start to create different pools of projects intended for different SIGs. So we could say: SIG Scalability, you use these projects over here; SIG Node, you run all your end-to-end tests over here; SIG Release, you run all your tests over here.
C: I feel like that causes a lot more administrivia in job creation and job management, but it is theoretically possible; I'll have to think about that a bit more. It also kind of comes down to how much we care about billing in the first place right now. Like I said, I think we will care.
D: Right. So already — I pulled the billing report today, just to take a look at the month to date, and the month to date is in the thousands, right? So it's not egregious, but it is several thousand, and the majority of that comes from that domain. I agree — yes, sure.
D: So I expect that 160 times eight cores is also going to drown out the smaller stuff, right? And so, you know, I think we will care eventually, and we will want to perhaps use this as the financial motivator for fixing test flakes, right?
C: Right — and that, to me, goes back to: I feel like it's up to SIGs to fix test flakes and it's up to SIGs to fix their tests, which is why I was thinking of bringing the costs back to SIGs rather than doing it on a per-job basis. Or we start with SIGs, and then, if we find we need to, we could go to a per-job basis later.
C: So I just need to — you're out of here in 10 minutes, Tim? Sorry. If you have more, I can give you a couple more minutes before I bounce. I mean, I want to get consensus: this test cluster that I stood up today was just a couple of nodes, and they were n1-highmem-2s.
D: Okay, I trust your judgment on this; I have no context by which to say you're wrong, so that's fine. What about autoscaling — is it just too slow for the use case here? It might be worth us trying.
C: So we can find out. Also: I just straight-up copied the aaa cluster configuration, which means that the build clusters were regional clusters. I don't know that they have to be; I don't know if we care, but I figured I'd just copy-paste for ease.
D: They don't do a lot of pod-to-pod communication, right? — No, none! — So that's not going to... there's no charge there. There's no extra charge for being a regional cluster, and the advantage of being regional is that we can tolerate master upgrades without losing the control plane. So I don't see why we wouldn't make...
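For reference, the regional/zonal distinction being discussed comes down to one flag at creation time; a hedged sketch (the region, machine type, and node counts are illustrative placeholders, not the real cluster config):

```shell
# Regional vs zonal GKE build cluster, dry-run sketch. A regional cluster
# replicates the control plane across the region's zones, so master upgrades
# don't take the control plane away. All values below are illustrative.
set -eu
run() { CMDS="${CMDS-} $*"; echo "would run: $*"; }

# Regional, as copied from the aaa cluster configuration:
run gcloud container clusters create prow-build \
  --project=kubernetes-public --region=us-central1 \
  --machine-type=n1-highmem-8 --num-nodes=1    # --num-nodes is per zone here

# Zonal equivalent, if control-plane availability during upgrades didn't matter:
run gcloud container clusters create prow-build \
  --project=kubernetes-public --zone=us-central1-a \
  --machine-type=n1-highmem-8 --num-nodes=3
```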
C: ...them regional. Okay, there's another question I have for you while you're here. Right now, all of the service accounts that I created for these clusters to act as or use, I created under the kubernetes-public project.
C: I don't know that they have to be there. I just created them there because that's where I was also creating the build cluster, and I created the build cluster in the kubernetes-public project because I felt like that's where you would want everything that's actually being used or consumed to be. But I could just as easily create these build clusters and these service accounts in their own GCP projects, to give them a sort of logical separation from the kubernetes-public project. Do you have any opinion there?
D: I don't have a strong opinion. We laid out the directory structure so you can do either one, right? And it should be trivial to do.
C: For today, for reasons — basically because Workload Identity only has one identity namespace per project — it would be easier if I could use a project per build cluster, and just for logical separation of concerns, I don't feel compelled that they have to exist in the kubernetes-public project.
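The constraint mentioned here comes from Workload Identity exposing exactly one identity namespace per project, `PROJECT_ID.svc.id.goog`. A sketch of the two-sided binding, with all project, namespace, and account names hypothetical:

```shell
# Workload Identity binding sketch (dry-run). One identity namespace per
# project (<project>.svc.id.goog) is why a project per build cluster is
# attractive here. All names are hypothetical placeholders.
set -eu
run() { CMDS="${CMDS-} $*"; echo "would run: $*"; }

PROJECT="k8s-infra-prow-build"                          # hypothetical project
GSA="prow-build@${PROJECT}.iam.gserviceaccount.com"     # hypothetical GSA

# GCP side: let the Kubernetes SA "test-pods/default" act as the Google SA.
run gcloud iam service-accounts add-iam-policy-binding "$GSA" \
  --role=roles/iam.workloadIdentityUser \
  --member="serviceAccount:${PROJECT}.svc.id.goog[test-pods/default]"

# Cluster side: annotate the Kubernetes SA with the Google SA it maps to.
run kubectl annotate serviceaccount default --namespace=test-pods \
  "iam.gke.io/gcp-service-account=${GSA}"
```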
C: I can't think that there are any — the only difference I would see, for reasons to access Prow, would be giving community members the ability to SSH into a given project that Prow is using to stand up VMs or instances, or possibly to access the Prow build cluster to figure out what it's doing. So there could be some piecemeal adding and removing of people that I would want to do by groups; I don't know how that would commingle with the kubernetes-public project.
C: I think that's everything I wanted your sign-off on before moving further. Did you have any questions you wanted to ask before you have to go?
B: Yeah, just a minor thing: we can also try to encourage the jobs that create GCP resources and AWS resources to start labeling them, and that should then flow through into billing reports, and we could, in theory, have almost arbitrary granularity on what happens. I don't know what happens with the cardinality — like whether we can actually get down to the individual job-run level — but we can certainly get down to the job level, I'd imagine.
C: Okay, because I was wondering, yeah — if not the jobs themselves, I'm wondering if we can have Boskos do that: when it leases out a project to a given job, it could label the project with the name of the job that it's leasing it out to, and then we could look for everything that's got a given job's label, or something.
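Boskos doesn't do this out of the box, as far as this discussion goes; a sketch of what the labeling step could look like when a project is handed out (project and job names hypothetical). GCP label values are restricted to lowercase letters, digits, `-` and `_`, so the job name gets sanitized first:

```shell
# Hypothetical sketch: tag a leased project with the job it was leased to,
# so BigQuery billing export rows can later be grouped by the label.
set -eu
run() { CMDS="${CMDS-} $*"; echo "would run: $*"; }

LEASED_PROJECT="k8s-infra-e2e-boskos-001"   # hypothetical pool project
JOB="ci-kubernetes-e2e-gce"                 # hypothetical job name

# Sanitize to the allowed label charset and max length (63 chars):
LABEL=$(printf '%s' "$JOB" | tr -c 'a-z0-9_-' '-' | cut -c1-63)

run gcloud projects update "$LEASED_PROJECT" \
  --update-labels="prow-job=${LABEL}"
```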
B: Yeah, it's worth thinking about, but I think it's probably not the priority, I guess, right? I like the Boskos approach — just having separate GCP projects does feel much easier.
C: Yes — so that was sort of the "naming is hard" portion of this next step. Okay: I was thinking of naming the Prow build cluster prow-build, the trusted Prow build cluster prow-build-trusted, and I was thinking of naming all the projects that would be used by jobs...
B: I don't think Boskos has the ability to split up — like, a partitioned pool? So I guess we'd just create different pools; they'd be different.
C: I could keep going about the rest of the Prow stuff. Essentially, once the build clusters are created, we then hook them up to prow.k8s.io. We need to ask for that Prow cluster to have cluster-admin privileges on these build clusters.
C: I give them a kubeconfig and that works. Then I would set up service accounts that these clusters act as, which will have project editor access to all of the e2e projects in the CNCF org, and which will have write access to the kubernetes-jenkins bucket. That would leave us in the situation where prow.k8s.io and all of the CI artifacts are still within google.com, but all of the job execution and all of the resources being exercised are over in the CNCF org and being billed for over there.
C: We empower people to SSH into projects if things break, and all of the trusted-cluster stuff that we need to bug the k8s-infra oncall for right now — like pushing images and stuff — we can do over in our own trusted cluster, as we develop our own pool of trusted people who make sure that, like, you're really checking in the same Google Cloud Build job and stuff like that. And then there's a lot more to go on from there, about how we move over all the CI artifacts and how we move over prow.k8s.io itself.
C: The place that I want to get to is — I hacked this together myself; I want to get to the point where I'm not the person moving the jobs over. So my intention is to try and get the build clusters set up and hooked up, but then I have other things that I need to focus on. So I feel like the best way to make that happen is to make sure I leave folks with a dev/test cycle that they can trust for migrating jobs over, and hopefully we'll...
C: Also, feel free to drop comments on that doc or whatever if you have suggestions. Oh, right — I did also want to say that I genuinely tried to do Terraform for this stuff instead of adding more bash, and I am not comfortable enough with Terraform just yet.
C: So: I just wanted to try and run through some of the stuff in the in-progress column and see if any of it can be closed out, or what needs to be done. I think we talked about the namespace billing info. Justin, since you are here: Bart has...
C: ...been trying to make a copy of your billing report and see if he can twiddle some bits to figure out how to do per-namespace billing, but he finds that he is unable to copy the data sources used by that report, and I found that I, as a google.com person, also couldn't copy that data source. I wanted to confirm we're using the right data source, and what you would suggest for moving this Data Studio report to a user in the kubernetes.io domain instead of google.com.
B: I can — if you create me a user or something like that, I will get the report into the... what is it, the Google groups — the CNCF Google groups or the kubernetes Google groups?
B: Yeah, I mean, I'm happy to do that and figure out how that works. As far as I know, we're just using what I thought was the standard BigQuery export from the CNCF projects; the numbers match, so I presume that's right. I'm not doing anything fancy there, and I don't know why the permissions don't work. I don't know whether there's a problem with copying the Data Studio source, or whether it's a problem with — actually, it could be, because there's something weird about it being in Drive.
B: So it's not impossible that that is the case, but I will happily — if you give me an account in the correct location, whatever that location is — I will, yeah.
C: Yeah, so I already made the account; I'll get the credentials to you later. Thank you.
C: Does that sound good, Bart, or is there anything else we need to discuss on this issue? Sounds good. Okay, the next one is removing the bart-test namespace in aaa and the related workload identity grants. So this...
C: I don't know the issue offhand to link to, but I'll figure it out. I don't think we have the right people here to talk about allowing subprojects to push to root; I feel like Stephen Augustus and Linus and Tim Hockin need to be here. We might not have enough knowledge about it.
C: Yeah, I think Dims said something about two weeks earlier. I think we shouldn't start that two-week clock until you've sent out the message, so if you can just link to the message in this issue, then we'll know when we can start counting down. Definitely agree. Okay: migrating slack-infra services to the cluster — I think he basically updated us on all that. Yeah, okay.
C: Next up: develop the Prow migration plan. I think I just did that; I will comment with the decisions we made, and then I will start opening issues to describe the work to be done there. DNS update automation...
C: Okay — depending on how quickly I get moving with setting up build clusters, maybe we could get this running on the build cluster over here. Okay, this I just talked about: I created a kubernetes.io user, for which I will hand the credentials to Justin. Dims is not here, but we had something about him creating a script to generate keys for conformance.
C: Okay, static IP management: Tim said he wanted to talk about it at the meeting today, and then we didn't, so I'm going to move this to blocked for now. That's okay, all right, cool. So, like I said, I created issues to talk about turning down clusters when we're done with them. Turning down — this is a google.com internal cluster, but it's helpful for those of us who are tracking what we're migrating. Once we figure out what to do with node-perf-dash, we can turn this down. Storage analysis for billing...
C: I actually have no idea what this is. I also don't remember... okay, I think this is maybe Justin's. Maybe you have something?
B: Yeah, I think this is just about further drill-down on the storage costs. My screen is really small, but currently we just have a per-project spend on GCS, and we can't say which subdirectory — we don't know which files are getting pulled, that sort of thing. That's my understanding, right?
B: I don't think this is GCR; I think this is GCS. Like, I don't think we'll get these stats from — or, the path to get the GCR stats will be different, I think, from the path to get the GCS stats. I agree with you on the meta point, which is that we should prioritize based on where we're spending. So if it's a hundred dollars, it's not worth anyone's time to break that down; if it's a thousand...
C: I'm going to punt on this one — policy around granularity and grouping for... yeah, where should people store their images. Okay: set up a job to scrape and audit IAM policies. Like I said, I have something that does that; the job looks kind of like this.
C: It runs as a service account that's bound to the auditor permissions. It just runs the audit script, diffs everything, and commits it. Now, what I need to do is actually set it up to create a PR. The existing code that's out there to do that isn't quite flexible enough for this; I could totally use someone's help if they want to try refactoring this tool that I referenced, the PR creator.
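The job described here boils down to a small loop; a sketch, with the script path and branch name as hypothetical stand-ins for the real ones:

```shell
# Sketch of the audit job: dump IAM policies, commit only when something
# changed, and (the missing piece) open or update a PR afterwards.
# The script path and message are hypothetical placeholders.
set -eu
run() { CMDS="${CMDS-} $*"; echo "would run: $*"; }

run ./audit/audit-gcp.sh        # dumps current IAM policies into audit/
run git add --all audit/

# Commit (and later PR) only when the dump actually differs:
if ! git diff --cached --quiet 2>/dev/null; then
  run git commit -m "audit: snapshot $(date -u +%Y-%m-%d)"
fi
```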
C: I either need to give an SSH key to that private repo — which I don't know about — or I need to provide an OAuth token and use a URI of this form. So I'd love to have a tool where instead I mounted an OAuth token and it just used that to push, because it will need that same OAuth token to be able to create or automatically update the PR.
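The token-in-URI form referred to here is GitHub's standard HTTPS shape; a sketch of pushing with a mounted OAuth token instead of an SSH key (the mount path and repo are hypothetical):

```shell
# Push over HTTPS with a mounted OAuth token; the same token can then drive
# the GitHub API to create/update the PR. Mount path and repo are hypothetical.
set -eu
run() { CMDS="${CMDS-} $*"; echo "would run: $*"; }

TOKEN_PATH="/etc/github/oauth"               # hypothetical secret mount
TOKEN="dummy-token"
[ -r "$TOKEN_PATH" ] && TOKEN=$(cat "$TOKEN_PATH")

# GitHub's standard token-auth URI form:
REMOTE="https://x-access-token:${TOKEN}@github.com/example-org/audit.git"

run git push "$REMOTE" HEAD:refs/heads/autoaudit
```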
A: My question is: does it get triggered automatically, or is it a periodically called job, or...?
C: All I have it set up as right now is a postsubmit, but I do think we would eventually want it to be a periodic, because the postsubmit won't work. Yeah — I just had it as a postsubmit for testing purposes right now, but I agree: to do the real, actual thing, it would be a periodic, and it wouldn't bother creating a PR, or updating a PR if one is already open, if there have been no changes.
A: The thing is that I would like to have them still running in case we would have to quickly, you know, switch DNS back to the old cluster. But let's say in a week or two, I'm happy to just turn them down.
C: So, Google Cloud has this wonderful thing called Secret Manager, which lets me store my secrets in the cloud so that other people can access them, and I can define who accesses them with the same IAM policies that we've been using to define access to everything else. I feel like this is a much better approach for us to use when it comes to storing and sharing secrets with each other, as opposed to the approach we use right now, which is git-crypt — and the reason is that git-crypt is one-size...
C: It's like one repo fits everybody, so any secret that we have stored in the repo, anybody who has their keys added to git-crypt can view, and maybe that's not quite appropriate. So I'd like to suggest we abandon git-crypt and instead move to using Secret Manager.
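The difference from git-crypt is that access is granted per secret rather than per repo; a sketch of the flow (the secret name and group are hypothetical placeholders):

```shell
# Secret Manager sketch (dry-run): per-secret IAM instead of git-crypt's
# single shared keyring. Secret and group names are hypothetical.
set -eu
run() { CMDS="${CMDS-} $*"; echo "would run: $*"; }

PROJECT="kubernetes-public"
SECRET="slack-oauth-token"                          # hypothetical secret
GROUP="k8s-infra-slack-admins@kubernetes.io"        # hypothetical group

run gcloud secrets create "$SECRET" --project="$PROJECT" \
  --replication-policy=automatic
run gcloud secrets versions add "$SECRET" --project="$PROJECT" --data-file=-

# Access is scoped to this one secret, via ordinary IAM:
run gcloud secrets add-iam-policy-binding "$SECRET" --project="$PROJECT" \
  --member="group:$GROUP" --role=roles/secretmanager.secretAccessor

run gcloud secrets versions access latest --secret="$SECRET" --project="$PROJECT"
```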
A: That definitely sounds good to me. I will ping Dims to see what he thinks about it; I think we can also ask Tim Hockin, I think.
C: I'm fine, all right. I was thinking, as a first proof of concept — I'll ping him to see if it's okay if I enable the API on the kubernetes-public... or on the G Suite project — sorry, no, on the kubernetes-public project — and then I want to try undoing our git-crypt integration for the groups.
G: Just as an interesting side effect: I think that, with the Google Cloud secrets, we might be able to — if, for example, a service account isn't enough for a job or something — the Google Cloud secrets could be mounted into a pod in GKE.
G: I think there's an integration there, so that would be an interesting workaround if something isn't working with identities or something like that — which was, for example, something we might run into with OctoDNS, since we don't know if it actually likes the identity versus a token.
C: Sounds good to me — right, yeah, I completely agree. I couldn't find anything in the one blog post I read that talked specifically about GKE integration, but I would be shocked if there isn't GKE integration for this.
C: Thank you for telling us about it. I'm going to stop my share there and see if there's anything else we wanted to get to; I'll hand it back over to you.
G: There's one that came up again when Justin was introducing that PR: it's about binary artifacts, and another project that might need them. The actual pull request is on a staging repo, but the usage is for binary artifacts, and I think they might be user-facing.
G: So we should probably figure out whether anything that is user-facing on the binary side should live in staging, or be pushed — similar to kind — so as to be production-like and therefore have longer retention, with us creating a manual production GCS bucket for now, until we have binary promotion.
G: There is one — it's pull request 811, a staging project for etcdadm — and they want to pre-build their binaries for easier user access. So it's not currently completely necessary, but I feel like using staging images — or serving from the staging bucket — would be kind of a weird decision.
B: Yeah, if I can just say — so, I actually opened the etcdadm pull request, and I'm also hoping to work on getting the promoter work going: the binary artifact promoter. Now that we have the image promoter going — I was sort of taking a break from it until we got the image promoter going, and now we have...
B: ...the promoter going, at least partially, I'm going to try to get the binary promoter going as well.
C: I would also throw out there: I think there are few enough of these that maybe we'd be willing to — I think Ben just went through this for kind, right, so maybe Ben could show you how to do the same thing.
C: Oh, I thought you were pushing directly to a prod bucket.
A: Okay, let's move further discussion to our Slack channel. Thank you very much, all of you, for being here; there were a lot of interesting discussions. Have a great day, everybody.