From YouTube: Kubernetes WG K8s Infra BI-Weekly Meeting for 20200401
A: Hi everyone, my name is Bart Smikla and I will be your host for today's meeting. I would like to remind everyone about our code of conduct, which we can summarize as "be excellent to each other." Is there anyone who would like to introduce himself or herself? ...I don't know.
C: I can speak a little bit to that. He sent out another notice on March 18th about the vanity domain flip.
C: The PR for handling child images went through. The proposal for optimizing backups to reduce token usage also went through, and I think there's a meeting scheduled starting in two hours or so to get folks into the same place. For the vanity domain flip... hey Linus, I was just trying to say I think everything is set for the vanity domain flip. I wasn't sure if there was anything specific you had to add there from an action items perspective.
F: Yeah, I checked things yesterday. I don't think there's anything to add, other than I just need to make sure that the new prod is a superset of the old one.
A: Okay, so let's jump to the audit. Actually, I'm not sure which audit we are talking about here, but there is a new GCP audit merged last week in our repository, so feel free to look at all the resources. Well, maybe not all yet, but the resources which we are currently auditing.
A: So it should be... it is now there.
G: Hey guys, sorry I'm late. We talked by email or a bug or something about maybe setting up one of these meetings where we do some partial audit as a group and then shard out the work. I would love to do that after we get the stuff done today. So maybe we plan that for the next meeting, or the next next meeting. Yeah, sounds good to me.
A: So this was the last of our action items from last time. Let's jump into the open discussion now. The first topic is policy around granularity and grouping for artifact storage.
C: This came out of a discussion that started happening on a PR related to a Kubernetes CSI artifact, and I felt like it maybe merited discussion with people here. It's maybe not clear to me if we have all the appropriate people here, but to give some context: somebody from CSI wanted to add a project for the secrets-store CSI driver, and then Tim ended up commenting on this PR about, like, hey.
So
the
reason
I
brought
it
up
is
only
because
the
like
enough
patterns
have
started
to
emerge
that
I
saw
that
we
have.
You
know
some
number
of
cluster
api
dash
prefixes.
That
are
all
excuse
me
all
very
similar,
and
I
you
know
my
organizat
organizing
brain
said.
Oh
maybe
I
should
organize
that
more,
maybe
not
I'm
okay
with
the
answer
being
not
worth
it,
but
I
thought
it
was
worth
throwing
out
there.
So
at
least
when
somebody
brings
it
up
again
later
we
can
say
yeah.
C: Okay, and that's fair, and I feel like maybe we can have that discussion with the cluster-api folks when they're back. I sort of viewed it as: the appropriate level of granularity might be that each subproject kind of gets their own credentials and their own bucket. And there are some subprojects, like CSI for example, that encompass multiple repos and multiple artifacts, whereas for cluster-api, for example, each of the individual providers is listed as its own subproject and stuff.
G: So again, there are two points. There's the grouping of who should get them; on staging they're basically free to us, so we shouldn't be stingy about them. You know, I'm fine with lots of granularity, although we already have what, 50-something projects, and that'll probably 5x before we're done, which, you know, makes oversight and audit a little bit complicated, but not terribly so, especially when you see that many of those owner groups are the same people. I'm totally cool with people declaring multiple hats distinctly.
G: I'm all about that. Then the second question was, from an end-user UX point of view: does it make sense to nest the concepts in the final result? And again, I truly don't care; I just thought it was worth asking and discussing before we get too much farther. In some sense the horse is already out of the barn, since cluster-api has already published some number of artifacts, but it's not so far out of the barn that we can't still throw a rope around it.
G: The benefit is that you see that all these cluster-api projects are explicitly grouped together, because they have a sort of directory that covers them all, as opposed to just a common prefix. Now, the reality is that within Cloud Storage, which backs GCR, those are literally the same thing and we're replacing a dash with a slash, right? But in how people look at their image names, it could be meaningful.
G: I guess it's not, since most people will reference them by their full path anyway. The other thing that might be meaningful is if people go browsing for "hey, what are all the cluster-api providers that are available?" They could use the Docker registry API to list everything in that directory, as opposed to listing everything with a prefix, which I don't think is a supported operation.
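A rough sketch of the browsing case G mentions, assuming GCR's extension to the Docker Registry v2 API in which a tags/list response carries a "child" array of nested repositories (plain prefix listing is not part of the standard registry API; the registry host and path here are illustrative):

```python
# List the repositories nested under a "directory" in GCR.
# Relies on GCR's "child" field in the /v2/<repo>/tags/list response,
# which standard Docker registries do not guarantee.
import requests

def list_child_repos(registry: str, repo: str) -> list:
    resp = requests.get(f"https://{registry}/v2/{repo}/tags/list", timeout=10)
    resp.raise_for_status()
    return resp.json().get("child", [])

# Hypothetical nested layout: everything under .../cluster-api/
print(list_child_repos("us.gcr.io", "k8s-artifacts-prod/cluster-api"))
```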
A: My opinion about this is: it sounds nice, but I feel like we have so many things started and not finished that it can be, you know, a stretch goal to think about a little bit, but I don't think that it is so necessary to decide right now or change it right now.
C: Okay, and I was gonna say something similar. To me it sounded like a nice human-friendly grouping, which I am a fan of, but it also seems like a more arbitrary grouping that's maybe a little bit tougher to mechanically enforce as we look for a policy to apply to other subprojects and stuff.
G: That's true, and I guess once we've done the vanity flip... like, we've been telling people: don't publish these URLs too widely, because they're going to get a vanity domain, and that way we get the abstraction. Once we have the vanity domain, I expect people to publish these image names pretty widely, and at that point we don't want to change them. So.
G: So I'm fine with just saying this discussion is done, we're not going to do it, it's not worth the energy. And if somebody should come up and say "hey, I'm going to have 10 staging projects and I'd like them to be all grouped together," we can consider that: they want to do it, so we'll do it, and other groups don't care, so they don't. That's fine with me too.
G: Yeah, I mean, it's not complicated. I think all we would have to do is change the directory name in the manifest and commit that, and the promoter would automatically kick in and do its work. The old images would still be there, and we could just leave them there for anybody who had old references to them. I don't know if Linus has ever tested it this way, but I bet the promoter would just do its thing.
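A minimal sketch of the rename G describes, assuming the promoter's manifest layout of the time (a YAML file with registries and images), with the file path and image names invented for illustration; the old images simply stay where they are after the promoter reconciles:

```python
# Rewrite image names in a promoter manifest, turning a dash prefix
# into a nested directory, then let the promoter pick up the commit.
import yaml  # PyYAML

PATH = "manifests/k8s-staging-cluster-api/images.yaml"  # hypothetical

with open(PATH) as f:
    manifest = yaml.safe_load(f)

for image in manifest.get("images", []):
    # e.g. "cluster-api-aws-controller" -> "cluster-api/aws-controller"
    image["name"] = image["name"].replace("cluster-api-", "cluster-api/", 1)

with open(PATH, "w") as f:
    yaml.safe_dump(manifest, f, sort_keys=False)
```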
F: Right, I also think so, but I fail to recall if there's an explicit test that covers this part, so, yeah.
F: Yeah, I would just add a test, I guess, as an action item, if we really wanted to make sure that it supports this. But as I recall, the promoter currently does a reconciliation check to make sure that no two manifests step on each other's toes. So that's what leads me to think that it's currently supported today.
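For illustration, the kind of overlap check F recalls could look like the following sketch; the manifest layout and glob path are assumptions, not taken from the promoter's actual test suite:

```python
# Fail if two promoter manifests claim the same destination image:tag.
import glob
from collections import defaultdict

import yaml

claims = defaultdict(list)
for path in glob.glob("manifests/**/images.yaml", recursive=True):
    with open(path) as f:
        manifest = yaml.safe_load(f)
    destinations = [r["name"] for r in manifest["registries"] if not r.get("src")]
    for image in manifest.get("images", []):
        for digest, tags in image.get("dmap", {}).items():
            for tag in tags:
                for dest in destinations:
                    claims[f"{dest}/{image['name']}:{tag}"].append(path)

overlaps = {ref: paths for ref, paths in claims.items() if len(paths) > 1}
assert not overlaps, f"manifests step on each other's toes: {overlaps}"
```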
C: Okay, the last thing I'll say is I'll take an AI to go and see if I can get some input from the cluster-api folks.
B: So yeah, this is more of just an FYI. The context around this is that we run two instances of the publishing bot today: one is the Godeps version, for versions 1.14 and below, and one is the Go-modules-friendly version, for versions 1.15 and above. We've kept the Godeps bot running for a while, even though it's basically a no-op run, in case we might need it in the future. But the final patch release for 1.14 was in early December, so it's been quite a while, and now we're sure that we don't need it anymore. I sent a notice to the mailing list about this last week. I will be turning down that instance of the bot after this meeting.
B: Yeah, there were some review comments on it. That PR also moved the configs to the k8s.io repo, and there were some concerns around that. But I saw that a PR was created just now; I got an email about it. I haven't seen it yet, but I can follow up on it and check out what's the best way forward.
A: Anything to add to this? Okay, so I think we can jump into the next topic; it's just mine. One of the last things which I didn't understand in our repository was the artifact server. So I started a discussion with Tim and then asked a few questions of Brendan Burns.
G: As far as I know, it's not running anywhere, and the plan was to make it something that we could use as a redirector to mirrors, and to allow other providers to run mirrors. The last I heard was months ago: Brendan started it up with, I think, Justin.
G: Maybe I'm wrong to sign him up, but I think Justin's name was on that, and they were going to discuss how to do the signatures and, you know, crypto verification that the right images are being served and all of that. And then I think it just stalled out. So the question is:
A: How much of a priority is it, and what state are the files inside the repository in right now? Because if nobody knows, and there's no documentation and no plan, maybe we should remove it for now.
A: Or I can just ask questions about what we expect from it and write some proposal, and maybe somebody will pick it up, or maybe I will have some time to start doing it. But at this point I couldn't find any further discussion about the requirements and that kind of stuff.
C: Is Justin here today? He is not. I have one question that maybe pertains to this. My understanding is that we're super happy with the container image promoter and staging projects, because now we've given subprojects a place to host container images, but we still don't have a policy or process for subprojects to host binary artifacts.
C: That's what we'd like to do for kind, because right now we're using somebody's personal bucket to host some kind artifacts. Right, yeah.
C: I think that's what motivated Justin to work on this in the first place: that this was going to be our solution for the binary artifacts. Or maybe there was discussion at one point of taking the container image promoter and also making it an artifact promoter. I don't really know.
G: Linus has not had any time or bandwidth to work on that, and I have sort of informally asked his boss if he will have time to work on it; the answer is not a clear yes. So, I guess... I don't know if Liam is here; I don't need to speak for him. Hey, you're getting an intern, right? Is your intern going to look at this?
F: Just to be clear, I mean, Justin already did write some stuff there, next to the promoter, for files, and he has been using it for some months, I don't know how many months, but it is in a usable state. So it's not like there has been no work done on it. So I think we would need Justin's input here.
A: The thing which I see right now is that there are a lot of unknowns and we need to document them. So I will start asking questions about this, and I will ask all of you, especially since this is a completely new thing and I didn't know about it. Let's document it and let's iterate over the requirements and, you know, some thoughts about this. I will also involve Brendan, asking what he was expecting from this.
A: So we will see how we can prioritize it.
G: And just to cross-reference: there were two goals. One was to get to a process of artifact promotion, but also to enable mirrors in other providers so that we can address the China bug.
G: An orthogonal, or an additional, thing... I mean, yes, we should definitely think of it as orthogonal, because it brings new requirements, but part of the process was to get to a point where it was all automated, so we could either push to other mirrors or have it set up in a way that the mirrors could self-replicate.
A: Okay, so I have an action item to start writing the stuff about it. Let's jump, because we have a lot more topics; let's jump to another one. So, I poked at the topics around the DNSes, and I saw a lot of TODOs in the directory for DNSes. My question right now is mostly for Tim, because you put the TODOs there: what still needs to be done to automate it? This is the first question.
G: Okay, so the ideal end goal would be that DNS happens automatically: somebody approves a commit to the DNS repository, and some bot automatically pushes that to Cloud DNS. We can start to set that up now, especially since we've got the workload identity stuff and we've got the cluster; all the pieces are in place.
G: OctoDNS will require you to set a force flag if there are more than, I think, 30 updates at a time, which is generally not the case, but if it is, we just need to decide what it is that we want to do.
G: We can always set the force flag, but then that means that if we accidentally deleted the entire DNS tree, it would nuke the whole thing; there's no human override there. Or we just say: look, if it requires the force flag, we just have to throw up an alert and make a human do it, which I'm okay with too; it's a little bit more complicated. Okay.
A: But this topic also involves how we are going to automate it, you know. So we would need to set up some webhooks to point to our cluster to do some work, and I don't think we have a process for that.
G: I'm not sure that we need to set up a webhook. We could literally just run a pod that runs git-sync as a sidecar and watches for changes in git-sync. We need to figure out the details: just, you know, take and store a hash of the DNS subdirectory or something, and when that changes, run OctoDNS against it. So that sounds good.
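A minimal sketch of that no-webhook loop, assuming a shared volume kept fresh by a git-sync sidecar and an OctoDNS config checked into the DNS directory (all paths and filenames here are made up):

```python
# Watch the git-sync checkout; when the DNS subdirectory's content
# changes, run octodns-sync against it. A --force run for large diffs
# (per the force-flag discussion above) would be a deliberate, separate
# human action, not part of this loop.
import hashlib
import pathlib
import subprocess
import time

DNS_DIR = pathlib.Path("/git/repo/dns")  # volume shared with git-sync

def tree_hash(root: pathlib.Path) -> str:
    digest = hashlib.sha256()
    for path in sorted(root.rglob("*.yaml")):
        digest.update(path.read_bytes())
    return digest.hexdigest()

last_seen = None
while True:
    current = tree_hash(DNS_DIR)
    if current != last_seen:
        subprocess.run(
            ["octodns-sync", "--config-file", str(DNS_DIR / "config.yaml"), "--doit"],
            check=True,
        )
        last_seen = current
    time.sleep(30)
```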
A: But I have no knowledge about it: how does it work, how do we do it? Do you think it would be possible for us to do some pair programming, maybe, so you can show me? Sure, yeah. So my question for you is... an action item for you to find a time for me. Okay.
G: We had a semi-regular time before; with this new world order the calendar is all sorts of messed up. You're still CET, right? Yep. Why don't we shoot for Tuesday, my AM, your afternoon? All right, I'll throw something on the calendar right now and I'll copy you.
G: ...areas of the infrastructure. So we could totally do it with Prow, except that we don't have a community Prow yet. Yeah, we could do it in the non-community Prow if we wanted to. I don't have very much familiarity with Prow as to why it would be better or worse than this.
A: So, even better: here I didn't know about one thing, and about the second thing too, so I can learn from that. Okay, okay, perfect. So the next topic, actually, is... I wanted to show, for maybe other people who don't know, what the process currently is if you would like to move from somewhere to the new infrastructure.
A: So if you are interested, just look into these steps which I wrote. But there is a big gap which I see, about how to manage the subdomains; we don't have a process. Say we are, for example, at the last step of moving our project, and we want to test it with some subdomain, but we don't want to, you know, flip the switch and move it from some other place to this test environment which we are just testing.
A: What is the process of, you know, using some domain, not the proper domain yet, to test, you know, certificates, to test if the traffic is coming, etc., etc.? So...
A: That's the process which I would like to discuss, because I think that it would be good to have some domain which would be, for example, directly pointing to some ingress, and where you can manually create some subdomains which would be, for example, scrapped after 24 hours, just for testing the process, or something like that. Okay.
G: No, I have no major objections; we don't have anything like that. I can tell you what I did for the k8s.io and gcsweb flips, which was: I brought up the ingress in the new cluster.
G: Obviously DNS wasn't pointing to it yet. I created the certificate resources, which went out and provisioned a self-signed certificate, and then I ran tests against the IP address, with the hostname header and the insecure flag on curl, and I verified that all of the hosts and paths that I expected to work on the new server did work on the new server. Then, once I was convinced of that, I manually copied the old certificate over so that SSL would be correct, and then flipped the DNS; and when the DNS flipped, certificate manager just took over and started managing the new certificate. So the only privileged operation in there at all was reading the old certificate out and copying it up into the new server.
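A sketch of that pre-flip smoke test, the Python equivalent of curl with a Host header and --insecure (the IP, hosts, and paths below are placeholders):

```python
# Hit the new ingress by IP before DNS points at it, sending the real
# Host header and skipping TLS verification, since the certificate is
# still self-signed at this stage.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

NEW_INGRESS_IP = "203.0.113.10"
EXPECTED = [
    ("k8s.io", "/"),
    ("gcsweb.k8s.io", "/gcs/kubernetes-release/"),
]

for host, path in EXPECTED:
    response = requests.get(
        f"https://{NEW_INGRESS_IP}{path}",
        headers={"Host": host},
        verify=False,  # curl --insecure
        allow_redirects=False,
        timeout=10,
    )
    print(host, path, response.status_code)
```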
A: So that's good, because we also did that with James last week for perf-dash; we checked if the logs look right. But I'm missing the part where you wrote the tests for this. So if you have...
A: So that's good. I will check that, but I will also start a discussion, and I will try to document the suggestion of using some domain and then scrapping the records every 24 hours.
H: No, I was just going to say, yeah: if we can actually set up a DNS record, with any domain at all, pointing to the IP address, then you can test out the full Let's Encrypt flow there if you'd like. Whether we use a subdomain of one of our existing ones or what we do there, I don't know. I guess it technically shouldn't be unsafe to allow having something like staging.x-k8s.io. I think that should be okay.
G: x-k8s.io: we registered that domain for people to declare CRDs that are managed by SIGs, that are sort of under the umbrella of the Kubernetes project but not official, like they're not part of the required set of APIs.
G: I'm open to hearing why, but I don't see that as a particular problem if we are smart with the subdomain. My other random...
A: For us, one subdomain would be used as a wildcard inside the ingress, and then you can use just some testing subdomain for that particular case of just testing.
G: So I think what we need here is something that writes up: what are the goals, what are we hoping to test and prove through those tests, and what do we need to achieve that? If we want a subdomain with a delegation on which we allow more, less tightly controlled, automation, I'm okay with that. But I want to understand what exactly we're proving through that test.
A: Okay, so, actually, I created the issue, so let's move the question there. Because, you know, I deployed perf-dash to the new cluster, but I don't have enough knowledge about perf-dash itself to test if it's working correctly, so I need to involve the people from perf-dash. But also, it would be easier to just have tests where I would check if it's even showing anything or, if it's breaking, how it's breaking in a UI way.
A: Okay, so let's move the discussion to the issue about that particular case. The next thing is about the billing, because we all know that we have billing, but there was a discussion started about using some G Suite account to have access to edit this, and I would like to have this discussion again, because right now, I don't know... the only one who has access to it is Justin, I think.
A: At least, you know, there are some issues, like adding the per-k8s-namespace billing, and the only one who can do it right now is Justin. Maybe somebody else could help with that, or I would like to help with that, but I can't, because I don't have access.
G: So I thought that Justin had shared a doc. I mean, it's all done through the G Suite API; I thought he had shared something with the group, with an account like gcp-accounting or something.
A: Maybe, maybe it is. I wanted to ask about these two, but without him here...
G: I think there are two issues. One is that we should probably have a G Suite account to own the doc, a kubernetes.io G Suite account, instead of Justin's own G Suite or his own Gmail address, just to own it, just to give it an anchor. And probably there will be other such docs that we would like to have a G Suite account create and then grant edit access to. That's something for steering, since we have exactly three G Suite accounts right now.
G: This would be a fourth, at least as of the last time I knew about it.
A: I can't, because I checked, but I'm not part of the billing... not in the accounting group, I think.
G: We need... like, I want to make sure that the doc is owned by somebody who can't disappear, right? Like, suppose Justin, because he's a very temperamental guy, got mad and decided he wanted to get rid of the doc and just break the link. So just having a G Suite account where we can anchor the docs is all we're asking for there.
C: Right, let me take that.
G: Yeah, so I did a little bit of digging, and I set up some manual monitoring, just for the k8s.io site, to make sure that it's up, and I was able to set that all up manually. I filed an issue to try to figure out what the command line for it is, and lo and behold, there was a command line coming for it, and it just went into, I think, gcloud alpha fairly recently, or maybe gcloud beta fairly recently.
G: I have not yet tried to script what I did manually. I would like us to try to have a script that recreates what we've done, so that we can understand it, and I would like to add it to the audit, but I don't know how to do that yet. My...
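Absent the gcloud commands G mentions, one way to script such an uptime check is through the Cloud Monitoring client library; a rough sketch, with the project ID and check parameters invented for illustration:

```python
# Create an HTTPS uptime check for k8s.io via the Cloud Monitoring API.
from google.cloud import monitoring_v3

PROJECT_ID = "my-k8s-infra-project"  # hypothetical project ID

client = monitoring_v3.UptimeCheckServiceClient()
config = monitoring_v3.UptimeCheckConfig(
    display_name="k8s-io-homepage",
    monitored_resource={
        "type": "uptime_url",
        "labels": {"project_id": PROJECT_ID, "host": "k8s.io"},
    },
    http_check={"path": "/", "port": 443, "use_ssl": True},
    period={"seconds": 300},   # probe every five minutes
    timeout={"seconds": 10},
)
client.create_uptime_check_config(
    request={"parent": f"projects/{PROJECT_ID}", "uptime_check_config": config}
)
```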
A: My question was mostly related to the requirements, because I feel like we can jump straight to the scripting, or straight to the monitoring, but when we don't know what our requirements are, it's hard to document after. It would be easier if we had the requirements and then did the script, or did the work.
G: Absolutely. I don't have generalized requirements in my mind, and it's going to be interesting if we're trying to use something like Stackdriver, which is still, I think, very click-ops heavy. For, you know, an arbitrary app running in our cluster, setting up monitoring may need a shoulder-surfing session between somebody who has click-ops capabilities and the app owners who don't. So I will...
A: ...add an action item to me to actually check that. I will try to find a person who can help me with that, and I will try to write some requirements and suggest some requirements, and then we can discuss them in the next one.
I: That's a small comment on that: I think the Stackdriver API, or at least the monitoring one, finally does have packages, so we might be able to work around the issue of the click-heavy stuff by just jumping into Terraform there.
A: Okay, so the next topic was actually already discussed: the GCP auditing automation. So I'm not gonna start on this; we need to find a way of running some jobs when something changes in our repository, and we will discuss this in our call.
A: So, what is the status of moving projects? This is kind of interesting, because I think we should have a list of the projects which we need to move. We have two issues, which were created by Aaron, about the clusters which currently exist and about the projects which need to be moved, but I don't feel like that is everything.
A: ...so we can prioritize accordingly. Perfect; that would have helped me very much, because during the last two weeks I actually dug into all the corners of our repository and issues. I think I've looked at most of them so far, and I'm gonna try to turn that into documentation.
C: Okay. Personally, I want to get us back to a world of milestones and using a project board. I know we drifted away from that a while ago, I think, yeah.
A: I'm definitely up for moving back. For me, the last two or three weeks were very eye-opening, to see everything. So now I know more than I knew two months ago, much, much, much more, so it will be easier.
A: And the last item from my list: I created a small pull request to close one of the issues; please take a look, if that's okay with you.
G: So we're over time, right? But we're...
G: Oh, I'm off by a half hour, sorry. Yes, for some reason I thought we were starting the war room in another half hour, 45 minutes from now. Was the link already posted for that? Yep. Okay. Hopefully it's very boring and there's not much to see, and we get that process going. Once we get this behind us, the world is our oyster; we can take on the next windmill, which maybe is Prow.
C: Maybe. I feel like part of Prow still...
C: I hear you, yeah. I don't plan on just doing that; I plan on documenting what I think we...
G: Yeah, and, you know, as much as we raised the bar on the image promotion stuff to get this all moved over, I hope we don't need to raise the bar quite as far for Prow. Actually, I think Prow is on better footing than the rest of the image management stuff was, so it should not be as hard as this one was.
C: The part that is not quite as well documented right now, so it'll take some time, is moving Prow itself over, or creating a separate instance of Prow; that will be harder. We still need to think through it, but it kind of comes down to how much downtime we're willing to accept. Maybe we just suggest that there's a weekend or something where the Kubernetes project is just going to be kind of quiet for a little while, while we get things flipped over. But anyway, that's the crawl, walk...
G: One step at a time. And, you know, it's not a rush, but the money's there and it's not being spent, so let's spend it.
C: Yes. I think something I will want this group's help with is that the sheer volume of jobs that we run would destroy our spend. So we...
G: ...should consider... We budgeted three million dollars a year; you're saying we're spending more than three million dollars on Prow? Unclear. We'd better not be, because I ran all the analysis before, and the three million included the 5k-node scale tests and all CI and all the GCR... Okay, okay, okay. Now, that was a year and a half ago or something, so it's entirely likely that the total number has grown since then, but as we ramp up we'll get a better picture of it.
C: Just a random comment, for what it's worth: I know there's a lot of shell in the repo, and I know there's a desire to have less shell and use more Terraform.
C: I think it's going to take us a while. I...
A: I would like to add one thing to that: I don't think right now is the best time to focus on moving away from the bash. The Terraform itself will not be that hard, but I would treat it as a stretch at this point. Let's finish the things we started using this approach, and Terraform it as a next step.
G: Okay, so there's a PR open that I have not looked at very deeply, because my Terraform is weak, and so I opened it and it scared me. What I'm hoping to do still is small refactorings against the shell code that make it more obvious what the hell is going on in there, so that the Terraform will map more closely.
A: All this is only possible when we sit down and decide: okay, this script is creating the GCR registry and the GCS bucket, these are the resources which are being created, these are the groups which need to have access to them. And when this is documented, it will be easier to, you know, split it into the Terraform resources which we need to work on, right? But I wouldn't even try to right now, because the PR right now is...
A: ...started. So, there's a repository which I created, where I dug through all the scripts, and all the resources which are created by the scripts are in this repository, which I shared, like, in the last two weeks. We did some cross-checking and found there are a few places where you are directly an owner, or you or Justin are an owner, and that...
G: All right, I will queue that up to try to take a look.
A: So that actually can be an interesting task for somebody from the community: to do the cross-checking. We have the audit, we have my analysis of the scripts, and we can check how they compare.
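A sketch of what that cross-check could look like, assuming both the audit and the script analysis are exported as IAM-policy JSON (the file paths and layout are hypothetical):

```python
# Diff the IAM bindings captured by the audit against the bindings the
# provisioning scripts are documented to create.
import json

def bindings(path):
    with open(path) as f:
        policy = json.load(f)
    return {
        (binding["role"], member)
        for binding in policy.get("bindings", [])
        for member in binding.get("members", [])
    }

audited = bindings("audit/k8s-artifacts-prod/iam.json")
expected = bindings("analysis/k8s-artifacts-prod/iam.json")

print("in the audit but not documented:", audited - expected)
print("documented but not in the audit:", expected - audited)
```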
G: Oh wow, look at you. All right, there's a lot to do there.
G: A lot. Cool. Do you think it's... so, I did a little bit of script cleanup in the last couple of weeks, just to make things easier to find.
A: So my suggestion, and it helped me a lot, was to understand which are the real resources each script creates, and which are just the IAMs, the permissions for people to access the project. So what I would really, really like from all of you is to document it: not to start rewriting the scripts, but to write down, okay:
A: This script creates the GCS bucket for this project and the GCR registry, and these are the groups which should have ownership, or edit access, or just view access. This is kind of easy to do, and it will make it much, much easier for people to, you know, improve these scripts or rewrite them in Terraform in the future.
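One hypothetical shape for that per-script documentation, with the script path, resources, and groups invented for illustration:

```python
# The inventory A asks for: the real resources a script creates,
# kept separate from the IAM grants that make up the rest of it.
ENSURE_STAGING_PROJECT = {
    "script": "infra/gcp/ensure-staging-storage.sh",  # hypothetical path
    "creates": [
        "GCS bucket gs://k8s-staging-<project>",
        "GCR registry gcr.io/k8s-staging-<project>",
    ],
    "iam": {
        "k8s-infra-staging-<project>@kubernetes.io": ["roles/storage.admin"],
        "allUsers": ["roles/storage.objectViewer"],
    },
}
```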
A: I just don't want to over-complicate it; keep it simple, because, you know, in most cases it's just the GCS bucket and the GCR registry, and everything else is just IAMs. And the IAMs are actually like eighty percent of the script. That's true; those are kind of tricky, because they're hard to read and dig through to understand who and what has access to what. So, okay, it helped me a lot to see that there are only a few resources being created; everything else is just access management.
A: So, thank you all for being here, and I will see you later, in two weeks or in half an hour. Thanks, everyone. Thank you.