From YouTube: Kubernetes WG K8s Infra biweekly meeting 20210203
A
Hi everybody, it is Wednesday, February 3rd. This is the WG K8s Infra bi-weekly meeting. I am your host, Aaron Crickenberger.
A
So I put a few things on the agenda and, like I said, if there's other stuff people want to talk about, we can totally do that. First up, let's see, I'm supposed to welcome new members. I recognize you all. Next up, billing review. Let's take a look at the Data Studio report, see if we can't compare that to what Tim gets to see on the internal billing page, and see if there are any surprises or things that look unusual to us.
A
Yeah, that seems pretty good to me. I don't have anything I specifically care to talk about with regard to our current spend. It looks predictable to me.
C
Yeah, there's nothing that's obviously low-hanging fruit going unoptimized, and we're not really at that stage of optimization anyway. Nothing seems out of line to me; it's where I expected it to go, and we're within spitting distance of each other. That's really the main check-in that I want to keep on.
A
Well, maybe, but also this is not spread out... oh, maybe this is, yeah, comparing the first phase to the previous one. It's not surprising, because I would expect to see a jump from holiday levels of activity; January 1st and January 2nd were very slow, if it's just comparing those days.
C
If I look at the two-month graph on the GCP console, it's pretty predictable weekly cycles, with a significant dip between the 23rd and the 3rd of January (surprise), and it is very gently up and to the right.
A
Yeah, I can show something similar if I take our start date and move it back to, like... let's do November 1st.
A
The trend I would expect to see is maybe some kind of peak running up to release, then a massive dip because of holiday stuff, and then a sharp rise again, and that is effectively what I'm seeing here. So the real large spike in Compute Engine, to me, is a real large spike in CI-related costs. Possibly I would go dig into projects and look at our service usage by SKU to action what's going on, but this is predictable enough at the moment that I don't.
D
Just on that, I'm not sure what it was showing, but I pulled it up versus the previous month and it did show moderate growth, with the exception of Container Registry vulnerability scanning, which doubled to 2,800. So not huge, and not unexpected; that's all in the k8s staging CI images.
D
So that's, I guess, what we would expect. The thing which is likely to become more of a problem is that it looks like there are a larger number of k8s-infra-e2e-boskos-scale-number projects, and we're not aggregating those currently, so that might come into... well, there's a later topic about project granularity.
A
Yeah, totally. Okay, I will stop sharing my screen for now.
E
So I've put up some README changes and I did almost everything, but I still need James to review the PR, and also to update the cert-manager on the cluster. About the cert-manager monitoring: I've made a first pass that I've sent you, so just wait. I know that he is crowded with day-job teams as well, and that's a really stark version; I'm probably going to work on that this weekend.
A
So if he's not able to get you up to speed, let me know, and I will do my best with the bandwidth that I have available. Getting acquainted with Cloud Monitoring is still a learning curve for me, but I'm getting better at it. On that note, I'd really like to see us get to a point of actually doing file-driven configuration changes to logging, monitoring, and alerts-type stuff.
A
Right now I seem to be at the point where I'm clicking around in the console, so I've got an open issue tagged as help wanted that describes, I'm pretty sure, what needs to be done for at least the different monitoring dashboards. I'm assuming it's very similar for setting up alerting rules.
C
Yeah, there's a known issue in the promoter around having a lot of updates at the same time, where it hits quota limits and backs off and then just starts getting into sort of a death spiral.
C
The last time we saw alerts, I think, was that, and I just silenced them. Nothing since August. But as Stephen is looking at some of the promoter stuff, I told him, when he feels like SIG Release really has ownership of the promoter bot, to let me know, and we'll do this sneaky container image test again: I'll push something dirty up there and see if he can find it.
A
Yeah, a game day sounds good. Okay, so moving on to the agenda items I added. First, one thing I wanted to talk about was just a couple of specific questions about how we're organizing our projects for the purposes of billing.
A
No problem. So at the moment my impression is that if I look at the billing, just k8s-artifacts-prod is basically the cost of hosting upstream project artifacts, right? That's pretty handy. At the moment that's just container images.
A
It is, but I was actually going to go the other way. I feel like we're really close to having all of our upstream artifact hosting costs lumped into this one project, including GCS buckets for things that are serving raw binaries. So I feel like we need to sort out the binary promotion process, maybe, but I was thinking about when we talked about trying to flip dl.k8s.io away from the google.com kubernetes-release bucket and towards a k8s-infra bucket.
A
I was proposing we do a k8s-infra bucket in the k8s-artifacts-prod project, but I wanted to sanity-check that, given that I don't know if we want to treat k8s-artifacts-prod as "no human should ever be creating or touching any buckets in there, everything goes through artifact promotion," or if it's cool if I create, like, a k8s-release bucket and dump all the kubernetes release artifacts in it.
C
So why not both? I like the goal very much. In fact, it is exactly what I was hoping for at the beginning, which is: no human ever touches k8s-artifacts-prod, except in very rare, exceptional cases. Why does the release process not need to go through some promoter? Why shouldn't the release process push to a temporary place and have a git manifest that says "this is the thing, this is the hash, this person signed off on it, move it into production," and now we have a more auditable path?
D
Yeah, and that's Tim's... sorry, yes. We do have that. It's not automated promotion, but you will occasionally see PR spam where, for example, the kOps binary builds get built by Cloud Build into a staging bucket; I, or whoever is doing the promotion or the release, create a manifest and PR it to the infra repo. Eventually, when it's merged, the promotion will happen automatically; right now the promotion happens manually.
D
Okay, the promoter binary... but we have that, and actually SIG Release moved that promoter binary into their own release git repo, I guess.
D
Correct, and moreover, there's a layer of abstraction in front, so it almost doesn't matter. We have a GCLB which has a sub-path; I don't actually remember, but anyway, there's a subdirectory and then there's kops, and so we can establish almost any structure underneath that GCLB we want to, without changing any visible URLs.
C
The reason I asked is because, for context, I'm looking at the console now. Aside from the GCR-backed buckets, we have one, two, three, four, five, six other buckets: artifacts-cni, artifacts-cri-tools, artifacts-csi, artifacts (that's for logs), artifacts-kind, and artifacts-vulnerability-dashboard. At some point we made a call to make separate buckets for these things, because we can only ACL buckets, not subdirectories of buckets, and as long as humans were doing the work we had to put them in separate buckets.
D
Correct. If we take the logs, we can get an estimate; we can say, per URL, how many downloads and how many bytes. I don't think we can capture the pricing differences, like APAC bandwidth being more expensive than US bandwidth, so we can't do that, but we can at least get a first-order indication, right?
C
Oh, no idea. I'm not sure; I'm not even sure if you can set up the backend properly. I think this is an area where we're just going to have to try a few different things and see how things turn out.
A
Okay, I feel like the first thing I'm taking away from this is that I need to follow up with Justin and the release engineering team, to see if I can get it onto their roadmap to put together the binary artifact promotion process for this quarter, or if we can at least document how it's manually done today. I know they're still kind of working on polish, and maybe moving the artifact promotion stuff for the release, so they might have a lot going on.
A
Secondly, I feel like I will talk to them about trialing dl.k8s.io.
A
I think we should still have the bucket that hosts that living in k8s-artifacts-prod, unless you think it should live elsewhere, because I'm pretty sure that when we start incurring costs from redirecting traffic there, we'll see the...
C
Delta. Absolutely, we will see that delta, and so we'll have a one-time "this is how much it cost us in the first month" based on the slope change of that graph. But over time we won't be able to tease those apart unless we process the logs ourselves, which, to be fair, Hippie Hacker is starting to look at.
A
Right. I created an issue about how we would do the dl.k8s.io stuff. I don't know if you want to walk through it at this meeting, review it offline, or just trust that I'll sort it out with...
C
...release engineering. I'm happy to let you drive it. The biggest issue is just making sure that the content is synced to both places, in my opinion. Yes, I agree.
A
Getting the frequency of syncing right for the different buckets is going to be way more important for CI builds, which are updated very frequently. Releases are kind of a known event that happens at a human-manageable cadence, so if they have to be synced by humans for now, that's okay, but I'm using this to iterate towards an automated syncing solution.
A
This is motivated by McCloskey, a member of my team and of SIG Testing, who has been trying to work on taking a lot of the test metrics that we collect for all the jobs that run on Prow, storing them into a BigQuery database, and then showing them in a Data Studio report, not unlike how the k8s-infra "aaa" cluster sends all of its billing data to a BigQuery database; we then use Data Studio to run queries against BigQuery, and the public can see those results.
A
No, no. There is such a thing as a publicly available BigQuery dataset, like the GitHub one: you can go take a look at a BigQuery dataset that has all the GitHub events for the past years.
A
But if I want to run a query against that, I need to give a project to bill the compute usage to, so it doesn't get charged to whatever project is hosting the GitHub dataset. At the moment it gets charged to my free-tier personal Google Cloud account, which, as long as I don't use more than a terabyte's worth of query, never gets charged, which is cool. But basically I've been trying to do some reading on this. I guess I'll share my screen.
A
I have this issue linked in the meeting notes, but I'm trying to figure out how to make community-accessible Data Studio reports: trusted members of the community should be able to edit Justin's billing report, and trusted members of the community should be able to create CI signal reports. But Data Studio is kind of weird. It's not like a gcloud thing that's homed in a project the way all of our other infrastructure is; it seems kind of tied to G Suite.
A
So if any of my fellow co-workers happen to make Data Studio stuff using their google.com accounts, it gets tied to the google.com G Suite, which prevents public access. You can tie it to anybody within the G Suite org, or to a public Google group, but you can't tie it to all users, and for whatever reason we wanted to provide a no-login dashboard. That's one thing.
A
Second
thing
is
the
projects
that
get
built
for
bigquery
usage
in
data
studio
reports
are
something
you
set
up
at
the
data
source
level
and
then
it's
slightly
and
then
I
think
you
have
to
like
grant
people
access
to
view
the
data
from
the
data
source
as
well
so
like
viewing
a
data
studio
report
does
not
necessarily
mean
you
have
access
to
view
all
of
the
data
from
all
of
the
data
sources.
Inside
of
that
report,
long
story
short,
basically,
I'm
I
would
like
to
get
somebody
most
likely.
A
My
team
member
or
somebody
from
ci
signal
to
go,
tell
us
like
how
they
need
it
set
up
so
that
they
can
do
data
studio
and
bigquery
goodness
in
kate's
infra,
and
we
can
put
the
bill
in
a
reasonable
way
for
that.
But
I
want
some
agreement
on
what
project
we
should
use
for
charging
the
big
query
usage,
and
so,
as
for
the
billing
report,
I
think
we
could
start
by
asking
like
how
could
we
maybe
pick
up
that
task
of
taking
the
billing
report
thing
and
making
it
more
community
accessible.
C
So we currently take all of the org-level billing and send it to the kubernetes-public project, right? That's where the billing dump to BigQuery goes. More of a question than an answer for you: what order of magnitude of datasets do we think we will end up with, and do we want them all in one project, which is easy for people to consume but may be more difficult to manage, or do we want them in different projects?
C
And I've not personally played very deeply with the access control for different datasets. Some of these datasets are inevitably going to end up with PII, right? That's the ongoing discussion with the CNCF, so we will need to figure out what does and doesn't have PII and govern them, possibly differently.
D
Yeah, I was just going to say, I think what you said is true about how the permissions work. There are two models for sharing a BigQuery dataset: we can make it public, where people sort of fork it or clone it and the query charges go to them, or we can, I think, just grant specific people or groups access, in which case I believe it would be billed to kubernetes-public, or whatever we chose as the host project.
D
I think you're right also about how Data Studio works. I think we could certainly share the Data Studio reports themselves, so that people could put them on their own G Suite accounts, but as far as I know there's no underlying YAML-like representation that enables us to put them on GitHub, which I think we were hoping for before.
D
As I understand it, Data Studio is essentially a UI, so if we make the BigQuery endpoints accessible, someone could, for example, use Observable, which I hear is like "the GitHub of data," to create their own sort of reports and share those sorts of things.
A
Yeah, so I want to empower both things. Today all the build data is publicly accessible, even though it was in a google.com project (this isn't the Gubernator project, but that thing's still around), and I want that type of build data to still be publicly accessible for people to run queries against with whatever they choose; I'm fine if we hook it up to Grafana again.
A
The
issue
we
had
in
the
past
was
when
we
were
using
grafana
to
display
the
results
of
queries
against
that
we
kept
getting
hacker
ones
like
too
many
people
were
finding
vulnerabilities
in
our
grifana,
and
so
we
don't
necessarily
think
we
would
have
that
problem
with
data
if
we
were
to
go
that
route,
but
if
anybody
else
wants
to
set
up
anything
that
queries
this
thing,
that
sounds
great.
I
still
think
that
to
what
is
the
project
that
we
would
allow
to
be
billed
for
running
well,.
A
Yeah, it could be both: where to host it and where to bill it. Hosting it, you would get the cost of ingesting the data, I guess, but most of the billing for BigQuery is going to be the compute usage to run queries.
A
So if I have anything that I want for broader community consumption, I should put it there. But if we don't so much care about that, and we feel like we've gotten our relationship with Google Cloud to the point where projects are like candy, I'm totally fine creating as many one-off, purpose-built projects as we need to appropriately segment the billing for all this stuff.
A
That
then,
leads
to
the
next
conversation
where,
like
we
already
have
a
bunch
of
segments
projects
for
e2e
tests,
but
we
don't
have
a
really
quick
and
easy
way
to
aggregate
the
building.
For
all
of
that,
so
I
think
we're
going
to
want
to
look
at
improving
the
billing
report,
providing
ways
to
like
aggregate
or
roll
up
a
bunch
of
related
projects.
D
I also just looked into the permissions that are available on BigQuery, so we can share the whole thing with a group or individuals at different levels, like read-only access. In addition, we can create authorized views.
D
So
if
we
wanted
to
create,
I
wouldn't
open
that
right
now,
you're
sharing
your
screen
error
by
the
way,
oh
yeah,
okay,
cool,
the
sorry,
if
you
want
to
create
authorized
views,
so
if
we
wanted,
if
there
was
a
table
that
had
pii
in
it,
for
example,
we
could
like
aggregate
it
down
to
something
which
the
cncf
says
is
appropriate
and
then
share
that
view
with
certain
people
like
a
group.
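As a rough illustration of the authorized-view idea being described (every dataset and table name here is hypothetical): a view exposes only aggregated, PII-free columns, and the view, not the raw table, is what gets shared.

```sql
-- Hypothetical names throughout; only the pattern is the point.
-- The raw table may contain PII such as client IPs; the view exposes only
-- day/region aggregates. Placing the view in a separate dataset lets it be
-- listed as an "authorized view" on the raw dataset and shared on its own.
CREATE VIEW `kubernetes-public.reports.downloads_by_region` AS
SELECT
  DATE(request_time) AS day,
  client_region      AS region,
  COUNT(*)           AS downloads
FROM `kubernetes-public.raw_logs.artifact_downloads`
GROUP BY day, region;
```

Readers granted access to the view (for example via a Google group) never gain access to the underlying rows of the raw table.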
A
Okay, that's cool. I was just trying to demonstrate that I'm logged in with my personal cloud account; it doesn't bill anything, and I can totally see the current Kubernetes... anyway. So, Tim, you were saying kubernetes-public is an artifact. Should I just go create whatever random project names I want, to host whatever I want?
C
I
the
question
I
have
is:
do
we
want
one
project
or
small
n
projects
or
big
n
projects?
I
don't
I
don't
know
the
answer.
A
It
it's
related
to
the
overhead
and
friction
of
managing
multiple
projects.
Yeah,
so
less
bash
would
be
ideal.
So
using
everything
in
one
project
is
my
lazy
thing,
but
I
think,
as
long
as
it's
like
another
template,
it's
fine.
I
don't.
I
don't
care
if
it's
it's
it'd
probably
be
a
small
end,
but
I
feel
like
for
each
sort
of
major
reporting
purpose.
We
could
do
a
project
to
do
the.
A
But anybody running a query against any of those datasets could use any project they want to bill the compute that they're using.
D
Yeah, I mean, just concretely: I think you were saying you only want to share with specific groups of individuals for now, and I think we can do that almost today in BigQuery; if we say the data is safe, we can just add that group to the reader permission.
A
Correct. I feel like the place I want to start is: either myself or my teammate will contact you, Justin, to see if we can get the ball rolling again on making the data sources in the billing report community-editable, because I think you still have to be google.com to edit that, unless I'm wrong. We'll sort out the permissions on that and use it as the model for "hey, we know where the billing for generating the report is going, and we know what groups..."
A
Although some of this stuff is Google-specific implementations of IAM and such, a lot of our infrastructure we could pick up and put down on any Kubernetes cluster anywhere. But that's a totally fair possibility. Maybe you could open an issue about that, and we could chat with the steering committee about putting in a request to the CNCF to get funding for this sort of thing.
A
But I also still kind of want to check in with Bart, because I feel like he did some work with Grafana for a monitoring thing for us. I don't know.
E
Yeah. On the other hand, I think it's not too good for us to spread all of our resources into a lot of projects: things in Google Cloud, things in Crossplane, things in Grafana Cloud. But I guess it's a possibility; we can take a look into Grafana Cloud, which provides a free account just to make some tests.
A
That's part of why we feel comfortable using Google-based services. But if people who are more comfortable with other things show up, we can find ways to shift to their level of comfort. So if a whole bunch of people who are super familiar with Terraform show up, maybe we can finally rewrite all our Bash, or if a whole bunch of people who are great with Crossplane show up, I would love to use anything but Bash.
A
Okay,
I'm
gonna
skip
over
the
audit
job
thing
on
the
agenda.
As
far
as
I
know,
hippie
hacker
has
somebody
from
his
team
working
on
it.
I'm
gonna
go
poke
them
today
to
ask
what
I
can
do
to
unblock
them
next
thing
I
I
should
have
just
kept
sharing
my
screen.
A
Okay,
I
think
ingress
instances
should
use
a
network
in
kate's
io
group
instead
of
extensions,
v1
beta1,
I
went
and
checked
and
they're
all
still
using
extensions,
b1
beta
1..
So
maybe
like
again,
if
I
have
the
time,
I'm
happy
to
sit
down
and
just
like
blast
through
this
with
somebody,
perhaps
arno,
who
has
a
little
more
experience
managing
the
aaa
cluster.
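For reference, the migration being described is the Ingress apiVersion change; a minimal before/after sketch (the app name and host are made up, and depending on the cluster version, networking.k8s.io/v1beta1 may be the available intermediate step rather than v1):

```yaml
# Before: the deprecated group still in use on the cluster.
#   apiVersion: extensions/v1beta1
#   kind: Ingress
#
# After: the same object under networking.k8s.io. The v1 form also requires
# pathType and nests the backend's serviceName/servicePort under "service".
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-app        # hypothetical app
spec:
  rules:
    - host: example.k8s.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app
                port:
                  number: 80
```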
A
But
this
kind
of
made
me
realize
that
if
I
think
about
the
model,
where,
like
hey
they're,
a
bunch
of
sub
projects
that
all
own
their
infrastructure,
we
just
run
the
cluster
for
them.
I
don't
have
a
quick
and
consistent
way
to
know
how
to
blast
out
to
all
the
app
owners.
Hey
your
ingress
is
out
of
date.
A
You've
got
to
update
it
by
this
date,
because
I
don't,
I
don't
feel
like
it's
appropriate
for
us
to
end
up
in
this
situation,
where
we
do
all
of
the
work
for
all
of
the
individual
sub-project
app
owners.
A
What
what
is
our
cluster
version?
It's.
E
It is... yes. Okay, so I guess we are still good, because the Ingress object isn't deprecated yet, but...
E
Yeah, but in this case specifically I have some experience, because I have a program that fetches all deprecated things for a version and triggers an alert. So I can see how this fits into that; mainly it downloads the swagger JSON and compares it to the manifests from the cluster.
A
So I think it's doable. I think what I would probably do as a human being is read the directory names, then go look for similarly named groups, and those will send out to the email addresses of people who are owners for the stuff. So I think we can get it done.
B
Okay, so can we just upgrade the object and then inform people that we have upgraded it? Because I feel like most of the people who own those workloads don't really care about what's happening in the cluster. We never informed anyone about, basically, the upgrade of the cluster itself, so I'm not sure: are we going to ping people? We'd need to ping the owners and wait for the owners to do the upgrade by themselves, but we don't know when.
A
"If your app is broken, please contact us" is not a great supportability model in general, because then people start to notice every little misbehavior, whether it was there or not, and attribute it to the fact that you changed something. This is just my testing experience coming out.
E
We can do it the same way the folks from kOps did, right? We can open an issue and send it to the kubernetes mailing list with a lazy-consensus deadline: "we are going to migrate on this date, and if you have any problem with that, please reach out to us." I have some experience with this: if we wait for the users, it's going to be a low priority for them until we migrate and something breaks.
D
What we would do if it was, like, super-production is we would have a staging instance, right? We would do the staging one first and expect them to make the change there. I do see some canaries; I don't know whether that is a staging instance or not, but we could probably suggest to them that they consider having a staging instance and doing it that way.
A
As we start to scale out things like the number of apps we own, we should actually figure out a policy that works for us, because right now we'll just work it out as human beings. The canary things are for the canary DNS zones, so it's still important that those run, because I think that's where we check that the DNS updates work before we roll them out to the non-canary DNS zones.
A
If you're looking at the word "canary" on the screen... Ricardo, I agree with you; I think the lazy-consensus way is the way of the project: this thing is going to happen by this date, please let us know if you have objections. The thing in the middle is...
A
I
would
rather
reach
the
responsible
individuals
for
this
stuff
more
directly,
either
as
github
users
or
via
their
email
addresses.
It's
probably
bad
that
I
admit
this
in
public,
but
like
the
way
I
deal
with
the
amazing
volume
of
email
that
I
get
at
google
is
stuff
that
doesn't
have
me
in
the
two
line.
E
Yeah, my email seems like log streaming; I can't even read it all. I say my email is like a blockchain: you need to send something three times before I read it; the first time I will just drop it. So yeah, sounds good. I think we can probably ping folks in the issue, or maybe in the PR, and say "we are going to merge this at this time; please reach out if you think it's not applicable," and mention each of the owners.
A
Okay, just to be respectful of time: I'm not going to talk about dl.k8s.io. I'm going to rename the default branch for the k8s.io repo from master to main; I'm going to do that today, probably. I did it for the kubernetes work repo on Friday and it wasn't super disruptive. I'm using this repo because it's low-traffic and it has a couple of jobs, periodics, postsubmits, and presubmits, that all trigger against the default branch.
A
So I'm trying to exercise the behavior of our CI system when that branch is renamed, and find ways to do this without too much breakage.
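Not from the meeting itself, but as a hedged sketch of the mechanics being described, the local side of a master-to-main rename looks roughly like this, demonstrated in a throwaway repo (the GitHub default-branch switch and retargeting of CI jobs are separate steps):

```shell
# Set up a throwaway repo whose default branch uses the old name.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git symbolic-ref HEAD refs/heads/master   # ensure the unborn branch is "master"
git commit -q --allow-empty -m "initial commit"

# The actual rename: master -> main.
git branch -m master main
git rev-parse --abbrev-ref HEAD           # prints "main"

# Publishing the rename to a real remote would then be:
#   git push -u origin main
#   git push origin --delete master       # after jobs/default branch are switched
```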
Okay, I wanted to give Justin time to talk about ways to move forward on the AWS accounts PR.
D
We can probably talk about it next time. There was a lot of feedback, excellent feedback, and it sort of felt like maybe we want to pursue a different approach entirely. I don't know; it felt like: do we want to build it again in a sort of collaborative, pairing way, so we have two sets of eyes on everything, and do we want to try to use, like...
A
Okay, that's fair. I think it would be fair to collaborate on a better path forward. Part of it is I've tried to go around most of our stuff and align us on the use of Secret Manager; if there's something better we should be using instead of Secret Manager, let's do that. The Terraform version thing is just the unfortunate consequence of where our Terraform is at. Bottom line: I think there's a sub-project that's waiting on these extra AWS accounts, and I want to unblock them.
A
So
I
mean
it
could
be
like
if
this
is
what's
there
today
is
fine.
Maybe
we
merge
it
with
a
big
scary.
Note:
that's
like
hey!
This
is
how
these
were
generated,
but
we're
not
actually
using
this,
because
it's
unclear
to
me
whether
you
use
this
terraform
to
generate
these
accounts
and
create
that
file
and
get
it
in
place
or
whether
that's
the
you
manually,
you
have
manually
generally
rated
them
someplace
else
and
that's
how
the
file
was
created.
D
No,
it's
it's
it.
It
was.
This
terraform
was
was
used,
but
it
was
run
well,
it's
always
gonna
run
manually,
but
it
was
run
manually
to
do
it.
It's
not
like
it
was
back
back
filled,
okay,
one
of
the
one
of
the
I
like
that
approach
a
lot,
and
I
can
clean
it
up
with
an
eye
to
that.
D
One
of
the
nice
things
that
I
envisage
for
these
accounts
is,
we
would
rotate
them
so
like
they
all
have
a
date
in
them
and
like
every
pick
a
number
of
months
like
three
months,
let's
say
we
would
just
like
create
10,
new
ones
or
100
new
ones
and
then
like
delete
the
one
from
six
months
ago.
D
Like
sort
of
thing
like
you
know
that
sort
of
yeah
or
even
from
three
months
ago,
once
they
once
they
go
into
bus
costs,
they're
not
going
to
be
used
within
within
an
hour
they're
all
going
to
be
like
unused
yeah.
A
Yeah, that would be awesome. You know, Boskos has the concept of dynamic resources, so it could be possible, if we had the right credentials, to teach Boskos to go create accounts dynamically.
A
That
might
be
a
little
freaky,
but
you
know
we
can
also
like
secret
manager.
Is
there
for
secret
rotation?
A
You
could
probably
set
up
syncing
between
secret
manager
and
cluster
secrets
that
moscow's
manages
like
whenever
the
version
changes
in
secret
manager,
it
sinks
over
anyway
yeah
rotation
good.
We
should
be
doing
that
for
many
more
things
but
yeah.
My
tlbr
is
like.
I
don't
want
people
to
be
blocked,
so
I
just
didn't:
have
the
credentials
to
run
everything
in
its
current
form
and
verify
that
it
all
works
as
advertised.
A
So
like
hey,
maybe
a
separate
follow-up
thing
is
we
should
get
the
rest
of
the
kate's,
infra
organizers,
access
to
whatever
aws
credentials.
You
need
to
do
this,
but
if
you,
if
you're
like
yeah,
it's
all
good.
I
trust
you
big
scary
thing
that
says
this
is
how
this
was
done,
but
we're
gonna
do
something
else.
D
Maybe. I didn't go check that. It's individuals; it's a different AWS account. It doesn't work like Google auth; you have to use a different credential set. It's not like your Gmail works for them.
E
There is another one that uses, like, a language, right? I can't remember the name; it uses a programming language, Node.js or something like that, and they have better things there. Because, yeah, that's a mess... this part of modernizing, upgrading from 13 to 14, that's the pain I have right now.
A
Okay, we're over our time. It was so great to see you all; I hope you have a happy Wednesday. I look forward to doing awesome things with you all over the next two weeks and seeing you two weeks from now. See you, folks!