From YouTube: k8s-infra-team's Bi-Weekly Meeting for 20201028
A
Okay, hi everybody. Today is Wednesday, October 28th. You are at the k8s-infra working group bi-weekly meeting. I'm your host today, Aaron Crickenberger, also known as spiffxp on all the places.
A
This meeting is being recorded and will be posted publicly to YouTube as soon as Zoom and YouTube are done processing it.
A
With that, I thought we would go to sort of our regularly recurring topics, and I was wondering if there was anybody who was new to this meeting who wanted to introduce themselves.
D
So I'm looking at the cloud console view to cross-check, and its query is the last 30 days (the easy query), but the last 30 days is about $119,000. If you add two days at a couple thousand dollars apiece, it makes sense to me.
A
So this would pick up all of the Cloud Build-related jobs that push things into the staging GCR repos. This would pick up prow jobs. This would pick up usage of Triage Party and gcsweb, and things like that.
A
Spitting distance, okay, cool. And no great surprises here; you see the nice wavy pattern because we're getting the presubmits from daily GitHub traffic.
D
A comment: to date we've not really spent any time, not much time, trying to optimize for this. At this point I think that is still the right choice; I don't think it's worth the energy, in particular, to try to drive these numbers down. There probably will come a time when it's worth our group spending time on that. I don't think that time is now.
A
I would agree with that. I think the ballpark extrapolation is: if you take what our past 28 days was and multiply it by 12, we're just short of one and a half million per year, which gives us another one and a half to work with, right. And so, if we find...
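[A quick check of those figures, assuming the $119,000 30-day number mentioned earlier and the roughly $3M/year of donated GCP credits implied by "another one and a half to work with":]

    $119,000 x 12 ≈ $1.43M/year of spend
    $3M/year - $1.43M/year ≈ $1.5M/year of headroom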
D
So one of the things on my to-do list is to do a slightly more detailed projection than that, like taking into account the slope of the organic growth of what we currently have. I'm happy that we haven't significantly changed anything in the last few months; it helps me to make that projection. But it's on my to-do list.
A
Yeah, we gave the credits to CNCF, but we did ask that the CNCF reserve usage of these credits solely for the Kubernetes project, not any of the other CNCF projects. Because I know there's been some discussion about, like, getting Helm's charts and stuff away from Google-owned infrastructure, and I think that's laudable, and great transparency above all else, and community-funded sustainable project operations are great, but these funds are for Kubernetes, specifically.
A
Okay, so there's a recurring item in here about AI (action item) review, but frankly I don't really recall anybody taking AIs from the past couple of meetings, so I'd like for us to move on, unless there's anything somebody wants to bring up.
A
Okay, I figure we should move on to Antonio's agenda item about IPv6 support.
B
Well, I opened the issue I think one year ago or something, but since I was the only one using it, I really don't care. And this week or past week somebody commented on the issue, and we are going to release dual-stack. So I think that it will be a bit tough, you know, claiming that Kubernetes has IPv6 when our web page or these things don't have IPv6.
B
So I was checking into it; someone in the meeting commented about the next steps, and I was checking that Google Cloud has IPv6 termination. So I don't know if this is because of some technical limitation or people not stepping up to do it, because I can make time for implementing this.
A
I feel like there's a bunch... yeah, sorry, go ahead. I was just gonna say I feel like there are two questions here. The first is whether or not our DNS can support serving IPv6 addresses as well as IPv4 addresses (yes, AAAA queries), which I believe it can. The second question is: can the infrastructure that some of these records point to support IPv6? The comment posted in the issue specifically references apt.kubernetes.io, which is not community-owned infrastructure.
A
It's on the list. As far as I know, that points to Google's apt repo, which is managed by a Google-internal tool called Rapture, and only Googlers have access to it.
A
I am one of the Googlers who is in charge of responding to the release engineering team's requests to build Debian and RPM packages and push them to that internal repo. What would be involved to move to community would be for the community to stand up their own repository and get packages there. I think the sticking point has always been that the signing keys that are used right now are Google's, which makes the packages relatively trustworthy.
A
Does Google's internal apt repo support IPv6? Or, if we solved the DNS, would we still have this problem?
D
I don't know how the apt stuff is set up on the back end and how we're exposing it. I imagine if it's through the normal Google apt site, then it's not through a cloud load balancer but through a GSLB (a GFE, rather), which is our internal version of the cloud load balancer, and so changing that config would be more difficult. Now, if that's already exposed on v6... I mean, that's the question, Antonio.
D
That was my second point for you; sorry, latency here, and I'm on the phone. I just looked that up, and yes, it goes through our redirect, which is on IPv6.
D
Yes. Sorry, I'm looking at the back side too. So I just looked up the packages: that redirects to packages.cloud.google.com, and packages.cloud is exposed on a v6 address. So if we did as you suggest, then it probably would just work.
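[For anyone following along, checking that sort of thing from the outside is a standard dig query; nothing project-specific here:]

    # Check whether a host publishes an AAAA (IPv6) record:
    dig +short AAAA packages.cloud.google.com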
D
I have no objection to it, yeah; it just wasn't on my radar. So if you want to go look at the k8s.io repo, we have the script that set up our ingress... not a script, it's a YAML, the ingress YAML. We'll view the directories in a second.
D
So in that directory we could add an ingress.
D
If you look in that directory you'll find ingress YAMLs; you should just be able to copy them. We can change the IP address, and, you know, you can take a look at the canary, the test script there, and adapt it for v6. I bet it just works.
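[A minimal sketch of what such a copy might look like, assuming a GKE ingress like the existing ones; the resource and address names here are made up, and the global static IPv6 address would have to be reserved first:]

    # Reserve a global static IPv6 address beforehand (name is a placeholder):
    #   gcloud compute addresses create k8s-io-v6 --ip-version=IPV6 --global
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: k8s-io-v6
      annotations:
        kubernetes.io/ingress.global-static-ip-name: k8s-io-v6
    spec:
      backend:
        serviceName: k8s-io   # assumed existing backend service
        servicePort: 80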
A
Yeah, man, sounds great. I don't necessarily want to speak for Tim, but I personally don't have the bandwidth to try this out myself. But as long as you're willing to try it out, I think we're in support of this.
D
Yeah, so I put aside a bunch of time over the next few weeks (I told Aaron this this week). I cancelled a ton of, you know, internal meetings and stuff, because there's a bunch of things in community that have sort of languished for lack of attention, so I'm trying to get through those. This certainly can fit under that umbrella.
D
Antonio, if you want to try it, I'm happy to take a look and do some manual poking at our end, just to make sure that everything works the way it's supposed to work. But also, we should put you in the right groups that give you the authority to actually go try these things, so that it's not always, you know, Aaron or me who has to try anything. You should make sure that you have the appropriate permissions so that you can actually do it.
A
Okay, hang on; I hit my screen-off hot corner too quickly there. Okay, any other questions, comments, or concerns on IPv6?
A
All right, let's see. So the next thing on the agenda was to talk about our progress on migrating the remainder of the release-blocking and merge-blocking jobs for Kubernetes to community-owned infrastructure.
A
Just to summarize where we're at briefly: we've migrated all of those jobs except any of the build jobs, and that is because the build jobs currently write to a bucket called kubernetes-release-dev, which is owned by the google-containers GCP project, which doesn't allow outside-of-Google accounts to write to it. And we also discovered there was a kubernetes-ci-images GCR repo which is locked down similarly. So we've gone through and created equivalent buckets and GCR repos over in k8s.io. Arno?
A
Okay, yeah. Thank you a bunch for pushing forward on this, Arno. So I posted a link in the meeting notes to a comment that sort of describes what the plan is.
A
It can probably stand to be updated based on some of the work Arno has done, but I think once we sort out the build canary job, we're on track to basically have everything migrated within the next, let's say, week to two weeks to be safe. Which would be great, because then that means all of the jobs that block merges and Kubernetes releases are fully community-owned.
A
Okay, the next item on the agenda was from Arno, about a prow staging instance.
E
So basically the question is about the issue I'm facing: basically, the current prow instance is running in the Google-owned project. So the idea is to set up a staging prow instance running in maybe aaa or any non-Google-owned GKE cluster, so we can identify the pain points related to the prow migration.
A
Okay, right, right. I have objections to the idea of using... well, we'll see. I personally have objections to using forked repos or separate orgs for testing out kubernetes/kubernetes and kubernetes/release changes. This came up in the GitHub management meeting, where my microphone wasn't working, but essentially it's not clear to me what that increased complexity will gain us that using branches cannot, and, for better or for worse, once you start using a repo that's kind of off the beaten track, it doesn't receive production-level traffic.
A
We just haven't really been able to find an equivalent proxy for the actual production-level traffic that the Kubernetes repos see. Like, we have tried in the past, for example, to set up an instance of prow that just exercised the test-infra repo, which is in the top five repos that receive traffic, but we were still missing things that only appeared on the kubernetes repo.
A
So there is a proposal that was put forth a couple months ago through SIG Testing. I'm gonna mess up who proposed it: I think it may have been Chi Zhang; it could have also been Chao Dai. I'm not sure which of them worked on it; I think both of them worked on it. But essentially they're trying to propose using a staging prow instance that has end-to-end tests which sort of automatically open up pull requests and see prow exercise them.
A
That is just the start of a sanity check, the start of, like, a smoke test against new prow instances. And they had approached me about using community infra to host that, and it sounded too complicated to them. So I'm unclear whether that actually exists today.
A
If it doesn't, and you want to take the lead on setting it up in aaa or elsewhere, that sounds great to me. Okay, I said a lot of words; let me tell you, all right: the idea of using a staging prow instance, great, that sounds good to me. We'll need to be careful about dueling prows; my experience is, if you point two prows at the same repo, it doesn't necessarily work out great all the time.
A
So we'll have to figure out how to do that. Does that answer your question?
A
Right, yeah. Figuring out the trust model there is going to be a little complicated, so I feel like I would want to see a little bit more of a plan or a proposal for how it's going to be exercised. But I think at least just setting up an instance of prow in community-owned infrastructure sounds fine.
A
Okay, any other questions on the prow staging instance?
A
All right, so I leave it to y'all. We have 30 minutes remaining; I can share my screen and we can walk through the board, or we can decide we'd like 30 minutes to go do whatever else we would prefer to do.
F
Yeah, me neither. And I think another thing that I have a question about is: are we ready for the November 1st Docker Hub changes? Because from what I saw, we should be ready, right? Because there were some pull requests on test-infra regarding the jobs to use the mirror.gcr.io registry, which basically should solve the issue, right?
A
I believe so. I have not caught up with all of yesterday's PR traffic, so maybe Ben went ahead and did some stuff that I have yet to see.
A
I think he was still slightly uncertain whether or not kind clusters were going to automatically benefit from mirror.gcr.io, or whether there was something that needed to be configured to make them use mirror.gcr.io. Oh, Antonio, just since you're here and you're pretty prevalent in kind: do you have any idea where that's at?
A
So, just to summarize where I believe we're at: all GKE clusters by default are configured to use mirror.gcr.io, so for our prow build clusters, when they pull down images to run pods, that's cached. Then any Kubernetes clusters that are stood up via kube-up.sh, which would be all the clusters used for release-blocking and merge-blocking jobs, are also configured to use mirror.gcr.io by default.
A
So if a job runs a script that calls docker pull or whatever, I think there is some configuration that needs to happen in the image to configure that instance of Docker to pull from mirror.gcr.io, and I had a PR I was working on to put that in the kubekins image, but it could be somebody else got ahead of me on that.
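[For reference, pointing a Docker daemon at a registry mirror is standard daemon.json configuration; a minimal sketch of the sort of change being described, not necessarily what the actual PR does:]

    # Configure dockerd to try mirror.gcr.io before falling back to Docker Hub:
    cat <<'EOF' > /etc/docker/daemon.json
    {
      "registry-mirrors": ["https://mirror.gcr.io"]
    }
    EOF
    # dockerd must be (re)started after this file is in place to pick it up.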
A
Just to share briefly: the team behind GCR is working, in advance of the deadline, to make sure that the mirror.gcr.io repo has a much better hit rate than it usually does. So we anticipate that most of the images that are pulled frequently are going to be cached.
A
If we find that's a problem, I'm still working through getting accounts from Docker to be able to pull in an unlimited manner, and what we would do is configure our clusters to tie image pull secrets to whichever service account is being used. That's going to require significantly more plumbing, but it shouldn't be too bad. So the biggest unknown for me would be end-to-end tests that stand up clusters using something other than kube-up.sh.
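[A rough sketch of that plumbing with standard kubectl commands; the secret name, credentials, and service account here are placeholders:]

    # Store Docker Hub credentials as an image pull secret:
    kubectl create secret docker-registry dockerhub-cred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
    # Tie it to the service account pods run as, so pulls are authenticated:
    kubectl patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "dockerhub-cred"}]}'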
A
So I'm thinking of the kops jobs, and I'm thinking of the cluster-api jobs. You know, we'll have to see what happens there. Does that answer your question, Claudia?
A
We have enumerated... so we've enumerated it a couple ways. I still don't think I've caught literally everything for all 1700 of our jobs, but we've enumerated, by looking through the kubernetes code base, what images are pulled down for end-to-end tests, and we've also enumerated, by looking at the community prow build clusters, what images the kubelet is pulling down.
A
So we've captured logs. I'll dig up the issue, but I've pasted comments on how Antonio went through the kubernetes code base and how I went through the prow logs. I was not able to enumerate things at the same level of detail for all of the 1500-ish jobs that run in the google.com build cluster.
A
That is where more of the, like, kubernetes-sigs repos and stuff run, but I enumerated those with another script that looks at all of the prow job YAML files and pulls the images out from that. There are very few Docker Hub images there that are unpopular; most of the Docker Hub images used by those jobs look like golang, node, stuff like that.
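[A minimal approximation of that enumeration, assuming a kubernetes/test-infra checkout; config/jobs is where the prow job YAMLs usually live:]

    # Pull "image:" references out of all prow job YAMLs, ranked by frequency:
    grep -rhoE 'image: \S+' test-infra/config/jobs/ | sort | uniq -c | sort -rn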
A
What I haven't been able to enumerate would be any script that runs docker pull as a result of those jobs, because docker pull isn't logged by the kubelet, and the logging for the GKE clusters doesn't pick that up either, from what I can tell. So that we're gonna have to find out. My anticipation is that the majority of scripts that do that are probably building Docker images, and the majority of those jobs run in GCP... sorry, GCB, Google Cloud Build, which is also...
A
Similarly, they anticipate they're going to be fine for Docker Hub. It is statistically very unlikely that we will run into problems with GCB.
D
Okay. What a mess! Thank you.
A
Yep. I was so relieved to hear that they decided to push the pause button on the image retention changes, which don't matter to us at all. But, like, honestly, because we have k8s.gcr.io, the majority of the project is very well shielded from this. It's more like the long tail of jobs that might get hit by this, but the really critical stuff is going to work just fine.
A
Okay, let me share my screen.
A
All right, how's that look? I'm hoping people can see that. So, let's see, I'm just gonna walk backwards from Blocked, if that's cool with people.
A
Right: we need to make sure that the actual promotion jobs run in community-owned clusters. At the moment, the jobs are still running in the google.com trusted build cluster.
A
I'm pretty sure, Arno, you probably know enough to make them migrate, or anybody from the release engineering team should be capable of doing this. I know I saw Stephen ask why we couldn't do this, and I proposed that the release engineering team do this, on one of the umbrella issues related to the release engineering team taking more ownership of the container image promoter.
A
I see you have a comment here related to the Windows build stuff that you've been working on. (Yeah, we already merged that.) Yeah, so I think we're good there.
D
I'd like to add maybe one small bump there. I want to make sure that the documentation for the end-to-end business of promoting images is up to snuff, so that somebody who isn't Linus or me (and even I struggled with it the last time it happened) can investigate any alert that goes off. And I'd like to test this by getting somebody who isn't Linus or me to respond to an alert.
E
I can't talk for Stephen, but I'm pretty sure SIG Release is, you know, taking kind of ownership around everything related to the promotion process, I think.
D
That is a great answer. Do we have, like... we have Google Groups for people who are in promoter-maintainers, or promoter-something; I forget what the group is called, but we can look it up. I would like to make sure that there is ample coverage there, and people who feel like they are responsible for it: the last time that alert went off, nobody looked at it.
D
I was going to say: given the shape of the billing report, this should be a P2 or P3.
A
Okay, so I'm gonna kick this out of In Progress back to the backlog; actually, I'm just going to put it into Needs Triage.
A
Develop a prow migration plan: I think Arno proposing a staging instance would help push this along. I also need to go through and update the status of this issue. Migrating all of the release-blocking and merge-blocking jobs: like I said, we're almost there. Greenhouse, the bazel build cache, has been moved over. Setting up the GitHub proxy in the trusted cluster:
A
I believe Arno has taken care of that. Developing better dashboards to ensure that more of the community can actually see what is happening with prow and feel empowered to troubleshoot it: I think we are most of the way there.
A
We have set up postsubmit jobs so that if we change the resource files in the appropriate cluster directories for these clusters, they automatically get deployed, right. And this is specifically what's holding back migrating all the release-blocking jobs. So I think enumerating how we then get to setting up a staging prow instance, what it would look like to shift traffic from one to the other, and how we would decide we're ready to do that, is probably what needs to be done next here.
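[The auto-deploy postsubmits mentioned have roughly this shape in prow's job config; the repo, job name, paths, and image below are illustrative, not the actual definitions:]

    postsubmits:
      kubernetes/k8s.io:                        # assumed org/repo
      - name: post-k8s-io-deploy-cluster-resources
        run_if_changed: '^infra/gcp/clusters/'  # fire only when resources change
        decorate: true
        spec:
          containers:
          - image: gcr.io/example/deployer:latest   # placeholder deploy image
            command: ["make", "deploy"]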
A
Okay, I've already covered migrating the release-blocking jobs. Next, the project artifact management task list.
A
The checklist at the top of this issue is very bold and has big ideas about not just GCR hosting but also GCS hosting for projects to use, and then also how to mirror this stuff to other clouds' registries, and I think we should pick that conversation back up at some point in time.
A
We've done some one-offs; I know we've done one for kind. But I think sort of a next step would be to figure out how to take the container image promoter and turn it into the artifact promoter, and I think Justin had mentioned he was working on that, but just hasn't been able to land it yet.
D
Yeah, I know Justin started looking at that. Honestly, I think this issue is probably not worth the paper it's written on right now, because it covers too much. I think there's two separate tracks here. One is: how are we going to set up GCR mirroring (container image mirroring), which we can tackle, like, today, right? We could start that work today.
D
If somebody wants to spearhead it. And then: how are we going to manage artifact promotion in a similar way, and how are we going to mirror our artifacts in a similar way?
A
Yeah. As far as Artifact Registry goes, I personally don't want to...
A
I don't want to be forced to use it until there's a good reason to. It sounds like they will start a deprecation clock once that goes GA. It's currently...
A
It's been beta since about March of this year. Once it goes GA, we have six months to switch from GCR to Artifact Registry. It should be relatively seamless, but there is some work involved, and I kind of want to put that on the back burner until we have to do it.
D
Yeah, and I don't buy the six-month clock anyway. Like, we are a significant fraction of GCR's bandwidth right now... I mean, maybe not a significant fraction, but we are a noticeable one, and I think that they will work with us to make a graceful transition. Let's leave it there, okay. But I'm eager, if anybody wants to try to start figuring out how we're going to mirror container images; that seems very approachable now that the GCR is in the community space.
A
The next issue is about moving the boskos projects over; these are the projects that are used by end-to-end tests. I honestly feel like I've done all of this. Let's see... yes, so there is no way to create a template and then create a bunch of projects from a template. So we have a script that basically creates the projects, and then documents the quota changes that had to be made to those projects manually by a human.
A
I've documented all the quotas that we need to request for the different project types, and have done so; this includes for GPU projects, scalability, and ingress. So the only follow-up thing I can think of is: if we get to the point where we want to start doing per-SIG billing (like, what is each SIG using or consuming in order to do testing), we might need to re-examine how we tag projects, or maybe have projects live in different folders. But otherwise I think this is done.
A
Okay, closing that out, because it feels great. We've got three minutes left. Is there any specific issue on the board that people wanted to talk about, or should I just keep walking?
A
Okay: storage analysis for billing.
D
Yes. Justin and I started peeking at this a while back, and then it sort of fell off the bottom of the plate. I'm very eager to turn this back on; it's one of the things I'm hoping to maybe spend some time on in the next couple of weeks.
D
Like I mentioned, that's sweet, sweet data there for the picking, if we can make the time to start doing it. It's not super important, but it gives us some visibility into what images we're storing and those sorts of things.
A
This is something I can't do with the Terraform files that we use to create our clusters. It turns out it's really painful, if not impossible, to create a set of Terraform files that you change to say: okay, before this commit we were deploying version 1.15; after this commit we're deploying version 1.16.
A
It is significantly easier to use gcloud, or to use the Google Cloud console, to initiate this upgrade if it's something we want to initiate at a time and place of our choosing. If we don't care, Google will upgrade the cluster for us automatically.
A
At least, it will upgrade the control plane for us; the nodes will still have to be upgraded if we want that, I feel like, but...
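[For reference, initiating that kind of upgrade with gcloud looks roughly like this; the cluster name, zone, and version here are placeholders:]

    # Upgrade the control plane at a time of our choosing:
    gcloud container clusters upgrade aaa \
      --zone us-central1-f --master --cluster-version 1.16
    # Then upgrade the node pools (omitting --master upgrades the nodes):
    gcloud container clusters upgrade aaa --zone us-central1-f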
E
Also, a little warning about using a release channel: it may destroy the cluster. The last time I tried, I had this issue with the current state of the Terraform file: if you introduce the channel on the existing cluster (I think; I need to check), Terraform will recreate the cluster.
A
Yeah. For what it's worth, this is my opinion: using Terraform to manage Kubernetes clusters is not the greatest, as a result of having created it and then tried to maintain it for the past couple of months.
A
Okay. What I have had to do in the past, for what it's worth, is update the resource manually (like, go change the resource however I want to) and then re-import that resource into the Terraform state file. So I have to sort of manually do some surgery on the state file, or I have to put things back into it, before Terraform's view of the world agrees with the way the world actually is.
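[That state surgery uses standard Terraform commands; the resource address and cluster ID below are illustrative:]

    # Drop Terraform's stale record of the cluster, then re-import reality:
    terraform state rm google_container_cluster.aaa
    terraform import google_container_cluster.aaa \
      projects/my-project/locations/us-central1/clusters/aaa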
D
So yeah, the last thing I want to do is to have, like, a flag day once a quarter where we all go and upgrade our various clusters. That just seems pointless, yeah.
A
Yeah, that would be nice. Okay, we're three minutes over our scheduled hour. I really appreciate everybody hanging around; it's been great to see you, but that's all the time we have. I look forward to seeing you all in two weeks, and happy Wednesday!