From YouTube: Kubernetes WG K8s Infra 2019-05-01
A: Okay, hi everybody. Today is Wednesday, May 1st, happy May Day! Right, so you are at the WG K8s Infra meeting and you are being recorded; you can all go watch yourselves later on a public YouTube channel, as you adhere to the community's code of conduct by not being jerks. First off, is there anybody new here? I feel like I recognize all the faces and names.
B: Can probably take this? Why don't you start, and I'll get the dollar-by-dollar readout. Okay, so I actually did: I started using Data Studio. I'm gonna try to present my beautiful presentation for everyone; I prepared a PowerPoint for everyone's enjoyment. Bring it up, let's see if this works.
A: Well, Justin is looking. I wanted us to try and go through some of our usual... oh, there's even Comic Sans in there. I wanted us to go through our usual spiel, but then I wanted us to kind of go through things on a GitHub-issue-driven basis; I tried to pull up all of our actual items into GitHub issues. I think, Justin, something we got on one of his... but I'm so excited to see Comic Sans in serious business. So, let's see our billing report.
B: The Comic Sans: this is Google Data Studio, which is actually really sweet, I think. It's fairly easy to build reports, and it does pretty nice drill-down and all this. You can change the date range if you're in the view; you can drill down into the different services and then drill further. This is live data, so we can come... I'm looking for the day... it doesn't exactly match what we're seeing elsewhere, so there might be some reconciliation needed, but we can create whatever billing reports we want. I believe I can embed this into a web page, or we're getting it delivered by email, a PDF served by email each day. I don't know whether we can put this somewhere else, make the data public, or whether there's an API in there.
B: First, I guess we spent 21 percent more on Compute Engine in April than we did in March, and 13 bajillion percent more on Cloud Storage; and the GCR product, but still zero. But still zero! I tried to get that to not read that way and, look, failed. But yes, so we can basically create whatever reports we want. It's fairly easy to create, and it's a fairly nice interface over BigQuery, which has all our billing data being exported to it, and we can figure out then where we want this data to go.
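For reference, the report described here sits on top of the standard GCP billing export to BigQuery. A minimal sketch of the kind of per-service rollup involved, with a hypothetical project/dataset/table name (the real export table is project-specific and named after the billing account):

```shell
# Hypothetical table name; billing exports are typically named like
# <dataset>.gcp_billing_export_v1_<BILLING_ACCOUNT_ID>.
bq query --use_legacy_sql=false '
SELECT
  service.description AS service,
  ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`
WHERE usage_start_time >= TIMESTAMP("2019-04-01")
GROUP BY service
ORDER BY total_cost DESC'
```

Data Studio can then point at the same table (or a saved view of a query like this) as its data source.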
B: So this billing breakdown breaks it down by SKU; let me break it down by product. We just asked you to break it down... by product, or do you even have a SKU? Well, I can't see both screens at the same time. Oh yeah, you do have a SKU; let me break it down by SKU then. So I have 85 dollars in cores, 42 dollars in memory, and 31 dollars in SSD persistent disk. Who's using SSD? Who's getting all fancy, I think?
F: We can certainly do billing per bucket; you can get answers for billing per bucket since we activated that yesterday afternoon, and per namespace we can maybe look at and find out more information on that. Yeah, I want to be realistic here: two weeks from Friday I leave for KubeCon, so in the next two weeks I'm not gonna be doing a whole lot else, very honestly, so I'm wary of overcommitting for two weeks from today. Right.
F: I mean, open the gates? We have like eight different things that we can open the gates for. We can open the gates for GCS serving without a whole lot of fanfare. We can open the gates for GCR serving, but we need to move over the bulk of the existing GCR stuff. I don't know if Linus is here today; I can't see. Well...
A: Honestly, I just want us to have the ability to adapt if, for whatever reason, our costs explode, and eventually be able to break it down by service and whatnot, so it's easy for us to keep track of: whoa, our numbers changed drastically since the last time we read them out; what have we done in the last two weeks? And then adjust accordingly. Sure.
F: So I think we're on the hairy edge of that already with this report. If we just polished it up a little bit... we've got most of the queries that we're actually interested in in it, and honestly, if people have queries that they'd like to see, send them to Justin or myself and we'll figure it out. I don't want to have 800 queries in here; I would rather have like six. So, you know, I would focus on the last month, the last two weeks, a month...
F: And that gives us an overall picture, and then I think we can, quote-unquote, open the gates. But that said, the other pieces aren't all in place yet. Like, we can't open the gates on GCR until we do the bulk conversion of all the existing GCR data; we can't open the gates on workloads until we're comfortable that the cluster automation is done, which I would like to have had done for today, but it just fell between the cracks. Okay.
G: Sure. Later on we're gonna look at setting up an automated job for this, so that the moment it changes we have a PR to review, or within, you know, a few hours every day. This is the only change since our last meeting: somebody added the k8s-infra GCP accounting Google Group, and they added the role BigQuery Job User. It would be nice to automatically figure out who changed that and add it to the audit.
F: What I'd love to... BigQuery? Okay, it seems like it could be a follow-up. Like, simply having the readout... and actually, if you run it again, we put in a something for you to find; we left you a breadcrumb. I was wondering if you were gonna run it just before the meeting or if it was gonna take longer. So yeah, I mean, that's simply the fact that we have an alert that simply says: hey, somebody added so-and-so with this permission. Like, do you want to codify this into YAML?
F: That is my next... I don't know if you're gonna jump to that... but that's my next mission. I'm happy now with the way all the GCR stuff is, and all the GCS stuff seems to be up, and we'll talk more about that, I guess, when we get to that topic. My next real thing that I want to focus on for this group is getting that cluster and the automation and all the config set up the way we're happy with it. Again, it probably won't be two weeks from today.
A: He thinks it is... it is basically done, yes. Okay, so I will share my screen now, if I can. What I wanted to try and do: I had an aim to sort of normalize how we're planning this stuff, because I feel like each meeting that we run is kind of a slipshod scraping through the doc and trying to figure out what we were talking about last time.
A
That's
so
what
I've
done
is
made
for
milestones,
I've,
given
them
just
arbitrary,
due
dates
to
talk
about
what
I
think
we
had
been
doing
in
the
groups
in
section
was
proof
of
concept
making
sure
we
knew
how
to
stand
things
up
manually.
What
we
are
trying
to
work
on
now
are
all
of
the
tasks
necessary
for
us
to
say
we
are
ready
to
open
the
gates
and
then
I
feel
like
the
next
two
milestones
are
around
migrating
low-risk
infrastructure
and
migrating
all
the
infrastructure
related
to
kubernetes
tests,
because
I
feel
like
the
kubernetes
test.
A: ...stuff is kind of tangled up. So I felt like what might be most productive is to get this group's consensus on whether or not I have the right things in the ready-to-migrate milestone, whether they are assigned to the right people, and to kick things out that don't belong there or add things in that do belong there. I've also tried to use area labels to break the work down into some big areas. To allude to James's comment about the eight different gates, I feel like we can shard things up by...
A
You
know
what
I'll
do
I'll
do
a
different
view
where
I
can
actually
filter
this
stuff,
while
I'm
talking
but
to
filter
things
by
access
to
the
cluster.
So
like
can
we
actually
describe
all
of
the
google
groups
that
were
using?
Do
we
feel
like
we
have
a
mechanism
to
manage
all
the
google
groups
that
we're
using?
Do
we
know
what
high
am
roles
and
policies?
These
things
are
associated
with
that
sort
of
stuff.
F: But before we do that, I would like to understand how the API works. I think Christoph mentioned some... there was a bug assigned to Christoph to figure out the API and how to do it through the API. I feel like we should probably actually write a program for ourselves, either a CLI or something that can actuate Google Groups the way we do for other stuff right now, so that we have a simple command that creates a new group and initializes it with all the correct owners.
F: I am NOT an owner of all the groups, but I think either you, Aaron, or you, Dims, are owners on all the groups, or Igor; you three are the sort of template. Whenever I create groups I've been removing myself, which actually sort of bit me in the butt this morning. But you three should be able to look at your groups and find everything that starts with k8s-infra.
F: So if we now have a command line that a select group of people can use to create and bootstrap a new group, that's a huge first step, because we're gonna end up creating dozens and dozens of these groups for all the different stagings and all the different namespaces and everything else. So let's make that as streamlined as possible.
F: So the question is: we can do it one time and just create them, move all the people who are in the existing groups to the new groups (fortunately there's not that many), and just do it one time. We can then talk about whether this should be reconciled actively somewhere, right? Like, the rest of our GCP infrastructure is not solved via YAML yet, but probably should be over time.
A: So I couldn't quite find a PR. If you can link the PR to this issue, that'll let us know that the script itself exists. Sure, okay. Maybe also related to the scripts: there was something about setting up a group with just the audit permissions necessary to use these scripts. Has this been done already?
G: I was actually in the midst of creating the pull request. You can see a couple of the commits against the current pull request that are generated by a bot.
A: Okay. I feel like a model we could follow is Prow: at the moment it opens up a pull request to auto-update itself, and if it finds that there is already a pull request doing that, it just force-pushes to that branch, so that there's only ever one bot-automated PR to update things, but it's the one that's human-reviewed. That's...
F: Yes. I think missing from this list, before we... oh well, again, it's which gate. If we're talking about all the gates, I think we do not have a script that turns on a GKE cluster yet, right? And that script will need to include the basic RBAC enablement, the same way our sort-of template groups have some basic permissions: basic permissions for a cluster.
A: So, just to really walk through it: I feel like there's an umbrella issue to create the cluster. There are all sorts of detailed things and aspirational things in here; I feel like I want to shut this issue down when we feel like we have successfully burned down a manually created cluster and have recreated a new cluster from scratch. So I have an issue open assigned to you, Tim, and Justin, since you guys said you're gonna work on this last time; anybody who has expressed interest in maybe shoulder-surfing can join if you get together to do this. Yes.
F: Certificates come across as well. That's right! So basically everything that's currently running in the non-production and production clusters needs to be recreated, so in the process we may actually turn up the new one and then burn down the old one. Depending on how important we think that promoter is and how long we think it will take, I would advocate that we write the script, and once we're happy with the script, then we do the burn-down and turn-up, right?
F: Okay, yes. So let me put your question on the stack for just one second and address Justin's there. Yes, we know we need clusters, plural; I think it's fair for us to start with cluster, singular, knowing that that's what we're running from today. And so just figuring out how we turn that cluster on automatically, applying all the current best practices and all of the security knobs that are non-default, through the various GKE APIs or whatever, is really what I think is the long pole.
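The turn-up script being discussed could start as a thin gcloud wrapper. A hedged sketch, with placeholder project, cluster, and group names, and an illustrative (not exhaustive) handful of the non-default security knobs mentioned:

```shell
#!/usr/bin/env bash
# Sketch of a repeatable GKE turn-up; all names are placeholders.
set -o errexit -o nounset -o pipefail

PROJECT="k8s-infra-test"      # hypothetical prototyping project
CLUSTER="prototype-cluster"
ZONE="us-central1-b"

# Example non-default security flags; the real list is whatever the
# group settles on as "current best practices".
gcloud container clusters create "${CLUSTER}" \
  --project="${PROJECT}" \
  --zone="${ZONE}" \
  --enable-ip-alias \
  --no-enable-basic-auth \
  --no-issue-client-certificate

# Basic RBAC bootstrap: grant a Google Group cluster-admin,
# mirroring the "template groups" pattern for the cluster itself.
kubectl create clusterrolebinding cluster-admins \
  --clusterrole=cluster-admin \
  --group="k8s-infra-cluster-admins@example.org"   # placeholder group
```

The Go-program-vs-shell question discussed later is exactly whether this stays a script like the above or becomes API calls in Go.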
F: Well, I mean, that's what we have for all these other scripts today, and I think it's the shortest path to something deliverable; I'd love to help contribute to that. Okay, so I would be happy to... sorry, I didn't see who that was. Yeah, yeah, I would be happy to give you project editor access to the project, or we could spin up a test project, honestly, just for prototyping, if you want to iterate on that script. I don't know if you've even got a start of the script.
F
A
F
Do
you
use
it?
I
just
never
use
the
command-line
tool,
because
the
UI
is
so
I.
Love
will
stuff.
That
delights
me
to
do
that.
Those
type
of
things-
okay,
I,
think
it's
great
and
yes,
Justin
and
I
have
been
having
an
ongoing
discussion
about
whether
it
should
be
a
go
program
that
turns
this
on
via
the
API
or
just
calling
g-cloud
through
shell.
My.
F: Right. And so turning the cluster on is step one, and then making sure that we have a comprehension of what we want the default permissions to be: like, which Google Groups specifically are we going to give admin access to the cluster? I don't think we have a group for that yet, so we'll have to create the group and figure out the naming of it, and, of course, naming things is hard.
F: So here's what I'll do, right: I don't know everyone's emails, though I think I have them in various other places. So I will just create a test project, and I will add you guys as project editors to that project, and you guys can go nuts iterating on turning up the cluster, and then we can compare notes. Can you send me your email, so I can add you to a group?
A: Next, billing. So we had a task for Justin to show us Data Studio; Justin did that, hooray. Our next steps are to figure out what actual data we want from Data Studio, what queries we want. We talked about that being either per-namespace billing, and then storage analysis; I feel like we should rephrase this to maybe be per-bucket billing. Does that sound fair, or...
F: Or don't care. I mean, unfortunately GCR today does not give us that breakdown, right? So if we can do better with GCS, then great, let's do that. So we'll take the action, then, to just throw some test data in with some prefixes that we think look vaguely realistic and throw some artificial load at it. The data's artificial, but download it a thousand times and see if we can run up a bill.
A: I have asked a bunch of questions in this issue to basically express that it sounds like we need to figure out what granularity we want. I think, Justin, you articulated it well: we care about bandwidth delivered and storage at rest, and how do we shard that appropriately? Sharding it by subproject is probably what I care about.
A: So I feel like I will close the "present Data Studio" thing. I will open up some issues for some of the follow-up ideas we discussed, and we'll call the umbrella issue closed when we feel like we are content with billing. To be able to open gates for certain things... to me, it seems an awful lot like this question defines when we open the gate for a bunch of subprojects.
A
Related
artifacts
I
feel,
like
that's,
also,
probably
gated,
behind
the
use
of
G
Street
to
effectively
manage
all
of
our
high
Emeril's,
but
it
could
be
like
just
to
get
a
cluster
up
and
running
and
get
and
make
sure
you're
repeatedly
reusing
it
and
running
stuff
on
it.
Maybe
we
don't
necessarily
need
super
granular
building,
yes,
any
any
other
words
I'm
doing
okay,
so
the
the
area
that
I'm
least
informed
on
right
now
and
I'm,
actually
gonna
drop.
The
milestone
is
all
related
to
artifacts
that
we
store
things
I
have
in
the
milestone.
F: Linus said that it works and he thinks it's in a usable state; he has done demos himself. The question is: are we comfortable with what I would call a lot of testing, in terms of integration or end-to-end testing, yet? We've set up the end-to-end test project so that he can do that work; to the best of my knowledge that work is not done yet.
A: So then, Justin, you have all sorts of stuff related to artifact storage in general, beyond just GCR, that I don't really know how to quickly bucket, because I feel like we have a lot of mega-sounding issues, and I want to understand what we think of opening the gates in the context of the artifact server and redirector and all that stuff.
B: We have created a single GCS bucket for prod, and I believe we have mostly created a GCLB in front of that. We haven't done the DNS name yet, but we can point that name at it and it should be live when we do that. And then I have also created a PR against the image promoter to do binary artifact promotion, which I will ping Linus about soonish.
A: Okay, all right. I feel like I'm gonna try and take some of these broader ones, like "set up a GCR repo" and "set up a GCS bucket", and try and describe them in terms of GCS buckets for use with the MVP versus GCS buckets for everybody. I don't think we're doing that, and I feel like I could consider the GCS bucket issue closed when I feel like we have done enough.
F: Y'all read this: all the repos are in place, so we can do a demo of it today, I'm pretty sure. We just don't have the testing; that's really the only thing that's keeping me from saying let's start the bulk conversion process. Also, we should define a conversion protocol; I don't know how long it's going to take to do the bulk import.
B: I think this was something we need to figure out, and I don't think this is MVP, and I know that's an omission: how does anyone know that the image that we're talking about is the image that we say it is? I wasn't ready to talk about this; I don't know that we're ready to tackle this yet. Okay, I'm getting there. Relying on this is relying on TLS, exactly.
A
So
I
will
leave
a
deep
milestone.
I
will
create
a
like
nice
to
have,
but
I
feel
like
there's
something
beyond
all
the
tests.
That's
like
just
nice
to
have
so
I'll
put
in
there,
but
that's
why
I'm,
leaving
it
milestone
list
and
I
will
go
ahead
and
had
these
two
to
get
ready
to
migrate,
milestone
to
remind
me
to
break
them
up
appropriately.
A
Gracefully
accepted
credentials
to
docker
hub
from
Tim
I
noticed
we
had
all
these
images,
they
haven't
updated
him
forever.
I
think
the
recent
security
breach
and
we
demonstrated
why
we
don't
want
to
be
on
dr.,
have
anymore
so
I
removed
our
presence
entirely
and
waited
for
the
whales
of
discontent
and
then
I
think
I
bumped
into
one.
Somebody
said
like
broke
their
testing.
A: Specifically, they used OpenShift 3.10 and it couldn't pull down the kubernetes/pause container. And so now I am chasing down, currently in back channels, but I'll start to surface it: just where is kubernetes/pause hard-coded? What downstream projects does this affect? What are their support windows, and what reasonable guarantees can we make? It seems like we may not have actually busted...
A
Certainly
it
like
open
shift,
proper
I,
don't
think
we,
but
they
haven't,
had
kubernetes
paws
in
their
core
code
since
1:9,
but
it
looks
like
we
didn't
actually
fully
excise
that
cause
container
from
our
test
code
until
October,
2018.
So
conceivably,
we
have
some
back
porting
to
do
so.
I
feel
like
I
might
have
to
keep
the
stupid
docker
hub
account
around
long
enough
for
us
to
rid
of
all
references
to
the
pods
container
yeah.
F
You
know
if,
if
paws
is
the
only
one
there,
that's
a
pretty
reasonable
situation,
especially
if
maybe
we
just
push
a
new
one
up
there
to
make
sure
that
it
hasn't
been
compromised.
We've
already
changed
all
the
credentials.
Well,
they
can
check
the
chanson
changed
if
we
have.
If
we
know
what
the
shot
was,
don't
forgot
to
tell
you
when
it
was
asked,
updated,
I
presume
if
it's
compromised,
I'm,
not
trusting
anything
yeah.
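The checksum check mentioned here amounts to comparing image digests. A sketch of the comparison logic, assuming a known-good digest has been recorded somewhere; the usage shown in comments requires Docker and network access:

```shell
#!/usr/bin/env bash
# digest_matches EXPECTED REPO_DIGEST
# REPO_DIGEST is the "repo@sha256:..." string that
# `docker inspect --format '{{index .RepoDigests 0}}' IMAGE` prints;
# succeeds when its sha256 part equals EXPECTED.
digest_matches() {
  expected="$1"
  repo_digest="$2"
  [ "${repo_digest#*@}" = "${expected}" ]
}

# Illustrative usage against the live image (needs Docker):
#   docker pull kubernetes/pause
#   d="$(docker inspect --format '{{index .RepoDigests 0}}' kubernetes/pause)"
#   digest_matches "sha256:<known-good-digest>" "$d" && echo "unchanged"
```

This only helps if the known-good digest predates the breach; as noted above, if the image is compromised, nothing pulled afterwards should be trusted.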
A: So, did you people like that structure of running through the milestones and stuff? My thought was, by next time, to make sure that our project board actually has just the current milestone. If people start adding new issues or other stuff, I will punt them to other milestones as appropriate, and if we have time we can get to those. But so, for example, you notice we haven't talked about the go-get k8s.io replacement...
A
We
haven't
talked
about
the
redirector,
the
nginx
pastry
director
being
something
else
because
I
agree,
those
are
great,
but
we
really
should
be
focused
on
unblocking
and
opening
the
gates.
I
agree,
I,
agree,
I,
agree,
I,
agree,
okay,
the
only
thing
we
didn't
cover,
but
just
so
you
know,
I
started
taking
inventory
of
all
of
the
cluster
based
infrastructure
that
we
originally
pulled
together
in
that
dock
I
now
actually
know
what
clusters
they
live
in.
So
you
have
a
chance
to
look
at
this
and
see
if
anything
is
missing
from
this.
A
I
also
have
a
separate
issue
to
go
through
all
of
the
Google
projects
that
are
used
to
capture
non
cluster
based
infrastructure.
So
things
like
bigquery
are
probably
the
main
things
I'm
thinking
of,
but
also
what
are
all
like.
Gcs
buckets
that
we're
dumping
all
over
tests
are
sacks
in
things
like
that.
Okay,
that's
all
that
I
have
awesome.