From YouTube: Kubernetes WG K8s Infra 2019-05-15
Description
B
You can watch this posted on YouTube later, so I thought that this week it would be cool if we could go through our recurring topics first and then walk through open issues in the milestone, where I triaged all of our action items from two weeks ago, as we try to drive towards, like, opening the gates on stuff. I believe I recognized everybody that is here, so unless anybody wants to reintroduce themselves, I will move on to billing.
B
Where do we stand on billing review? I know Justin showed a bunch of fancy reports last week, and typically at this point in time Tim reads us out the standard billing report from GCP, so we can hear about the resources that are spent. It's unclear to me whether, like, Justin's fancy thing was something we would be using instead by now, or we...
D
I don't think it's something we can use instead just yet. It wouldn't at all surprise me to hear that Justin's been very busy getting ready for KubeCon. Yeah, so I'm happy to do the readout here. I haven't seen an update for that dashboard, so I'll just read out from the table here. Top single line item is cores running in the various test clusters as we're standing them up; that's to the tune of $69.90. Then there's RAM — sorry, 35 bucks — and SSD PDs, $20, and DNS, 39 million queries.
D
Oh, sorry, I didn't read the current month — the query count keeps changing. Hold on — to the 15th, the current month is close enough: thirty-nine million nine hundred thousand queries to date at sixteen dollars; and load balancers, seven bucks; and beyond that we're going down into the pennies. So I'd say things are still looking good, and all the costs are reasonable and going to places where we'd expect them to go.
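For reference, the line items read out above tally roughly as follows — a quick sketch using the figures as transcribed from the meeting, not the actual GCP billing export:

```python
# Rough month-to-date tally of the GCP line items read out above
# (figures as transcribed; illustrative only).
line_items = {
    "cores (test clusters)": 69.90,
    "RAM": 35.00,
    "SSD persistent disks": 20.00,
    "DNS (~39.9M queries)": 16.00,
    "load balancers": 7.00,
}
total = sum(line_items.values())
print(f"month-to-date: ${total:.2f}")
```

That lands under $150 for the month to date, consistent with "things are still looking good."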
B
Okay. So that's the — I guess I can share my screen, if that would help. You're gonna have to deal with GitHub in dark mode, nerd; I hope you'll live. Okay — GitHub in dark mode. So one of the things is: we need to make sure we actually have GKE clusters available to migrate infrastructure over to. This is the umbrella issue that has to do with cluster management, so I'm gonna focus on everything that has to do with cluster management for a second.
D
Well, in his stead, then: he has sent a PR which I have not reviewed. We talked a little bit yesterday on Slack. He was looking at all the various options for cluster turn-up and said he had most of them nailed down in that script. I have also been pretty busy getting ready for KubeCon, so I just haven't had a chance to look at it yet.
B
Do we — do we — I don't know if you can see, but I actually, like, laser-engraved the Bosch logo onto a cup here; that's — that's where I'm at. Sorry, we're really far afield here! Okay: do we feel like it's fair that we could get this script reviewed and run, to a point where we actually have a cluster that's stood up with this, in two weeks?
D
Two weeks is going to be pushing it, just because KubeCon is the entirety of next week and, like, I don't return to the States until Monday, so it'll be difficult for me to, like, guarantee that we have this in two weeks. I'm gonna try like hell and shoot for this, but I've got, you know, a huge backlog of things that I'm ignoring this week and next. So it's gonna be a challenging week.
B
Let me ask the question this way. I feel like it might be productive for some of us to get together face to face while we are at KubeCon, possibly during the contributor summit, and we could work on it together. Do we think it's fair to try and push on something like this while we are there?
D
So sorry — I meant that as a compliment, Christoph. And yeah, we can totally get together and work through this and see if we're all happy with it, if you have any other questions around it, and if so, maybe even push it from there, approve it from there. I'm fine. I'll be at the contributor summit on Monday for sure; I don't know what the agenda is gonna be, but if we pulled aside an hour to do this, I'd be cool.
A
Okay, so it's easy to find out: basically, the list of people who have GPG keys in the repository can view the encrypted — well, can view the token file. So you can do git-crypt unlock, and you can run reconcile.go; there's a small README. And there is a groups.yaml, which has the list of Google Groups and the list of members for each Google Group. So it's basically shipped at this point.
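The reconciliation flow described above — a declarative list of groups and members that a tool diffs against live membership — can be sketched roughly like this. The function and data names here are hypothetical; the real tool is a Go program (reconcile.go) driven by a groups.yaml file in the repo:

```python
# Illustrative sketch of group reconciliation: the declarative file is the
# source of truth, and the tool computes what to add/remove per group.
def reconcile(desired: dict, actual: dict) -> dict:
    """Return a per-group plan to make actual membership match desired."""
    plan = {}
    for group, members in desired.items():
        live = set(actual.get(group, []))
        want = set(members)
        plan[group] = {"add": sorted(want - live), "remove": sorted(live - want)}
    return plan

# Hypothetical data: one group declared in the YAML vs. live membership.
desired = {"k8s-infra-cluster-admins@kubernetes.io": ["alice", "bob"]}
actual = {"k8s-infra-cluster-admins@kubernetes.io": ["bob", "mallory"]}
plan = reconcile(desired, actual)
print(plan)
```

The real reconciler then applies that plan through the Google Groups admin API rather than just printing it.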
D
On that topic: I've gone through most of the GCP projects that we already have where we're using these groups for IAM permissions, and I have converted most of them. I've got a couple of scripts left that I need to finish and get pushed and checked by all your eyes, but I've converted a bunch of them already, like the staging repos and stuff. It's the main kubernetes-public project that needs more, and some of the — I think, in fact, I did the org-level ones already too. So I'm working on scripting.
B
Okay, all right. To me, if we dump this, I'm gonna feel a lot more comfortable about deleting things and seeing if people scream. Totally — if we can't audit it, it doesn't really exist, you know, basically. Okay, thank you for that. And then setting up a job to run this automatically — maybe we'll do, like, a cron job? We've got an issue for that. Cool, cool. Okay, so we were talking about GCR stuff — let's talk about setting up GCR repos.
D
I changed the ensure-staging-repo script to, instead of taking an argument, actually just loop over all of the known repos. So to a first approximation we have metadata now: you just go add an entry to that array and rerun the script, and it will rectify everything. It's not exactly the same as having a nice YAML file that we can slurp in, but it's a step in that direction.
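The pattern described above — one array of known staging repos, with an idempotent ensure step run over each — looks roughly like this. The repo names and function are illustrative; the real version is a bash script in the k8s.io repo:

```python
# Sketch of "loop over all known repos": the array is lightweight metadata,
# and each entry gets the same idempotent ensure treatment on every run.
STAGING_REPOS = ["coredns", "cluster-api", "csi"]  # hypothetical entries; add one and rerun

def ensure_staging_repo(name: str) -> str:
    project = f"k8s-staging-{name}"
    # The real script would create/verify the GCP project, the GCR repo,
    # and the IAM bindings here; this sketch just derives the project name.
    return project

ensured = [ensure_staging_repo(n) for n in STAGING_REPOS]
print(ensured)
```

Because each step is idempotent, rerunning the whole loop after adding an entry rectifies everything without disturbing existing repos.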
B
Okay, I feel like that kind of contradicts the comment above, where we're saying out of scope for now is the ability to create repos to support arbitrary projects. You're saying you actually want to create staging repos to support arbitrary projects? And can you just help me understand: what is the difference between a staging repo and a repo that people can push to and consume images from?
B
Okay, I apologize if I'm, like, stuck in the past here. I guess I was under the impression that we still wanted to get this process correct for the core Kubernetes artifacts, like etcd and all the images for Kubernetes; and that we still weren't yet confident that we had that ironed out, working and end-to-end tested and all that stuff; and, like, we wanted to get that first before we opened up the gates for other subprojects and whatnot.
D
We have anecdata that it works — Linus says so. We do not have the tests yet; hopefully Linus can get to that, otherwise we will have to re-resource that. Once we have those tests, then we will have full confidence in the promoter's ability to move containers from staging GCRs into production GCRs, but Linus has already demonstrated that it does work.
D
There are remaining questions as to, like: do we throw open the floodgates right away and let people start promoting before e2e tests are done? I'm a little anxious about that. And what are we going to do with all the existing images — are we going to load them into one sort of omnibus legacy staging repo, and then have everybody use more specific staging repos going forward? That seems like the easiest path.
D
I don't think we're ready to do that until we have the testing in place, so we're really gated on the testing. I'm happy to add more staging repos in the meantime for people who need it, like, immediately — there's been a few cases, like kops and other places, where they want to serve artifacts today and get them off of, like, you know, privately owned accounts and those sorts of things. That's fine; that's low-impact.
B
Okay, I apologize, guys. I still feel like I am, like, way far behind — like I must not be watching. I really haven't looked at the whole bunch of PRs flowing into this repo in the past two weeks; I'm kind of catching up on it today. So I'm just trying to understand: what's the basic thing that we would get done, such that if we were to open the floodgates, we could point people to it and say, like, copy-paste this approach and you too can add your things?
B
That's all part of that same script, and then they can — they can manually build and push images from their laptops to that staging repo to their heart's content — that's right — and then we would be waiting on end-to-end testing of the container image promoter before we feel like we're comfortable using that to promote from the staging repos to production — that's right — and running the container image promoter to do that is — okay, wait, that happens in prow in response — all right, that's a prow job. Okay.
C
Yeah, that second part is, you know, much more involved, and I think the latest thought on that was possibly, like, promoting to, you know, a sort of backup repository in case we do something really bad. But right, that's kind of a stopgap — it's not a full solution, but it's better than nothing, which is what we've got today.
C
And then that's, like, you know, hundreds of images — right, thousands, for starters — so all of that will need to be, like, read in or written to a YAML. So there's a good open issue for adding a feature to the promoter where it can basically read, or snapshot, a repository and just write down all the contents to a YAML file, and then we'll start tracking that — like, the promoter will start looking at that file and then say, okay, little changes here.
C
Actually, we don't want people to make changes to the old one, and the promoter will look at basically a huge YAML file with all of the old images and make sure that nothing changes beyond that. So if people try to push new images without using this image promotion process for the existing, you know, old images, then it'll get rejected — like, you know, it'll revert back to that snapshot state — and then we're gonna push people to say, you know, please stop making the old one get any bigger.
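The enforcement idea above — snapshot the legacy repository into a manifest, then flag anything that later shows up in the repo but not in the manifest — can be sketched like this. The data and names are illustrative, not the actual promoter's manifest format:

```python
# Sketch of snapshot enforcement: the recorded manifest is authoritative,
# and anything pushed outside the promotion process is flagged for reversion.
snapshot = {"cluster-autoscaler": {"v1.2.0", "v1.3.0"}}  # hypothetical manifest

def unexpected_images(snapshot: dict, live: dict) -> dict:
    """Images present in the live repo that are not in the snapshot."""
    return {
        repo: sorted(set(tags) - snapshot.get(repo, set()))
        for repo, tags in live.items()
        if set(tags) - snapshot.get(repo, set())
    }

# Someone pushed a tag directly, bypassing promotion:
live = {"cluster-autoscaler": {"v1.2.0", "v1.3.0", "v9.9.9-rogue"}}
rogue = unexpected_images(snapshot, live)
print(rogue)
```

A periodic job running a check like this could either warn on, or actively revert, out-of-band pushes to the frozen legacy repository.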
D
We can do the, like, flip, and then check again and make sure that nothing has changed, and disable the old one, right. So we're just gonna have to — as we get all the testing and everything in place — we just have to declare, like, Wednesday is moving day and no images are being pushed into the old GCR while we move everything over, and make sure that we do our best to synchronize, I mean.
C
Like, making sure that nothing new lands in the old repository — I don't think that's solved by the promoter doing its thing, if you just let it run, you know, every day, and people push stuff there and they're like, "hey, how come my image got deleted?" or something. Or, I mean — even if you don't, like, enable, like, bi-directional syncing — if you just have a periodic job that, for example, warns us of any new images that are not in the manifest, like...
B
So I guess I'm kind of — like, Tim, you said the number of thousands off the top of your head, and we're talking about old repos and pointing people — like, I guess I don't know what old places you're talking about. So I'm curious: could you give us some examples of some of the images you're thinking of? And I guess I'd also like to understand: why would we not just, like, revoke credentials so that people couldn't push to the old place, or something?
D
And that's, you know — for compatibility reasons we're reluctant to delete those old images, even though we know there are CVEs in some of them and people are still using them; we really have sort of taken a policy of not deleting old images, so everything remains in GCR. As for revoking credentials: we can, but we weren't very careful in granting credentials in the first place, because it was all in the family internally, and so it's gonna be a little bit more challenging.
B
There might be a lot of tags, but I guess I'm feeling like everything that goes in there — it doesn't all just come from the kubernetes repo. Oh, sorry, maybe I just used the word "just" — but I'm thinking of images that are built by our release process for the various Kubernetes components, as well as images that are built for our end-to-end tests. I can't think of anything else that lives in those places, but I'm probably missing stuff, right?
D
That's something that Justin and I have been talking about — figuring out if we can get a way to get the logging out of there. Looking at the existing k8s.gcr.io, there are 475 named repositories, and then each of those repositories has some number of tagged images underneath, right. Cluster-autoscaler, which I just clicked on randomly, has a hundred and one tags in there. So if you assume that a hundred is the average, it's probably on the order of 50,000 total image tags through the repo.
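The back-of-the-envelope math above works out as follows:

```python
# Rough estimate of total image tags in the legacy registry,
# using the figures mentioned above.
named_repositories = 475
avg_tags_per_repo = 100  # cluster-autoscaler, clicked at random, had 101
total_tags = named_repositories * avg_tags_per_repo
print(total_tags)
```

That is 47,500 — on the order of tens of thousands of tags to snapshot into the manifest.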
B
All right, there's a part of me that wonders — I'll talk about it offline, I guess. We've set it up so all of our images in the test-infra repo are now built by, like, a single thing under automation, so we don't trust humans to build and push the images. I'm wondering if we could identify all of these images and have something similar built, so that then we can feel more comfortable revoking people's credentials, because it's — it's done by a bot, I think.
D
We can totally do that. I think some of it's going to be case-by-case, honestly. I think once we throw the switch, we'll be able to get most people to switch pretty quickly; I don't think it's gonna be that big of an issue, especially if we're giving out staging repos for people to be able to push to, like, right now.
B
That — that is understandable, and that gives me a little more context. Okay, so there are more images than what we will be building. Okay — because I feel like we talked about moving the old stuff and the new stuff; if we set up staging repos for the new stuff, it kind of doesn't really mean much until we flip the vanity domain over. I guess that's kind of what I'm hearing.
B
Okay, because I guess it feels like we believe that, theoretically, all of the pieces should work, and I want to get us to a point where we're actually getting some usage of the whole thing. So it kind of feels like the sub-project path might be a start, but it still doesn't get us to the promotion aspect of that path. Yeah.
B
Okay, I think that is fair. Do we feel like we want anything else to talk about on the container image promoter? To me it sounds an awful lot like next steps are — maybe, again, if we get together in person, we could maybe walk through the mechanics of migrating the legacy stuff a little bit more, or we could hack on some end-to-end testing to give us confidence in the promoter.
D
We added — Jonathan? Justin's here, is he? — we added support for GCS to the same staging setup. Oh, this is for the apt and RPM stuff: so we added GCS support to all the staging scripts, so anybody who creates a staging repo now gets both the GCR repo and a GCS bucket. We didn't talk about apt or RPM specifically; Justin has been pursuing his load balancer to expose artifacts, with the intention of extending the image promoter to also promote GCS artifacts.
B
I don't think you are far off. I think there's just some, like — yes, you'll need to actually, like, document what it is and go through the process. Like, Tim just kind of got the group reconciliation thing working; I still feel like we kind of lack a good onboarding doc to describe, here's the step-by-step process — which you've actually kind of enumerated there in that comment. So that might be a thing I can try pulling out. And also...
B
It's just, like, some cloud of — let's call it Bazel instances that are living out there, and we just run Bazel builds over here, which is cool and all, but it doesn't really gain us anything at all. If anything, I feel like it means we start spending community money on compute to run some Bazel jobs, and I feel like we're...
B
...not yet at the point where we have decided what community money we want to spend on which jobs or suites of tests. And it is unclear to me whether, just because they are written in Bazel and it's really easy to create this Bazel RBE thing, that means they should go first and get carte blanche to spend community money.
D
This is, like, the hierarchy of delegation, right? We're like: I trust, say, you and Eric to be gatekeepers of what should and shouldn't be run in RBE, and we'll set up the permissions such that you guys can make the further decisions as to what's in that project, and I don't want to be part of that — where "you" stands in for the royal you, right, which includes everybody who is authorized and able to do this work. I just want to make sure that, from an infrastructure point of view, I have a reasonable grip on it.
B
We're mostly in alignment. I just feel like, if we were to remove RBE as a word for a second and just talk about, like, we want to create a Kubernetes cluster to run prow jobs — think of it like that — I still feel like we haven't yet answered the question of how we are going to figure out which prow jobs the community is spending its money on, and I don't feel like it is, say, sig testing's mandate to say which prow jobs...
D
I would like to get to a place where we say it is sig testing's responsibility, and here are the guidelines that we give them; but within those guidelines, somebody in sig testing should be able to make that call, right? Somebody sends a prow job to go test GKE, they should get shot down; somebody sends a prow job to do some other testing of open-source Kubernetes, then rock on, right? I mean, I don't want to be involved in that loop.
B
That's a really good question, and that's why I feel like we're not gonna answer it today — because, for example, I don't think sig testing has a say in which jobs the release team deems most appropriate for release-blocking jobs, right? Sig testing provides the tools to help measure which jobs are healthy, and the infrastructure to say, like, which jobs are owned by whom. But it's been the release team's responsibility to say, we're gonna pay attention to these jobs and we're not gonna pay attention to these other jobs.
B
Similarly, like, we want to enable people to run jobs on their own Kubernetes clusters, right — we have the ability for prow to run different jobs on different Kubernetes clusters. This is more about: for the Kubernetes cluster that the community itself is paying money for, what jobs should be running on that cluster?
D
Sure. So, okay, I will make it more nuanced: I don't care if it's sig testing, sig release, or some confederation between them who makes the decision. I don't think it should be this group who is deciding what tests are worth paying for and what tests are not. I think we should probably give guidance, like: the tests should be valuable, they should provide real information for us, and they should not be egregiously expensive — and you guys figure out how that applies to the actual tests. Does that make sense?
D
So there's a case where, like, a five-thousand-node scalability test is egregiously expensive, and we need to decide — that may be the sort of thing that gets escalated to here, or even to steering, to decide: is this something that our community is willing to pay for, given that we claim to support five thousand nodes, right? And then, you know, extend further: are we willing to support a ten-thousand-node scalability test, even though we don't claim to support ten thousand nodes?
D
Point taken, sure. But also, to be clear: the donation that was made was based on the money that we're spending today, which includes things like the 5,000-node scalability test. Okay, so we're sort of set up for some of these things. But I call out the ten-thousand-node case — like, not that anybody has asked for it, but you can imagine that somebody came along and said, hey, let's do a ten-thousand-node test. We don't actually claim to support ten thousand nodes, and we're not funded to run a ten-thousand-node test.
D
Okay — presumably whatever compute we're spending on it today is sort of shadow-paid-for by Google, and we'd be moving it into the same budget pool that we already accounted for. It's possible that in, you know, the donation, we missed some of the accounting; we're not going to know that until we get to the totality of it and realize that, you know, three million a year wasn't enough — it needs to be five million, right.
B
So I will discuss this with Eric offline. I feel like we'll get to this real soon now — just as soon as we clarify all of our billing questions, clarify our questions on disaster recovery for artifact storage and promotion, and clarify how we're going to launch Kubernetes clusters. And then we can get to this when we start migrating over infrastructure as part of the next milestone.
B
A huge, huge thanks to Tim and everybody who pushed on actually, like, auditing our Google Groups and creating a script to magically create them. We did get that thing done in two weeks like we said we would, which was cool. I believe that is everything in this milestone. We have six minutes left.
F
One thing that is problematic, that came up with the k8s.io redirecting thing, is: what do we want to automate in terms of deployment, and what is the process in general? For example, gcsweb — if somebody did a manual deployment, that would circumvent any deployment automation and testing. So what's our stance on what should be done, and is that already codified somewhere, or is it sort of in someone's head? Where are we on that point, basically?
B
It is not codified; it's sort of in my head. So, where we are: Tim sort of laid out in that issue comment that if you would like to stand up a new GCP project, you need to give us, you know, people and a group and a purpose, and it has to be done with scripts. Are you trying to suggest that we need to enumerate similar things for each piece of cluster-based infrastructure that we migrate over?
F
So that was basically it — we have two things that are sort of in flight in terms of first projects that might be migrated to the new infrastructure, in the new cluster, once we have a script that automatically creates one. One is gcsweb, and the other one might be the shortener, the go.k8s.io service. Basically, I mean, gcsweb is simple — at least that's the consensus.
F
It seems that we don't need to automate or test the deployment, and then the question is: what is the testing that needs to be done in terms of pushing from staging to production? Do we need a staging environment for the different things, and how would that look compared to what we currently have? Yeah.
D
So some of the examples, like gcsweb today, are not great examples of how to do things — in the sense that if somebody builds a new gcsweb binary, they push it, and there's no canary, really, for gcsweb as far as I know. It's really: push it, see if it works; if it doesn't work, roll it back. Which isn't great. I'm wondering how much we should move the goalposts while we're also moving the infrastructure, or whether we should bring it over as is and say...
B
I personally would prefer not to raise the bar. I feel like it is unfair to expect that we should not hand over ownership of stuff to the community until it reaches such-and-such a bar — it's how it is supported today, and I feel like we can talk about raising the bar going forward. So the second example you gave, of, like, replacing the k8s.io redirector — that would be a good example of, like, we're kind of redoing that piece of infrastructure, you know, where we're replacing, like, a big ol' nginx config.
F
So basically the overall stance would be: what we have currently, move that as is, so we don't introduce more hurdles than are necessary; and for everything that is basically redone, or at least refactored partially, we need to figure out what an actual staging deployment looks like and how to automate that, so it's not Tim pushing to it or doing a manual deploy, basically.
D
In terms of timing, we have the other path, which is still open to us: the old clusters are still running, and probably will be for at least weeks, if not a couple of months, while we finish the transition. It's not implausible that we could switch over go.k8s.io in the existing clusters and then bring that forward. But if we're going to switch it to the new mechanism — well, the PR is open and we've had comments on that PR, right?
F
Okay, that makes sense. So if we do a face-to-face and we, like, discuss something, I would love to get something like at least a few points that we try as, like, goalposts — things like: what do we need as a minimum for new things to come into the new cluster, and also what is the general idea and goal for moving towards a more GitOps-type model?