From YouTube: Kubernetes SIG K8s Infra - 20220202
A
Good morning, good afternoon, good evening, depending on where you are. Thank you for joining us for the February 2nd SIG K8s Infrastructure meeting. My name is Eddie Zaneski and I'll be your host today. Just a quick reminder that this meeting, and all Kubernetes meetings, abide by the CNCF Code of Conduct, so please be excellent to each other. With that, do we have any new folks on the call who would like to introduce themselves?
C
Hello, my name is John Johnson. I've also never been on one of these calls, but somebody nerd-sniped me with some doc about mirroring things, and... this.
A
That's awesome, thanks for joining us. Cool — any other folks? I think I recognize everyone else's names.
A
All right. So I guess, should we start with the billing report? Let me do this.
A
What else did we have on the agenda? So, as I said before, Arnaud's going to be joining us about 10 minutes late, so he'll come and take over.
A
We have recurring topics. We looked at the billing report. I don't think we have any action items that aren't going to be covered in open discussion. Justin, you want to kick us off with your first topic here?
D
Absolutely, thank you. Yes, I'm mobile, so apologies if the connection isn't great. Wait, am I the first? I thought it was the second topic, but okay. I wanted to bring up — it's very related to the other topics — thinking about how we can feel more confident merging PRs that change infrastructure, and, you know, setting up some of the, I guess, theoretical best practices, which I haven't actually seen implemented in many places.
D
The idea is to use KRM — the Kubernetes Resource Model — to describe our resources rather than Terraform. But if someone also wanted to try it with Terraform, that would be awesome as well. The Google one is a proprietary tool, specific to Google, but there are equivalent ones for AWS as well.
D
My thought is that — you know, you can't just take the same... The cloud-agnostic tool is Kubernetes, right? And it doesn't actually support all infrastructure. Even with Terraform you have the same language, but we can't take a GCP configuration and just run it on AWS.
D
So from that perspective, I feel like no matter what, we're going to be using cloud-specific configurations, certainly once we step outside the application space where Kubernetes does a great job. I would rather use Kubernetes as the model, because I'm more familiar with it and it's more in tune with the work that I'm doing in my day job. But if by cross-cloud ones you mean Terraform, I think that would be very interesting; it's just not something I can devote time to.
E
Oh, thank you. Hi everyone. I saw your issue, Justin. I think my main issue is the maturity of all the tooling, of the whole ecosystem, because Config Connector basically doesn't support all the GCP resources we are talking about at the moment. But it's something we really need to invest in, because we've been talking about this for like a year now — we had a demo with Crossplane last year about this.
E
We need to think about it — I'm plus one on the idea, because we've had the issue about this open for two years now. We should basically have a sandbox project where we can try everything and test all of those toolings. But currently Terraform is the most advanced and mature tool for infrastructure management. That's it.
D
I think that's fair, and I certainly think, you know, the team that is responsible for Config Connector — and I'm sure the equivalent ones on AWS, the AWS operators for Kubernetes, I believe — are, you know, working on that. I think the difference is: I have an idea of how we could make this work using Kubernetes primitives, whereas I don't have an idea — or I don't have the experience — to know how to make it work using Terraform.
D
So what I'm proposing is that we go with what people feel comfortable with — what I feel comfortable with — and then we can look at maturity once we have something, right? Like, right now we don't really have this. So I feel like let's build something, and then if we can translate it to Terraform, great; if we like what we've got with the Kubernetes model, then that's great as well, I think. Okay.
D
Yeah, I mean, I just want to add: if someone wants to build the same thing with Terraform, I think that's great as well. I feel like we can almost do both, if there is someone that wants to do Terraform, and cross-pollinate from each other. I just have more of an insight into the Kubernetes side — some people at Google have been helping me to develop that insight as to how to use Kubernetes to achieve this. So, okay.
F
Having dealt with multi-cloud stuff for a long time — and that was at Chef long ago, trying to manipulate Fog into dealing with all the different cloud providers and trying to have this abstraction layer — it's a hard problem, one that ends up costing you more time on the complexity of the differences while trying to keep the same model.
F
So I and crew have done some pretty high-level things with Terraform with multiple cloud providers, because the cross-cloud CI — the cncf.ci project — initially was bringing up Kubernetes from source on all the cloud providers, before everybody had Kubernetes as a service. The highest level of that Terraform configuration was agnostic to the provider, and then there would be modules underneath that would be reused. But go see the code base — it is a piece of engineering on its own.
D
I mean, if we generally feel like this is something worth pursuing, then I guess, if I have the blessing, I can go and try to get the sandbox project, spin up that infrastructure, and start to send some PRs.
D
If there is someone that wants to work on this with me, that would be wonderful, but it's not required. And if there is someone that wants to work on Terraform alongside me, that would be great as well — but again, not required.
E
You have my plus one, so I will just drop the plus one on the issue. I think, if you are investigating this, I don't see a technical blocker. I think my main concern right now is cost, because we are under some kind of cost burden — but that's going to be low for this effort, so I'm not really worried about that. I will just put a plus one on the issue.
G
So I have questions, comments, concerns. I don't know about the maintainer bandwidth in general — like, if we choose something, who is going to maintain it long term? Justin, you may be hacking on it for the initial steps, but are we happy with choosing a solution where we don't know who the maintainers are going to be?
E
I think it's early to have an answer to that question, because this is like an experiment to try what's the next tooling we want to use for infrastructure management. So we can't make that decision right now, because we still need to understand what we're doing about infrastructure management. At this moment we made the decision to use Terraform long term — we still have to migrate from Bash to Terraform — but I also don't want to block us from investigating better tooling for resource management.
E
If we can manage that with Config Connector or Crossplane or Pulumi, we can then see what's our blocker to migrating from Terraform to that tooling. Because I think the reasoning behind what Justin's proposing is basically something similar to what's done inside kOps: they have integration tests that check something like the Terraform output or YAML files.
G
Yes — because I considered using it — it's the Go cloud development kit, I think. I'll find the link.
H
So, hi all. Sorry I'm late — I had a meeting run over. I think I've caught up on the rest of this. I just really want to point out that we are once again kind of predicting future problems before they happen. I would really like us to stand this up — which, I know, I've been working on it and worked with it a bit.
H
No one has to migrate anything to depending on it initially, and we can see how reliable it is before we determine whether it won't be reliable, and...
H
...I lean heavily toward the most simple, straightforward answer, because we've kind of gone in circles on the technology for this for a couple of years now.
E
Ben, we're not talking about the same subject — we're talking about infrastructure management. I think you're talking about the mirror stuff. So, Ben, I will pass the mic to James — yeah, we can come back to you.
I
Just a quick question, Justin: all of us will be able to run this tool, right, without any issues? And all of us will have access to it, and to the latest version that you will be trying?
D
Right, yes. It is free and it is available on any Kubernetes cluster. I think the presentation's a little bit tricky if you don't set it up on GKE, but it is free. And I think, more importantly though, the basis for this is sort of thinking about why it's hard to merge PRs and why, you know, why it's scary to write a PR or to approve a PR.
D
How can we build confidence, learning from what k/k did? So it would be very valid to just send up a draft PR, and if this succeeds — let the e2e tests run against infrastructure — then you feel confident. So you never have to install it and run it yourself, in the same way that you don't actually have to touch the cloud to test a Kubernetes PR. That would be my sort of goal here. — Okay, got it.
E
It's just an add-on in the GKE cluster. So I think what Justin needs to do is basically create a sandbox project, bootstrap a GKE cluster, and just enable the add-on — and the CRDs will be provided as part of the GKE cluster. That's the main advantage, and also a constraint, because you need a GKE cluster for that. — Okay, got it. Thank you.
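As a concrete illustration of the KRM approach under discussion — a minimal, hypothetical sketch (not from the meeting) of creating a GCP resource through Config Connector's CRDs using client-go's dynamic client; the bucket name and namespace are made up:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Talk to the GKE cluster that has the Config Connector add-on enabled.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Config Connector exposes GCP resources as CRDs; StorageBucket is one.
	gvr := schema.GroupVersionResource{
		Group:    "storage.cnrm.cloud.google.com",
		Version:  "v1beta1",
		Resource: "storagebuckets",
	}
	bucket := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "storage.cnrm.cloud.google.com/v1beta1",
		"kind":       "StorageBucket",
		"metadata":   map[string]interface{}{"name": "example-sandbox-bucket"}, // hypothetical
		"spec":       map[string]interface{}{"location": "US"},
	}}

	// Applying the KRM object is all it takes; the add-on reconciles it
	// into a real GCS bucket, much as a Deployment becomes Pods.
	_, err = client.Resource(gvr).Namespace("default").Create(context.TODO(), bucket, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("StorageBucket created via KRM")
}
```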
E
So I think most of the questions in the document are not directly related to what is proposed. Because, just to summarize: we want a new OCI proxy as a replacement for k8s.gcr.io, because we need to migrate that proxy inside community infrastructure. That's the main point. At the same time, we also want to explore the possibility of redirecting any pull requests from a Docker client to the Amazon mirror or GCR. That's the second approach.
E
One part of this proposal is to push the image layers to an S3 bucket and use that proxy to redirect based on the IP of the source — I would say something like that. Which means this implies some conversation with SIG Release, and specifically Release Engineering, because we are interested in making that action release-blocking at some point. So, basically, as part of the promotion process of the container images we want to release, we want to be able to push those image layers to an S3 bucket before finishing the release.
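A hedged sketch of the redirect idea just described — layer-blob requests from AWS-originating clients get bounced to an S3 mirror, everything else to the existing registry. The hostnames, the single hardcoded CIDR, and the routing rule are illustrative placeholders, not the real implementation:

```go
package main

import (
	"log"
	"net"
	"net/http"
	"strings"
)

// One hardcoded CIDR purely for illustration; a real implementation would
// load AWS's published ip-ranges.json and refresh it periodically.
var awsRanges = []*net.IPNet{mustCIDR("52.0.0.0/8")}

func mustCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

func fromAWS(remoteAddr string) bool {
	host, _, err := net.SplitHostPort(remoteAddr)
	if err != nil {
		return false
	}
	ip := net.ParseIP(host)
	for _, r := range awsRanges {
		if ip != nil && r.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Only layer blobs get mirrored to S3; manifests and everything
		// else fall through to the existing GCR-backed registry.
		if strings.Contains(r.URL.Path, "/blobs/") && fromAWS(r.RemoteAddr) {
			http.Redirect(w, r, "https://example-mirror.s3.amazonaws.com"+r.URL.Path, http.StatusFound)
			return
		}
		http.Redirect(w, r, "https://k8s.gcr.io"+r.URL.Path, http.StatusFound)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```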
H
That's not the phrase I would use — they should be part of the promotion process, and the technical reasons for that are outlined in the doc. But, very briefly, the reason is that it allows the proxy to be very dumb and reliable, and we already have reliable infrastructure for promoting.
H
Right. So one thing that we could do to, I think, address both concerns is to start by saying it's non-blocking: if it fails, we just note it, and we keep track of, you know — is this a reliable process? If it's not a reliable process, then we need to rethink, and we can't migrate things over to this. If it is a reliable process, we should be able to switch it to being: promotion fails if this doesn't work.
I
Yeah, my two cents: it's like the publishing bots, right? So we tell people, hey, the tags for the publishing staging repositories are not available — it's going to take, like... within a day it will be there, but it's not there right now, as soon as the tag is cut in the k/k repository.
G
Yeah, exactly — that's what I was kind of alluding to. If you think about the release process at any one point: the reason that we send out the announcement late is because we're waiting for the tag to land in kubernetes/kubernetes, we're waiting for the images to be promoted, but we're also waiting for the debs and RPMs.
H
The way we get Amazon to pay for this is we can say: this makes downloads much faster and better for your customers running these clusters. If we have to add some kind of wait — or if it's not really there and it pulls from GCR instead — we're not really providing that benefit anymore, and we're talking about a pretty substantial cost.
H
Here, we've had to halt any further migration of, say, Prow, because we are exceeding our spend, and the far bulk of our spend is these image downloads. And even if CNCF does wind up paying it through some other means, it should be much cheaper to host all of that traffic from within the cloud it's originating in — by far that just...
G
Just to be clear: I'm not disagreeing with any of this. What I'm saying is that part of my concern is introducing new things to an existing process. Like, the image promoter is maintained, right, and we'd have to do improvements to that. If you look at — just thinking about support for various architectures, right — we support multiple, in quotes, "we support multiple architectures," because that was what we were doing before, right? So...
H
If we use ECR, it will be the same code path, just two endpoints. If we use S3, it will be very slightly more complicated, to grab the layers and copy them over.
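As an illustration of why the ECR path is "the same code path": mirroring an image between two registry endpoints can be a one-liner with go-containerregistry's crane. Both references below are illustrative, and this is not the actual promoter code:

```go
package main

import (
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

func main() {
	// Same registry protocol on both sides, just two endpoints; the
	// destination repository here is a hypothetical mirror.
	src := "us.gcr.io/k8s-artifacts-prod/pause:3.6"
	dst := "public.ecr.aws/example-mirror/pause:3.6"
	if err := crane.Copy(src, dst); err != nil {
		log.Fatal(err)
	}
}
```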
H
I would say again, what we can do is: we will take responsibility for building this, we will make sure that failures are not blocking initially, and we can see in practice how well it behaves. If it is problematic — failing or causing slowdowns — we'll remove it and we'll take another approach to this.
G
So the upside of the current jobs that are doing promotion is that they're running all the time, basically — they're running every hour or so to catch up. So even if something has failed, the failures are non-blocking already, right? And, you know, kind of thinking about the Artifact Registry stuff as well: I've been poking around with swapping references here and there to see if we can actually support Artifact Registry in general, and right now I'm not seeing too many issues that are GCR-specific.
G
So I just want to be clear that I'm saying I agree with this, and I think we have a lot of the pieces that we already need to do it. I'm just concerned with the state of the artifacts that we have already agreed to publish, right — making...
H
...it non-blocking — I think we should start there. It's the same thing with the registry endpoint itself: we're trying to get this set up as soon as possible, so we can start making sure it's going to work and get infrastructure migrated. But we are certainly not going to pretend that it's ready to move, like, live image pulling from the default in kubeadm on day one — but we kind of need to get it started.
H
So we can see if it will be — and we'll probably look to some smaller projects to trial it for us (and Arnaud has already been talking to folks about that) — and...
E
Yeah, I don't think we want to basically jump directly into the image promotion and release process. Basically, this is an experiment: we want to basically see what's the best approach for this. Are we copying the image layers to an S3 bucket, or are we using an ECR public repository as a mirror for long-term usage? I don't think we know right now.
E
We want to basically see what happens if we, let's say, push those layers into S3, do the redirect, and basically see what the impact on the cost is. If it does reduce the cost, we can try to understand how we can make that non-blocking for the release process. And James, you have your hand raised?
E
Yes. So — I mean, let's make sure that we are all on the same page: we have to do whatever we have to do to cut down costs, otherwise we can't survive, right? Like, let's agree on that. And if we can agree on that, then I think a lot of these decisions will be easier. And we can always turn things off, right — we can always back out of stuff, as long as we make sure that we have a path to go down when we are doing stuff.
K
I mean, I would say: start off, make it non-blocking and non-public, right, just to see if it works, as step zero. Step one, I would make it actually a blocker before it's even public, because if it's a non-blocker there may be some lag or some delay or something going on that people aren't noticing. By making it actually a mandatory part of the release process, even though it's not for publicly visible artifacts, you might catch a problem before it happens — before you have an end user.
H
But I do agree with — first of all, it's very important that we don't disrupt the release process. So we can start from a point where the promoter is handling this and publishing it, but if it encounters any errors, it just moves on and logs them. And, you know, Arnaud and I from k8s-infra can start checking those logs and looking for: are we encountering problems here? Is this going to work? — before we talk about making it a hard blocker.
E
So, just to clarify: we are not jumping straight to using that in the release process. No — we want to build up in small steps based on that, until we reach the point where we are sure we can do the migration from k8s.gcr.io to that new proxy.
H
Also, the first thing we're doing is a proxy that doesn't even have any additional mirrors, just so we can start working on the problem of how to get people ready to move to another endpoint. And I think we'll probably continue to develop the proxy to do the mirroring complications in the sandbox mode only, and we will eventually set up a public endpoint where we can say: go ahead and start switching to pulling through this.
K
Yeah — we're in violent agreement, especially about just getting the proxy out there first, so that we can start to propagate the FQDN change. Yeah.
G
Yeah, and in terms of testing, non-blocking: any promoter job that's going out and failing, we're getting notifications on it anyway, and they're going to retry eventually. We have testing — like, fake prod projects — already for the image promoter. None of this is a problem. Great.
H
Also, one piece that I think we've only touched on a little bit, but we're definitely going to have to discuss more down the line: at some point Tim and I — Tim Hockin and I — will probably open discussions more with the GCR team about, okay, this is looking serious; can we accelerate migrating traffic by updating k8s.gcr.io to point to this thing instead? Are they willing to do that?
H
I'm not sure if that will even be acceptable, but that's something that we'll also want to take with a lot of care, because until we get to that point, clients will always have the existing k8s.gcr.io, which has zero changes — we're not making any change to how that works. At some point we might want to flip that domain over to this and have something else back the multi-regional storage, so the clients that are still running old releases start pulling through this more efficient process.
G
So can you say more about what you mean by you and Tim having a chat with GCR? Because, like, that is all on community infrastructure at this point — the GCP side.
H
Oh, okay, let me explain. So, k8s.gcr.io is not actually a community thing. What backs it is a community thing, but there's some internal Google piece where there's a special subdomain, and that special subdomain knows that it should map to these backing registries. And it actually also does some lifting for us to keep costs down today and give better pulls, because there's a GCR in the US...
H
...there's one in Asia, and there's one in the EU, and that's where the actual storage is. k8s.gcr.io takes an incoming client and bounces them to one of those already. So the details of how that happens are actually not public infrastructure, but it is important that this is a transparent move, which is why I'm bringing it up.
H
It's really hard to get people to update and move over, and there will be a lot of people still pulling through that domain for some time, and we may not be able to cut costs quickly enough. But that's for after, and then we can start that. That said, Tim is suggesting to me that we might want to go ahead and start discussing the possibility with them, just so...
H
We'll have to find out if they're even willing to point it at some external thing. But if they could, that's potentially something we should strongly be considering once we consider this GA, because I think it's going to be really difficult to actually get all the clients out there to move to a new endpoint in a reasonable time frame.
C
Speaking as a double agent: I can talk to people. I mean, I think we are excited to not be involved as much as possible with anything, and so that may be amenable to us. But I can't speak authoritatively.
F
Thank you so much, and please advocate for that. We have a lot of binaries out there in the wild, and cloud providers that are deploying these old versions, and they possibly won't update to the patch releases. So the amount of savings we're going to hit, even if we perfectly execute all of this tomorrow, may be low — unless we can actually shift all of the existing stuff out there.
F
...by doing this some fancy 302 that is still wonderfully provided by the GCR team — and what they obligate themselves to is just the 302 at that level — so they can feel that the services provided by Google are awesome; they just might not be a Google Container Registry-owned thing.
H
Yes, yeah. So, to be very clear here: there will have to be some internal discussions, because it is a Google piece. But my intention is just to make the community aware that we would like to look into this, because I think it will help us with our funding. And, you know, if we get to the point where the folks that actually own this have agreed to it, I will come back to all of you with this and we will discuss it.
H
No one's going to just flip a switch, but I want to go ahead and throw that out there, because I think that's already something that we need to be thinking about — not something we need to be doing right now. We should not be planning to switch right now, but we should be thinking about it. We're probably going to need to, even if we went back to every tool in the project, cut a new patch release on every version we support right now, and said it uses this new registry.
H
Yeah, so there's also that piece. But I'm saying even if we agreed to that, and we went and did all of it, and that happened tomorrow, I think there's still going to be a really long tail before people pick it up. So we should really consider whether we can just transparently migrate clients, and then in the future...
E
...we may want to do more in the future — because we can have that: basically put the proxy in just for Amazon; later we can basically say, oh, let's use Google CDN, Cloudflare, or anything we want. So, basically, that change is completely transparent for anyone trying to consume Kubernetes artifacts. Yeah.
H
If we can get the migration to the domain as step one, and it's just a plain redirect, then we have the ability to move clients. And then we're only hoping to do the least offensive thing possible, to ship the AWS traffic to something we can run cheaper — and potentially funded by Amazon — because that is just such a large amount of our traffic that is really expensive right now. And any potential future optimizations — we can circle back to that.
A
Arnaud — hey — when you all pulled that data of the IPs and all that, can you pull that data with user agents, and can we tell what tools and versions are most popular? Maybe that can give us some insight.
E
It's not possible to do that, because GCR doesn't log that information. That's a problem, because the GCR audit log just tells you which layer was downloaded, but not the user agent. And it's not even GCR — it's basically the GCS audit logs that provide that. So it's going to be extremely difficult to try to pull that information.
E
Okay. Do we have any... I have two questions, but before that, do we have something...
J
Something else? I guess the question is: what's next? What's the next best step?
E
Okay, that's what my question is. So, AP and Caleb: where are we on account management for Amazon? Is it possible to get that by the end of the week? Because that's the one blocker — the sandbox is already up and running. We are just now waiting for Amazon to fund the account, and to have access to the accounts, so we can try to synchronize the GCS layers over to Amazon. And my other question is for you and AP: where are we on the KEP about the mirror stuff? Because I think we have an ongoing KEP about this.
H
Go for it — I'll interject really quickly. I don't actually think that's the blocker to moving forward. That's a blocker to moving forward on finishing the mirror piece, but we can already start to move forward on getting confidence around just the plain redirect, and potentially standing up a non-sandbox endpoint that is only a plain redirect, and start shifting traffic now — in parallel to sorting out the Amazon piece and making the more complicated version of the redirector that looks at the client IP and selects the backend.
H
Well, if we start getting confidence in pulling clients through any redirect at all, and potentially moving projects — kOps or something — then when we do introduce the AWS piece, it will work. I think the slowest part of this, by far, is going to be getting clients to use the new endpoint. So my intention was that we stand up something like registry.k8s.io, and all it does is redirect to GCR.
H
We have very minimal concerns about reliability; we'll get that tested and start shifting clients, so that when we're prepared to switch it over to something that also serves from AWS, clients are already in place, and we've tested that work at the sandbox endpoint. If we don't do that, I think even once we stand up the AWS piece, we still have that whole long tail of getting clients to use the new domain, and we haven't made any progress on that.
K
I just want to double down on agreement with what Ben said — as a guy who just turned off his last 1.17 cluster. It's going to take forever. Hey, I mean, we all laugh, but we know how it happens, right? There's, like — I mean, some scientist went to something, you know, and I can't do it. It's going to take forever to get a name change out there.
H
But that one — I don't want to depend on that definitely happening. That's kind of in the same category as convincing SIG Release that I want to go to every Kubernetes version they support and switch over the FQDN — I can understand why people are going to disagree with this.
H
I don't want to definitely depend on those things happening. Whereas if we can stand up just a plain redirect now — which we have — put it on a domain that we really want to use for production later, know that it is, like, a very tiny little Go program that just serves a redirect and is running on some reliable infrastructure, then we can start migrating clients.
H
Then, at any point, we can flip it over to something that has the mirroring logic, and we will have shifted at least some of the traffic, and we will cut at least some of the cost. If we wait to do that, then there's the potential that we don't get k8s.gcr.io moved, and we don't get all these releases moved over — and with that 1.17 shift...
H
...if we ship this in, like, 1.25, it's going to be years before we move the traffic. Yeah — we probably should have set up just the redirect, like, a really long time ago. So we should stand it up now, I think, and just gain some confidence. This is a dumb redirect; there's almost nothing to go wrong here. Let's get clients pulling through it.
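The "dumb redirect" really can be tiny. A hedged sketch, with an illustrative upstream host — not the actual registry.k8s.io source:

```go
// A complete "dumb redirect": every request is answered with a 302 to the
// same path on the existing backing registry.
package main

import (
	"log"
	"net/http"
)

func main() {
	const upstream = "https://k8s.gcr.io" // current backing registry
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		http.Redirect(w, r, upstream+r.URL.Path, http.StatusFound)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```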
H
...with that in place, and the other piece in place — I would hope... I can't give actual numbers, but I hope it should be pretty cheap to run something that just responds to every request with a 302 or whatever.
H
There are no obvious flaws. We should probably set up another endpoint — that's the registry.k8s.io question — and we should freeze that in time with, like, no additional complexity: it's just a redirect. Let's start shifting clients to it where we can, and then we can follow up with: okay, we got that in place; people can independently work on engaging folks to move clients, while we also work on the AWS infra and the more advanced redirect.
K
I'll help as much as I can here — but yeah, that was the action plan.
E
That part is already done. — Oh well, glad.
E
But not in all the regions, because basically it's based on Cloud Run, and Cloud Run is like a serverless platform. So I think the most complex thing is to understand the mechanics of Cloud Run — because they are very clever; it's very opinionated about concurrency and resource allocation. So I think, first, I want to agree with Ben about this:
E
We should move all the test infrastructure to that new endpoint and see what happens, and later we basically try to make that communication out to the entire community and see what happens. And I think I would like to have input from John, because I'm trying to basically get the production readiness requirements for this.
H
Not in this call, but we can talk about it later. I think, basically, with what Arnaud has already set up from the code that I've already put in the oci-proxy repo, we can start getting test workloads on that — like, that's available today. As soon as we have some confidence, just within the few of us here, that it looks reasonable...
H
...we can start discussions about, okay, let's set up another domain for it — the real one that we actually want to move people to — and start shifting to it. But because it was just set up, like, this morning, I don't know — we probably want to do some more sanity checking, run a little bit of test workloads through it and stuff, and make sure that there are no surprises.
E
So I plan to move some stuff by the end of the week onto this endpoint and see what happens, because there are some tweaks we still need to do with the current setup I made.
H
Also, there's some background definition here. I know Justin is particularly looking at, like, what if we do this with some GKE clusters and things like that. Arnaud and I opted to go with Cloud Run for now, just because staffing is hard and we're looking for something that's hopefully reliable and doesn't require work from us to get the endpoint in place.
H
But, you know, longer term, I can certainly see arguments where, like, maybe the community wants to use more open-source-oriented solutions even further, or that sort of thing. But I think we should press forward as quickly as we can with just getting the very simple version in place for that move, and I think we're good to go on that. We just need to run a few little test workloads against the sandbox endpoint before we set up a real endpoint.
H
I think this is a good starting point for us. In theory, we just give them the container image, and it's a stateless HTTP service; we shouldn't need a lot more than this. We don't need microservices talking to each other, or any of the wonderful things that Kubernetes brings. We just need one container, globally, that serves HTTP and does a redirect.
E
The point of choosing Cloud Run is that we can basically quickly eliminate Cloud Run if we face some hiccups and move on, because GKE can be the default option for this. But we want to basically try Cloud Run and see what happens with it. So I think the goal here is: set up — bootstrap — that sandbox environment, shift the test infrastructure traffic to it, see what happens with Cloud Run across the entire GCP infrastructure, and make the call on whether we can basically use Cloud Run for the long term.
H
Eddie also has a good point here that was in the back of my mind, about knowing what the traffic is actually like. I think the one complication we should consider adding to this is just serving some metrics and collecting those — and that may be something we can get folks to help on, standing up something for that.
H
We probably want to go back — before we actually set a hard cutoff on, like, "we're not making this tool more complicated; it's just serving the job of moving the traffic" — and probably add, like, some Prometheus metrics or something, so we can start getting some idea of what the actual scale of the traffic is.
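A sketch of what that instrumentation might look like, assuming prometheus/client_golang; the metric name is made up, not from the oci-proxy repo:

```go
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Counts redirects served, labeled by backend; the metric name is invented.
var redirectsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
	Name: "ociproxy_redirects_total",
	Help: "Redirect responses served, by backend.",
}, []string{"backend"})

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		redirectsTotal.WithLabelValues("gcr").Inc()
		http.Redirect(w, r, "https://k8s.gcr.io"+r.URL.Path, http.StatusFound)
	})
	// Prometheus scrapes this endpoint to track request volume over time.
	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```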
A
I was just looking at the pricing for Cloud Run, and it's like — we can't figure out how much the CPU and memory are going to cost, but the flat requests we can start to ballpark. It's like 2 million requests are included, and then 40 cents per million requests after that, right? So that can give us a nice ballpark there — whatever we can get in terms of that. Just say, like: we cut everything over, and magically everyone uses it — it will cost this much.
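A back-of-the-envelope version of that ballpark, using the request prices quoted above (the monthly volume is a made-up placeholder, and CPU/memory-time charges are ignored):

```go
package main

import "fmt"

func main() {
	// Pricing quoted above: first 2M requests/month included, then
	// $0.40 per additional million. The volume below is hypothetical.
	const includedRequests = 2_000_000.0
	const pricePerMillion = 0.40

	monthlyRequests := 500_000_000.0 // placeholder traffic estimate
	billable := monthlyRequests - includedRequests
	if billable < 0 {
		billable = 0
	}
	cost := billable / 1_000_000 * pricePerMillion
	// 498M billable requests * $0.40/M ≈ $199.20/month, before compute time.
	fmt.Printf("request cost: ~$%.2f/month\n", cost)
}
```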
H
Yeah. I still suspect, unless the pricing structure turns out to be insane, it's going to be really challenging for it to cost more if we can shift the traffic, because most of the traffic should be downloading the layer blobs, and we're going to be handing that off to whatever the actual backend is anyhow.
H
Well, I want to leave some room for, like, if folks want to come work on some other way of hosting it, and we're going to be able to staff that — like, I'm not stuck on this; I just think it's a good way for us to start from. We should get that domain up, we should move clients, and we should leave that alone, so that we don't have to worry about, like, reliability of the buckets or whatever it is that we're concerned about.
H
We should start with moving it — I think we can do that now. And at the same time, though, I think Arnaud and I pretty much have getting that endpoint stood up covered, but there's going to be a lot of work going around vetting it and getting things moved over to it. That can happen totally in parallel to, like, developing the mirroring piece as well.
H
So, potentially, if folks want to help: engaging with projects to start trialing the redirector might be something folks can do in the very immediate future.
H
It's also a stateless Go app with, like, no per-request allocation — other than whatever Go needs internally — so we should be pretty fine behind it. Well, there's...
H
Fair. I think we had some numbers on that, in terms of what the bandwidth looked like to GCS.
E
I
think
it's
one
one
trigger
gigabyte
per
day.
I
need
to
check
something
I
think.
H
We'd need to move something like a terabit to AWS for the GCS...
E
...piece. So I think that's monthly. I have a dashboard for that; I just need to redesign the dashboard for it.