From YouTube: Kubernetes SIG K8s Infra - 20220216
Can't hear you, Ben.

Arno asked if I could. Yeah, I should be able to, hopefully. Looks like the web client's struggling a little bit, but it's fine. Let me make you host.
E
Okay, welcome everyone to today's SIG K8s Infra meeting. This meeting is under the CNCF code of conduct, which is essentially "be excellent to each other". This meeting is being recorded and will be uploaded to YouTube. Welcome, everyone. I believe the first thing we have up is the billing report.
E
Did that not work? I see your screen, but it's still on the first page.
A
Yeah, so previously DNS has had hardly any cost at all, and maybe that's the CDN I'm seeing. Cloud CDN went to 50.
G
No, it's not Cloud DNS, it's Cloud CDN that I activated. Yeah, I'm doing some experiments with Cloud CDN, so it should be fine for these milestones. This is the cost of the new services until the end of March. I'm trying to investigate how we can optimize some costs related to different work that we have right now, especially those used by kOps. So it's like a little experiment: it's not costing anything in terms of budget, so we should be fine.
G
I also need to investigate exactly. I have to wait until the end of February to get the full report on that, because Google and GCP don't generate the different line items for services until the end of the month. So I have to wait until the end of February to do the investigation.
A
Compute Engine dropped quite a bit, daily by about 1500, about the same as what we increased, so we traded it.
G
Yeah, it's a little misleading, because we have a new cost related to Google CDN, and at the same time we have reduced the cost related to the load balancing service serving the artifacts. So I'm trying to understand if we can save more of that through different configuration inside Google CDN, or whether we can basically say: okay, we stop using the load balancer and we only focus on using Google CDN, or another option.
G
So like I said, it's an experiment that basically doesn't cost us anything, and we'll see what's happening.
E
Okay, do you want to go with your first topic there? Sure.
A
So a lot of our focus is trying to get our artifacts up and running. We've been trying to figure out how to work together well to get that across. We were able to push this forward with some Terraform, but I'm trying to figure out how we go forward from here: whether we put some automation in place, or whether we do like we did earlier with DNS, where you assign a few people and they're the ones who click the buttons.
A
So we're being coordinated a bit. I've been out for about a week, so I'm just now catching up to working with Ben to get some redirection working, based on some of the code we've had in various places before, and now in the infra.
G
Caleb is creating that right now. Once we have confirmation about that, we can basically understand the infrastructure requirements and move forward to do some kind of production plan for how we want to do it. I think the only comment is basically with respect to running the Terraform: if Caleb is okay with doing that, we give access to Adolfo and basically the release engineering team. They work on pushing the image layers, and we can talk about the next step, because they're also concerned about, I don't know how cosign is working, so I also have concerns about that.
D
Okay, I can add a little bit about what we're planning to do just now. Currently we are rewriting parts of the image promoter as part of the signing effort, and the first rewrite that we did really puts us in a much better place to add the step to start copying artifacts to the bucket, or whatever we decide them to be.
D
So my first idea was first ensuring that we have a bucket where, whenever the end-to-end tests of the promoter run, they do a full fake image promotion within the tests. So the idea would be: once that image promotion inside of the tests takes place, we also ensure that the artifacts get copied to the test bucket in AWS.
D
So once we have that running, we can simply add another destination. I'm planning to have the configuration so that, as currently in the image promoter manifests, you specify a source bucket or source registry where the images are copied from, and a destination registry where they are promoted to.
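For reference, a promoter manifest today has roughly the shape below. The extra destination stanza is hypothetical: the field name `blob-destinations` and the bucket name are invented here to illustrate the idea being discussed, not anything the promoter implements.

```yaml
# Simplified shape of an image promoter manifest, plus a
# HYPOTHETICAL stanza for S3-compatible blob destinations;
# "blob-destinations" is an invented, illustrative field name.
registries:
  - name: gcr.io/k8s-staging-example   # source registry
    src: true
  - name: k8s.gcr.io/example           # promotion destination
blob-destinations:                     # hypothetical addition
  - s3://example-artifacts-mirror
images:
  - name: example-image
    dmap:
      "sha256:0000000000000000000000000000000000000000000000000000000000000000": ["v1.0.0"]
```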
D
My idea was to add maybe one or more destinations of S3-compatible buckets, where once we do the promotion, if we find those in the manifest, we'll also copy the blobs to those S3 buckets. So the idea would be first to just run that new code inside of the tests, and if it works and proves not to add too much overhead or problems to the promotion, which I don't expect it to.
D
We can start adding the configuration to the actual promotion, maybe in one of the 1.24 betas that are coming soon, and test it there. Unless anyone has another idea, which is, of course, welcome.
D
I mean, happy to, since I'm already in the guts of the image promoter; happy to add new features once I'm there.
A
Is this PR that we're putting forward here for this Terraform run providing enough infrastructure, and an understanding of what our redirector is going to point to? Because when we talked earlier (and I've been out for a week), we weren't going to implement push.
G
And you don't need that account. You just basically need to create an individual IAM user for every release admin. I mean, when I put up the list of the emails, you can basically just ping them on Slack and say: I need an email address for each one, and I can give you access to that account.
A
This is something where the next project is already lined up, Knative, for us to start doing that. And in these initial high-level steps of creating (pardon "root levels", the term we've been using, if that's not accurate) an OU and putting an account in there, we're trying to figure out the balance between how the CNCF can support that and help to...
A
...export some of the expertise and workflows and things we've done within the k8s-infra working group into Knative, and beyond into the other groups. And so in this case, when we've created this root account, we don't have access to that.
A
Yet that may be what we're needing to understand: that's just for the people on that mailing list, right? So Caleb, needing to run the Terraform, doesn't have access to run that Terraform for the stuff within that account, and that may be a lack of understanding on our part.
G
Okay, so you are an organization admin, right? Yep. So basically, the best approach to be able to really organize what you are trying to do is to have one OU dedicated to communities and have different sub-accounts, so we can basically break down the costs based on what we are doing. Right now we created an account, and we use that for some experiments under one organizational unit, and you can create a new account in the same organizational unit and basically say: okay, we gave that one to SIG Release.
G
Because if you basically want to put that inside the same account, then trying to understand later what each of those experiments cost us is going to be difficult, because we don't tag resources. So in the AWS Cost Explorer it's going to be difficult to really break down every cost you need to justify to Amazon. So the best approach right now, and that's my recommendation, is basically to create one organizational unit for Kubernetes and multiple sub-accounts, and you can do whatever you want with those accounts.
A
One per SIG, and it gets a bit complicated, and I'd like to model, I think we could model for these other groups as we replicate out: how do we have some transparency and a bit of centralization on a per-CNCF-project basis? Where do we find out how they're using, in this case, Amazon? How are we effectively using the credits, so that we can use that same process and lift it up and go...
A
...tackle that. But we're gonna have more and more groups doing very similar things, and I'd love to be able to point to the kubernetes/k8s.io repo and say: look at the infra folder, here's the way that community is able to come together and make these solutions, and maybe even write about how we're doing that.
G
Okay, I understand now. So for this: Caleb, if you can just create a new Amazon account, give access to Adolfo, and run that Terraform script, that would be great. I can basically approve anything you need for that today. And I think the subject is really about infrastructure management, because if you want to replicate what Kubernetes is doing, it's going to be complicated: we started with some organization with Bash, and we're trying to migrate to Terraform.
A
As K8s Infra, how do we organize and do our infrastructure from here going forward, trying to set an example and something that can be collaborated on, so that maybe there are innovations in the way Knative is doing it that we can bring back in? They'll have something similar to the k8s.io repository, where we say: we're going for Terraform, let's go for Terraform. We do have the stuff that we're migrating from, but it's more a question of...
A
Where is the approval process, and the culture of where the keys lie, and where's the authorization and automation for publicly and transparently running the infrastructure as a community? That's the conversation I'm trying to have later, and I don't mind creating one. But if we go off and find ourselves in some irrevocable place where we can't, because right now we're still talking about creating temporary things to try stuff, but usually when we try something, it'll stick forever.
G
I think it's difficult to answer that question, because not everyone has the same vision about infrastructure. I mean, if Knative wants to, for example, use Crossplane for infrastructure management, they can't just basically use what we're doing right now.
G
Because Crossplane has a different way to manage cloud resources. That's one example. If you want to copy something from us, it's about policy; it's about three things: be transparent, do infrastructure as code, and have an approval process. Those three things are basically the foundation of the Kubernetes community.
G
How you implement that is up to each different community group, I would say. You should not say: oh, we do Bash and Terraform, you should also do that. We should basically say: okay, we are transparent about how we manage infrastructure, we use tooling to declare infrastructure, and we have an approval process. That's what you should say to the different CNCF groups. Now, how we manage that can change over time.
G
I think last time Justin talked about basically moving from Terraform and Bash to using a custom Kubernetes controller inside the GKE cluster to manage everything, and to have end-to-end tests against the created cloud resources, which means the approach we have now may change over time, and things will not stick forever.
G
So I think, if you want to copy what we did over time, it's about basically having principles and policy about how you want to operate infrastructure. That's the main thing you should care about. Now, the technical implementation can be different, because not every community has the same needs; they don't have the same needs as K8s Infra. As an example, we used to build custom tooling to manage Google Groups, but another community...
G
Another community can basically say: oh, let's use the Terraform provider for Google Groups and manage that, or let's use the Terraform provider for GitHub to manage users and groups. It's basically one approach, a technical implementation, but the principle is the same: have infrastructure as code to manage user groups or cloud resources.
A
Yeah, were you talking about Google Groups management, or are we talking about the solution we're coming up with for a registry that's distributed? These are issues that are going to be similar for other groups, and so if you're trying to...
E
I think maybe we can take any additional discussion of this document offline, thanks. I feel like we're kind of just going back and forth here. Maybe we want to take a look at the code search PR.
G
Okay, sounds good. And Caleb, as an org admin you can do everything you want about billing and basically permissions.
I
Code search: mostly just looking for approval on that one. Arnaud, I made the updates that you requested; is there anything else you want to go over on that?
G
Yeah, I just opened an issue on the oci-proxy repo. We have the OCI registry proxy deployed inside Cloud Run over two GCP regions. Right now we have traffic coming from Amazon to that, and we also have traffic coming from GCP to it. So now I'm trying to find a way to stress it, to be able to understand how we can get some kind of confidence in what we're building with this proxy, and see how we can basically go to production with it.
E
Yeah, I don't think I have anything super useful to share here, other than: we should probably run some synthetic tests in addition, but just getting some of our actual test workloads up should give us a pretty good idea. I think the more of what I'm looking for there is just...
E
We need to talk to each project that we want to start switching over to pointing their users through the production one, when we stand that up, and find out at what point they're gonna feel confident that this is reliable enough, just the redirect, to start shifting traffic to the new endpoint so that we can later roll out mirroring. I think a lot of projects are going to be fine with just our test workloads; I could see where someone else might expect some heavier stress testing.
E
I think it should be pretty fine, though: it's a very small, very easy to understand binary, and I think Cloud Run scaling is fairly straightforward. But I think we should have that in our back pocket, in case folks aren't confident enough in just running our test workloads against it. We have some pretty large test workloads.
G
Okay, to what Ben has been saying: I had a kind of repro job running, basically 5k nodes, against this sandbox registry.
E
Honestly, I think we're basically ready for that part. The more interesting part is going to be when we're confident to switch over to not just the plain redirect, but that is also a little bit less concerning, because if we see any issues, we can always just redeploy the pure redirect.
E
That's an easier rollback than the first initial putting up of the domain, where we have to make sure we have something behind it that works. But once we know that just the plain redirect is fine, we can always revert to the plain redirect if we encounter any issues when we start trying to roll out actual mirroring.
C
So one thing I had here is, I remember Tim was talking about small, reversible steps. Is this something that we can go back from?
E
So, if we have switched a project to pulling through this, or advertising this for their images, while they're still publishing through staging, then we can always revert them back to just plain k8s.gcr.io, which has the exact same images, same infrastructure; it's just the redirect in front.
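The "just the redirect in front" behavior can be sketched in a few lines. This is an illustrative stand-in, not the actual oci-proxy code, and `BACKING_REGISTRY` is a placeholder value:

```python
# Illustrative sketch of a "pure redirect" frontend: every OCI
# distribution API request is answered with a redirect to the
# backing registry. Not the real oci-proxy implementation;
# BACKING_REGISTRY is a placeholder.
from typing import Optional

BACKING_REGISTRY = "https://k8s.gcr.io"

def redirect_location(path: str) -> Optional[str]:
    """Return the Location header value for a request path, or None
    for paths outside the /v2/ registry API."""
    if not path.startswith("/v2/"):
        return None
    return BACKING_REGISTRY + path
```

Rolling back to this behavior is then just redeploying an image that serves these redirects, which is why keeping a published tagged image of the plain redirect matters.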
E
I think the only concern there is, if we get to the point where we start releasing tagged binaries that users are using, that default to this registry, that's harder to roll back. That's why I think it's important that we make sure we start with the pure redirect: it's the simplest thing we can serve, so that whenever we do any future steps, even if we can't get clients to switch off of that endpoint, we can switch that endpoint back to just the plain redirect.
E
So we've actually left the code as just the plain redirect so far. I have some work pending to do more, but we'll make sure that we have a published tagged image that we can just switch back to if we run into any issues, once we feel confident to switch on the mirroring. The only thing we're looking for now is just:
E
Do we expect to hit some kind of weird scaling issue with Cloud Run, or is there some crazy bug in Go or the code, or something like that? I'm not expecting anything like that, but given the amount of traffic we'd see if we actually switched everyone over right now, we'd still want a little bit more confidence, even if it is, in theory, so simple.
C
Okay, so can we just write down exactly what we'll be trying, so we can all give a thumbs up? From what I heard so far, it is: we'll have the pure redirect image created, and we will try to switch one specific CI job to go against images using this proxy, and then monitor that.
G
So we switch kOps, basically the kOps job, for that environment.
C
The scalability job, okay. So what can we try next? Would it be taking something in the presubmits? Yeah, because that can give you the largest volume.
C
We don't need to change the code in the k/k repository to do that. I think we can just change it: where we pull images from is configurable. I'll show you where that is. Anything in the e2e tests, especially conformance, all of them, you can point to another repository.
C
So we might need to change some Python or Bash in the middle to squeeze it in, but we don't have to change the code in e2e; sorry, in k/k, is what I'm saying.
C
Okay, we can do that together; we'll pull in Ben if you need it.
E
Yeah, and I think honestly we're probably more or less at the point where we can start setting up the infra for the "prod" domain, and we should consider running the tests through that version, leaving the registry sandbox endpoint as a place where we can start iterating with lesser workloads.
E
I really don't expect us to have any issues with this; we should just make sure that we take steps. We should also probably open an issue in k8s.io or somewhere, so that we have this tracked and written down somewhere. I don't think we have one currently.
G
Yeah, the synchronization from GCS to S3 is not really complex; it's just a one-line command. It just takes time, because we have over 19,000 objects we need to synchronize, and the first synchronization takes like two or three days to do. Then after that, if we succeed in basically implementing the synchronization in the image promoter, it's gonna be fine.
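The one-line sync mentioned here would look something like this; the bucket names are placeholders, and gsutil needs AWS credentials configured (for example in `~/.boto`) before it can write to `s3://` URLs:

```shell
# Bulk-sync a GCS bucket to an S3 mirror, in parallel (-m) and
# recursively (-r). Placeholder bucket names.
gsutil -m rsync -r gs://example-artifacts s3://example-artifacts-mirror
```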
C
Okay, so now I'm getting a little bit hassled, because there are so many possibilities and so many options there. So can we please write down in a Google doc: this is the current step, here is the next step, and the one after, and so on. Then we can put some dates on it, get some sign-offs, and we can try doing things like switching things on a Friday, coming back on Monday, and seeing how it has gone. Let's do a proper plan.
G
Okay, yeah.

C
Even if it is checkboxes in an issue in GitHub, that's fine too, but we need something written down, please.
E
I think a living GitHub issue probably actually makes the most sense for this, where we can just track what happened. I don't think we need too much more debate about what to do; we just need to make sure we document it and keep track of what happens.
C
While we are doing something, we leave notes for each other there, saying okay, I did this part, that kind of thing, so we don't have to come to this meeting to figure out what happened last week or the week before.
C
Right, so let's do things in bite-size pieces, put something on it, put some people on it, do checkboxes, and organize this thing.
A
One CI tool that's used in a lot of places is our e2e test suite, for conformance and everywhere else. Updating that may be what was talked about earlier, but I just wanted to make sure I'm repeating it really clearly: an easy win would be to update the registry that that tool uses, and if it were to break, we'd quickly see it, and there's visibility because of the number of places it's run around the world.
E
I think we're talking about doing that first with our own workloads, without an actual code change, but there will certainly have to be some more discussions once we get to the point where we're talking about rolling it out to end users, about how exactly we go about that. I think we're gonna reach a point where we're confident in it and we just start rolling it out to end users, because it's just the redirect and we've soaked it a bit in our workloads.
E
I think the more interesting thing will be rolling out the mirror, but at that point we will have push-button rollback, so yeah.
E
We can run some pretty large workloads ourselves against this, just to make sure that it does scale and whatnot, and then we just need to look at what kind of timeline we're ready to start making this the default on. Once we've done that, we'll want to switch test workloads to look at the sandbox version where we're starting to add mirroring, and run through the same process again, but that time we will be able to...
C
Yeah, the other thing here also, Ben, is there was a long-standing bug where containerd was not able to pull from a specific repository, I forget the name, from JFrog. So there are cases where tools, and the CRI runtimes, won't be able to deal with the redirects that we have set up.
E
Like I said, I've looked in: major registries depend on doing unauthenticated redirects, so really my only concern is if we have any additional auth in the flow. If you look at all the problems in the past, they're all really auth, and we're trying to just do public read.
C
Our conformance test does have something where we check sending a bad auth or something like that, so we might hit it; the sooner we hit it the better, I think, because then we can figure out how to get it fixed in the longer term.
E
Right, but I think we should be pretty safe there, because like I said, we're only serving redirects, and all the actual API implementation is in existing registries, and the existing registries also depend on this redirect behavior. It would be an incredibly poorly behaved client if it didn't handle redirects; it wouldn't be able to pull from GCR today. Exactly, yeah.
A
GCR, and we didn't find a clean way; that was the really short answer. There's a conversation topic back in the channel; I'll make sure that's part of the conversation.
G
I don't remember; we had a conversation about that. Basically, the image promoter should look over every image, fetch the layers, and try to do the copy, using maybe the go-containerregistry package or the crane package to do that. But from...
E
I think my only argument for considering using the registry API would be so that we've already implemented this, in case we switch in the future; I'm not sure if Artifact Registry, for example, is as transparent about the GCS backing or not. No, if we move to Artifact Registry, we lose that access.
E
So it might make sense to go ahead and use the registry endpoint, but they're both going to be just: grab the manifest, get the layer list from it, and then fetch and copy the layers.
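Either way, the copy step just described reduces to: read the image manifest, collect the config and layer digests, and copy each blob keyed by its digest. A minimal sketch, with `fetch_blob`/`put_blob` as stand-ins for a real registry client and a bucket writer (this is illustrative, not promoter code):

```python
# Sketch of "grab the manifest, get the layer list, fetch and copy
# the layers". fetch_blob/put_blob are invented stand-ins for
# registry and bucket clients.
from typing import Callable, Dict, List

def copy_image_blobs(manifest: Dict,
                     fetch_blob: Callable[[str], bytes],
                     put_blob: Callable[[str, bytes], None]) -> List[str]:
    """Copy an image's config and layer blobs to a destination,
    keyed by digest, and return the digests that were copied."""
    digests = [manifest["config"]["digest"]]
    digests += [layer["digest"] for layer in manifest["layers"]]
    for digest in digests:
        # Blobs are content-addressed, so the digest doubles as the key.
        put_blob(digest, fetch_blob(digest))
    return digests
```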
E
If we do that, then I think we're just going to be pushing directly to ECR, and we're just going to wholesale redirect to ECR depending on the client, and just try to apply the same mitigations we have on the GCR side. So if that winds up being the case, then we'll have even less to do.
E
It will just be updating the promoter to promote to two registries, and the redirector will just be redirecting to one or the other; it'll be even simpler, and we won't need to concern ourselves with layers.
E
Yeah, I think my concern there is that we probably wouldn't want to do that, because of the additional requests in the path. But I think at the moment we're moving forward with the S3, and possibly CloudFront, approach for now. If there's some reason that we can't do that, then we have a pretty straightforward path to just switching to a purely registry-API copy, to pull and redirect the entire path, instead of being tricky and selectively redirecting the layers as opposed to the rest of the requests.
D
Yeah, from the last meeting we had on release engineering, I think I misunderstood, because I thought that the requirement was going to be that if we were going to have other destinations, other mirrors, they would have to be S3-compatible. So that's why I was thinking about doing the S3 approach, but...
E
We're not trying to solve the problem of having additional ones right now. In the many years of this project, we've had like three providers provide resources, and I don't think we're trying to do Packet-like bare metal hosting just yet. It's more important that we shift the cost on this more immediately, and if we get to a point where we need to do more backends, we can totally rethink everything, especially because we'll already have clients on a domain that we control, which we can point at some new implementation trivially.
E
So, actually very intentionally, my argument is: let's not try to solve that problem; let's not even really think about that problem right now. Let's just focus on AWS plus the existing GCR, making a smooth migration so that we can fix the majority of our registry pull cost. And then, if we have the happy circumstance that there are more providers, we can look at how we scale this out as a follow-up step, which might even involve changing the approach.
E
And that won't be a problem for clients, and it won't be a problem for us, because we can do the same thing we did with the current registry, which is leave it in place until we're ready and just put something else behind the k8s.io domain.
E
We probably will at some point have some interest in additional ones, but we're already exceeding our budget, and the majority of it is the AWS traffic. So if we can serve that more efficiently, then we've got a huge win and we can scope the problem better.
E
Yeah, I do think that was one of the considerations: if we can make the "copy layers to an S3-style bucket with public read" approach work, then it's going to be really easy to come back and scale it to additional providers. But we shouldn't block on that or focus too much on that.
E
It will also be pretty easy to scale to additional providers if we use a registry endpoint; it just changes our security concerns, trying to make sure that multiple sources of truth are correct and locked down, and we're already extra paranoid with the current registry. So it was a little bit harder to convince all the interested parties, whereas this way we can say: well, you don't actually need to trust the S3 buckets, because it's all content-addressed.
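The content-addressing point can be made concrete: a client re-hashes every blob it downloads and compares the result against the digest it asked for, so a mirror cannot substitute content undetected. A minimal version of that check:

```python
# Verify a downloaded blob against its content-addressed digest
# (e.g. "sha256:<hex>"); if a mirror tampered with the bytes, the
# hash comparison fails, so the mirror itself need not be trusted.
import hashlib

def verify_blob(data: bytes, digest: str) -> bool:
    algo, _, expected = digest.partition(":")
    if algo != "sha256":
        raise ValueError("unsupported digest algorithm: " + algo)
    return hashlib.sha256(data).hexdigest() == expected
```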
E
I
I
think
the
biggest
thing
we're
going
to
run
into
there
is
making
sure
that
we
get
it
funded
and
that
we
can
like
scale
it
to
handle
all
the
different
regions
correctly
for
aws.
I
think
that's
also
a
bit
easier
because
it
looks
like
we
should
be
able
to
detect
client
region
pretty
straightforward,
I'm
not
sure
how
viable
that's
going
to
be
on,
like
additional
providers.
E
Like
well
like
once,
we
get
further
along
with
this,
we
might
wind
up
doing
something
like
having
an
s3
bucket
per
region
or
something
like
that,
and
then
having
the
redirector
point
based
on
client
region
and
having
the
promoter
promote
to
multiple
regions,
but,
like
I
think,
that's
a
like
more
targeted,
more
solvable
problem
if
we
run
into
it,
trying
to
figure
out
how
that's
going
to
be,
reproducible
on
other
providers
is
like.
I
don't
want
to
get
too
ahead
of
ourselves
on
that.
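The per-region idea at the end could look something like this; the bucket names and the region list are invented for illustration, not a deployed layout:

```python
# Hypothetical per-region mirror selection: the redirector maps the
# detected client region to a regional bucket, falling back to a
# default when the region is unknown. All names are illustrative.
from typing import Optional

REGION_BUCKETS = {
    "us-east-1": "https://example-artifacts-us-east-1.s3.amazonaws.com",
    "eu-west-1": "https://example-artifacts-eu-west-1.s3.amazonaws.com",
}
DEFAULT_BUCKET = REGION_BUCKETS["us-east-1"]

def bucket_for(region: Optional[str]) -> str:
    """Pick the closest mirror bucket for a client region, or the
    default when the region is unknown or unmapped."""
    return REGION_BUCKETS.get(region, DEFAULT_BUCKET)
```

The promoter would then promote blobs to every bucket in the map, so any choice the redirector makes serves identical, content-addressed data.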