From YouTube: Kubernetes SIG K8s Infra - 20230329
A
I will be partially your host this wonderful afternoon or evening, at least until half past the hour, because then I have to disappear, and then I think Ben will take over, or Arnaud. Who knows. Let me share my screen, so we can actually all see what we're talking about.
A
Okay, at a quick glance at the participants, I think I see a bunch of folks that have already been here, but just in case, if you are new, feel free to pop on and say hello, or just say hello in chat.
A
Otherwise, we will move right on to a quick billing review. Honestly, I think the best thing to do is to pull up what Ben posted earlier today.
A
Monday,
the
27th
back
to
Tuesday
the
21st.
We
were
at
forty
eight
thousand
two
hundred
and
sixty
two
dollars
on
Friday
or
only
Friday.
The
24th
and
forward
had
major
cost
savings,
which
was
the
first
redirect
rollout
PS.
This
is
all
in
the
Sig
Kates
and
for
Slack
Saturday.
The
25th
capped
off
the
first
12
weeks
at
857
652
dollars
that
is
January
1st
to
March
25th,
that
is
a
2.5
million
dollar
run
rate.
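Back-of-the-envelope, the quoted run rate follows from the latest weekly figure rather than the 12-week total:

    $48,262/week x 52 weeks ≈ $2.51M/year  (vs. the $3M budget)
    $857,652 / 12 weeks x 52 weeks ≈ $3.72M/year  (the average pace over the first 12 weeks)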
C
So that's a pretty good estimate, I think, and so we should already be under, and that's without even having a whole week of reduced traffic in that past week of data, and we're reducing it further as we speak.
A
Given all of that, that doesn't mean the problem is solved. It just means that we don't necessarily have a giant axe over our necks for the next, what, seven months, something like that.
C
It's pretty much solved, because next year we won't have the beginning of the year at the worst rate, unless we regress on some other axes.
C
To be fair, we also kind of targeted which scale tests we're running: some scale tests run less frequently now, but still, like, every other day instead of daily, and we removed the scale presubmit, and we're evaluating if that's even actually necessary.
C
With
my
sick
testing
had
on,
we
really
only
want
to
be
running
pre-subits
if
they're
catching
things
that
would
wind
up
breaking
in
release
blocking
CI
all
the
time.
If
it's
destroying
release
blocking
CI
signal
and
we're
missing
things
all
the
time
that
we
could
have
caught
before
they
emerged,
then
we
want
to
do
that
if
it
breaks
something
like
one
once
a
release,
then
it's
working
as
intended
to
just
test
and
release
blocking
track
down
the
issue
and
fix
it.
A
Another thing that I do want to point out, and this is within the thread in Slack, but, as you said, for additional context: we went into this year with January data suggesting we were projected to spend four million dollars on a three million dollar budget. Up until the registry redirect we had managed to cut enough costs to bring us down to around 3.4 million, so we were still over, but even before this redirect we had cut a lot of costs. And now this will, again, fingers crossed, once the data is firmly in our hands, make it so that it's not a giant dumpster fire of a budget.
C
I don't want to call it more specifically yet, until we actually have the data, but right, you could make some early estimates based on looking at the two weekdays and the weekend we have, comparing to past days, without even accounting for increased traffic reduction, and it should be a lot lower even than what we're stating here. And as of yet we don't have any reason that we would need to revert or anything like that. So the other thing I've been negotiating is... I'm hoping, worst case...
A
Knock on wood. Cool, cool. Any other budgetary questions?
E
The green and the blue bars: the blue bar is how much we actually spend, and the green bar is how much we need to spend to stay under 3 million, and this week we actually came in under by twelve dollars. So if only that keeps going, we will have 500 dollars in our pocket by the end of the year, but with all the savings coming, I think it's going to look way better. And then the bottom-right graph is also a nice one, showing that just about every other week...
E
...we move out by another week. So at the moment we'll run out of money on the 21st of October, assuming the spend continues, which it won't, as has been pointed out; we'll be radically under. So I am excited to update the graphs next week. This coming Sunday we will update the graphs and we'll see where we're going.
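The 500 dollars mentioned above is just that twelve-dollar weekly margin projected over the rest of the year:

    $12/week x ~40 weeks remaining ≈ $480, roughly $500 by year end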
A
Of course, that was needed. That was awesome, thank you for that update as well. Of note, welcome, rcook; I assume Ryan, but maybe I'm wrong. Oh yeah, Ryan Cook. Thank you for joining. Carson, I saw your question; I'm gonna throw it at the end, in open discussion.
A
Is there any other budget stuff that we want to talk about before we move on to...
C
It would help if I had more of this off the top of my head. Sorry, I've been a little immersed; I've been following the redirect.
C
Your link... oh, this... yes, I remember this. So this is an at-rest pricing increase. It primarily affects Nearline and Coldline, which we do not use anywhere in the project. There are also standard storage increases in some regions, but it's, like, fairly negligible, and storage doesn't dominate our costs at all.
B
Yeah, I'm the one that put that link there, basically because I noticed there's an increasing cost related to activity in the Asia region. So, like Ben said, I don't really think we should be concerned, but I just want to put it out there, so that if we see something happening in the Asia region, we know it's, like, the cost change next week.
C
I don't expect this to be a massive issue; like, most of our traffic... the US region is larger than EU and Asia combined for the registry, with Asia being the smallest. We have a comparatively small amount of traffic there on either endpoint, but, you know, it's something we have to keep an eye on, and it's good to be reviewing this kind of announcement for products that we use heavily, Cloud Storage definitely being one of them at the moment.
C
We also have some more options if we find later this year that we're not at a comfortable place. For example, within the SIG we've acknowledged that the redirect is currently restricted to a certain set of images that are aimed at maximizing the cost savings versus risk.
C
We can do further rollouts to expand that list of images. I think that's something we want to do anyhow, but we have been discussing, you know, taking a breather once we're solidly well below budget, so we can refocus on things like hardening registry.k8s.io while there aren't other changes in flight, or finally freezing the GCRs, and then we can revisit if we need to make further changes.
C
So if we do see a price increase here, we're again mostly going to see that reflected on k8s.gcr.io, and we can up the redirect, most likely.
C
Yes, a lot. Just with the 50% redirect we saw something like a 40% drop in bandwidth in our regions.
C
The images that are in the list are the most popular images. It also turns out those are the most popular images on Amazon, over-represented on Amazon, and we're targeting non-GCP traffic, approximately, because we're trying to capture the egress.
C
And of that, we see a large increase on AWS. I'm still working on getting access to those metrics directly myself, but there have been some screenshots shared by others. That also, thankfully, is working as intended: it is not a large price increase, because we are successfully largely routing to in-region copies, where there's only operations and storage costs. There is no egress, there's no bandwidth to charge, because it's in the same region.
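The routing just described, serving a redirect to an in-region copy when the client is on a known cloud range, looks roughly like the sketch below. This is a minimal illustration, not the actual registry.k8s.io code; lookupAWSRegion and both hostnames are hypothetical stand-ins.

    package main

    import (
        "fmt"
        "net"
        "net/http"
    )

    // lookupAWSRegion is a hypothetical helper: a real implementation would
    // match the client IP against the published AWS IP ranges and return the
    // region name, or "" for clients that are not on AWS.
    func lookupAWSRegion(ip net.IP) string { return "" }

    func redirect(w http.ResponseWriter, r *http.Request) {
        host, _, _ := net.SplitHostPort(r.RemoteAddr)
        if region := lookupAWSRegion(net.ParseIP(host)); region != "" {
            // AWS client: point it at a copy in its own region. Same-region
            // reads incur no egress/bandwidth charge, only per-request
            // operations and storage costs.
            u := fmt.Sprintf("https://example-mirror-%s.s3.amazonaws.com%s", region, r.URL.Path)
            http.Redirect(w, r, u, http.StatusFound)
            return
        }
        // Everyone else falls through to the default (GCS-backed) backend.
        http.Redirect(w, r, "https://example-default-backend.example.com"+r.URL.Path, http.StatusFound)
    }

    func main() {
        http.HandleFunc("/v2/", redirect)
        http.ListenAndServe(":8080", nil)
    }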
C
This is a good thing long term; it's actually a little bit of a problem short term. We need to continue to ramp up actually using those AWS credits this year. I'm going to expect most of that to come from the CI stuff, which is also making some progress at the moment. Yeah.
A
So then, I pulled up... like, I think this is the bandwidth graph, and then the one above it was the cost graph.
C
Yeah, I believe... I've just been looking at these so much... that's the US graph. You can tell they have a little bit different traffic patterns, and the volume of it, the 700 megabytes, yeah, that's US. So this is the US bucket behind k8s.gcr.io; there are three regions, US, Asia, and EU, that are multi-regions, and this is by far the largest one.
C
You
can
see
when
there's
five
peaks
in
a
row
and
then
two
smaller
Peaks,
those
five
Peaks
are
weekdays
and
then
the
smaller
Peaks
you'll
see
that
that
changes
quite
a
bit
in
the
past
few
days
because
around
the
24th
we
started
to
have
the
rollout
complete
from
the
last
rollout
and
you
can
see
a
huge
drop
so
even
on
the
24th,
where
we
hadn't
finished
doing
the
rollout
that
Friday
right
there,
that
Jiffy
is
hovering
over
huge
drop
Peak
to
Peak
will
can
that
trend
is
holding
and
it
should
drop
even
further
and
the
cost
savings
are
larger
than
that
ratio
would
suggest,
because
most
of
that
traffic
is
egress
and
the
remaining
traffic
has
a
higher
proportion
of
it.
C
That's for multiple reasons, including the fact that we're targeting non-GCP traffic, and because the popular images, the ones with high bandwidth, are particularly popular on Amazon, like the AWS EBS CSI driver and the node DNS cache; just between the two of them, I think that's, like, over 30 percent of the non-GCP traffic.
C
It's going well, and we're in close contact with the GCR teams about all this, and continuing to roll out the next phase of redirecting, which should hopefully approximately double what we shifted last time.
C
And the registry infrastructure is also holding up just fine; we largely haven't even scaled up the Cloud Run app, which is really good. We're mostly seeing increases in logging and network costs, and they're acceptable.
G
So the question here is: is the tooling not in good shape for us to do this, or what is the concern here? What are the concerns for not doing the freeze?
B
The concern is related to a late discovery about an inconsistency in the backend registry: we don't have all the signatures over all the images the registry serves. So if we do freeze, and we have migrated to the new version, a user or entity trying to verify those images might have an issue. I will let Ben give the details.
C
So when we planned the freeze, or when other folks planned the freeze, the redirect wasn't a thing, and the redirect has also shifted since then, because of the SKU issue.
Originally we were going to... well, at some point we were going to mitigate only the bandwidth, by only redirecting the blob requests, which is, again, the bandwidth. The problem with that is that then the tags are inconsistent and you wind up with broken pulls.
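For context, the two halves of the registry API surface at issue here are, per the OCI distribution spec:

    GET /v2/<name>/manifests/<tag-or-digest>    (tags and manifests)
    GET /v2/<name>/blobs/<digest>               (content-addressed layer data)

Blobs are content-addressed, so any backend can serve them; but a manifest fetched from one backend references blobs by digest, so if manifests and blobs come from backends that are not in sync, a client can resolve a tag and then fail to fetch its layers.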
C
So
we
redirect
the
entire
API
surface
already,
which
is
something
we'd
want
to
get
to
eventually,
because
we
redirect
the
entire
API
surface.
That
means
that
the
set
of
images
that
are
available
depends
on
which
backend
you're
hitting
fully
so
it
right
now.
You
can
use
case
gcrio
for
redirected
images,
and
you
see
the
new
tags
available.
C
If we freeze, then, if we had to revert the redirect for some reason, now, for anybody who's using those, the tags disappear, and we dig the hole deeper. I'm hoping that we can wait until we're confident that we aren't going to need to retool the redirect, so that we don't wind up in some kind of inconsistent state. I think we're nearing that, but we're in the middle of another major rollout there, and I don't have explicit agreement from everyone.
C
What
happens
if
we,
if
we
do
find
a
problem
this
week,
exactly
I
I
mean
I.
Will
personally,
my
stance
is
that
we
will
just
go
back
to
last
week's
rollout,
but
that
actually
wasn't
agreed
to
ahead
of
time
and
I'm
and
senior
folks
are
not
super
available
this
week.
It's
basically
performance
review
season
this
week.
F
Yeah, I was just going to ask something related to coordination with that freeze, or possible freeze. We want to start running the jobs to fix that, but we've been holding off on the actual jobs, because we want to let the rollout finish, or at least get to a state where we can more confidently do so. But does it matter, independent of the rollout, does it matter if we fix the signatures before or after the freeze?
C
I think I can take that. It shouldn't... like, we're not likely to go change the code for the redirect and then switch to only redirecting blobs now, since we already have it out there, and there's a bit of a lift to, like, get GCR to keep rolling out changes for us, and then it'd be new untested code. So the signatures, you're saying, aren't really the concern now.
C
The concern I have now is that we can have inconsistent state with normal tags, and I think the signatures can be fixed at pretty much any point, as long as we think that operation is itself, you know, safe. Okay.
G
So my feeling right now is: we don't have to tell anyone that, you know, there is no freeze, right? Like, let everybody keep the assumption that there is a freeze and that they have to use the new registry. Like, you know, we're not going to change the comms; it is frozen, but we are, you know, giving it some more time.
G
That's
a
way
to
take
it
right
so
and
like
yeah,
let's
finish
the
rollout
at
all
four,
and
then
we
can
go,
look
at
what
the
inconsistencies
are
and
how
to
fix
them.
If,
whatever
tools
changes
that
we
have
to
make,
we
might
not
end
up
making
those
tools
changes
in
time
for
the
really
12
release,
but
it's
okay.
We
can
do
it
just
after
the
127
release,
so
I
I
think
I
think
we
are
good.
A
Any further comms, I feel, would just be confusing to literally everybody, because then it's like: well, we were freezing it, but then we're not doing it for this reason, but we'll freeze it eventually... no. Like, functionally, have everyone treat it as if it's frozen. Ben, hand raised; I'm not sure if, Adolfo, your hand was raised again or not.
C
Yeah, I mean, also, I think the one thing that got missed here is that the inconsistencies are also present in the GCR backing registries. The reason this is a problem for the redirect, though, is that you go from one backing registry per, like, continent, to, like, a bunch of them, and you could wind up mixing two of them if we only did blobs or something like that. So that's the reason that we were concerned there at this point.
C
I
think
the
only
reason
I'm
concerned
is
just
if
you
switch
regions
on
either
of
the
Registries
redirected
or
not.
You
can
have
inconsistent
tag
images,
but
I
think
we've
identified
that
those
are
scoped
to
like
known
communicated,
broken
releases,
so
so
that
isn't
a
an
immediate
concern
and
I
think
whenever
Sig
release
is
ready,
we
can
start
to
repair
those
also
a
huge
plus
one
to
to
no
further
cons.
C
I
mean
we
haven't
even
acknowledged
to
outside
of
the
Sig
that,
like
we're
doing
certain
images
or
anything
like
that,
because
we
really
just
want
people
to
get
ahead
of
this
and
one
last
point
the
schedule
called
for
Monday,
the
third.
The
redirect
could
be
done
with
the
second
rollout
by
like
the
end
of
this
week
and
going
into
early
next
week
somewhere
around
there.
We
should
be
reaching
a
point
where
we're
like.
Okay,
nothing
has
cropped
up,
we're
good
I'm,
not
sure.
G
So the fifth wave should hit Friday; even if it slips over and goes till Monday or Tuesday, that's fine. And, like, next week I didn't want to do anything, because I wanted to gather data about, like, how much cost savings there is, and, from the logs, which is the next set of images that we need to target, and things like that. So I would say, in the week after that, let's not do anything, and then there is KubeCon week, so we are not going to do anything.
G
So
when
we
get
back
is
when
we
can
start
doing
something
more.
But
in
the
meantime,
if
sigrilles
wants
to
do
things
to
try
and
fix
all
the
signature,
mismatches
and
things
like
that,
you
know-
please
go
for
it
and
be
independent
of
this,
except
for
you
know.
Until
this
wave
ends,
this
wave
ends
either
Friday
or,
like
you
know,
Monday
Tuesday,.
C
I also have a small personal plea: I'm out for the week before KubeCon. I think I'm doing most of the interfacing between the various groups here, and I'm gonna be pretty unavailable; I will be hiking Yosemite and I don't expect to have internet access.
G
Absolutely. Ben, thank you, thank you for taking point here. And yes, when Ben is not around, we're not going to touch it... we're not going to touch anything. But when you go to KubeCon, just make sure you stick to the story: there is a freeze, everybody has to move off, you don't know when we are going to delete something or clean up something. So all bets are off if you're using the old registry.
G
So
stick
to
that
line
and
you
know
don't
change
the
line
on
like
there
was
a
freeze,
but
we
are
not
going
to
do
it
right
this
week
and
none
of
that
okay,
so
just
stick
to
the
story
and
we'll
be
fine.
G
Sorry
back
to
I
think
GP
was
a
running
point.
He.
B
Yeah, I think we're good. I would just put a comment in the pull request saying we delay freezing. My next point is just saying: next meeting we should do the annual report and get that done, because, yeah, I think the deadline is April 24th, so I will dedicate next meeting to writing the annual report. So if you are not interested in that, don't come.
H
Yeah, I mean, I can just vocalize it, for the recording and stuff. Essentially, I caught the, you know, 'no promise that the dumpster fire is out yet, but, like, it's getting past that'. So what are going to be the kind of focuses for improvements after that? You know, now that we're less concerned about cost savings being the highest priority, what are other high-value things? And then one response in the channel was that resource...
H
Optimization
is
on
top
of
Mind
full
ownership
of
the
infrastructure
by
moving
a
workload
through
meeting
in
the
Google
infrastructure
and
and
and
just
to
frame
this
a
little
bit.
Some
of
this
is
is
for
for
my
own
education
and
for
helping,
like
my
colleagues,
Ryan,
Cook
and
other
folks
about
how
you
know
what
it
is
that
we
might
be
able
to
be
helping
on
or
or
look
for,
for
you
know,
ideas
to
present
or
demos
to
bring
forward.
Basically.
G
I
think
we
at
some
point
we
have
to
go
back
and
look
at
the
all
the
issues
that
we've
logged
and
all
the
PRS
that
are
stalled
and
get
back
on
that
horse,
Carson,
so
I
I.
At
this
point
we
are
able
to
do
a
bunch
of
incoming
requests
from
the
community.
Whether
it
is
you
know,
GCS
buckets
or
you
know,
hey.
We
need
importer
for
images,
necklify
redirect
URLs
and
shortcut
URLs
new
DNS
stuff.
G
Pruning will happen, and, you know, redirects are happening right now, but then, after that, there is going to be one more set of images, and maybe at some point we might want to do more than just redirect; maybe we'll clean up the older stuff.
G
So
so
there
is
going
to
be
things
that
are
going
to
go
on
for
the
container
images
and
the
ca
jobs
we
still
haven't
talked
about
the
ca
jobs
Marco.
We
can
talk
about
it
next,
so
unless
you
you
guys
talked
about
it
before
I
came
so
that
is
definitely
also
there.
G
We
still
haven't
touched
the
images
I'm,
sorry,
devs
and
RPMs,
because
that
is
that
has
been
low
priority
right
now,
because
it
is
still
from
Google
own
buckets
and
it's
not
hitting
the
wallet
so
to
say
so.
Those
are
the
primary
things,
so
we
will
need
to
train
more
people
to
take
up
the
things
that
we
are
doing
right
now.
You
know
some
of
those
things
are
manual.
G
One
of
us
has
to
go,
apply,
terraform,
updates
things
like
that,
so
we
need
to
have
a
set
of
people
who
can
like
in
all
time
zones
who
can
like
answer,
queries
and
do
things
that
are
needed
to
be
done.
A
few
of
us
like
Arnold
and
myself,
to
do
like
the
debugging
diagnosis
kind
of
things
when
something
goes
wrong
or
something
is
not
working
right.
So
that
is
the
kind
of
work
that
I
expect
for
the
rest
of
the
year.
G
So
to
say,
unless
there
is
some
other,
a
very
pressing
issue,
if
somebody
is
one
of
the
clouds
is
sunsetting,
something
or
you
know,
then
we
might
have
to
scramble
and
do
things
again,
but
I
think
we
are
in
a
good
place.
We
have
the
the
three
million
plus
three
million
is
helping
us
a
lot
and
yeah.
We
might
have
to
do
something
another.
G
If
Azure
turns
out
to
be
the
next
set
of
you
know,
images
are
going
to
Azure,
then
we
might
have
to
start
whatever
we
are
doing
for
G.
You
know
AWS.
We
we
have
to
do
for
Azure
as
well.
Does
that
help
okay
go
ahead?
Ben.
C
All of that. I think a big one, that is kind of pressing, that will help unblock folks like Dims and Arnaud, is: right now on GCP we've got pretty mature tooling for being able to group people into different sets of permissions, and audit that, and roll that out, and it's fairly automated and has good observability and that sort of thing. Right now we only have that working properly with GCP.
C
We
have
some
somewhat
rudimentary
terraform
things
for
Amazon,
but
we
don't
really
have
great
patterns
in
place
to
have
like
observability
around
the
things
in
Amazon
and
being
able
to.
You
know,
just
send
a
PR
and
add
someone
to
have
permission
to
work
on
something
getting
a
solution
to
that
is
going
to
unblock
I.
Think
a
lot
of
the
velocity
for
any
further
work
on
the
multi-cloud
I
also
want
to
add,
with
some
insight
into
the
into
the
other
things
that
are
still
in
Google.
C
They
they're
something
of
a
liability,
even
if
they
aren't
a
liability
from
the
bill
perspective
just
because
we
have
poor
tracking
of
them
and
and
we
have
to
make
sure
that
they've
remain
available.
That
sort
of
thing
so
there's
still
a
fair
bit
of
CI
and
binary
downloads
there
and
I
think
CI
and
binary
downloads
are
the
two
next
big
stories
with
RPM
and
Debian
packages.
Being
a
smaller
story,
I
think
has
a
solution
coming.
C
And
if
people
are
looking
for
tasks
that
can
try
to
see,
if
any
of
that
can
be
broken
off
a
lot
of
that's
pretty
highly
privileged
and
won't
be
so
easy
to
delegate,
but
should
enable
better
delegation
in
the
future.
I
Yeah, if folks are interested, I can provide a quick update on Debian and RPM, because this was mentioned quite a few times, so maybe it would be nice to recap where we actually are. So, the plan regarding Debian and RPM packages is that we are going to use the openSUSE Build Service, which is, like, a Copr-like build service; they're going to sponsor us and give us access to the platform.
I
We had a successful proof of concept, so we know that it is possible to use it, but the problem is that we need some more significant changes to our build tooling, mainly krel and kubepkg, to make sure that we can generate the Debian and RPM specs and everything else needed, that we actually connect it to the build pipeline, and that we actually manage stuff like the projects that hold the packages, permissions, and stuff like that. So we have an idea; we have a proposal for that, and the proposal is merged.
I
The KEP is there, but the problem is that we don't really have bandwidth to do it at this moment, because a lot of resources are going towards, like, the CI cluster, the registry migration and the communications around that, the progress with the signatures problem, the image promoter, and all the issues that we have with artifacts, actually. So we had a lot of higher-priority things that we needed to fix, and therefore we still didn't really have time to touch it. Hopefully after KubeCon...
I
This
is
something
that
we
will
start
working
on
more
seriously
and
that
we
hope
to
finish
it.
Let's
say
for
about
38.
the
biggest
problem
with
WWI
packages
is
going
to
be
communicating.
The
change
I
get
to
use
it
to
migrate,
because
this
is
going
to
require
some
manual
migration
and
we
can't
really
do
what
we
did
for
registry
redirect
because
there's
gpg
keys.
There
is
stuff
like
that
and
like
it's
just
not
possible,
so
you
will
see.
How
are
we
going
to
handle
that?
But
that's
the
current
status?
G
Marco, so, are the artifacts that get built by the service going to stay in the SUSE infrastructure? Yes? And do they know how much traffic we're going to throw at them?
I
I have asked about that, to see if it's possible to get numbers, but the answer was no. So it turns out that we will give it a try; they have mirrors all around the world, and they hope that they will be able to serve it. Eventually we can add our own mirrors, that's not a problem as well, and they will connect our mirrors to their OBS service, but we are still not really sure how much traffic it is until we see it.
G
What
is
a
plan
B?
Will
we
be
able
to
copy
things
off
of
their
build
service
into
S3
buckets,
for
example,.
G
I am scared to tell them, you know, 'yeah, it's gonna be okay', or 'yeah, it's gonna be crazy', yeah, like...
G
Well,
who
are
we
talking
to
in
Suzy,
which
team
or
who
are
the
people
that
we
are
talking
to
in
suse?
For
this.
I
We are connected with the OBS team; there is Adrian from OBS, darix, and I don't remember now who else, but there is a release packages Slack channel you can join, and we have quite a few folks there. Okay, so...
G
Yeah, can you, you know, throw the names or the Slack channel at me on Slack, so I'll go read up on the older discussions that you've already had? Yeah, okay.
G
Thank you. That's all I had, Marco. Okay.
D
I was just gonna say... I put my hand up while Marco was talking. We asked this question when we were talking about things in the POC, and it was pretty clear from their answers that we could run our own mirrors, and it could be on community-controlled infrastructure if we needed to, and it wouldn't need to go to their infrastructure at all. We can publish, and the packages would be distributed to the mirrors that we control.
G
Yeah, like, I'm not worried about us serving traffic from, like, our side; I'm more worried about, like... we don't want to take them down by the volume of traffic going to them and, you know, be bad citizens there, right?
C
You next... yeah. Sorry, about the data: these are, like, co-hosted with Google Cloud, like the standard packages, like the gcloud SDK. That's pretty high volume, probably. I don't remotely have access to that; I tried to track down some folks, but I don't have answers for that yet.
C
And I've been a little more focused on the registry stuff. As far as mirrors and throwing traffic at them, I mean, it won't be an immediate throwing of all the traffic; it'll be as users switch, so it's gonna ramp up, but that could still get us into trouble.
C
I
believe
we
should
be
able
to
use
and
I
think
we
actually
already
have
like
apt.kates.io
on
the
nginx
redirect
thing,
which
is
something
else.
We
should
probably
replace
at
some
point
works
fine
and
I
believe
both
Debian
and
RPM
packages
are
Ascent
like
you
can
serve
them
as
just
like
a
static
snapshot
of
the
repo
State.
You
you
could
you
can
stick
it
in
like
an
S3
bucket
or
something
it'll
work?
Fine.
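That works because both formats are just static file trees plus signed metadata. An apt repo, for instance, is served as something like:

    dists/<suite>/Release, Release.gpg, InRelease          (signed indexes)
    dists/<suite>/<component>/binary-<arch>/Packages.gz    (package index)
    pool/.../<name>_<version>_<arch>.deb                   (the packages)

and an RPM repo is repodata/repomd.xml plus the .rpm files, so any static host, an S3 bucket included, can serve a snapshot.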
C
So
if
we
go
ahead
and
put
our
domain
in
front
again,
like
DL
and
registry
and
so
on,
we
can
always
update
to
redirect
to
some
other
backend
that
contains
a
copy
and
continue
to
use
their
build
infrastructure
and
I.
Think
that
should
be
fine,
and
we
should
make
sure
that
that's.
We
should
make
sure
that
we're
planning
for
that
eventuality
and
that
we,
if
possible,
always
bounce
things
through
a
domain.
We
control
where
we
can
quickly
update
this.
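As a minimal sketch of that 'domain we control' layer (the hostnames and the environment variable here are illustrative, not the real deployment), the front-end can be little more than:

    package main

    import (
        "net/http"
        "os"
    )

    func main() {
        // Which backend actually holds the artifacts is deployment config;
        // switching providers is a redeploy of this tiny service, not a
        // change every user has to make.
        backend := os.Getenv("PACKAGES_BACKEND") // e.g. an OBS mirror, or an S3 bucket
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            http.Redirect(w, r, backend+r.URL.Path, http.StatusFound)
        })
        http.ListenAndServe(":8080", nil)
    }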
I
So the GPG keys are set up such that, basically, only those who have physical access to those servers can access them; like, they have some measures in place to secure those keys, so that, yeah, they are safe and secure, stuff like that, and it is managed basically by the OBS platform, so it requires physical access. At least this is what the OBS folks told us. I think it is summarized in the KEP update that we recently worked on; there is a response regarding that from the OBS folks.
G
Okay. From our side, I'm assuming that the release leadership has access to the GPG keys... no, we don't have that. So then, who creates the GPG keys, and who updates the GPG key when they expire?
I
And just to add, if you have any questions, there is the release management channel, and there is the packages POC channel; prefer those to ask questions, and we will also keep you all updated on progress, but if you have anything in the meanwhile, let us know.
G
Yeah, we haven't talked about the status of the EKS Prow cluster yet.
I
I can do a very quick update, I mean, okay, just in two minutes. So basically the status is that the cluster is mostly ready, in the sense that jobs are running and jobs are passing. We had some flakes; we managed to triage and divide up those flakes, mostly related to the fact that now we are using bigger instances, so there are more tests running in parallel.
And, like, Go is not really respecting the cgroup limits, like the limits that we put on the jobs, and we had more stuff running in parallel but not more resources, because we did decrease the limits on the jobs, and that caused some flakes. Now that is fixed, thanks to [inaudible]. The next step is probably going to be to add more jobs, and to streamline what we're going to do with the monitoring stuff; Patrick is doing a great job regarding that, and we are still discussing, like, the pattern and next steps.
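The Go/cgroups problem mentioned is presumably the long-standing one where the Go runtime sets GOMAXPROCS from the host's CPU count rather than the container's CPU quota, so a pod limited to a few CPUs on a big node schedules far more parallelism than it is allowed and gets throttled. A common mitigation, assuming that variant, is uber-go/automaxprocs, or pinning GOMAXPROCS explicitly in the job:

    package main

    import (
        "fmt"
        "runtime"

        // Adjusts GOMAXPROCS at startup to match the cgroup CPU quota
        // instead of the host CPU count.
        _ "go.uber.org/automaxprocs"
    )

    func main() {
        fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
    }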
I
At one of the next meetings we should probably do a more in-depth discussion; actually, I would want to have a meeting before KubeCon, I'm not sure, but we should probably discuss at some point how exactly we are going to handle monitoring. Like, we should probably try to merge the monitoring that we have for the current clusters and for the new clusters; we have everything in place, but this is something that we will discuss. Overall, we will continue migrating jobs, probably by adding more canary jobs, because we don't want to migrate jobs before test freeze.
G
Marco, just as prep work, can we do a Boskos install and make sure that we can hook it up to Boskos and create some accounts that can be, you know, saved in Boskos?
B
I think it's really early to talk about Boskos right now. We should focus on moving non-e2e tests to EKS before we even talk about Boskos; we should make sure the existing build cluster we have is ready to move stuff, because we can move a lot of periodics and presubmits, and that's going to help us reduce the bill further, before we talk about Boskos and e2e tests.
B
In the past, we got a donation from them of about 10 terabytes monthly. Last year we...
B
So CNCF signed the service order on their side and it's been transmitted to Fastly; now we're waiting for Fastly to approve it. We are ready to go, so I'm hoping by May we can start to plan comms and flip the switch. We're also waiting for an increase of the bandwidth, because we discovered at the last minute that we need a minimum of seven petabytes per month to be able to serve dl.k8s.io, so those things are depending on the Fastly side, and I...
B
Yeah, in the comments... yes, 5.5 petabytes per month, just for one month, so we requested 10 petabytes. Hopefully it's gonna be enough, but if it is not enough, we can request more; we just need to be able to justify it. I think it's possible to justify: if we basically use the first 10 petabytes, then over a month we can demonstrate it, and we can get an increase of the amount.
C
It's
in
the
slack
somewhere.
Actually
we
gave
like
an
updated
one
to
to
fastly
directly
to
help
them
understand
what
the
situation
is,
but
somewhere
in
the
case,
if
it's
like,
we
posted
before
I,
don't
have
that
graph
handy,
but
I
can
tell
you
that
it's
similar
to
the
it's
similar
to
the
image
hosting
in
terms
of
the
pattern
you
see
and
it
Peaks
at
around
1.8
gigabytes.
A
second
give
me
bites.
I'm
gonna
get
that
wrong.
Gi
B
for
a
second
okay,
I'm
I'm
recaffinating.
C
It's a lot. We did some math and it's around five and a half petabytes a month, but that's only one month; we just have, like, about a month of data, we don't know historically. So I think 10 should be pretty safe, and hopefully we'll have that soon.
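As a rough sanity check, the two figures are consistent with each other; sustaining the quoted peak around the clock gives

    1.8 GiB/s x 86,400 s/day x 30 days ≈ 5.0 x 10^15 bytes ≈ 5 PB/month

which is the same ballpark as the measured ~5.5 PB/month, so the 10 PB request leaves some headroom.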
G
Whoever is going to KubeCon, have fun. I'm sorry, I won't be there; I'll miss you all, FOMO already, so enjoy, and come back and tell us stories.
C
Same, and if you're going, definitely see the CNCF update today about the health and safety updates. All right, I won't be there either, but hopefully another time, and I hope you all have fun if you are going.