From YouTube: Kubernetes SIG K8s Infra - 20230118
A
Hi everyone, this is the first meeting of the year 2023, so I welcome everyone to this meeting. I just want to remind everyone that this meeting is under the code of conduct, so I remind everyone to be kind to each other. Happy New Year to everyone, and welcome to any new people on this call. By the way, does anyone want to introduce themselves before we start?
C
Hey, I'm Jay. I'm a Sagittarius, I like long walks on the beach, and I don't work at AWS anymore. Just for those of you who don't know, I left at the end of November last year, so yeah, I'm just self-employed.
A
Anyone want to raise their hand? Okay, okay, thank you. I would say, first up: I put the link in the doc, and I'm going to share my screen so we can go over the billing report.
E
I think there's one other point that might be confusing for people who haven't seen this before: the networking line item. That's actually not all the networking costs; it shows up because we're using Cloud CDN on artifacts.k8s.io. Any networking charges specific to just using Cloud Storage on its own just show up under Cloud Storage as the service.
E
Because it's not a network service; a network service would be Cloud CDN. So that first line includes the actual cost of storage and egress for GCR and for the objects in artifacts.k8s.io, which is a subdomain experiment we have that has some binaries.
E
The egress costs that show up under Network are just from the Cloud CDN thing that's on artifacts.k8s.io only, so that's not representative of all the networking cost of the Cloud Storage things; most of that shows up under that first item.
B
And the thing that I was curious about is whether there is some sort of view, if not now then eventually, where we can easily show what chunk of the, you know, budget is going towards k8s.gcr.io versus registry.k8s.io.
A
So we identified a project to basically match the billing. We say: okay, k8s.gcr.io, this one is on the project k8s-artifacts-prod, and we know it's there; we basically know that's the only project where the cost is currently happening. And for registry.k8s.io everything is happening in another project, so that separation of billing per project helps us identify it, and easily identify adoption. But we cannot learn a lot from the metrics, because the artifact registry metrics are really, really limited.
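As a rough illustration of the per-project billing split being described, here is a minimal sketch of querying a GCP billing export by project and service. It assumes billing export to BigQuery is enabled; the dataset and table names are placeholders, not the real SIG K8s Infra setup.

```python
# Minimal sketch: break down GCP spend by project using a billing export to
# BigQuery. Assumes the standard billing export schema; the dataset and table
# names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  project.id AS project_id,
  service.description AS service,
  SUM(cost) AS total_cost
FROM `my-billing-project.billing_export.gcp_billing_export_v1_XXXXXX`
WHERE usage_start_time >= TIMESTAMP('2023-01-01')
GROUP BY project_id, service
ORDER BY total_cost DESC
"""

for row in client.query(query).result():
    # e.g. k8s-artifacts-prod lines map to k8s.gcr.io; other projects to registry.k8s.io
    print(f"{row.project_id:40s} {row.service:30s} ${row.total_cost:,.2f}")
```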
E
So that's actually a subset of it: if we split a couple of things out of k8s-artifacts-prod, then we'd be able to go by GCP project and say, okay, we know these GCP projects are already specific to registry.k8s.io, and we could say this is how much it's costing us. But cost is kind of a poor proxy for adoption, because we can change how the actual cost breaks down over time. We can hand-collect some numbers for you.
E
We can go to, like, the Cloud Run page for the production service and give you the current QPS, but we can't do this as readily for k8s.gcr.io, because we only have that sort of logs on the object storage, which is not all the API calls. If we want to know what the QPS to k8s.gcr.io is, we actually have to go ask the GCR team, because that's just not something that's exposed, and it's not something they're going to add, because that's a legacy product.
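For context, a minimal sketch of pulling recent request counts for a Cloud Run service from Cloud Monitoring, as a rough stand-in for the "current QPS" mentioned above. The project and service names are placeholders, not the real registry redirect service.

```python
# Minimal sketch: approximate requests/sec for a Cloud Run service over the
# last 5 minutes using the run.googleapis.com/request_count metric.
import time
from google.cloud import monitoring_v3

project = "projects/my-registry-project"   # placeholder
service_name = "registry-redirect"          # placeholder Cloud Run service name

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"end_time": {"seconds": now}, "start_time": {"seconds": now - 300}}
)

results = client.list_time_series(
    request={
        "name": project,
        "filter": (
            'metric.type = "run.googleapis.com/request_count" '
            f'AND resource.labels.service_name = "{service_name}"'
        ),
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

for series in results:
    total = sum(point.value.int64_value for point in series.points)
    print(f"{dict(series.resource.labels)}: ~{total / 300:.1f} requests/sec over 5 min")
```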
E
I mean, I've pulled them before; we know people on the team, we can ask them, like, "hey, what's the QPS for k8s.gcr.io", and it's not a huge problem, but it's not something we're going to be doing at, like, every meeting. There's not a public endpoint for that. You can do audit logs on the actual storage object calls, but that's apples to oranges. So that's it.
B
Fair, okay. I also don't want us to tangent too much, because this is billing specific and mine is more of a usage question. But I do think in the future we're going to need to be able to show that, like, cost is getting reduced and shifting, and being able to show that it's also, like, traffic going to registry.k8s.io and some of the other things would be helpful. But ultimately it's usage, not billing, that I'm worried about.
E
We can see the cost switching over fairly clearly. Once you're familiar with this bill, you can say, like, okay, k8s-artifacts-prod, that's not registry.k8s.io, but the Cloud Run project and the Artifact Registry, like, those have their own projects. Got it.
C
Ben, if we were able to see egress versus the storage costs, yeah,

C
we could correlate a decrease in the egress cost to an increase in adoption of registry.k8s.io, you know, the AWS S3 mirrors, right? Because a lot of that egress cost is coming from AWS instances calling out to GCR and pulling data, right, yeah.
E
I think what we want to see is usage going up on the registry, so that we know that adoption is happening, and then, independently, we want to see costs going down regardless. And if cost isn't going down, then we want to see, you know, where it is going; we would just kind of naturally expect that it should be shifting over to, like, AR and things. So that's something that we might...
C
Right, but unless we see a breakdown of egress versus storage, the storage may be going up quite a bit and the egress may be going down, and we don't see a difference in total cost. That's what I was referring to. So, like, if we could see, you know, out of that $180,000, $120,000 is egress and $60,000 is storage, broken out, then we can say: okay, egress is going down. But we...
E
We don't have that handy currently as a metric, but we can also tell you that that's relatively constant: the amount of storage we have is mostly static, and then we're just adding new images at whatever rate we're adding them. So that goes up, but from past exploration we know that it's dominated by egress.
E
So we'll see it move over from the old to the new. The other thing I'd point out is that just getting people onto registry.k8s.io means that, whether or not the cost is going down, we're fully in control to shift costs around, whereas while it's on k8s.gcr.io we don't have any control over that.
E
We can't do anything to mitigate costs there. If we find that registry.k8s.io isn't as cost effective as we want, we can quickly roll out differences. For example, right now, before we had the $3 million donation, we said: okay, Amazon traffic will only be Amazon IPs. We could flip it around and say Amazon traffic is anything that's not GCP IPs, because since we have a copy in GCP, we should just route GCP there and avoid egress.
E
But if it's not, then let's route to Amazon for blobs. Maybe we're not detecting Amazon IPs enough, or maybe we have a lot of traffic outside of it, or something like that. But we're a little bit ahead of ourselves if we haven't actually moved the traffic over yet. And again, we don't have public numbers, but I can tell you that that's the case: it's still not much.
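To make the routing idea concrete, here is a minimal sketch of the kind of IP-based blob routing being described: GCP clients stay on the GCS copy to avoid egress, known AWS clients go to an S3 mirror, and everyone else gets a default. This is not the actual registry.k8s.io implementation; the helper names, ranges, and URLs are placeholders.

```python
# Minimal sketch of IP-based blob routing, assuming published provider IP
# ranges are loaded from somewhere (e.g. AWS ip-ranges.json); everything here
# is a placeholder for illustration only.
import ipaddress

GCP_RANGES = [ipaddress.ip_network("34.0.0.0/8")]   # placeholder ranges
AWS_RANGES = [ipaddress.ip_network("52.0.0.0/8")]   # placeholder ranges

GCS_BLOBS = "https://storage.googleapis.com/example-gcs-bucket"   # placeholder
S3_BLOBS = "https://example-mirror.s3.us-east-2.amazonaws.com"    # placeholder


def in_ranges(ip: str, ranges) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in ranges)


def blob_backend(client_ip: str) -> str:
    """Choose where to redirect a blob request based on the client IP."""
    if in_ranges(client_ip, GCP_RANGES):
        return GCS_BLOBS   # keep GCP-internal traffic on GCS, no egress charge
    if in_ranges(client_ip, AWS_RANGES):
        return S3_BLOBS    # AWS clients pull from the donated S3 mirrors
    return GCS_BLOBS       # default; could be flipped to S3 to shed GCP egress


print(blob_backend("52.1.2.3"))   # -> S3 mirror
print(blob_backend("34.1.2.3"))   # -> GCS bucket
```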
E
Last I checked, I had gotten the GCR team to get me the monthly average, and it was something like 2,500 QPS, and I haven't seen registry.k8s.io break, like, 400 yet anytime I've checked. So there's still a lot of traffic going to the old one. I mean, we actually even have traffic still going to google-containers, but that again isn't billed to us, and that's a concern; but that's just to give some idea of what it can be.
A
Yeah, I think there are two aspects to Jeffrey's question. For metrics related to consumption, like basically QPS and storage, we can provide those to some extent, because they are part of GCP. But QPS for the load balancer or anything underlying the GCR traffic is internal to Google, so it's unlikely we get access to that. And for cost, I think it's fine: we have mostly all the cost-related metrics we are looking for.
E
It'll also be more obvious on registry.k8s.io, because we'll be able to see, like, this is Artifact Registry, this is... yeah, we have things more broken up there. It's not all tossed into one GCP project and one service; we'll see costs hit the redirector, we'll see costs for each thing. So, like, I can tell you right now that for the redirect service, half of the cost is logging, and we should probably turn down the logs once we're more confident in it. Yeah.
A
But I think, Jeffrey, we can follow up on your question later. I think my action item will be to add you as a viewer of the entire GCP org; that's going to make it easier for you to basically look around, and I can point you to the different metrics we have right now.
A
Yeah, any other questions related to billing for GCP?
E
For the new folks, I also want to quickly point out that there's a bunch more billing that isn't billed to us, that isn't directly our concern, but as we go to migrate any of those things over, there's actually an even larger spend on the internal Google side. Currently we don't have a dashboard for that, I mean, because the focus is on migrating things over, but last we checked it was something closer to four, so we have some good room to...
E
Most of that is, again, serving binaries, and I think that's currently because the project is just using a GCS bucket and hasn't been paying much attention to the fact that a single bucket in a single region is not cost effective for egress purposes. So something to keep in mind as you're looking at this cost: the project has a bunch of other spend that is not on this public GCP bill, and not just on GCP but on Amazon and other places too.
E
Because the chat logs don't get preserved, I want to echo that Justin has a comment that it looks like the storage for k8s-artifacts-prod is, like, under a hundred dollars a month. So the actual storing of objects isn't costing us a lot; we have a relatively small amount of data, like maybe a terabyte last time I looked at image blobs, that is being served at, you know, massive scale compared to that. Yeah.
A
Okay, let's move on; I think it's time for the open discussion.
F
Great, hi everybody, I just wanted to announce a few things. One is: you all know GP by now, but GP is being assigned to be the CNCF point of contact and technical lead from CNCF to help support the migration to multi-cloud.
F
So please reach out to GP to coordinate support from the foundation and contractors. II is going to remain involved and help support. We've also engaged Kubermatic, who's going to be attending meetings and helping provide strategic support and some help with implementation as well. Are the folks from Kubermatic on the line?
G
Hi, I'm here, Mario. I work for Kubermatic, and I was already in a couple of the last SIG K8s Infra meetings.
F
Great, thanks, Mario. Also, we've put together a proposed timeline for major milestones for the various phases of the transition to multi-cloud. This is just a proposal, to at least help guide.
F
You know, the leadership of SIG K8s Infra at KubeCon in Detroit asked if the foundation and our contractors could take a more engaged and active role in helping support all of you, and we listened; we're trying to provide some more organization from our end. So we put together this proposed timeline, and we welcome comments and feedback on whether this aligns with what the SIG is thinking.
F
So why don't we start at the top. We've split this into several different phases, and all these phases can happen in parallel; ideally they are happening in parallel, but we just thought it would be helpful to think of the type of work that's being done in several different categories and buckets. So one is focused on artifact and container distribution; that's how we defined phase one, and the hope here is...
F
Yeah, okay, yeah. Well, why don't we stay in phase one for now. So Kubermatic is going to, you know, look at this with fresh eyes, and Kubermatic has of course, you know, helped many migrations to multi-cloud.
F
They're going to be spending the next couple of weeks just taking a look, with fresh eyes, at the work that's already been done by SIG K8s Infra and II, and letting you know if they have any suggestions for changes. By mid-February, our hope is that SIG K8s Infra and the cloud providers could provide input if there are any recommended changes, and then a week after that, the third week of February, they'll help put together a more detailed technical project plan and timeline that has sufficient detail to track to issues in GitHub. I understand that there are a lot of issues and tickets already in GitHub.
F
So this is just, you know, to reflect whatever hasn't already been put in GitHub. And then the thinking is that by late March the proposed architecture, the implementation, could be completed, and then after that CNCF would publish a blog post announcing that this phase is completed.
F
So I'm particularly interested in, you know: does this timeline seem realistic, given the progress that's already been made? Does it seem too aggressive?
G
And maybe to add here: we want to help you, we want to support you, and we value every input that we can get from you, because we basically need to get an overview. We're also open to any recommendations that have already been made, or plans that have already been made but could not be put to work because of, yeah, missing resources, time and stuff like this.
B
And the other thing that I'll add is that a bunch of this timeline came from, like, different side conversations, you know, at this point. We want to make sure that this gets fixed up; we have strong opinions, but they are weakly held if there's, you know, a better path forward for all of us. So, like, this is meant to be collaborative, not a giant hammer.
F
Right, and if this timeline is not helpful and SIG K8s Infra wants to come up with its own timeline, you know, that's absolutely welcome. We're just trying to be helpful and make sure there's alignment, so that, you know, we're all working towards the same goalposts on a similar timeline.
H
My feeling, Joanna, is that, you know, we'd love to update this timeline as we go along, because we don't know how all of us are going to work, at what speed, and how the work will evolve over time. Especially, you know, the recommendations in 1.2 will help us figure out the rest of the things that come after that.
F
Yeah, absolutely, that makes a lot of sense. This is absolutely not intended to be something that is committed to fully upfront; yeah, Justin, and we can have regular check-ins based on it.
F
Okay, well, and you know you can continue to comment. I understand this is a new document to many of you, so feel free over the upcoming weeks to add comments to the document. If you're in leadership and you don't have access to comment, feel free to send GP and me a note in Slack and we'll take your comments into consideration. And then there's phase two.
H
Joanna, before we go to phase two, can we walk through the RACI framework, so that (oh, sure) everybody knows what that is talking about? Yeah.
F
So RACI, it's an acronym. It stands for Responsible: who's going to be doing the work on this task. Accountable means who's accountable for completion of it, and therefore usually is going to be the person who approves it and signs off on it. Then who's going to be Consulted before that work or task is completed or finalized, and then who needs to be kept Informed.
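As a tiny illustration only, here is one way the RACI roles just described could be written down for the phase 1.1 review task. The assignments loosely mirror what is said later in the meeting, but the "informed" entry is a guess and none of this is an official record.

```python
# Illustrative sketch of a RACI entry for the phase 1.1 "review existing work"
# task. Role assignments are approximations from the discussion, not official.
phase_1_1_raci = {
    "task": "Review work already done on artifact/container distribution",
    "responsible": ["Kubermatic"],                       # does the work
    "accountable": ["CNCF"],                             # oversees / signs off
    "consulted": ["SIG K8s Infra", "cloud credit donors"],
    "informed": ["kubernetes community"],                # assumption, not stated
}

for role, who in phase_1_1_raci.items():
    print(f"{role:>12}: {who}")
```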
F
All right, if we could go back to phase one, I'll walk through the RACIs for the phase one tasks. So, and again, just let us know if you have suggestions for what would be more helpful, but we've asked Kubermatic to take a look at the work that's already been done and advise if there are any recommended changes, and so Kubermatic is going to be responsible for that task.
F
CNCF is going to oversee that task, and of course SIG K8s Infra, II, and the cloud credit donors are going to be consulted. Then the next task is really to review the recommendations, provide feedback, and then decide what recommendations to incorporate in the final architecture design; that we see as a SIG K8s Infra task. SIG K8s Infra is, of course, very much the decision maker in that, but the others who are going to be impacted would also be consulted, along with CNCF. And then 1.3 and 1.4, this is really around...
F
And then we move to implementation, and who's going to be responsible for different tasks will be reflected in the detailed project plan and timeline, and then in the individual issues and tickets that are opened in GitHub, and that is going to be...
F
You know, that's community open source work with support from contractors, so the details of that will be determined later. And then we'd like to publish a blog post at the end of this, announcing that this phase of work has been done, and we'd like to engage Kubermatic to prepare the first draft, with input and feedback from others before CNCF publishes. Any questions or comments about the phase one RACI?
H
I'll probably channel Ben and ask about 1.5. Usually we kind of figure out who's available and how much time they can work on that sort of thing. So where do you see folks coming from to help with the tasks?
G
So we, as Kubermatic, will also bring in engineers to work on the tasks themselves, so we are not only consulting there; we're also helping in the development of the implementation. And we also have some people that work at Kubermatic that are already active contributors to various parts of the Kubernetes ecosystem.
H
Okay, that sounds good, Mario. So let's add that, in addition to the known people on the call who work on this stuff; you know, quite a few of us have full-time jobs and there is a limited amount of time, so we'll need additional help to do this. And thanks for doing that, Mario, for sure.
F
So yeah, there will be conversations about that. I'm not quite sure what you mean by adding to the... is there language you wanted to add to the proposed timeline doc?
F
Well, the reason why is because it's going to be at the individual task level, so there are going to be different people responsible for different tasks. Your cloud providers and their assigned FTEs are going to be responsible for, you know, certain things that happen in the cloud providers; there are going to be some tasks that community members who participate in SIG K8s Infra are going to handle; and there are going to be some tasks that are assigned to II, and some tasks that Kubermatic is going to be responsible for. So it's rather complex, and I was just thinking that when we get to the point where we have a detailed project plan, we can develop the RACI at the task level.
F
So,
let's,
let's
talk
about
phase.
F
In parallel, at the same time that Kubermatic is reviewing the work that's already been done for container and artifact distribution, they're also going to be evaluating the work that's already been done for transitioning to multi-cloud for prow and CI, and giving advice on whether they see any opportunities for improvement.
F
That'll be early February, when we're expecting their recommendations, and then there will be a month to review those recommendations, discuss, get consensus, and then finalize the design. And then March 7th is when we would aim to have a detailed project plan and timeline for phase two, and all of this, in terms of the RACI, mirrors...
F
...what we talked about already for phase one, in terms of the recommendations, the review and discussion, and then developing a detailed project plan and timeline. And then out of that would come, you know, issues and tickets created in GitHub for the prow and CI migration. And then, if we could scroll down to see 2.5 and 2.3 through 2.8...
F
Mid-April is when we would hope to stand up the multi-cloud prow and begin the migration to the new prow.
F
So we'd notify the community that they have until July 15th to migrate to the new prow cluster, and here's how you do it. The notification would actually happen in April, after the multi-cloud prow is stood up, but then we would aim to complete the transition by July 15th, and that date was chosen based on the code freeze of the 1.28 release.
B
It's more likely that if we do it during code freeze and something goes wrong, we can kind of back things up, and, guess what, the code isn't being changed; the only things that are really happening are either really important fixes or people trying to test their CI. It seemed like a good time to do this.
H
So the other one was: I have a feeling that we're going to keep our current prow and, like, let it be the master thing that sends out jobs to the other prows that are running elsewhere; maybe we'll end up with something like that. So I don't see us shutting down the current prow, but we will, somehow, one way or another, either be migrating the jobs directly or setting up our prow to distribute the jobs out; we'll be doing something like that.
H
I feel we haven't talked about it enough to come up with, like, a design at this point, no.
H
Yeah, and then we also have to talk about TestGrid, because we need a single TestGrid that stitches all these things together too, and I think we'll need some deeper design, and maybe we'll need some engagement with the TestGrid team in Google that is doing some of this stuff. Go ahead.
E
Yeah, I don't know if now is the right time, but I definitely have thoughts about the technical portions here. I think, maybe, though, that this is just sort of a high-level timeline of which things are being worked on, and it might not be the most important to have the technical details exactly correct, like, oh, which part of prow is multi-cloud, or that sort of thing.
G
Yeah, but to give maybe a little bit of insight also into why the CNCF talked to us: Kubermatic and KubeOne are both open source tools that we created, we actually have a multi-cloud prow environment already up and running, and we want to give you some insights into this, so that you can maybe utilize the structure there, and, yeah, so that we can implement some of the stuff that already is in place.
H
Right. The way we've been doing this, Mario, is, you know, it's like changing the engine on an airplane, right? So we are trying to do that model where you're not waiting for something else to be completely ready; we are iterating with the existing systems and extending them outwards to include all the other things. So we are making incremental progress over time rather than a big bang.
H
"Hey, there's a new prow, everybody go there" kind of thing. So that's the model we've been following so far, and that I...
H
It is. Right now this is, like, almost telling us what comes after what, rather than exactly the dates. I don't know if we'll be able to make all the dates, and we don't even know the exact schedule for upcoming releases; we only know when the current release is done. So...
B
I'm gonna channel myself and say I have a feeling we're bikeshedding on the wrong thing. As long as we, the CNCF, can say the dates are flexible, but this is definitely the roadmap that we want to try and follow, and we don't want to blow out the dates by, like, a year, I think we can move on, right? Yep.
E
I think we should communicate at some point, though, that there are some expectations around dates within the project for changing the infra. With my SIG Testing hat on: we're more careful as we go into code freeze, and then especially test freeze. Those are actually aimed at things in the kubernetes repo, but for anything we think might be a risky change, we tend to extend at least test freeze, which follows shortly after code...
E
...freeze, to things like prow itself and the CI infra, so that we're not being too disruptive to the rest of the project.
E
So we don't necessarily, I think, need to change this timeline here or whatever, but as people start to work on these things and are trying to, I don't know, like, if folks at Kubermatic are trying to hit dates, it's worth knowing that there will be some level of the project saying, "okay, during this time window we're not gonna make big changes to the infra."
H
Yeah, and the push might be coming from somewhere else, not people on this call; it might be the release team saying, "hey, we cannot accept this specific date for this specific release, at this time you're going to change it," and so then we're allowed to, you know, rework to newer dates and get that going. So...
B
Yeah, and the other thing is, the engineers that Kubermatic is going to rope in are folks that have worked within the open source Kubernetes ecosystem. If they hear "hey, SIG Release has code freeze, please don't make any changes; I mean, submit PRs, but they're not going to be reviewed for, like, two weeks," they're going to know what's going on.
F
Okay, yeah, so this is a living document; these dates can of course be revised later, and I think once you finalize the design we can refine this plan and adjust dates as necessary.
F
The other phases I would like to maybe not talk about today, because Kubermatic is still getting up to speed on what has already been completed and what hasn't. When they're able to really wrap their heads around what has been completed, what hasn't, you know, where the conversations are, what progress has been made on Equinix, etc., then we can come up with a much more useful phase three roadmap; right now it's very, very general and high level.
F
It just says "evaluate possible tooling," so we'll come back later, after Kubermatic has had time to digest some of those other pieces. But, you know, again, feedback is welcome, and if there is any high-level feedback right now, happy to discuss.
H
I guess the main thing is: how do we enable Kubermatic and the folks who are coming in with all the things that we've already done? I think that'll be the most critical to, you know, the successful start of this couple of projects, the way I see it. So, you know, between II, Arno, me, Ben, I...
H
...guess, like all the people on the call, you know, we can already do asynchronous stuff on Slack, but if we need to, we can jump on a call to work out some stuff; and, you know, we can definitely use the time here in this meeting as well.
F
I think a call would be tremendously helpful, with the SIG leadership and Kubermatic and II, so I have posted a Doodle poll in the Slack channel to see if we can find a time next week when we could all sync.
G
Yeah, and we'll also take any input that we can get, and I think most of our folks are either on the Kubernetes Slack or on the CNCF Slack.
H
Yeah, let's try to use the existing k8s-infra channel, and if we need to spill over we'll spill over, because if people are not there on one channel, we can go ping them.
A
Okay, I think we're good on that for now; we can follow up async, and if anyone can answer Joanna's Doodle, that would be great. Okay, so next thing: we have 10 minutes left. So the next thing is EP, and after that I see Jeffrey.
I
I'll try to be pretty quick here. At the end of last year we've gone through trying to get to a point where we could apply the credits. I worked a lot with Jay, before he left, to set up the Kubernetes org and move all of the many historical Kubernetes accounts that have been created over the years into one place, so we can keep an eye on them. Those we do need to migrate.
I
Some of these might be CI. I'm trying to identify which ones are tied in with email lists, in case we need to reset the passwords, and also maybe get a list of the AWS accounts that SIG K8s Infra already has passwords for, so we don't have to worry about trying to deal with password management; then prioritize this list and probably coordinate with myself and dims or Arno to prioritize and get a few of those over.
H
Yeah, others will know more than I do; I have, like, zero knowledge of the passwords. But, you know, if I can help in any way, happy to.
J
Yeah, since I've been the person who's been setting up all the CNCF Amazon infrastructure, kind of, a few years ago, happy to help you here. So just let me know if any kind of historical context is required from me.
C
But for the most part, the accounts that were added by Caleb and myself over the last year for registry.k8s.io and things like that, I don't actually remember them being anything special; they're just member accounts under the AWS organization, right? So they don't require, like, a weird email confirmation process, and accounts don't have passwords, right? Only users do.
C
You'd have to check whether the users that are associated with a particular account have password login enabled; most of them should not. Most of them should just have an access key pair, like the ones that, you know, are used for CI purposes.
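For reference, a minimal sketch of the check being described: for each IAM user in an account, see whether console password login is enabled and whether access keys exist. It assumes credentials for that account are already configured; this is illustrative, not an audited SIG K8s Infra script.

```python
# Minimal sketch: list IAM users and report whether each has a console
# password (login profile) and how many access keys it has.
import boto3

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        try:
            iam.get_login_profile(UserName=name)
            has_password = True            # console password login is enabled
        except iam.exceptions.NoSuchEntityException:
            has_password = False           # access-key-only user (typical for CI)
        keys = iam.list_access_keys(UserName=name)["AccessKeyMetadata"]
        print(f"{name}: password={has_password}, access_keys={len(keys)}")
```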
J
I'm not sure about those that were created for CI purposes, but there are enough user-created AWS accounts linked to the top-level organization account that should have some kind of credentials besides the regular access key.
I
One of the issues that we're going to experience, for the ones that we don't just want to recreate new accounts for and probably delete or disable, is that in order for AWS accounts to migrate from one organization to another, they need to be detached; and at that point, when they are detached, there is no way to authenticate to them unless they have password authentication. And so this is part of the difficulty: trying to ensure that, once we detach them, we can get into them and move them over as quickly as possible.
K
I have a question, if you don't mind: when you created these accounts, you should have created a root user, right? So, yeah.
K
No, you can create one. That's the whole reason it forces you to tie it to one email: you can just reset it if you don't have access to one, if you didn't create one right away. So we'll just do the password reset; you should get an email, and then you should be able to generate a new password at that point.
I
If we look at the issue, there is a list of all of the email addresses associated with all of the accounts, and I've been going through trying to make sure that I, or someone on this team, has access to those emails for a password reset. And what I'm finding is that some of those email addresses are not associated with the mailing list, or are inaccessible, and those accounts do contain infrastructure that we need to migrate.
A
We have five minutes left, so one last question for you. The one thing we can do here about this issue is that we don't need to migrate all these accounts. We don't really need to migrate all of them, so there's no urgency in doing the migration; I don't really see a huge benefit in doing that. We can recreate those accounts in the new organization if needed and work with the different SIGs to see how we can move the different workloads over.
J
That was actually my suggestion as well. Since we're speaking about the accounts that are used for CI purposes, and I believe they don't contain any kind of unique and sensitive data, I believe it would be pretty easy to migrate the workloads but not the accounts themselves. So we can consider the option of just recreating the new accounts and migrating the workloads.
A
Yeah, I think that's basically it; worst case scenario, we just recreate the accounts instead of doing the migration. So we need to talk to the kOps team and SIG Cluster Lifecycle about how we can try and initiate the migration to the new accounts.
A
We can take a look later at the other accounts used for CI, so we don't have to migrate everything. One thing especially: there's a hard limit on migration, because of the limits, you can only send 20 invitations per day. So ultimately we would need to do waves of migration at some point.
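As an illustration of the "waves of migration" idea, here is a minimal sketch of inviting accounts into a new AWS organization in daily batches, assuming the roughly 20-per-day invitation limit mentioned above. The account IDs are placeholders, and error handling and resuming are left out.

```python
# Minimal sketch: invite member accounts into the new organization in daily
# waves so we stay under the per-day invitation limit. Account IDs are
# placeholders; run with credentials for the new org's management account.
import time
import boto3

orgs = boto3.client("organizations")
ACCOUNT_IDS = ["111111111111", "222222222222"]   # placeholder member account IDs
WAVE_SIZE = 20                                    # stay under the daily limit

for i in range(0, len(ACCOUNT_IDS), WAVE_SIZE):
    wave = ACCOUNT_IDS[i : i + WAVE_SIZE]
    for account_id in wave:
        orgs.invite_account_to_organization(
            Target={"Id": account_id, "Type": "ACCOUNT"},
            Notes="SIG K8s Infra account migration",
        )
    if i + WAVE_SIZE < len(ACCOUNT_IDS):
        time.sleep(24 * 60 * 60)                  # wait a day before the next wave
```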
A
Most of them we can reset, because they are basically attached to the same... they are aliases to one mailing list, so we can reset them. Again, we can take a look later, but my priority would be basically the accounts tracked here; that's my priority, and I think you have access to two of them, if I remember correctly. Yeah. So, again, trying to basically do that manually first, for those three accounts, and then figure out how we can migrate the CI accounts.
B
Can I slip in? Because I'm gonna have to leave in a minute too. Yeah, I'm probably going to calendar-snipe several of you over the next couple weeks, because I want to get spun up faster, and I also have a bunch of specific questions that probably don't need to be in the full SIG K8s Infra meeting; it's really just making sure my brain is up to date. So keep that in mind. For example, I do want a history lesson, but yeah, that doesn't have to take up the whole meeting. Another...
B
The other thing is, what I would love is, for the next meeting, we make sure that the timeline that was proposed has actually been looked at and feedback has been welcomed, so to speak. But also, let's start looking at, like, the registry.k8s.io board; let's start trying to figure out what action items we can actually start tackling. I'm gonna be aggressive about all of this.
E
I'll answer that some other time, GV. This is actually the main reason that I'm attending: I don't know that I'll be doing a lot of k8s infra this year, but I want to make sure that I'm around to pass along context on everything. And also, from the SIG Testing point of view, though, I am also working on getting other people in charge of that.