From YouTube: Kubernetes WG K8s Infra - 2021-09-15
A: Welcome everyone. It is September 15, and this is the WG K8s Infra meeting. Just a reminder that this meeting is under the code of conduct, so please be excellent to each other during this meeting.
A: Okay, is that...?

A: Yeah, basically, why didn't...
A: Yeah, but I was expecting something less, because I didn't finish getting all of the six categories. So those are basically the 5k-node ones and the two kinds of tests; right now those are exclusively the periodics. So I think over the next month they plan to migrate all the jobs, whether presubmit or periodic, but I was expecting less.
D: My brain may have flipped those to 205. So that pushes us to, let's round up and call it 300 a month, which will put us in a spot where we are out of money before the end of the year. If I can do math right: 300 times... yeah.
D: Yeah, so, congratulations: we've just crossed the threshold from not having to care to having to care, and now we will need to start thinking about how we manage our spend. I'm glad we got here; I was waiting for us to get to this place.
D: You mean like the committed use discounts? Yep, maybe. I don't know how those would work in this case; I just don't have any context on how they work contractually, but maybe.
D: I also wonder if Google will apply committed use discounts to credit-paid services. It seems like the sort of thing that there might be a carve-out for in the rules, so I'll have to go figure that out. Okay, the other side of it is...
D: Yes, I feel like we now have to choose: we could migrate the scalability tests, or we could migrate the artifact binaries, but certainly not both before we handle the mirroring issues.
A: About the binaries, we don't really have too much to migrate. Are you talking about the system packages or just the binaries?
F: CRI plugins, things like that; anything that's hosted in the kubernetes-release bucket right now.
D: Oh wow, that's even bigger than the last time I looked. We should pat ourselves on the back for a minute and then start thinking about how we mitigate these things.
F: So, I don't know, maybe there's also the fun consensus-building question of which tests we actually need to run and how frequently we actually need to run them. That's going to be a necessary step too, but I would agree that reducing our artifact hosting costs is probably the biggest blocker we have for proceeding.
D: I do get the sense that we run a lot of CI that we really don't need to, but that's a much more complicated conversation.
F: Oh yeah, welcome to the cost of keeping the triage board, go.k8s.io/triage, up to date.
F: A BigQuery query against... well, this isn't even running against the data set of everything. We have not yet migrated over the builds data set, which is sort of all the historical data about all builds for all prow jobs.
F: Oh no, darn, I thought I could get away with having you spiel the rest of the meeting. Okay, so I'll share my screen and we'll walk through some of the links that are in the meeting agenda.
F: I just kind of wanted to give a status update on where I think we're at with the work that we said we're going to do during this release cycle. So the first one is on converting this working group to a SIG.
F: Those of you who check the mailing lists may have noticed that I just called for an updated revision to the charter. I'm asking for plus-ones from a supermajority of steering and from a chair or tech lead of all of the stakeholder SIGs for this working group.
F: The biggest sticking point for getting this merged was that folks wanted clarification in the charter, so we have sort of completely rewritten our charter to look more like a SIG's.
F: Our old charter was very aspirational, I think, and very focused on the specific pieces of infrastructure, but less so on the context of the services that we provide to the project. So, broadly speaking, the SIG is responsible for anything that you could reasonably expect to get from an infrastructure-as-a-service provider, but we're not limiting our scope just to compute, network, and storage, because many clouds offer more resources than that. I'm thinking specifically of the secrets and keys and Google Groups that we manage with our tooling today.
F: So I call out sort of the generic stuff, as well as the fact that we own policy definition: what's in, what's out, how we should spend the money, et cetera, plus reports for transparency purposes. Because we started out as a working group, there are a lot of cross-cutting and externally facing processes.
F: So I try to spell out the generics, in terms of: we collaborate with people on things like access policies, artifact hosting, and so on and so forth, and there are specific examples for each of these things to clarify what we will do and what we won't do.
F: As I sort of said, I feel like we tried to navigate this process by suggesting: let's not attempt to recharter the world, let's just sort of change the letters. It seemed like we could not prevent everybody from wanting us to recharter the world a bit, but my view of it is that hopefully we now spell things out with much more clarity.
F: So our board is pretty full, which is why I opted to cherry-pick things. The way I take a look at our board is: I look for things that have the current milestone applied, but I'm specifically going to look for... Kubernetes. That's the next thing on the agenda, and I didn't bother linking it, and I don't even see it here. What if I search for 'data set' here?
F: So we just kind of talked about this. Let's see.
F: So Arno, I think, has volunteered to maybe work on the next part of this, but if it's dropped off, I can get to it at some point. I still want us to do this during this release cycle, even with the extra billing that might bring to the table. But I will ask the group, given the discussion we just had: do you think I should press pause on this, or should I at least make sure that we can flip back if we migrate over and suddenly realize there is too much cost here?
F: Okay, so Prow dumps all of its results into GCS, which is wonderful and publicly accessible, and everything is at well-known URIs, but GCS is not a database.
F: One is all the builds ever, one is all the builds in the last week, and one is all the builds in the last day. That data set is then consumed by tools like go.k8s.io/triage, to give more interactive drill-down stuff, as well as by periodic jobs that run queries against the data set to produce JSON files, which answer questions like: what are the flakiest jobs? What are the flakiest tests in those jobs?
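A minimal sketch of the kind of query those periodic jobs run, using the google-cloud-bigquery Python client. The table and column names here (`k8s-gubernator.build.week`, `job`, `result`) are assumptions based on the all/week/day description above, not confirmed names; substitute whatever the real public data set uses.

```python
# Hypothetical sketch: rank prow jobs by failure rate over the last week.
# Table and column names are assumed from the description above.
from google.cloud import bigquery

client = bigquery.Client(project="your-billing-project")  # your project pays for the scan

QUERY = """
SELECT
  job,
  COUNT(*) AS runs,
  COUNTIF(result != 'SUCCESS') / COUNT(*) AS failure_rate
FROM `k8s-gubernator.build.week`
GROUP BY job
HAVING runs >= 20
ORDER BY failure_rate DESC
LIMIT 25
"""

for row in client.query(QUERY).result():
    print(f"{row.job}: {row.failure_rate:.1%} of {row.runs} runs failed")
```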
F: One of the things I think has impeded it being more useful to contributors is that people are scared: oh no, it's BigQuery, that means I'm going to have to pay to run queries against it. Which is false in one sense: today it's a publicly available data set, though you do have to put up the costs of running your own queries against it.
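Since the data set is public but queries bill the caller, a dry run can estimate the scan size before spending anything; a rough sketch, reusing the client and QUERY from the previous example:

```python
# Dry runs are free: BigQuery reports how many bytes the query would scan
# without executing it, which you can compare against the monthly free tier.
config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(QUERY, job_config=config)
print(f"Query would scan {job.total_bytes_processed / 2**30:.2f} GiB")
```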
F: We should be prudent about making sure we can flip back if we switch over and suddenly realize that the costs are much larger than we anticipated. Any questions on that?
F: Thank you for the copious notes. Migrating the release artifacts: I don't know what to link for that. I'll try dl.k8s.io.
F: So dl.k8s.io, as most of you probably know (or if you don't, you've definitely used it in your lifetime if you run a Kubernetes cluster at all), is the URI that is used for all of the links from the changelog and release notes for Kubernetes.
F: So we try to get people going to this URI instead of hard-coded URIs that include a GCS bucket name. This is using the wonderful nginx-based redirector that Tim wrote a while ago and that I cannot believe we're still using; but thanks to that, we're able to change that URI to point to different GCS buckets depending on what people are asking for, which is how we've been able to change all of the requests for CI builds to go to a community-owned bucket.
F: This issue covers doing the same thing, but for all of the release builds and all of the other release artifacts, because those currently live in a google.com project. The thinking was, we need to sort that out. There are a number of people who unfortunately have the old bucket hard-coded in the URI, and so we're going to need to leave that around for quite some time.
F: It's probably our highest-cost item, more cost than scale tests, more cost than container images. So the thinking was, instead of flipping everything over literally all at once, we could do a quick analysis of the nginx logs to see which binaries people are hitting the most, and we could sort of shard the redirects over. We could choose to say: we're going to migrate all of the requests for a given release of Kubernetes to the community-hosted bucket, or we're going to flip over all of the requests for the kubectl binary, things of that nature. I feel like the largest blocker is consensus on whether or not we need to put file promotion into the critical path here.
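To illustrate the sharding idea, here is a toy sketch of the routing decision only (not the actual nginx config; bucket names, paths, and the migrated sets are all placeholders):

```python
# Toy sketch: send traffic for selected releases or binaries to the
# community-owned bucket, and everything else to the legacy bucket.
# All names here are placeholders, not the real buckets.
LEGACY = "https://storage.googleapis.com/legacy-release-bucket"
COMMUNITY = "https://storage.googleapis.com/community-release-bucket"

MIGRATED_RELEASES = {"v1.22.1"}   # shard by release...
MIGRATED_BINARIES = {"kubectl"}   # ...and/or by binary name

def redirect_target(path: str) -> str:
    """Map a dl.k8s.io-style path, e.g. release/v1.22.1/bin/linux/amd64/kubectl,
    to the bucket that should serve it."""
    parts = path.strip("/").split("/")
    release = parts[1] if len(parts) > 1 else ""
    binary = parts[-1]
    migrated = release in MIGRATED_RELEASES or binary in MIGRATED_BINARIES
    bucket = COMMUNITY if migrated else LEGACY
    return f"{bucket}/{path.strip('/')}"

assert redirect_target("release/v1.22.1/bin/linux/amd64/kubectl").startswith(COMMUNITY)
```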
F: But thankfully, the release engineering team just did a big push over the last week to set up automated file promotion, which looks very similar to how we do container image promotion.
D: Yeah, now that I have a better understanding of the order of magnitude here, I don't see how we can move anything appreciable. Let me back up: I'm making an assumption that the vast majority of the cost is coming from fairly recent versions. That may be unfounded, but I imagine there's not a ton of people downloading Kubernetes 1.8 at this point, so even moving just the newest releases would still move the lion's share of the cost. And so, until we know that we have the multi-cloud mirroring actually working...
D: ...I don't see how we can tackle this in earnest. And I think, as a second-order thing, we need to figure out: why is this so big, and how do we mitigate that? Is that better CDN use, or do we need smaller binaries, smaller tarballs? I imagine that in the giant tarballs there's stuff that people don't really need to download. I don't know; we should take a look at that and figure out what the cost per artifact is.
D: Right, sure, but it's opt-in as opposed to opt-out, right? Maybe we should consider switching PRs to: we only run CI when somebody says it's time to run CI on this; or maybe we run it a max number of times per day; or maybe you have to manually re-trigger it, or something. What I'm saying is, I think we're very generous with our CI runs right now, and I wonder how long we can afford to be that generous.
F: I think it could be very informative to see whether that's actually true.
F: The issue is that all of the work they've done is based on GCS access logs, and we cannot provide those for this bucket. It would have to be a different pipeline, one that looks at the output of the k8s.io app that runs on the aaa cluster, which is something they don't already have access to. We can give them access to that.
F: We can give that a shot; just forecasting my own availability, I don't see it as likely that I get to that soon. Okay, that's fair. I think that... so, our Google Cloud Logging retention is whatever the default is, which I believe goes at least six weeks back, which would at least give us a quick gut check on what the most popular binary is.
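For that kind of gut check, a sketch using the google-cloud-logging client to tally the most-requested paths over the retained window. The filter string and the payload fields are assumptions; the real format depends on how the redirector's logs are written.

```python
# Rough sketch: count which binaries appear most in recent request logs.
# The filter and the httpRequest payload shape are assumptions.
from collections import Counter
from google.cloud import logging

client = logging.Client(project="your-project")  # placeholder project
hits = Counter()

entries = client.list_entries(
    filter_='httpRequest.requestUrl:"/release/"'  # assumed log filter
)
for entry in entries:
    url = (entry.http_request or {}).get("requestUrl", "")
    hits[url.rstrip("/").rsplit("/", 1)[-1]] += 1  # last path segment

for binary, count in hits.most_common(10):
    print(f"{binary}: {count}")
```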
B: Reasonable, okay. I'm looking at the comparison for that binary finding; you might be able to use some of our code for that, if the logs are the same. Or, if you want to do the filtered version, I don't know what tooling would be best to make that super simple and not take a lot of time. But if you can do that, going six weeks back and just filtering to identify the binary is a good first step.
B: I can escalate this up to the CNCF legal team and try to get some...
F: A technical implementation question from me: right now we're dumping logs every day (I think it's actually every hour of every day) for every bucket that is publicly accessible hosting release artifacts, be those binaries or container images, and those all end up in the same access-logs bucket. So we have a bucket.
F: So the question is: do we need a similar level of expiration for whatever the resulting BigQuery data set is that we're constructing from this? Because I feel like we've all been iterating rapidly, and if need be, you can blow away the data set and reconstruct it from scratch from the GCS bucket. I'm wondering at what point we need to treat the data set as the source of truth, as sacred. And I ask this...
F: Unless it freaks anybody out that we're just leaving that perpetually increasing, that's the way it's going to be until we hear otherwise from CNCF's legal team.
A: The way I see the BigQuery data set right now (I could be wrong) is: we take the job, extract the raw information from the GCS bucket, and put it into the BigQuery data set, and later run some analysis. So we can basically reduce the period of retention to three months, because we already have everything in BigQuery.
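If the group settles on roughly three months of retention for the raw logs, a GCS lifecycle rule can enforce it; a minimal sketch with the google-cloud-storage client, assuming a placeholder bucket name:

```python
# Minimal sketch: expire raw access-log objects after ~3 months, since the
# extracted rows already live in the BigQuery data set.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("k8s-access-logs")   # placeholder bucket name
bucket.add_lifecycle_delete_rule(age=90)        # delete objects older than 90 days
bucket.patch()                                  # apply the lifecycle change
```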
F: I think basically where I'm landing is: I will ask this question more formally at a later date, and it's going to look like: I would like a design document or a project plan that describes the retention policy, how and when we're going to roll things up, and how we're going to handle GDPR, if we need to handle GDPR.
B: I really appreciate you bringing this forward and being super clear with boundaries, and I agree with that. We really do need this shared agreement on the design document and retention policy, with the backing of the CNCF legal team letting us know that we're on the right track.
A: Not really, because for scalability we need to basically start the migration. I don't have a timeline for this. I have a meeting with them tomorrow, so I can follow up. Okay.
F: This is the sort of thing that does have a release boundary attached to it, because it comes down to the images that are used in CI testing to qualify Kubernetes, both for release and also to certify Kubernetes components, and we've reached the point where there are individual projects.
F: So this isn't just all the generic e2e testing; these are projects that are generally for things like authenticated image pulling. We have both the authenticated-image-pulling project and the k8s-authenticated-test project, and what is needed is for us to reach out to the release team and, more specifically, the team in charge of the tests that use these images, to understand whether the tests are still valid or whether we can get rid of them.
F: ...whether the tests need to be rewritten to be valid, and then whether we need to create new special-case staging repositories that have authenticated access turned on, things of that nature. Because I think we were talking about how, architecturally, we don't see it as feasible to make certain images hosted in k8s.gcr.io require authentication while leaving everything else readable to all; it would require a separate registry that's just for the purposes of testing authenticated pulls. The reason I talk about the validity of these tests, especially when it comes to performance, is this:
F: It really comes down to the cluster operators to make sure, if they're doing air-gap style testing and they pull these images down and then push them to some other registry: are they actually pushing them to registries that require authentication, or are these tests passing when pointed at a publicly accessible registry? Is the feature even being tested?
B: I have some thoughts on this that maybe we could bring around to SIG Testing, or sorry, to the release team. Is it really the validity of the test? Basically, currently we're reaching out to a public registry, and I wonder, in the same way that we run proxies and things for various tests internally, whether we couldn't spin up a proxy, or sorry, a registry, inside of the test.
B: So at the beginning it spins up a registry, with an authenticated image, that is actually running in the cluster and that you need to use, and then the tests come back and bring that forward, if that makes sense. So, for air-gap purposes, you're shipping an image that is the infrastructure necessary to actually pass the conformance test.
B: Cool, I'll take an action item to go and engage with the release team. I don't know if I need to get feedback on that; or, because we also write the tests, I can just put it on the list of things for us to do.
B: Yeah, that'd be nice. Cool, we'll take that as an action item to engage with SIG Node.
F: I found that about half of them are unused. Oh boy, looking at the submit queue and the mungegithub image brought back some memories. But also, I've managed to identify most of the images that we do actually have to work on migrating, and I've made some progress on migrating them.
F: So briefly: kubekins has been fully migrated over, the kind runtime environment has been fully ported over, and a number of other things are partly figured out. There are about 37 images that need to be migrated. In total, there are about 2,400 prow job configs that need to be updated, and there are about 200 cloud build files that need to be updated, all across the project.
A: Sorry, Eddie. Okay, so the one thing I want to raise: I noticed we have the meeting on the first day of KubeCon, so I don't know if we should do something hybrid; I need to talk with staff about this, because we have a meeting on the first day of KubeCon. So can we do something?
A: You know, virtually or whatever. I'm not sure one hour is enough; I was thinking like an hour and a half or two hours of a hallway-track style Zoom call for people interested, so they can come and ask questions. But two hours...
F: This basically gets us to time. Hippie, I apologize, I did not time-box my stuff well enough to leave room for your thing on the agenda. Is there anything...?
B: We can do it first on the agenda next time, I think. If you're interested in this, please just reach out to me directly; I'd love to get your feedback and thoughts. I'm going to be meeting, hopefully, with SIG... what's it called... the steering committee early next week, and this is kind of branching off beyond us.
B: We have the way that we brought together all of the companies and the people in this room to provide the infrastructure for a single project, and if we think about how we're going to bring that into the whole of Kubernetes, sorry, the Cloud Native Computing Foundation as a whole, and then, for me, even beyond that, to nations and neighborhoods everywhere: how do we do this together?
B: I would love to; this is something I've been passionate about for years, and so I'm trying to navigate this with grace and inclusion. If you're interested, just reach out; otherwise, let's put this higher on the agenda next time. All right.
F: That sounds good. Yeah, I'm going to edit the meeting notes to set up next week's agenda, and I'll put this first on there. Thanks, Aaron. Okay, thank you everybody for showing up. I hope you have a great rest of your day, and thank you for your time.