From YouTube: Kubernetes SIG K8s Infra - 2021-09-29
A
All right, hi everybody. Today is Wednesday, September 29th, and you are at the SIG K8s Infra bi-weekly meeting. I am your host, Aaron Crickenberger, aka Aaron of SIG Beard, also known as spiffxp at all the places. We are all being recorded: this meeting will be posted publicly to YouTube later, and we can all go watch ourselves adhere to the Kubernetes code of conduct by basically being our very best selves to each other.
A
So I wanted to talk about this and see if we can get that merged this week, or maybe the next week. So, where's the pull request?
D
Yeah, I can give a quick intro. I'm definitely honored and privileged to be called a special guest; I don't think it's that serious, but much appreciated. I'm Jim Angel, active in SIG Docs and SIG Release. I've seen some of you folks around in the community, and yeah.
D
So one of the things that came in to the security channel for SIG Docs recently, or I should say six months ago or so, was the lack of CAA records on the kubernetes.io domain. For those who aren't very familiar with CAA records: they are basically the validating authority over who can issue certificates against any FQDN or any domain.
D
What happens is, you can have a list of certificate authorities that act as an approved list of who can issue kubernetes.io certificates. Today there's nothing set. So, for example, if you had a way to validate ownership, either by running a service, or DNS queries, or some sort of way to circumvent this, you could potentially issue a certificate representing that service as being kubernetes.io.
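The check a CA is required to perform against CAA records (RFC 8659) can be sketched in a few lines of Python; the domain-to-record mapping below is an illustrative assumption, not the real kubernetes.io zone data:

```python
# Minimal sketch of a CA's CAA decision (RFC 8659). The record set here is a
# hypothetical example, not the actual kubernetes.io records.
CAA_RECORDS = {
    "kubernetes.io": [("issue", "letsencrypt.org"), ("issue", "pki.goog")],
}

def may_issue(domain: str, ca_identifier: str) -> bool:
    """Return True if `ca_identifier` may issue a certificate for `domain`.

    An empty record set means issuance is unrestricted, which is the
    situation described above for kubernetes.io today.
    """
    records = CAA_RECORDS.get(domain, [])
    if not records:
        return True  # no CAA records published: any CA may issue
    return any(tag == "issue" and value == ca_identifier for tag, value in records)

print(may_issue("kubernetes.io", "letsencrypt.org"))  # True: on the allow list
print(may_issue("kubernetes.io", "digicert.com"))     # False: must refuse
print(may_issue("example.org", "digicert.com"))       # True: no records published
```

A real resolver also climbs the DNS tree: a CAA set published at kubernetes.io covers subdomains unless a subdomain publishes its own records.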
D
There is a public log of all the certificates issued. I don't know the time frame of how far back that public log goes, but you can search this public log, or public records, of all the certificates issued for domains, and what I came back with on the kubernetes.io one was Google PKI at some point in the past.
D
That is not one of those two CAs, so it will fail once that PR merges. I think the risk is relatively low, in the sense that we can easily revert this. My fear, or my reluctance, would be unintended consequences. I can't foresee any, and hopefully I'm not foot-gunning here, but when we're talking about certificates, hopefully the renewal process and the actual act of issuing certificates isn't done just in time: it's done before a certificate expires, before a service goes live, or something of that nature.
D
So I anticipate it to be very low risk. One thing that accompanied the PR is: we could send out a message to the wider kubernetes.dev community if we wanted. My fear is that it might be unnecessary noise in the wider community; it might raise concern where that's not needed.
D
However, I'm not really against the idea one way or the other, so we can definitely raise visibility if we think it would help; it adds more signal than noise, in my opinion. I think we merge this, see what happens, and if nothing happens within a week or two, we open the k8s.io PR and merge that. We might even consider at some point trimming that list down, because as of right now I'm only aware of Netlify being the issuer of kubernetes.io and k8s.io certificates.
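In zone-file syntax, an allow-list like the one discussed here looks roughly as follows; the issuer values and TTL are assumptions for illustration, not the contents of the actual PR:

```
kubernetes.io.  3600  IN  CAA  0 issue "letsencrypt.org"
kubernetes.io.  3600  IN  CAA  0 issue "pki.goog"
```

Once published, the records can be checked with `dig kubernetes.io CAA +short`, and any CA whose identifier does not appear in an `issue` tag is obliged to refuse issuance.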
A
Okay, yeah, I've sort of been pressing every button here, because I didn't understand the full context of the change. I appreciate you coming by to explain it; I think you sort of provided that on the PR as well. I just didn't feel like I had consensus from enough people to say, let's just merge it and find out what happens. So, for your historical context:
A
Tim Hockin: we now use GKE's managed certificates for the majority of our stuff. I think we have one lone cert-manager instance hanging out there that still goes through Let's Encrypt. I agree with the plan; I was planning on pressing the approve button during this meeting. And raising the signal maybe sounds like a good idea. I view it as an asking-forgiveness-not-permission sort of thing: just a "hey, FYI, if any of you were doing something strange like trying to issue your own kubernetes.io certificate", and that.
D
Cool, that sounds good to me. The one other thing, and I don't think we're going to get an answer here, is that looking at historical records of who's issued certificates for these domains, the k8s.io one also includes AWS PKI, or certificate authority. I have no context on where that issuer would have come from; at some point that happened.
D
I don't know if it's actively happening, but one of my ideas is: let's not break anything that previously exists in those historical records. We'll include those as being allowed, per se, and then we can potentially evaluate and make the move from there. But I wanted to raise awareness; I thought that was a little funky from the k8s.io perspective.
A
I'm curious, and maybe I just didn't see it: do you have links to sort of the forensics that you did, to go look up, you know, how you determined this, I guess?
D
Yeah, definitely, and it's in the PR, I believe. I'll pull up the exact link, because I had this giant synopsis of how I was testing, because I wasn't totally comfortable with OctoDNS, and so it's buried in there. Let me figure out what that actual link is. It's still relatively new to me as well: I'm not sure exactly where this historical record is kept, housed, maintained, or where the data is coming from. But the fact that it returned the actively valid ones that I was aware of, plus some I wasn't aware of, made me feel pretty confident in the details. I'll send the link in chat.
A
Awesome. Jim, as a SIG Docs person, you might have context on the next DNS-related PR I wanted to unblock as well. Basically, there was a request put in eons ago to get kubernetes.io Search Console access for the website for SIG Docs maintainers, so that you could better understand who's using the website and how. That seemed like a perfectly reasonable request, but it was vaguely unclear who already had access to it. At this point I have the same kind of reasoning: Arnaud verifies the site for a kubernetes.io account that we own, and from there I think we can work on delegating that out. I'm not sure this one requires as much visibility to kubernetes.dev in terms of raising signal; I think I am going to go make sure Tim Hockin and some of the other usual suspects are made aware.
D
Yeah, and just some quick context on that. In the early, early docs days, before my time, before a lot of folks' times, when docs was stood up, there was a Google team actually owning that Search Console. Those folks have, I believe, either moved on to different roles or different jobs outside of the kubernetes community, and we ultimately got to a point where I was working with Zach on getting a new set of tools plugged into analytics, you name it. So I think this is just an artifact of the folks who previously had access having moved on, and that access no longer being given to the current docs co-chairs. Right now I think there are a few of these requests that might come through. I have no problem with them merging as is, but I believe the historical context that's missing here is that those who did have access went away.
A
I super appreciate that. Cool, thanks so much for your time, Jim. Oh, oh no, and...
A
All right then, I will follow up with you on that offline. Okay. Personally, I'm trying to get to a point, and maybe it's a bad single point of failure, where there's one account that is the Workspace super admin and also has the ability to hand out credentials for things like Analytics and the Search Console and whatnot. I'd rather see if we can get that hooked up, but maybe I'm creating too much of a SPOF, so we'll take it offline.
A
Okay, thanks so much for your time, Jim. I don't know if you feel like sticking around for the rest of this; it'd be great to have you. I'll head back to our notes, since he's just moved the agenda around.
C
Basically, none of the chairs or technical leads of this group will attend the Los Angeles event; we want to do only virtual. I talked to some of the summit staff about that, and they told me we're fine to run our own thing. So if we have a meeting from 10 to 12, we'll be fine, more or less. But let's send a notification out there and say to people: hey, we have something during KubeCon, come talk to us, come and ask a question. That's it.
A
Having this meeting, and then just an extra hour after that, just kind of hanging around for whoever wants to show up. I don't particularly care about having an agenda. I would say, if we talk about future plans, I'd want to make sure that no decisions are made at this time; it's the sort of thing where I want to be inclusive. I think it'd be a great chance to talk about ideas, hopes and dreams, ponies and unicorns, sports cars and horsies. So that sounds good.
A
I feel like, yeah, we probably just ought to have a follow-up conversation to make sure it wouldn't break.
D
Yeah, I threw it in chat there too. I'm going to try to capture these findings on the PR itself, so if anyone else digs up this old issue, or if there's any sort of dust-up from this, even six months or a year from now, it's captured in the PR, as well as in the k8s.io one.
D
But it sounds like all of the certificate authorities that the SSL tool I linked in the chat came back with are at least accounted for, one way or another. So it sounds like they're all valid, and we shouldn't really go about trimming down that list. As far as we know, within our purview, the list that we're providing for the CAA is accurate and valid today.
A
Okay, I don't see Hippie here, so I'm going to assume it's Riaan who's presenting next. Is that correct?
F
He cannot be here today, he's got other engagements, so I'll speak on his behalf. He just wanted to mention, ahead of KubeCon, the Cloud Native Credits program.
F
That's going to launch at KubeCon. It's basically an initiative from the CNCF where they want to include all the providers out there, either through donation of infrastructure or people, or cash donations towards infrastructure, to expand the whole open source community. He's driving very hard to get specifically the kubernetes way of thinking, its way of working, and the code of conduct that we're adhering to, and to get other communities to copy that and do it the same way we do it, and be excellent to each other. So he's driving that quite hard with the CNCF, and he's just letting the folks in infra know that we're pushing this way beyond us, making sure that everybody that's doing open source cloudy things in cooperation with us and the CNCF will have the infrastructure, the people, and the support to do great things. That's what it's all about, so have a look out for that at the CNCF cloud native conference. And if your employer is available to do some things in this space, there will be a page where you click, and we will talk to you about the great things you want to contribute. So that's fantastic. Okay, that is it on that topic.
A
Not at the moment; I'm pretty psyched about it. I've worked a little bit offline with Hippie on this, and yeah, I feel pretty good about the way we do things around here. We'll see more of that.
F
Yeah, he is really trying to get that culture cultivated everywhere we are, so thanks to everybody for cultivating that within our group. The next topic is carried over from last week. I had a look at the data logs for downloading artifacts, and there's just an interesting thing that I noted: in August there was quite a big drop in the number of downloads, but cost stayed roughly the same, which was interesting.
F
I did try to get the slides to load, to show you where we are now, but it is sticky, because we have 26 billion rows of data in Data Studio being processed right now to get this report out. So it's really a massive amount. Since April, we're talking about 85 million gigabytes of data, 13 million unique IP addresses, and 710 unique images that have been downloaded. That's just some information to share; unfortunately, I can't get the live data to come up and show, because Data Studio is just struggling.
A
That's fair; it's good to know. Maybe we need to think, sooner rather than later, about how to aggregate and roll that data up historically.
F
There should be some smarter ways to deal with the data. And then, also out of this information, we are talking to all the different providers to try and see how we can get the cost, and the amount of traffic, shaped in a way that's going to be beneficial for the community, so that by the time we want to get the release artifacts over, we have sufficient room for that in our budget as well.
A
Okay. I was riffing before we actually clicked record, but I just wanted to congratulate everybody on being members of, or attending your first meeting of, kubernetes' newest SIG, SIG K8s Infra. And I wanted to congratulate Arnaud on being one of the inaugural... yes, feel the awkward. And I am here as one of your inaugural technical leads for SIG K8s Infra.
A
I think the main thing, for anybody who's interested, is that I spent some time rewriting our charter to specifically call out the points of collaboration with other things. The biggest point of clarification, or confusion, was that what we're most responsible for is infrastructure, if you think of infrastructure as something that comes from infrastructure-as-a-service providers, or basically from cloud providers.
A
But it's more than just the management of compute, network, and storage, because clouds offer so much more than just raw compute, networking, and storage. So we're responsible for the management and health of anything you can reasonably expect to provision from a cloud. That boundary is drawn to signify that we do not, for example, manage Netlify.
A
We don't manage GitHub. We don't manage Slack. Those are tools that run on top of the infrastructure that we provide: we make sure that there is a GKE cluster that can be accessed by the appropriate people, and so on and so forth.
A
Yes, sadly, we don't yet offer blockchain as a service, but you know, PRs welcome.
A
I really want to thank Arnaud for pushing me to get this done. I think it was long overdue, and I'm glad we are now a SIG.
C
So historically, the mission of this group was the migration from a Google organization to a community organization, and we're working on that. But once the migration is over, the main role of this group will be to maintain, operate, and evolve, or also provide, the new services for the community. So the current charter describes two aspects of this group: the migration, and the current state of k8s infra.
A
Okay. I still personally prioritize moving stuff over more than I prioritize adding new stuff, but I've also started to prioritize reducing tech debt and the operational load of the stuff we've already moved over.
A
I'll pick on SIG Release, because they have been using it the most already. SIG Release is able to approve changes to group memberships that they own without having to bug anybody; in the OWNERS files here, basically, the SIG Release leads have approval rights. It took a little while to shuffle everything into these subdirectories, but it's worth it, because now I don't have nearly as many PRs that I have to handle.
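The delegation pattern described here relies on per-SIG subdirectories carrying their own OWNERS files; a minimal sketch of the shape (the path and alias name are hypothetical, not the exact contents of the kubernetes/k8s.io repo):

```yaml
# groups/sig-release/OWNERS (hypothetical path and alias)
approvers:
  - sig-release-leads   # leads can approve membership changes under this directory
reviewers:
  - sig-release-leads
```

With this in place, Prow's approve plugin lets the listed approvers merge changes under that subdirectory without involving the root approvers.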
A
It means that if you're part of a SIG, you can manage your groups way faster. It's the same sort of self-service model we use for prow jobs. Another one that recently happened: shiny graphs. Basically, when we first provisioned prow a while ago, we ran into capacity issues, even though it seemed like we were definitely giving prow jobs sufficient memory and CPU capacity.
A
The number of IOPS given to a GCP instance depends on how many CPUs it has: that defines how much network it's able to use, which defines how quickly it can talk to network-attached storage, which is all that was available for GKE nodes for a while. But a couple of releases ago, GKE added the ability to provision nodes with local SSDs set up as ephemeral storage, which provides an order of magnitude more performance for IO.
A
So I did a quick experiment: the old network-attached SSDs on the left, the new locally attached SSDs on the right. You can see there's way less throttling happening, so we migrated everything over. I costed out that it actually costs us slightly less money for this setup, and we get significantly better IO; you can see a shiny graph where the amount of throttled IO dropped.
A
So most of our jobs really aren't throttled anymore. What remains is throttling that happens when a node is first provisioned during autoscaling, which is super cool. Sadly, I haven't actually seen much of a performance change in our run times, so I'm not sure that IO was really the point of contention that everybody was hoping or thinking it would be, but I'm glad to at least have taken that off the table.
A
So as we look to optimize our jobs, so people wait around less time for their PRs to merge, we can figure out what the actual things are that we should be optimizing. I also wanted to shout out a contributor who's taken up the fact that we have this groups reconciler, which reconciles all those groups using all those YAML files that I mentioned, but there are no unit tests for the reconciler itself.
A
We have a couple of tests to make sure that the YAML files are formatted correctly, but we don't have any unit tests that verify the reconciler won't accidentally delete all of the groups. So we've had somebody breaking up, or who broke up, the code into more mockable pieces, so we can actually swap out a mock service for the live service and start to make sure that things don't accidentally get deleted.
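The mock-service idea can be sketched as follows; the class and function names are hypothetical, not the actual k8s.io groups reconciler API:

```python
# Sketch of testing a reconciler against a fake service instead of the live API,
# so a unit test can prove it never mass-deletes groups. Names are hypothetical.
class FakeGroupService:
    """Stands in for the live groups API in tests."""
    def __init__(self, existing):
        self.groups = set(existing)
        self.deleted = []  # records every delete call for assertions

    def list_groups(self):
        return set(self.groups)

    def delete_group(self, name):
        self.deleted.append(name)
        self.groups.discard(name)

def reconcile(desired, service):
    """Delete only groups absent from the desired (YAML-derived) state."""
    for name in service.list_groups() - set(desired):
        service.delete_group(name)

svc = FakeGroupService({"sig-release-leads", "stale-group"})
reconcile({"sig-release-leads"}, svc)
print(svc.deleted)         # ['stale-group']
print(sorted(svc.groups))  # ['sig-release-leads']
```

Because the fake records every delete, a test can assert that a given desired state deletes nothing it shouldn't, which is exactly the "don't accidentally delete all the groups" property.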
A
Under the requirements heading, basically, you get to have a staging project if it is to host images that are created by code that lives in one of the GitHub organizations that the kubernetes project manages. Which GitHub organizations are those? You click this link, and you see it's one of these: kubernetes, kubernetes-client, kubernetes-csi, kubernetes-sigs; I'm going to ignore these other two. But basically, those are the rules. An example of what that means: CRI-O used to be part of kubernetes, but is now its own project.
A
It's not part of kubernetes anymore, so it shouldn't get a staging project. But we do allow for exceptions on a case-by-case basis. For example, we do host mirrors for etcd and CoreDNS, since the images for those are bundled as part of the kubernetes release.
A
I just wanted to give a shout out to everybody who helped get this one over the line. Back in January, we renamed the default branch for the repository that we work on from master to main, and that process was relatively quick; we got most of it done within a month or so.
A
This was like the first repo to actually make the move, so we stumbled into all the little roadblocks. But the much harder, more annoying thing was that everybody links to our repo, and they all use master when they do it. So I finally managed to use cs.k8s.io to generate a checklist, and these were all of the repositories that needed pull requests made against them. I just wanted to shout out that everybody listed here at the bottom helped open pull requests to get this over the line.
A
I wanted to give a shout out to eddiezane, er, Eddie Zaneski. Sorry, I just said your username instead of your name. You put up Triage Party for SIG CLI, hooray!
A
While I have you both here, I'm curious, Arnaud and Eddie: did you figure out which GitHub token is backing this? Is it the same token that's used for the release triage instance?
C
Yes. And because the GitHub API limit is now, I think, 5000 requests per hour for the same user, we are fine for the moment. Okay.
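As a back-of-envelope check on sharing one token, the quoted 5000 requests per hour divides across consumers like this (the consumer count of two is an assumption for illustration):

```python
# Rough budget math for one GitHub token shared by multiple Triage Party
# instances. The consumer count is a hypothetical example, not a measured value.
TOKEN_LIMIT_PER_HOUR = 5000
consumers = 2  # e.g. the release instance and the SIG CLI instance
per_consumer_per_minute = TOKEN_LIMIT_PER_HOUR / consumers / 60
print(round(per_consumer_per_minute, 1))  # 41.7
```

Roughly 40 requests per minute per instance is comfortably above what a dashboard refresh needs, which is why sharing the token is fine for the moment.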
A
And then I wanted to give a big thanks to Arnaud. I did a tiny amount of lifting to get this over the line, but it was mostly Arnaud. Well, I guess all I can show you is the login screen; I'll try to authorize Elekto to look at it. Welcome!
A
Logged into the Elekto app, I get to sit back and relax, because there is nothing to do yet, but this is going to be where we do voting for the kubernetes steering committee elections.
A
Next: this is basically like walking the board, but using a Google Doc. I walked through everything that was done since we last talked; now I want to walk through and try to unblock things. We already got unblocked on the DNS stuff. I wanted to check in on migrating cs.k8s.io to k8s infra; I have seen PRs from Jim the second, or the first, whichever. Arnaud has his hand up, so I think I'm going to defer to Arnaud.
C
So basically, I went to the ContribEx meeting today, and I asked if they had interest in basically being the SIG owning code search, since so far anything running inside k8s infra needs to be owned by a SIG or a working group. I asked SIG ContribEx; the co-chairs and the tech leads said they need to discuss this. But ultimately, for now, we are saying we can be the SIG owning code search.
A
I would personally rather unblock Jim. I feel like my main question is just that I'm super curious, or worried, about performance differences. I wonder about standing this up on a slightly different domain for a little bit, so we could make sure that it works before we take away cs.k8s.io, before we shift traffic over. I only say this because I really, really depend on cs.k8s.io a lot. I will definitely use the canary, but if the canary is broken, I want to be able to switch back to the normal thing.
E
Yeah, and that was totally my intention: get it stood up, and we can test it in whatever canary format we want. I wouldn't be comfortable doing a cutover right away; I would want some other tests.
A
I figured, yeah. I just haven't looked at the PRs yet, so if me looking at the PRs is part of what's blocking this, I will seek to review them. I think I would kind of prefer SIG Contributor Experience own this, if they are comfortable, but if it's about unblocking you, I'm cool saying we're the SIG for now. It's really not hard to change the labels later; it's totally fine.
A
Much appreciated. The other one was something I said I would have an update on by this time, and I don't have much of an update. This is basically: I need help figuring out what the plan is for prow.k8s.io. Are we actually going to try to cut this over before some critical dates in the 1.23 release schedule, and if we are, how are we going to do that? I have been furiously proceeding on this; it looks like I'm shuffling around a lot of little things at times, but this is really me realizing there are blockers that we haven't fully articulated yet. So I tried to dump them here, and I tried to break them up into two categories. Like, if we were to migrate prow.k8s.io, let's just assume I could wave a magic wand and suddenly all of the kubernetes components and whatnot that are part of prow.k8s.io run over in k8s-infra land and are community-owned.
A
It is increasingly not so easy to allow something that is not google.com to act as if it is within the walls of google.com. Being able to write to a bucket called gs://kubernetes-jenkins, for example: that's something our build clusters can do right now, and it's conceivably something we want our community cluster to do.
A
But the process to do that might be tougher. So essentially, I've started triaging the things that might block us from being able to run all of those jobs, assuming we managed to provision enough capacity for them all to run on community hardware.
A
"Stop using that bucket and start using this other bucket": I'm not quite sure how we support redirects. So we just need some time to think through the level of breakage that we're talking about, whether it is a matter of "we'll accept the breakage" or whatever. But we do want to proceed with, like, shut everything down and turn everything up.
C
This is a very long conversation, because one way to achieve the migration would be to finish migrating the 2000 jobs the SIGs use, so everything that pushes to kubernetes-jenkins is moved over. That's great, that's one approach. But the problem, from my perspective, is that I don't think SIG K8s Infra can enforce a policy forcing the other SIGs to migrate over the next three months. I feel like this policy should come from SIG Testing, but I'm not sure.
A
Yeah, no, I'm totally happy to take off my SIG K8s Infra hat, put on my SIG Testing hat, and say: yeah, I'm totally willing to put out a threatening deadline. Like, we're going to reduce the capacity of the google.com build cluster, we're just going to cut it in four, and who knows if your job will still run, we'll find out. But if you do actually care that it still runs, you can get it to run over here.
A
I think that's a reasonable "or else" sort of move, but I feel like it's too early to make that kind of statement right now. As you say, I think it is a longer conversation. The other... I guess I'm kind of re-blocking here.
A
The other thing that concerns me is that, based on our discussion last time, we're really kind of thinking that we might be close to our allowable spend, and so I'm not sure that I actually want to work on shifting over too much more CI spend until we understand where we're going to find room for it.
A
So I don't think it's necessarily a technical blocker. It might be a question of optimization, it might be a question of funding, or it might be a question of project policy: do you really need these jobs? Do you really need them to run this frequently, or as presubmits instead of periodics, and so on and so forth?
B
Riaan, Hippie and I talked yesterday too about getting the cost down. Hippie's on vacation right now, but we're going to talk again next week, and I think that's probably a priority too, because once we get the artifact costs down, that's a very big chunk of it.
A
Yes, I agree. So I created an issue a while ago, actually about a year ago-ish, saying: hey, I think we should have a budget, and we should make sure we get alerted if we're going to blow our budget. I feel like it is really important that we do that now, so I bumped this up to critical-urgent. And yeah, maybe I should reconnect with y'all and Hippie to figure out what the plans are to mitigate storage costs.
A
I'm moving things into the blocked column on the board, and I'm linking back to this issue as the blocker.
A
Having said that, I will still try to think through and work through the technical implications of migrating prow. But it sounds like you all have some good ideas there as well, so maybe I can draft a new Google Doc, or pick up the old Google Doc, and start spitballing ideas there, so we can figure out what the options are.
A
Okay, I had it up; I just wanted to shout out that Eddie and Arnaud did get k8s-infra prow up and running. I'll do it again: that was super cool. You can see its heartbeat.
A
Yes, I'm going to stop sharing my screen there. I didn't even get through, I mean, I got through about half of the stuff I put down. I feel like I didn't get a great chance to go through all of the in-progress issues that we've got going on.
B
I don't want to cut you off, Aaron, but this has happened for like the past four, five, six meetings, where we haven't made it through everything on the agenda. Do we need to consider moving this to every week instead of every other week? I know people hate more meetings, but I'll be that guy that suggests it.
A
It's going to drive me a little bananas if we do that. I think let's revisit that as a post-KubeCon thing; I think it's a totally valid question, for sure.
A
Another thing I'm happy to do, depending on how many people here are actually going to show up for this meeting versus go hang out at KubeCon or whatever: I've generally ceded time to other people who want to do demos or talk about the latest cool thing or whatever.
A
I'm happy to talk about this a lot more in Slack, in the k8s-infra channel; I'm pretty active there. It makes me slightly uncomfortable when it turns into Aaron's looking-for-LGTMs channel. I mean, I appreciate all of you who do slap on that LGTM; it helps me move forward, and I generally hope for great things.
A
But
yeah,
okay
for
real
we're
over
time,
thanks
a
whole
bunch,
everybody
see
you
all
in
two
weeks.
Thank
you
have
a
good
one.
Thank.