From YouTube: sig-testing weekly for 20230124
A
Okay, sorry, I'm still waking up this morning, so I'm going to be a bit slow. As usual the agenda is at the bitly link, so I assume you all found it from there, but I'm linking it just in case; feel free to add anything onto the agenda if you think of it. We have a couple of items for today. If anybody is new here and wants to take a minute or two to introduce themselves, feel free to come off mute.
B
This is my first time here. I'm interested in contributing to the Kubernetes project, and I figure the best way to learn is to test the project. So here I am; nice to meet you all.
A
Nice to meet you too, thanks for joining. And yeah, feel free to follow up more in the sig-testing channel and whatnot as well.
C
Hi, I'm Matt Irons. I am likewise new and interested in seeing if there are useful ways to become useful. I've been in Kubernetes-adjacent infrastructure roles in healthcare for about five and a half years, and I'm currently between jobs, so I have more time than I usually do. It's also nice to not be three to six Kubernetes releases in the past for the first time ever.
A
Awesome, welcome. And yeah, same thing: definitely follow up in the sig-testing channel, and I'm sure we can also point you to stuff.
E
Benjamin was working on this: removing Python 2 and updating the kubekins image, the one that the jobs run on. He isn't here, so this is secondhand, but what I got is that with the changes it didn't work, we reverted, and then the bot took these changes and updated the images again with the wrong version. So it started to run with these images that don't have Python 2, but the CI and all the tooling is still not able to use Python 3.
F
Basically, what happened is we bumped the base image for the bootstrap image, and we also moved from Debian buster to bullseye. Python 2 has been removed from bullseye, which is why this is breaking everything, and that's why we need to make sure we maintain compatibility across the different images. In Debian bullseye the command-line binary for Python is now named python3, and that basically breaks the invocation of many scripts everywhere.
E
So this was three days ago, and Benjamin reverted it. But why did the bot pick these images up again last night?
F
I think it was more like... I don't really know the details about this, but I felt like, okay.
G
I have a question. It looks like somebody is using Python 2, right? What's the replacement for that Python 2 script? Is it to upgrade to Python 3, or rewrite it in Go, or do something else? What's the plan for that code?
G
Yeah, so you said that there's some code written in Python 2 today, right? So are we going to replace that code, or does it just need to be upgraded to Python 3 so we can move on?
F
The main idea is basically, and I don't want to say Python 2: basically some code, some logic, has been written in Python. Now it's possible that it's already compatible with Python 3. The problem here is basically the invocation of that logic.
F
The bash script is not finding the right python command to do the invocation. So we can basically say, okay, we're assuming that points to some variant of Python 3, and we can improve the invocation of that Python logic; we can say, okay, we fix that, there are multiple options; or we can say, okay, we drop this. There's a set of Go programs people should be using instead, which would get rid of all those Python scripts we have been keeping for six years.
E
And the other day, someone replaced a lot of Python 2 things in kubernetes/kubernetes.
E
Then yesterday the bot did this again, and things started to fail during the night here in Europe. We were talking about that, and San Juan was asking if he can help, so now we're asking what is needed to remove all this Python 2 stuff and be able to move forward. And this was the point where we were discussing, okay...
H
It should be the scenarios and the bootstrap.py script in test-infra; that's the problem. We had a patch sent to kubernetes/kubernetes, but inside one of the image layers in test-infra there's a copy of some scripts, and test-infra itself moved to Python 3 a long time ago.
H
So there's a script that sort of does the equivalent of doing all the cloning and stuff, and then there's a script where you select one of many scenarios, like "I'm doing an e2e" or "I'm doing a test" or whatever. We don't even necessarily need to migrate off of that to patch this. It should be a really small patch to just Python-3 those couple of scripts, and then we need to bump the images to fix it. In the meantime, we should just roll back the image bump again.
H
Oh no, we should do the Python 2 to 3 migration, but that's something you can just do; there's an automated tool for that at this point.
G
I mean, that's a quick, simple fix; then we can look at moving the jobs around and...
H
Well, yeah, I meant do that, but I left a note just to revert so that we don't block, and then I stepped out for a minute. I mean, it's literally just running 2to3 on a directory, and then we need to bump the images forward to pick that up. But first, to unblock things, we should just roll back the image bump via a revert commit.
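A quick aside for readers: 2to3 ships with CPython and mechanically rewrites Python 2 idioms in place, which is why this is described as a one-command fix. A minimal sketch of the step being discussed; the paths, flags, and example rewrites are illustrative, not taken from the actual commit.

```python
# Rough shape of the migration command over the legacy directories:
#   2to3 --write --nobackups scenarios/ jenkins/bootstrap.py
#
# The kinds of rewrites 2to3 performs (Python 2 forms shown in comments):

# print "scenario starting"        <- Python 2 statement form
print("scenario starting")         # Python 3: print is a function

# except OSError, err:             <- Python 2 exception syntax
try:
    raise OSError("boom")
except OSError as err:             # Python 3 equivalent
    print("caught:", err)
```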
H
Now that we have that, we need to 2to3 the remaining scripts in test-infra, and then we need to do the cycle of bumping the images, which the bot can do for us. But we should probably check that the build succeeded, then bump the bootstrap image in the kubekins-e2e Dockerfile, and then we'll want to bump kubekins forward and make sure that that's safe. And you actually don't have to do anything manual to test that; there's...
H
We have a CI job for this purpose that always uses the latest image. Let me pull up that testgrid.
H
That's how we broke things this time. Otherwise, we have a safeguard where, when we push a new kubekins image, you don't have to immediately bump it: once the image builds and pushes, you go check testgrid and see if it's good, or you revert. But I intentionally made sure it rolled forward the first time, because this should have been a safe change.
H
If he wants to do it, that's fine; it's going to be a very straightforward commit. Just run the 2to3 tool on the jenkins and bootstrap.py scripts and the scenarios directory, and then once that merges we'll have an auto-built bootstrap image, and we'll go bump the kubekins-e2e image to use the new bootstrap image, and then once that auto-builds, there's...
H
No, no, I mean there are just a few small things. It's things like bare print statements that need to be turned into function calls. The reason it's actually crashing in CI isn't even Python compatibility; it's because we're using /usr/bin/env python, and there isn't a python binary on the PATH, there's a python3.
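To make that failure mode concrete: the crash is in interpreter lookup, not in the script's logic. A minimal sketch, with an illustrative script body; the fix is pinning an interpreter name that actually exists on the image.

```python
#!/usr/bin/env python3
# On images built from Debian bullseye there is no `python` on PATH, only
# `python3`, so a script starting with "#!/usr/bin/env python" (or a bash
# wrapper invoking `python scenario.py ...`) dies before any code runs,
# regardless of whether that code is Python-3 compatible.
import sys

print("interpreter:", sys.executable, sys.version.split()[0])
```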
H
I just didn't get it merged, because I didn't think we needed it yet, since we were just rolling back, and I was going to wait until I had some time to shepherd this through. I didn't expect the lingering latest bootstrap image to get auto-picked-up again; that's what bit us. If that hadn't happened, I would have done all of this while closely watching it, after the first issue we had.
H
It is a good point to remind us, though, that the reason we're having this happen is that test-infra has good testing at this point around most of the things in the repo, I would say, but this bootstrap.py stuff is supposed to have been deprecated for years and no one is working on it. No one's testing it, and test-infra doesn't use it anywhere.
H
So if you break bootstrap.py behavior, there's a job you can check somewhere in our canary dashboard. There are a lot more of them now, and I haven't found the...
H
I can't recall which one of these is the one that tests the latest kubekins image, but if we check...
H
Yeah, it's possible that there are some other things like that, but that's just, yeah, it's a distro roll-forward and we are installing the same set of requested packages. So if a binary is missing or something... I mean, again, we've got to roll forward somehow; this environment is super old. So another thing we should consider is forking this image again.
H
It's this kubekins image that's a bit of a nightmare and is out of date, and now we're finding we're running on things that aren't shipping anymore, like Python 2, and that shouldn't be happening. Not just because of CI, but because the reason this started coming up was that an end user, a developer, pinged me that they couldn't build stuff, because it was trying to invoke Python 2 and they don't have Python 2 on their system; because why would you have Python 2 on your system in 2023?
A
Okay, I think in the meanwhile I do also want to make sure we can get to some of the other agenda items, so...
A
It was a good discussion; if there are any last bits that we should comment on, feel free to do it now. And then, yeah, that's unfortunate.
H
I'm going to start a Slack thread with everyone interested in the sig-testing Slack, where we can continue on what we're going to roll forward. Yeah.
F
Sorry, Zoom crashed, my internet is bad. So, on that again: basically I just want to say, because we have an agenda, let's go back to that and talk later about the testing stuff.
H
Okay, folks, I just started one in the sig-testing channel; you can see it, the thread about getting the image fixed. Let's continue there with what we'll do today.
A
Awesome, all right. So next up I just wanted to give a quick thing: thanks, folks, for responding about suggesting chairs. I haven't been following up on that very much yet, but I will do so. Also, if there's anybody else interested, feel free to again ping me on Slack, and we'll figure out exactly what to do to start talking about onboarding new chairs.
A
Also, since I have you here, Ben: maintainer track talks. It sounds like we have a couple of folks that volunteered in the sig-testing channel. Is there anything more to say about that, or are we just leaving it at two folks representing SIG Testing for the maintainer track?
H
I think, yeah, I think we're good on that. So in the new format for submitting you can have more than two people, but then it has to be a panel format, and I think we're not planning to do that. Usually we use the maintainer track slot to have a few folks show up that are interested in knowing more about the SIG.
H
It's usually not super high attendance, but it is a thing that happens, and sometimes we get contributors out of that, that sort of thing. Most SIGs do this. If you want to speak about something else, you should submit to the regular track, but for the maintainer track slot...
H
...we use it to bring you up to speed a bit on what we've been doing the past year, how you can get involved, and what the SIG is. So Patrick and Antonio, our newest tech leads, have volunteered to write and give that talk, and to put a little bit of focus on some of the e2e stuff, to tell people some more about e2e testing and what we've been up to there.
H
So we just need them to submit by the end of the week, and Michelle or Larry can help with that. It's a pretty short form; there's a Slack thread with the form, and I think we're set there this time, but I'd love to have more people talk in the future. We'll also have this at all future KubeCons; this is a recurring thing, so you know, if you've been working in the SIG and you're interested...
H
...in speaking: we have some people spoken for this time, but North America will be coming up, and I'm sure we'll be submitting those forms soon enough as well.
H
We actually talked about this previously, but we're now running up against the deadline where we need to actually submit, which is why this is coming up now. So it's not a last-minute look for folks interested; we've been asking, but we're now hitting the "okay, we have to confirm and submit", and I think we know who's talking now.
A
Yeah, awesome, all right. Let me know if there's anything else on there; otherwise, Sean, I think you've got the API.
I
Yeah. Part of this is going to be, first of all: hi everyone, my name is Sean, and I work with SIG Testing, mostly with TestGrid; I do a lot of TestGrid stuff. Part of this is going to be a status update, and part of this is going to be a request for comment.
I
The first part is that we have a TestGrid API. It's a Go controller that exposes data via HTTP and gRPC that you can get data from, because the existing table endpoint was never really meant to be an API, although I know there are some things, like Sippy for example, that do use it that way.
I
The code is in the TestGrid repo, and I think it's good. I want to try and deploy it to production so that people can start using it, because there's been some chatter in sig-testing along the lines of, oh, if we had this API, it would be nice. I spoke with dims about that, specifically about getting a host. Right now I'm struggling through getting the ingress set up, but after that I want to give it a friendly host, something like testgrid-data.k8s.io or testgrid-api.k8s.io, or something like that.
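For a sense of how a machine consumer might read from the API once a friendly host exists, a minimal sketch. The host name and the /api/v1/dashboards route here are assumptions based on this discussion, not a confirmed contract.

```python
# Hypothetical client for the TestGrid data API discussed above.
# BASE and the route are assumptions; point them at whatever gets deployed.
import json
import urllib.request

BASE = "https://testgrid-api.k8s.io"  # assumed friendly host, not yet live

def list_dashboards() -> dict:
    """Fetch the dashboard listing as JSON over plain HTTP."""
    with urllib.request.urlopen(f"{BASE}/api/v1/dashboards") as resp:
        return json.load(resp)

if __name__ == "__main__":
    for dashboard in list_dashboards().get("dashboards", []):
        print(dashboard.get("name"))
```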
I
I think it's a good idea, but if there are objections or other thoughts or questions, let me know. Oh, I see you have your hand up.
I
The new API is... well, the code is all on GitHub, the API is, and I'm planning to run it in open source on GCP. Well, I say "in open source"... sorry, I am running it on GCP, but it's otherwise just kind of open to the public, and the code is open source, and it'll be publicly accessible so that people can hit it and get data.
I
Excellent question... oh geez, excellent question, Ben! Yes: after I get the API deployed, so that machines can start reading from it...
I
I have plans to rewrite the TestGrid API... I'm sorry, rewrite the TestGrid user interface, and it would be ideal if it used the new API. So I'd like to deploy the API first, but...
H
So, a little bit of a leading question, because I know we have other open things that use the API: if the API for the front end is open source, then that actually means the last thing left, I think, is just the front end itself.
H
Is that right? Yeah. So if we could get some help from some folks at some point on a front end, then the project could actually run an instance, right?
H
This is just, here's the Kubernetes TestGrid, externally facing, on a Kubernetes domain. But unlike basically everything else, that one's just some magic Google stuff; it was all internal, and they have done a huge amount of effort rewriting each component, and in the rewrite disentangling it from Google internals, and convincing management that this is a good path forward, as opposed to all of the huge amounts of internal usage and focusing on that. So I know this has been very tricky to prioritize, and it's been a really long path to get to.
I
So, as far as disentangling goes, my intent is to rewrite the user interface, so we're not porting over the existing one. The new one may look and feel different, and honestly that's probably for the best, because the old one looks... I mean, it looks its age. And it is disentangled, because we're rewriting it entirely in open source, off of an API that is itself open source.
I
Anybody can help me with this, which would be great, because I'm not a front-end engineer and would love help. This is a request for comment, and also, if I get reprioritized or disappeared in the middle of the night: here, here's my work, whatever may happen. On the TestGrid Slack channel I have posted a couple of proofs of concept: one is in Dart, another is in Lit components.
I
At this point I'm quite convinced that Lit components would be the good thing to write it in, and I have some vague ideas around that, because once we write it in Lit components and you roll it up, you get kind of this bundle of static JavaScript files as the buildable artifact. Part of me wants to just host that as static files next to the API.
I
That way it's just one controller that hosts some static files, which are the UI, and the API that's getting the data itself. I've seen that before, and it's not clear to me whether that's best practice or not, so I did want to ask for comment, if anybody's very familiar with front-end stuff or has some strong opinions about how this should all go together.
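A toy sketch of the "one controller hosts both" idea being floated, assuming the rolled-up Lit bundle lands in a ./dist directory and the data API lives under /api/; the directory name, route, and payload are all hypothetical.

```python
# Toy sketch: one process serving the static UI bundle and the data API.
import json
from http.server import HTTPServer, SimpleHTTPRequestHandler

class UIAndAPIHandler(SimpleHTTPRequestHandler):
    def __init__(self, *args, **kwargs):
        # Serve the built static JavaScript bundle out of ./dist.
        super().__init__(*args, directory="dist", **kwargs)

    def do_GET(self):
        if self.path.startswith("/api/"):
            # Stub data endpoint standing in for the real API controller.
            body = json.dumps({"dashboards": []}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            super().do_GET()  # everything else falls through to static files

if __name__ == "__main__":
    HTTPServer(("", 8080), UIAndAPIHandler).serve_forever()
```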
I
Please come talk to me. I would love input on these kinds of vague ideas that I have.
H
I think, when you get ready to move forward with any of this, we should put out a call to the mailing lists and a few other places. I'm not sure with current staffing levels at companies, but I know in the past there have been a couple of companies that have had a vested interest in being able to run a TestGrid for their own Kubernetes-adjacent testing, and we could probably fish again for some help with the front end, right?
I
Those would be some good mailing lists to send that out to.
H
We could also consider, and I would say it would be reasonable in this instance, adding the dev list, the broad Kubernetes project mailing list, and just kind of sending a shout-out there: hey, we're finishing open-sourcing TestGrid, if anybody's interested in helping, come to this repo. It's a little bit unusual, because it's not a repo that's in the organization, but I think folks can make an exception for the circumstance, like, if it were...
H
I think folks will understand. So I would say the Kubernetes dev mailing list, which I think is dev@kubernetes.io (you should double check; it's in the community membership docs), and the SIG Testing mailing list would be good places to send a shout-out to both: here's the open source story, we're working on it, who wants to join, come meet us, and the TestGrid Slack.
I
That's excellent. I think then I will be ready to put out a general request for contributors.
I
Like I said, I have a demo project. It's just in my personal branch now, but it is on TestGrid, and you all can view it and comment on it. My intent is to get that deployed to production, even though it kind of doesn't do anything, and then that way we'll have the entire deployment pipeline, and then I think that would be a great time to call out: hey, here's the code base.
H
I agree. I think once you have something you're developing, a "here's the repo you should contribute to", somewhere around there... that's the time.
H
...to ask, to bypass a lot of bikeshedding and just get things rolling.
I
Sorry, I might have missed part of the question. Can you say that again?
G
So some Kubernetes-adjacent projects, like Knative, also use TestGrid. The API that you're building is with the intent of self-hosting TestGrid entirely, right? Can those projects deploy the API today, or...
I
The current API could be deployed today. There is one kind of instance of TestGrid in open source that Knative and Istio and all of these adjacent projects are also using: if you go to testgrid.k8s.io, you see Istio stuff there, I believe, yeah.
I
No, it's ready for use, the API; I'm combining two things. The user interface is not, but the API is ready for use. It is here; I'm going to give you a link to the controller itself.
I
There, that is the API controller.
I
Anybody could run and deploy it. The API definition itself is defined by a protocol buffer, which is also in there, just in a different folder.
I
Yeah, you can deploy that; it's a pretty basic Go controller, as long as you have access to the Google Cloud Storage that backs TestGrid. Okay.
I
Yeah, absolutely. If you find any gaps, or are interested, or have any improvements, PRs are always welcome. But yeah, the API is there; it's just not deployed in production for k8s, because ingresses and networking are hard. But I hope to do that soon, maybe in the next couple of weeks, depending on what I'm running into, and heaven knows how long that'll take, but I have high hopes.
I
Yeah, you are of course welcome to run it on the community infra, or locally. If you have read permission to the bucket, the API should just work. But it made sense to put this controller with the other controllers, and then when we need to move, we move them all at once.
I
But yeah, thank you for that suggestion. Honestly, if this takes too long, I might come back to doing that and just having the API run somewhere else.
H
Just, if you get stuck on something like configuring Kubernetes ingress, the folks in this space, myself included, would be happy to help. We've got a bunch of Kubernetes people in the room. Yes.
I
I appreciate that. I think the configuration is correct; I think it's doing the wrong thing. But the configuration is also in that repository; it's in the cluster directory.
H
I'm looking at Antonio right now, in SIG Network, knowing full well that ingress is one of the interesting corners, and they're trying to sort it out.
I
I understand. It's good to know that I have support for that, yeah. That's it; I'm going to try and do those things.
F
Yeah, I just want to talk about something really, really old. Back when SIG Testing got bootstrapped, we created GitHub accounts, and they were created under Google emails. So I'm trying to get those credentials and share the responsibility with SIG ContribEx at some point. This is a long shot, and I think you would just have to open some thread internally and say: oh, we have those GitHub accounts.
H
That's another interesting, tricky one that we will have to sort out. It's actually a Google Group, and it's private, because things like password resets or whatever you don't want going to everyone. We'll probably need a kubernetes.io private Google Group and to give a few folks access.
H
If we want to move the email over... For the 1Password vault, Michelle has some overlap there; she should have access to the credentials, and I think she has access to the SIG Testing 1Password now, but it's not something that we've used a lot yet. So there's also probably just going to be a little bit of, let's say...
H
...leadership process we'd need. I'm very familiar with 1Password, but we haven't been using it for anything in SIG Testing so far. I think we have the Zoom credentials in there, but you know, it's not something you're touching often; I think I've just shared the actual credentials directly when folks have needed them, old school.
H
We didn't insert them into that vault, and we need to set up a public email. So I think the thing that we might need from ContribEx slash k8s-infra is a restricted mailing list for the bot user email, because right...
H
I might deflect that a little bit back on ContribEx. On these things we're just kind of operating the infra; we don't just need the account. I'm sure the Googlers are happy to update the account or share the credentials, but I don't want to necessarily ask them to go through... they're already staffing the rotation, keeping it up; I don't want to ask them to...
H
...work that in as another essential. So that's all we need: we should figure out what the rest of us need to do to get that email bootstrapped, and then come back to Michelle and Sean and say, okay, we've got an email for you to use for the account; now let's migrate over the credentials.
H
Okay, because there will also be other people in the project, like you or me or someone, that are more familiar with the infra configs and stuff for email and whatever.
H
We should do that first, because it won't really help to have the... it won't... There's also: we require two-factor enabled. I guess we can do 1Password two-factor.
H
In the past, same thing: we have a set of spare creds, so we'll have a couple of things to move over. But one of them, I feel, will make a lot more sense if we have an email to move it over to, because right now, if you need to do a password reset or something like that, that's going to an internal mailing list.
H
Also, it definitely needs to be restricted access. It should only be a few very trusted community members, like our leads or the infra on-call or something, because the bot has high privileges everywhere. It gets GitHub's vulnerability notifications for repos, which are not otherwise public (I know because I'm still on that mailing list, which seems okay but maybe not intended), and it gets a fair amount of interesting email, besides the ability to password reset. Okay.
H
I'll think with Michelle about the 1Password vault when we're ready to do that.
A
Sounds great, all right. Are there any last-minute things folks want to cover in the last, I'm going to say, five minutes of the meeting?
F
One more interesting thing, about all the projects: later, maybe September or October, or next year, we need to move the Prow control plane. So that's...
H
It's different for Kubernetes; it's more of a mess. The real problem is that that project has credentials to other internal projects, and we just do not have the capacity to migrate all of that right now. We're currently on track to spend like 4 million instead of 3 million this year, and we have a 3 million budget, and most of what that CI is doing is running e2e.
H
The external stuff is already too expensive, and there's a ton of cluster e2e running in the internal accounts, and we can't grant external projects permission to spin up e2e on internal projects. So we would have to move all of the e2e testing as well before we can move the control plane; that's a problem. But what we can do is things like...
H
...we can move more of the non-e2e stuff, or the kind stuff, from either what's already moved to external or from internal, to like Amazon, as we ramp that up this year. But I don't think we're even going to be close this year to having enough overhead to move all of that GCP cluster e2e, and I don't think we're...
H
I don't think we'll be moving the GCP cluster e2e to not-GCP. So having some of Prow internal is granting us access to an uncapped budget for that stuff, and that's part of the reason we're not even more comically over budget at the moment. Until we get the download costs and stuff actually under control, I don't think we can move the control plane, and we can't move some... we can't move the e2e testing.
H
We can't move the control plane because we can't move the e2e testing. We moved the critical stuff already, and that's eating a good chunk.
H
If we continue to move the rest of the long tail of the e2e testing, we won't have space.
H
So the problem is, no one maintains any tooling for this except, hi, me again. There are some shell scripts that you can pass hundreds of environment variables to, to configure how you need the e2e cluster to work, and they can spin up from the latest Kubernetes source code.
H
It would be a huge lift to make all of those jobs do something equivalent on some other environment as-is. The only reason the current environment works is because that source code is in the kubernetes repo and it runs on your PRs, so, between a couple of patches from me here and there, people have to patch it when they make some breaking change to cluster bootstrap.
H
It's really hard to get that sort of thing worked on. I have a lot of experience with that with kind, and kind gets to cheat quite a bit, because in a lot of the places where you would break something, you would actually be breaking kubeadm, so people need to patch that, and that is in-tree. If we switch to something else to run clusters on AWS or something like that, we have to have someone to maintain that tooling.
H
It has to work with Kubernetes source code at head, or ahead of head with people's new patches that aren't merged yet, and you have to rewrite all the CI jobs to do the equivalent cluster configuration. And right now the cluster configuration is a whole bunch of environment variables that get interpreted by that bash.
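To make the "hundreds of environment variables" point concrete, a sketch of how a CI job drives those legacy scripts. The variable names are representative of the in-tree cluster scripts; treat the specific names and values as illustrative rather than a documented interface.

```python
# Sketch: the legacy cluster-up scripts are configured entirely through
# environment variables that the bash interprets (names/values illustrative).
import os
import subprocess

env = dict(
    os.environ,
    NUM_NODES="3",
    KUBE_FEATURE_GATES="SomeAlphaFeature=true",
    KUBE_RUNTIME_CONFIG="api/all=true",
)

# A job wrapper ultimately shells out to something like:
subprocess.run(["./cluster/kube-up.sh"], env=env, check=True)
```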
H
There is a lot that goes into actually bootstrapping a cluster and making it work with the latest changes, and there are all kinds of knobs that you tune, like turning on alpha APIs or whatever, and those things are only kind of standardized by way of component configs or flags. But no one has a tool...
H
...for turning on alpha APIs, setting some runtime config field, deploying the CSI controller, whatever, and then we run e2e tests against it. Okay, yeah. And most tools available... if you wanted to use EKS or something, you can't use EKS with fully open source, latest-greatest changes that aren't even merged yet. There are very few tools right now: we have the cluster scripts, which work on GCE and which no one wants to touch and no one should touch, and then we have kind.
H
We don't have anything else that has managed to maintain stable CI. For example, Cluster API exists and kind of fills this space, but it doesn't have all these option tunings, and last I checked they didn't have reliable conformance CI with bleeding-edge changes yet. I'm sure when they ship releases they say a Kubernetes release is verified, that it's passing conformance and whatever, but not for the stuff running Kubernetes ahead...
H
...at the absolute latest changes. It's a lot of effort to get something like that stable, and, okay, no one's funding this. So even if we start working on that now, and we get some kind of commitment from someone to maintain a tool like this, it's going to be a long tail to then go to each CI job, figure out what the heck those environment variables are actually configuring, and do the equivalent thing in some other real config format.
H
It's a nightmare, and right now it's really, really hard to get anybody to touch it, because the GCE stuff is good enough and it works, and it de facto isn't broken because it's in-tree. But if you said, oh, I'm going to build a new tool that creates clusters on AWS and I'm just going to PR it to kubernetes, no one would permit that. Okay.
H
We have all these cluster tools out of tree, same thing as cloud providers going out of tree, so we have a little bit of a problem there, because we need a cloud provider to run real clusters. So I think we're probably going to wind up doing the smallest thing and just patching the existing legacy GCE stuff to use the out-of-tree GCE provider. Okay.
H
And we used to run presubmits on kops AWS. The reason we don't do that is because we switched from Google paying the bill with a credit card to Amazon paying the bill, and they let the bill lapse for months, and the account got terminated for a while and all the CI broke, just from being out of presubmit for a while.
H
Okay, it's hard to staff that sort of thing. And when you take the stance of convincing Kubernetes developers, no, you have to go patch kind if you actually manage to break it, that only works after a protracted period of showing them that, if something broke, it was actually broken and it isn't just a problem with the tool.
H
Like I said, as is, the GCE scripts are not staffed by anyone from any company at all; myself and one or two other people try to review changes when they're actually necessary, and we generally discourage touching it.
H
And while I might be happy to approve adding new knobs or something, I guarantee others in the project would push back. And I actually do think, if we can get people to put energy behind this, we should move to something else. But I would say that's an equally sized task to the rest of the stuff we're doing in infra, and it has a much bigger question of who's...
H
...maintaining this long term. When we create a GKE cluster and run Prow on it, that isn't going to have a super large long-term overhead; but maintaining a tool that deploys Kubernetes clusters, particularly at unreleased latest source code changes, is actually something that needs quite a lot of ongoing maintenance, and we don't even have a commitment for the current scripts.
H
This is something I brought up in the governing board meeting, even, when they were like, oh, well, just move CI to Amazon and fix the costs. Well, we can move unit and verify and stuff, but moving e2e is going to be actually a really big lift. Or same thing with, oh, move to GitHub Actions: well, GitHub Actions doesn't have Boskos and GCP access and whatever; just even starting from that.
H
So I mean, it's a technically feasible project and something that probably should happen eventually, but I think we have other, much quicker wins. I mean, the bandwidth is mostly the cost anyhow; if we can keep making progress on all the download costs, it won't be a big deal that the e2es are mostly on GCP.
H
But it does mean that, to finish migrating Prow fully to public infrastructure, we have to have space to run the rest of the e2es, and that will happen whenever we finally see the cost shift on the downloads.
H
Yeah, that's kind of a meeting in itself. The short version is basically: Amazon is now committing to also provide three million credits a year, but their commitment is a little bit different. With GCP we get like 3 million credited against the account in January or something like that; with Amazon they're giving us some credit over time, and there's a very clear expectation if you want to receive the full credit and get it on an ongoing basis.
H
You need to be spending it sizably, because they have a zero-sum game going on. My understanding is they have a pool of credits for open source, and we're getting some reasonably large chunk of it allocated our way, but if we don't wind up using it, they will reallocate it to someone else.
H
So we kind of have two problems: one, we're way over the GCP credit and we need to reduce that; but separately, on the Amazon side, we also just need to spin up whatever we can, so that going forward we have it, even if we do something kind of inefficient right now. We can always improve efficiency later, but if we don't use it, we will not continue to receive it. And that's net new for this year; they announced it at the last KubeCon, but...
H
The CNCF is providing some contractors now, and I met with them this morning, but there's other, lower-hanging fruit for getting some spend running and reducing the GCP bill. In terms of our cost: of the public cost, something like 66% is serving downloads to users and stuff, and of the stuff that Google's still paying for internally it's like 75 percent.
H
So if you can shift that to not be crazy, because it's all egress to other clouds and stuff, if we can get that fixed eventually, then we actually have lots of room within the 3 million GCP to run it. And then it's like: is it worth doing this really challenging effort to move it over, and who is stepping up to maintain all of this? That's very hard to find, because you need people with expertise in actually running a cluster, like "the hard way" or something like that...
H
...and maintaining this tool. Most of those people are making money working on a product, and the products don't need to support the absolute latest source code; they need to support releases. So that is a particularly challenging one, and it probably isn't necessary long term. If someone wants it, like if Amazon wants to see more things running on their cloud, we're happy to work with them on that. But those of us here can't necessarily commit to keeping that functioning, even if we could get something spun up.
H
Compared to the harder things, the downloads should be a lot more straightforward. The thing we're running into there is just that users know the current location, and getting people to migrate has a lot of lag.
H
We have Kubernetes subdomains; like, we have registry.k8s.io, and we said move from GCR to registry.k8s.io, and behind that we will reroute to whatever we need.
H
But we haven't done that for all the download stuff yet, and we haven't gotten everyone to switch. Most of the other download things are already at least on a community domain, but they point at GCP. The image hosting was a huge chunk of the cost and was just pointed straight at GCP directly, like GCR. So for the binaries, we just need to update what's behind it, and that will be the first thing I'm asking the contractors to look at. For the container images...
H
...Muhammad actually has a KEP open about stopping publishing to GCR and only publishing to the new registry, because currently...
H
...we only advertise the new one, but we're still actually publishing to both locations.
A
Yeah, thanks everyone; sorry again for going over time, but it ended up being a lot cooler than I was thinking from the initial agenda.
A
It also sounds like there is a lot to follow up on later, so I think the sig-testing channel should probably be busy today and for a bit going forward, but definitely please keep in contact there about any of the interest around these things. I need to find out more about the Amazon stuff as well this year, also.
H
SIG K8s Infra, yes. There's a standing meeting specifically for the infra cost management and stuff, but SIG Testing is going to be pretty involved there, because we will be moving at least some portion of the CI.
H
Moving what's internal to external is something we don't want to do right now, because internal billing works differently and we don't have a hard cap there. For the external stuff, there was literally like 3 million dropped in the account as a credit at the beginning of the year, and we're burning through it way faster than three million a year. And this is not a great year to tell people, hey, I need you to drop another million in this account or something; we're trying to avoid that the best we can. But also, just as importantly, we want to make sure that in 2024 Amazon isn't like...
H
..."oh, you used like 100k last year, you can have 100k", and we're like, wait, but we needed that 3 million, because we actually have a lot of infrastructure to run; we just couldn't spin it up fast enough. So whatever we can do quickly is what we should do. Longer term, I think it makes more sense to try to get more players involved and split across more clouds, and that's something we're talking about.
H
We're also looking at... I think we're nearing getting Fastly to help us, specifically with some of the bandwidth, doing some of the downloads. But in the more immediate term it almost makes more sense to just say, screw it, downloads are coming off Amazon; we need to show them that we're serious about actually using those credits. And there's not a lot of things you can just snap your fingers on and move to another cloud. The applications that we run on Kubernetes we totally can, Kubernetes itself is portable, but that's not much of our cost.
A
Thanks again, everybody. Yeah, follow the channel, and see you all in two weeks. Sorry... if there's any last things that people want to say, feel free.