A
Hello, everyone. Today is December 8, 2022, and this is the Cluster API Provider Azure office hours. Thanks for joining us. If you haven't been to this meeting before, you can get edit access to this agenda document by joining the sig-cluster-lifecycle mailing list. We do this every Thursday at 9:00 a.m. Pacific.
A
So we always take a minute at the beginning of the call to give a chance for anyone who is new, or who hasn't been to a meeting in a while, to say hi and introduce themselves. I do see a few new names. If anyone here wants to say hello, this is your time. I will unmute and let you introduce yourself if you'd like.
B
A
We'll review at the end. For now we'll jump into the agenda. So, the first item — I added that, I think. I don't object if you want to go over it, but basically it's been seven weeks, so let's talk about this.
C
Yes — yeah, we can do this together. Your audio is a little bit in and out, Cecile.
C
Okay, cool. So this is something that I observed actually a couple of months ago, and the PR's been open for a while. So thanks to anyone paying attention, and to folks like Cecile reviewing it, for hanging in there week after week. The short story is that I made an observation that there are no webhook
C
enforcement rules for updating the control plane endpoint for AzureManagedCluster — I think this is actually AzureManagedControlPlane. I did a little research and concluded that, from an AKS perspective, the control plane endpoint — which is essentially the URL to the API server, to the control plane —
C
will never change on a cluster. That's an integral property. So if you have a cluster and you want a new API server endpoint for whatever reason, then you have to create a new cluster. I couldn't find it stated contractually that AKS would never change that, but it seems pretty definitive, because changing it would be very, very disruptive.
C
So I just thought it might be a good idea to reflect that API reality in CAPZ, so that CAPZ can ensure that no other actors ever accidentally or malevolently try to change this value.
C
So that's the backstory of the intent. The challenge is that this is sort of a funky configuration vector, because it's not user configurable, but it is the way that we allow the AKS API to configure it. So what happens is, when you create a cluster, assuming a successful terminal response — so like an "exit zero" from the AKS API — part of the response data is the API server URL. It's the endpoint to connect to the control plane. And how we do that in CAPZ
C
is we wait for that successful response, and then we interrogate that response data. We pull out that URL, and then, in the controller itself, we update this value: controlPlaneEndpoint.Host and controlPlaneEndpoint.Port. Cecile, to date, has anything that I've said been incorrect or incomplete so far? Cool. So the challenge is that that background, automated process goes through the webhook plumbing, and so we aren't actually able to — at least I'm not aware of a way that we are able to — fulfill the two requirements.
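A minimal Go sketch of the flow just described — wait for a successful terminal response from AKS, interrogate it, and write the API server host and port back onto the resource. The type and field names here (managedControlPlane, aksResponse, FQDN) are hypothetical simplifications, not the real CAPZ reconciler or AKS SDK types:

    package main

    import (
    	"fmt"
    	"net/url"
    	"strconv"
    )

    // managedControlPlane stands in for the AzureManagedControlPlane endpoint fields.
    type managedControlPlane struct {
    	Host string
    	Port int32
    }

    // aksResponse stands in for the AKS API's terminal create/get response.
    type aksResponse struct {
    	Succeeded bool
    	FQDN      string // e.g. "myaks-abc123.hcp.eastus.azmk8s.io"
    }

    // reconcileEndpoint waits for a successful terminal response, interrogates
    // the response data, and writes the API server host/port onto the resource.
    func reconcileEndpoint(cp *managedControlPlane, resp aksResponse) error {
    	if !resp.Succeeded {
    		return fmt.Errorf("AKS cluster not provisioned yet; requeue")
    	}
    	u, err := url.Parse("https://" + resp.FQDN)
    	if err != nil {
    		return err
    	}
    	port := int64(443) // AKS API servers serve on 443 unless the URL says otherwise
    	if p := u.Port(); p != "" {
    		if port, err = strconv.ParseInt(p, 10, 32); err != nil {
    			return err
    		}
    	}
    	cp.Host = u.Hostname()
    	cp.Port = int32(port)
    	return nil
    }

    func main() {
    	cp := &managedControlPlane{}
    	if err := reconcileEndpoint(cp, aksResponse{Succeeded: true, FQDN: "myaks-abc123.hcp.eastus.azmk8s.io"}); err != nil {
    		panic(err)
    	}
    	fmt.Printf("controlPlaneEndpoint: %s:%d\n", cp.Host, cp.Port)
    }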
C
B
A
Not let it be updated by the user when the cluster hasn't been reconciled yet. So I guess one of my questions was: is it possible for a user to ever want to set this before creating the cluster? No.
C
A
So, in that case, I think the main difference between what we're doing here and what we're already doing for the self-managed AzureCluster is that for the AzureCluster it is already immutable in the webhook. So we are enforcing that, but the controller will never try to set it if it's already set. That's the big difference, because here, what we're doing is essentially disallowing updates in our webhook, but then not having our controller follow that rule, which means our controller could itself run into that webhook error.
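A minimal Go sketch of the immutability rule being discussed, in the shape of a validating webhook's update check: once the endpoint is set, any change is rejected. The types are simplified stand-ins, not the actual CAPZ webhook code:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // endpoint mirrors the controlPlaneEndpoint host/port pair.
    type endpoint struct {
    	Host string
    	Port int32
    }

    // validateUpdate plays the role of a validating webhook's update check:
    // once the control plane endpoint has been set, no actor may change it.
    func validateUpdate(oldEP, newEP endpoint) error {
    	var unset endpoint
    	if oldEP != unset && oldEP != newEP {
    		return errors.New("controlPlaneEndpoint is immutable once set")
    	}
    	return nil
    }

    func main() {
    	set := endpoint{Host: "a.example.com", Port: 443}
    	changed := endpoint{Host: "b.example.com", Port: 443}
    	fmt.Println(validateUpdate(set, changed))    // rejected: already set
    	fmt.Println(validateUpdate(endpoint{}, set)) // allowed: first write
    }

Note that in this model the controller's own background update passes through the same check — which is exactly the tension being described on the call.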
C
A
Well, there's two parts: there's managed control plane and managed cluster. Managed cluster just gets whatever managed control plane has, but the managed control plane — and I don't know if I can find this very easily, but...
B
A
C
So I think maybe that was implemented with the expectation that AKS may change this at any time. Or maybe it was set that way as a kind of — instead of using a webhook to prevent the user from messing this up, this was a way of overriding a user update.
A
Exactly. So in one way of doing this, we always overwrite the user: we just ignore whatever they say and always make sure it's correct. That's one way of doing it. The downside of that is, if the user does set it, they won't know that it was ignored; it will just silently be ignored. The other way
A
of doing it is to actively prevent the user from setting something, which means that they will run into a webhook error if they do. But the slight issue with that is, if they do set it before we get a chance to set it, then we can't overwrite them anymore, so they end up with a broken configuration.
C
Right. I think that's okay, because they shouldn't be setting it. Is there — there's no way for us to differentiate between a user setting this value in the cluster template at cluster creation time and that background controller update operation? Well...
A
So that was the thing that I was trying to get to. Maybe we could do that, but I don't know. So one idea that I had off the top of my head, just quickly: what we're doing in the managed cluster controller here is...
A
C
Then this flow will always fulfill those webhook requirements, because it's literally serially setting that property to true before it's setting the control plane endpoint value. I think that, paired with that create webhook we just described, paired with updating the update in that other controller flow to only set the value if it's an empty string — I think that's an improvement over the current model.
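A minimal Go sketch of the pairing proposed here: the controller flips a readiness marker serially before the endpoint, and only writes the endpoint while it is still the empty string, so it never violates the webhook rule it is paired with. The Ready/Host/Port names are hypothetical, not the real CAPZ API:

    package main

    import "fmt"

    // managedControlPlane carries a hypothetical readiness marker alongside
    // the endpoint fields; names are illustrative only.
    type managedControlPlane struct {
    	Ready bool
    	Host  string
    	Port  int32
    }

    // setEndpointOnce serially marks the resource ready first, then fills the
    // endpoint only while it is still the empty string.
    func setEndpointOnce(cp *managedControlPlane, host string, port int32) {
    	cp.Ready = true // set serially, before the endpoint
    	if cp.Host == "" {
    		cp.Host = host
    		cp.Port = port
    	}
    }

    func main() {
    	cp := &managedControlPlane{}
    	setEndpointOnce(cp, "myaks-abc123.hcp.eastus.azmk8s.io", 443)
    	setEndpointOnce(cp, "other.example.com", 443) // no-op: already set
    	fmt.Printf("%s:%d ready=%v\n", cp.Host, cp.Port, cp.Ready)
    }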
A
C
A
C
Especially if someone's like, "I depend on changing the control plane endpoint every week" — which would be extremely surprising, but I'd love to know about that. Yeah.
A
Also, if people are curious, the PR is in the doc, so go check it out. All right. Sorry.
D
I just had a quick question. So, with this — like, setting the endpoint each time — would that replace the validation in the webhook, or would that be on top of what Jack already has in the PR? Because it seems like the self-managed cluster doesn't — does it? It doesn't have that same type of validation in the webhook.
A
C
I actually am unfamiliar with what self-managed cluster does, Willie. Are you asking: does the self-managed cluster flow do the same thing, where it only sets the control plane endpoint from kubeadm — from this endpoint perspective — if ready is equal to true, or something equivalent? Is that what you're asking?
D
A
Sorry, here's that thing: immutable validation for self-managed clusters.
A
All right, what's next — Jack, you have the next one.
C
Okay, cool. So this will hopefully be quick. Can you kindly click on that first link? I'm just going to provide a brief update on all the managed Kubernetes goings-on. Eleven or twelve of us met on Wednesday, before the Cluster API office hours, and had an introductory discussion around managed Kubernetes in CAPI. I thought it was really — I'm very hopeful about the prospects of this; it seems like a lot of folks are engaged.
C
It's going to be a long effort to make any change, but I want to link to this PR so that folks on this call who are interested in keeping up to date with that, week by week, have access to the documentation of where we're meeting. Eventually this will get merged, I think, and then it'll be a more durable statement of all this info. But for now the PR suffices. The Zoom link is the CAPI Zoom link, and we have an agreed-upon time, on Wednesdays at 9:00 a.m.
C
So that's just a PSA. And then the next link on the agenda doc points to the graduation doc, which has been updated in the last week, since last week's office hours. So folks, please take a look at this. This is getting nearer to merging, I think. There's more concrete detail in the doc in terms of status. I will advocate that this land in CAPZ, and then, you know, whenever
C
there's a status update on any of the prerequisite items there, I'll push a quick PR update, so this becomes a kind of authoritative state of the world. There are also going to be some project boards being built, so those will be referenced from here as well. But really the short story is that we want end-to-end tests, we want documentation, and then we have a final outstanding confirmation.
C
So that's — I think I'll lean on Matt a little bit, and we'll work within the CAPI community to clarify the machine pool prerequisite. It seems to me, from our community's perspective, we want to graduate with machine pool being experimental, so I just want to clarify that with CAPI to make sure that's not controversial.
C
So if you're running managed Kubernetes — if you're running CAPZ with managed Kubernetes, AKS, in production — all of the discovery and investigation we've done over the last several months suggests pretty confidently that you can continue running those and building platforms; we'll be graduating this without any breaking API changes.
A
Thanks, Jack. Anyone have any questions?
A
All right. If you're interested in managed clusters, please take a look.
A
And okay, before we move on to this next topic, which is probably going to be also pretty lengthy: does anyone have any other topics, any questions, comments, feedback, things in general that we wanted to bring up?
A
All right. So I guess, Jack, others — I don't know who put this down, but we wanted to talk about: we're seeing some failures in the end-to-end tests. What's going on?
C
So I put that down, actually. For the — I don't know, let's say for the engagement of the audience, maybe we do the milestone review now, because that's a little bit more inclusive of folks in the room, and then we can do this at the end as a kind of breakout. If folks want to dive into the weeds, that's great, but if folks also want to drop off, that's also great.
A
Yeah, that sounds good to me. Any objections? Is that good with everyone else? Let's see... some thumbs up. Great, all right. Let's look at the milestone. So today is December 8th, which means we're officially halfway through the milestone, and then, from our timeline, we release by the beginning of the new year. So let's see — I think I'll just go over
A
what's in here, like if there's anything that needs to be updated or kicked out of the milestone. And then, in the meantime, if you see anything that you're working on that's not in here that should be in here, please comment on that issue or the agenda to include it in there. All right. So the first one is VMSS Flex.
E
A
If you haven't reviewed this yet and you have an interest, please review. I think we're on track to merge the proposal by the end of the milestone; I don't think we'll be on track to complete the implementation for that release, though. Any questions so far? Please feel free to raise a hand or whatever, or just shout if you have a question.
F
I think he did say that he — he actually opened up a PR for the implementation, work in progress, as well. But okay.
A
Yeah, great. All right, cool.
A
C
A
— of it. Cool. Okay: full support for AKS cluster autoscaler. Mike is not here. Does anyone else want to speak to that? John?
F
Yeah, I did take a look through that PR. I think it's pretty close. There is at least one bug in there, but I don't think that will threaten the milestone at all.
A
G
C
A
I think, related to that, I was also going to start looking at adding a template for the external cloud provider explicitly, so that we can then start migrating our jobs to use it, so that we're testing out a few bits of that. But yeah, cool. Any other questions so far, or comments? All right. Next one is the network interfaces.
A
Sorry, I'm just looking at this PR, the configurable network interfaces one. Do you think we're still good for including it in the release?
C
I believe so, yeah, we should be fine. I've been busy with other things the last couple of days, so apologies I haven't been able to respond, but I...
A
All right, yeah, I think it's getting really close. Great, thank you. Next one — I don't know, is anyone following that PR or working on it in here? I know it's a really hard review. I added some comments; I think they answered the comments.
A
All right, great, cool. All right: respect externally managed annotation for unmanaged machine pools. I don't know if Michael is in here — or Jack, if you want to talk about that one.
C
Yeah, I don't know if I've looked at it recently enough to speak to that with confidence and tell the story. I feel like this would be great to land, but I haven't done anything, so — okay, circle back next week, and if no progress has been made by anyone, maybe we can kick it to the next milestone. Cool.
A
I think we did merge CAPI 1.3, which I believe has the new annotation included, right? Is that correct?
C
That's right. There should be a follow-up — I need to file an issue, because we've got a sort of interim solution in place already for the autoscaler stuff. So, okay.
D
A
All right, cool, thanks. Does anyone know about that one?
A
All right, let's go on. The next one is mine; that one's waiting for a review.
A
D
Yeah, it was approved, but Noah left a few new comments that I need to address. Should be done pretty quick, though.
B
A
Thank you. All right, so, Jack, I think we already talked about that one — the testing external cloud provider with Windows, correct? So if we're good with that one — and there were the managed cluster respect changes — is that one ready for everything? Yeah.
A
So I will leave that in here, and that's good for the milestone. Is Jonathan here?
B
A
Does anyone know about this one? John, you've been following, right?
F
I have been looking at that. I haven't looked at it for a couple of days, but I'm guessing it's pretty close, so I think that's good to keep in the milestone.
A
C
That's a slightly different one, and I think I'm going to close it. It was mainly a sort of — I don't know — a statement: I was trying to use it to advocate with the CAPI community that we don't have to change our API for the "option three" thing, and I think that advocacy has been successful, so I'll go and close that PR.
B
A
Okay, race detector — for number 12? No, it's...
A
Okay, sounds good. All right, and then I'm guessing this one — this one is approved and LGTM'd, so it just needs — oh, it needs a rebase. Okay, so that one should be good too, whenever you have a chance to rebase. And then this one is also pretty straightforward, so I don't think I...
C
A
Okay, sounds good. And then I'm just confused, because I thought this was the PR I was looking at earlier; now I realize there are two, so I don't know what the other one was.
A
B
C
A
Yeah, a lot of the maintainers are US-based.
B
A
Cool. Anything anyone else wants to add, or anything that should be in here that's not in it?
A
If not, we're going to jump into end-to-end test debugging, so feel free to drop out if you're not interested. This is probably going to get pretty technical. So if you want to get that time back, feel free to drop off, and we'll see you next time. Anyone who does want to stay and is interested in looking at some test breakages, please hang on.
C
Yeah, I'd be happy to. Do you want me to share my screen and take over, or do you want to — yeah.
A
Do you want me to stop the recording for this?
C
We could record this, I think. Okay, so, yeah — let me... Participants, Claim Host.
A
C
Okay, can folks see the TestGrid interface? Okay, great. So, let's — I propose that we scope this to the end-to-end job. That's where I saw most of the flakes. It's possible there are flakes in other jobs: there's an optional job, which runs a few cluster scenarios that we don't run with every PR, and then there's an exp job, which we run for AKS. But I was observing this in —
C
I'm going to go to TestGrid via one of these guys, if I click on this and then do job history. So, first off, before we go down the rabbit hole: did other folks observe this in the last one to three days — basically, this week? Getting one thumbs up. Are we talking about PRs specifically? Yeah. I've just noticed — I mean, this is a somewhat anecdotal thing, but I've noticed it this week.
C
F
Go ahead — sorry, sorry. I've been seeing it too. There are some timeouts, resources not spinning up.
C
All right, cool. So, just taking the data of the PR submitters here, there's a nice cross-section of basically everyone that's experienced this in the last 24 hours: Matt, you experienced it; I experienced it; John experienced it; Willie experienced it; Alberto experienced it — I think that's Alberto — and Cecile experienced it. So maybe let's go from oldest to newest — I think this is the link I want — and let's see if we can find any patterns.
C
So let's see what we can find from this. I'm going to go back and look at artifacts. We've got two out of three control plane nodes online. Should we look at the Cluster API or the CAPZ controller manager logs? CAPZ, probably. Probably.
C
G
C
So the "not found" is probably a successful delete — got it. Someone fixed this, so we don't throw an error when we're doing something we want to do.
G
C
A
C
That's — what should we look at? Oh, I see, that's a good point. Let me just — I'm going to quickly look at machines and make sure that there are only two machines. Okay, there are only two machines, so what Cecile said — I'm thinking the same exact thing: control plane provisioning is serial, conditional upon each one succeeding, and so if one of the control plane kubeadm runs fails, then the third one won't be built. So let's see if that happened. This is actually the wrong place to look for that.
C
G
C
A
C
Is it possible? Well, let's see what the timeline is here. So this is — because the end-to-end failure was a timeout, it's possible it was just in the progress of doing its work and took too long. That's 21:12.
C
G
C
The machine resources captured — because I believe that's where the bootstrap failures get recorded, right? I'm going to sidetrack for one second: I'm actually curious when the CAPI 1.3 PR landed.
A
C
So hard for a human... Okay, okay. So this actually, to me, correlates anecdotally with when I started seeing it. So that's very interesting! So let's — which one were you looking at, this one? Do you want to look at the machine status? Is that right? Cool. So, in no particular order...
A
C
B
C
Okay, well, I don't see that etcd event there. Let's see — just really quickly, in the interest of correlation, I'm going to see if I can see that same thing on this other test flake.
C
B
A
C
C
Okay,
the
etcd-
maybe
we
should
do
this
asynchronously,
let's,
what's
we're
not
going
to
sell
this
in
12
minutes
so
what's
worth
doing
in
in
say,
12
minutes
before
we
break.
If
we
want
to
keep.
C
Cool. I was going to say — I just want to make an observation that I've seen test flakes not just in IPv6; other tests have also failed. But this would make sense, because this particular failure should have nothing to do, in particular, with the kubelet's IPv6 configuration, which is ultimately the most interesting thing happening here, I think. Yes, so that would be...
G
Okay, December 3rd — that works. I think there's probably a weekend in there. So let's look at this one. You know, is this the tab — I'm going to open up a new window, because I'm going to get lost here.
C
Okay, IPv6 is "lxr". So, just for folks who've never done this, there's a funny thing where the sort of flavor that you're building here is part of the name — I think this is the cluster name — but then, once you dig further in, you notice that that is gone. And so that's why I'm saying "lxr" out loud, so I can grep for it once I get this level deep. It's kind of unfortunate. Okay!
C
So, let's see if we have any of those failures here. This isn't going to tell us the CAPI version to confirm, but it's possible — I'm taking this from a PR, from that 1.3 PR. We...
C
Confused — so, I didn't want to close that one, so we're back here. Let's go to latest runs. I'm going to just pick on this failure from this PR, and the tests that — okay, great, so both tests failed for multiple-control-plane tests. Actually, let me quickly just see if that is a consistent pattern. Yep, three control planes.
C
Sorry if this is quick for folks. Single control plane — okay, so that's an exception. But let me — I want to at least correlate, in this PR, that the other failure was also — so this is the cluster.
C
It's during cleanup, yeah. Okay, so this is another known flake, so I'm going to ignore that one.
C
Let's actually look at that one — that's good! So let's make sure that that one, the one that failed...
C
A
It wasn't the one — the CAPI one, I think. Wasn't there also the end-to-end test Kubernetes version — like, we changed the minor version of Kubernetes?
B
C
Okay, note to self: we shouldn't do these all in one PR.
C
A
C
Let's break, because we're basically at time. I am going to open up a PR that downgrades the Kubernetes versions, just so we can get some test mileage on that. I think even one IPv6 test that doesn't exhibit this etcd symptom is really interesting data. We should have that in an hour or so, and that will at least suggest whether this is a CAPI problem — I mean, this symptom doesn't seem to be something that would have to do with CAPI per se.
C
It seems like it would have to do with the way that kubeadm delivers a static pod configuration, and then that's consumed by kubelet when it starts up. So it seems more in that flow, and less in the Cluster API flow. Does that sound controversial, what I'm concluding at this point?