From YouTube: 20210126 SIG Arch Conformance
A
It is January 26th; this is the conformance meeting, held bi-weekly to discuss conformance issues, and we all abide by the Kubernetes Code of Conduct, which basically means we need to be excellent to each other. So welcome to the meeting, and let's jump in, if you'll be so kind as to add your names to the agenda.
A
Okay, first point, from Hippie: CNCF news on graduated and incubating projects and activities. There is also KubeCon + CloudNativeCon Europe 2021, so you have to register and submit by February the 7th. I don't know if you want to say anything about that.
C
I basically wanted to open it up for the whole group to participate in that session. What normally ends up happening is that me, or someone else from ii, submits a kind of "where we are and how to contribute" session to KubeCon as part of our subgroup presentation, and obviously we've got till February 7th, so we'll put that on the agenda. So, yeah, any feedback or thoughts on that?
C
What's the wording on the subresources? It's confusing, and I'm trying to get a little more feedback from outside of API Machinery before I head back in there, to help convey that it is important that we have some type of metadata tracking, and I think there's a couple of ways to do that. One is... I've got another thing in process.
C
I think, for SIG Release, in trying to update our KEPs to include the flags, the feature flag and the status of that flag, I'm not sure if it's a manual method we need to do, but it would be great if a particular operationId had the metadata for the flag that it hides behind. And then maybe we have a separate area, because you can use the kubectl binary to query the API server, I believe, and get the list of... or, sorry, the API server binary.
C
You can run --help and it will list the flags and their current status, so we have a programmatic way to retrieve the status of feature flags, but I don't have a way to tie it together. Even though it's in this one PR, maybe we can look a bit deeper into the metadata generated, to tie together specific operationIds with their current alpha/beta/GA status.
C
What we end up doing, and it doesn't feel clean, is going to our list of ineligible endpoints and adding them as ineligible because they're currently at a beta state. So we have a lot of endpoints that we'll need to go and revisit, I think at least once a release.
C
We need to revisit our ineligible endpoints and check the status, because what may happen is that they somehow do get promoted, they are considered GA, and we didn't check them, because we added them as not part of conformance. I just think a conversation needs to happen, and it needs to be broader than just API Machinery, so that as SIG Architecture we can come back and suggest: well, we have this metadata, how... you know. Anyway.
D
My gut instinct is that tying this together via feature flags is maybe the wrong level. So you can use help, but is that an API endpoint, or is that you run the binary with the word "help"?
C
Currently that's via the compiled binary, and that only shows us the current status of the feature flags, yeah.
C
My other suggestion, and that's not relying on the binary, is inside of the KEP. So, within the KEP, we could track the feature flag and the list of operationIds, which I don't think we currently track, so that there is then a database of all of the API operations and where they sit within the KEP process.
D
Sure. You can correct me if I'm wrong, but I don't think KEPs have a machine-readable way of specifying whether they are in alpha, beta, or GA.
C
Let me check if it's part of SIG Release; there was something from Laurie today asking me, and a bunch of others, to update their information.
D
Yeah, anyway, I guess the point I was making is that that is something lacking from KEPs, and it would be cool if we could land that, but I still feel like tying back to KEPs is also kind of weird.
D
So that's the sort of thing where I think API Machinery needs to contribute back to the discussion, and then I think SIG Architecture is maybe a forum to discuss: actually, we think it is important to the machinery whether or not fields or resources are alpha, beta, or GA.
C
My action, then, will be to create an issue and reference the comment that I have here, to kind of get the conversation started and include the relevant folks. I think the request here is to add, at minimum, a release level (for lack of a better term) of alpha, beta, or GA, and possibly information on the feature flag.
C
But that's... I have a separate issue open for the KEP process that I haven't promoted yet, but they're wanting me to see if I want to get that merged.
D
...feature flags would be an inappropriate way to do this; that's my take on it. The reason they're inappropriate is: if you are running tests against a hosted version of Kubernetes, you won't have the same API server binary that they do. Querying the API via its discovery endpoint, or the OpenAPI aspect, should be sufficient to tell you what is enabled and what level, what version, it is, I guess.
C
I've tried in the past to get extra metadata added, and because it's complex (I mean, there are not a lot of people who fully understand the API Machinery metadata generation), I remember one of the conversations being "we're going to wait until there's OpenAPI v3, because it has the right fields for this". Great, but there are arbitrary fields that we can add, in the same way that we use x-dash-something in header fields to add arbitrary metadata, that we could use.
D
Yeah, I would try to figure out how to tie it together at that level, because a Rube Goldberg machine that ties to KEPs, or ties to feature flags which are tied to KEPs: none of those are quite as discoverable as interacting over the Kubernetes API boundary with the system under test, yeah.
C
Yeah. In addition to that, one of the other things we found is that there are issues with the API audit log machinery where verbs are dropped, and because we don't log the actual operationId in the audit log, there is no way to distinguish between a few of these operations. Our hack currently has been, for when there's no match: as part of a test that hits the endpoints we're interested in, we go to specific URLs that include the name of the verb, because there is no programmatic way. It's a real Rube Goldberg machine, if you will, of trying to figure out what we do when there's no programmatic way, by inspecting audit logs, to discover operationIds. So it's "hey, do you want to help me with this?"
D
Sure, okay, I hear that. I think my closing thought here is: if we can't do this in a nice, automated manner, is there one central location where we have a list of what is excluded from conformance and why?
A
There's a story behind this, for everyone: there's a reason, or an issue, or something that explains why they're ineligible. But very much to his point, it would be required that we frequently go review it (we actually discussed this yesterday, internally), because it could be that these things that we mark as ineligible get their alpha endpoints promoted and we don't notice; these come in and they don't have proper tests, because we can't see them. So it would be important for us to check that.
D
Right. It may be worth considering finding a way to normalize that view. Basically, one process I have in mind is: how can we have the community open up pull requests into this thing to say "hey, we're gonna consider these excluded", or "we're gonna consider these eligible for inclusion now"? That way you have, via the pull-request review, a notable record of who is approving the exclusion or inclusion of these endpoints. And then, presumably, because you're linking back to issues and stuff, you can kind of derive a timestamp, either from the issues themselves or from the git blame of this file.
C
Ideally, that's what the swagger file would carry, which would be the optimal. We've looked at different ways of dumping just JSON files that have the operationIds and some slight metadata about them, almost a parallel to swagger.json, that lived in k/k, but we got pushback because it gets real big. I mean, the swagger file by itself is what, three or four meg now? And if you look at all those operations, if we had any reasonable metadata, that file grows by over a meg in size.
D
Right. So, in an ideal world, all the information necessary to decide whether something is eligible or not is in that file. If it can't be, then having it in its own single file, in a way that makes it easy to have a community pull-request-based workflow, might reduce the toil, I guess, of maintaining it, because when you describe it being a bunch of different SQL queries, that doesn't sound like something the community is going to feel empowered or knowledgeable enough to quickly update at a glance.
C
What if we had just a list of ineligible endpoints? We would leave whether it's alpha or beta to our hack for now, but for the ineligible ones we could have just a list, and a comment or a link, without much metadata, so the conversation points from them getting added could be discussed or disputed. Because, ideally, we remove some of these when, say, we find a way to write a storage testing framework. Sure.
D
Yeah, maybe we're over-engineering at this point. I feel like a brief description and then a link to where that decision was made, and ideally that decision would be a git pull request, or an issue, or something. I know I'm running long on this, but I'm talking about this just because of the next thing, where Ken Owens has concerns that some resources don't make sense for, let's call it, business-logic reasons: nothing to do with the machine-parsable data, and more to do with what those specific sub-resources are. And so, if we come to a decision that we should exclude the status of resources, how would we document that?
C
I think k/k is probably the right place for that, maybe in e2e conformance, because we have our metadata there that we pull from for what is part of conformance.
C
I'll look into creating an example PR with that metadata, which we'll get Zach to create based on the SQL query, so that we have a starting point, and see if we can't re-consume that once it gets merged, using it as our definitive list of what we're not going to be testing. Another thing that we could do there is, since we're adding it, go ahead and list every operationId.
G
All right, thank you very much, Aaron, for getting...
C
I think the pushback was that most of that setting, particularly setting the status, is something that's not done by end users of Kubernetes, but done more by controllers that are checking the objects in question and setting that status based on their observations of them. So any e2e test or conformance test that goes through and sets that is inherently facing a type of race in watching it be set.
C
I mean, the best you could do is set it to something that's not valid or true and watch the controller set it back to what is appropriate, and many of the endpoints that are part of the apps group include status endpoints. That was their quick summary on those status operationIds.
D
I think, so, you know, it's probably a bad idea to try and compete with some existing controller that's updating that. You test the implementation you have, or you test the surface area you have, not necessarily the surface area you want. All right, I think... yeah, maybe that's a broader SIG Arch question, or certainly I would want input from my fellow subproject owners on that.
C
We'll bring that up at the SIG Arch meeting on Friday and put it on the agenda for discussion, and maybe we'll send an email out to the mailing list today, so we can get a little broader feedback on that.
C
I'm also of the feeling that if we go ahead and set up a watch, and we set the status to be "offline" or something: like, we create something where we know the status should be A-okay, then we set it to be not-okay, and then wait in a watch for the controller to set it back to the known good status. That might be one way. Or, instead of trying to set the API directly by hitting it, we force the controller to hit it, and even though it's not our current way of scoring it (the e2e binary setting that flag directly), we watch it happen by the status changes reflecting in the status endpoint.
D
Yeah, so this is about hitting the status subresource for all of the built-in workloads. Is that right?
C
That's correct.
...on that, and we'll discuss it at the broader meeting. Thanks for suggesting that.
D
...endpoints, and so covering that doesn't necessarily get you the coverage on the built-in workloads. I feel like CRDs are probably more the motivating reason; I would really want to check in on CRDs. I could see how maybe it's less important for built-in workloads, but yeah, somebody could decide to re-implement the built-in workload controllers.
C
I will get that email out and make sure it goes to SIG Architecture, with a couple of options, to get the discussion going, and make sure Ken is CC'd on that.
C
Ken also expressed concerns about APIs unused by the community, and I don't think he was talking about status; he was looking at the large list of, almost, architectural object types that he doesn't see used anywhere, that he's never seen. And I brought up that we've thought that before in conformance, and tried to remove things, and got near-immediate pushback from people using things in ways we never would have thought possible. I don't remember exactly what it was. But I just wanted to bring up his concerns, too, to the conformance meeting as a whole. I probably won't say any more on that unless we want more discussion; he may reach out.
Regarding his endpoints that are not eligible: that'll be part of the status thing, and I'll make sure that that happens. And we've got some notes; thank you, Riaan, for capturing this. He has also offered support for prioritizing endpoints, because we were choosing them just like: well, there's a big list, we want to get to 1.9, because I'd love to see that disappear.
C
Sure, they may offer support on that, so we'll communicate via email to see where that goes, and we'll be reviewing our issues before we turn them into tests.
A
Now, let's have a quick look at stable, and we go to apps. It's actually only one, that one there: it's tested, but it's actually just a drive-by from another test. It's not specifically targeted by the test; it's accidentally touched by something else. So...
A
Actually, all these here: this is the conformant one, tested with multiple tests, and all these are open. I'd also like to explain why we have approached SIG Apps. If we look here, we're basically trying to work backwards, and we're now sitting at 1.9, which has all those endpoints. And if we go into the detail, you would see that basically everything in 1.9 says "apps". So everything down here...
A
So SIG Apps is the owner of all the debt in 1.9, so we try to help them, and also get them involved to help us, to try and clean this up as best we can before the end of this release.
A
We will make a list of what is... I already shared a list with them of what is open, where they are touched or not touched by tests. I broke them down into families of APIs or similar things, all the ReplicaSet things together, so they've already viewed some information, but I keep on pushing information to them. And they did say that, honestly, this is technical debt, they have big priorities, but they really want us to keep them involved, to help us to help them get this over the line.
A
So they're quite keen; sounds good. So that is it for the agenda points, and then we can go into what is open. So, finally, we got the proxy endpoints test going: the image that needed to be built was finally built.
A
It is just about ready to go in; Stephen was having a short discussion... Oh, we got our lgtm, so somebody can just slap approve on there in this meeting. That would be very fine. So, yeah, that's...
F
In the test script, sorry, in the details for the prow job: whenever they actually failed, it looks like they just hadn't started completely before the test started trying to test the endpoints. So I just added a watch to make sure that the service had completely started and was available before we started actually testing those endpoints, and the pod ones. So that's what I believe will get rid of those.
D
Okay, I'll look at it in depth.
A
Later. Thank you very much; thanks, Stephen, good job there. Looking forward to getting all those endpoints in. Then we have an issue...
C
That's right, they're patching back and forth. This one is in SIG Auth, because they documented what happens around HEAD... That's right: so HEAD we're not going to get, because we don't have any way; there's someone in API Machinery pushing back on that one. And for the OPTIONS verb, even though it's not logged in the audit logs, Zach made a hack workaround for when we don't detect an operationId and we're hitting this particular URL, because the URL for those tests includes the name of the OPTIONS verb.
D
...I found the issue. I agree we should chase down SIG Auth until they punt back to SIG API Machinery, until we get a definitive answer on why the HEAD and OPTIONS verbs aren't being propagated. Yeah.
A
...for that link. Then the next one is... let's have a look. Why did it do that?
A
Sorry about that. This is a pull request for two endpoints. Stephen, do you want to speak to this test? This is also ready for approval; we've run it through with the tests quite a lot, so that's easier. And if we go to the PR history, there was nothing blocking us from merging. So, Stephen, I'll leave it to you to quickly explain what this does.
F
Sorry, can you go, sorry, across to the "files changed"? Sorry, no problem; I'd been thinking about something else just recently.
F
So it just goes through creating the service, and then it goes through doing the appropriate patch, with some appropriate watches, and then I think it deletes at the very end of the test. This is one that I think Caitlin had started a while back, and that we just got around to cleaning up. There was a little bit of pushback around one or two little settings, and I believe I've addressed them, but I haven't had any feedback since.
D
No, nothing. I mean, again, the answer for most of these is gonna be: I will review these offline. I don't feel like I can do a thorough review on camera; that would not be fair.
A
Stephen, good job. Thanks, Aaron, for your time; we do realize it's an effort, thanks for putting in the effort, of course. Then this is an update to an existing conformance test where it was hitting the scale subresource: it was hitting read and replace, but not the patch of scale.
A
Basically, we just add a little section at the bottom for patching, you know, for patching the replica set... or, sorry, the StatefulSet. And then there was a little bit of pushback. One was just a Go formatting thing that moved around the import, so we fixed that, and the other one was that he suggested we use WaitForRunningAndReady, which, after some research, I found was for creating: when you create new, you can use WaitForRunningAndReady. And then I found an option, WaitForStatusReplicas, and that actually did the trick; we added that in. I haven't had feedback on that, but we believe that one is also ready. We also tested it thoroughly; it seems to be fairly solid, with no flakes, no issues there. So I think that is a simple add-on.
F
No, I think, if you just open it up... I just wanted to note that it's still got the old Go code from back when; it was, I think, around 1.13 when I was watching for it originally. I just want to run it past our own... I'm just going to replace the current watch...
F
...with the watchtools process that was used for pod and pod status, so it's matching that sort of pattern of making sure everything's checked well. It seems to have already been moved in a previous PR, back in November, I think it was. Okay; just bringing the test up to be able to move forwards, if I'm using that sort of pattern of using watchtools Until.
D
But I don't have really concrete data on what the scope of the flaking is. I was just checking around on triage for any tests that match the regular expression "should ... lifecycle", because a lot of the tests that you wrote following this pattern are like "should allow the lifecycle of" whatever: Pod, Pod status, DaemonSet, what have you. And there are definitely test failures in there. It's kind of difficult to pick apart which of those are just badly misconfigured jobs, or whether these are signs that these tests are actually flaking.
D
But as we get closer to code freeze and test freeze in the 1.21 lifecycle, I feel like we will want to prioritize de-flaking, because we should be holding conformance tests to a high bar of not flaking.
D
So, a long-winded way of saying: no, I don't think I have any objections to you writing the test as you described, following the pattern you described, but we may later discover that we need to fix this, along with all the other tests that use that pattern.
F
Yeah, I agree. As I'm going through it, I'll try and see if I can look at some of these flakes, and see whether I can extend the test and find the cause.
D
Okay, thanks. I'll try to open an issue if I've got concrete data.
F
I think, actually, we talked a few days ago about that particular test flaking; I think it was on the testgrid for the node, and yet it was failing consistently. Then other cluster, sorry, testgrid setups, because I think we had another issue with another testgrid before the holiday break period.
D
I got pinged separately on a test change to the scheduler predicates test, which verifies that two pods that attempt to allocate the same host port are correctly scheduled. This update then went a step further and verified that, if they are correctly scheduled, traffic should route to them correctly, and that seems to have broken some people's setups. I honestly don't personally know where I stand with regard to whether...
D
Should we take that extra functionality check and put it into a different conformance test, or should we leave it in the current test? Because that is reasonable behavior for an end user to expect, and if it doesn't work that way, I think that's a bug in the underlying Kubernetes implementation that you may just not have noticed until now.
D
Anyway, I want to be respectful of the time we have left. I know you said there was something that was blocking your coverage of a couple of endpoints. Thanks, Aaron. Yeah, that...
A
That issue there, or pull request there, has been going back and forth.
C
Clayton said he would look at it, and he hasn't had a chance to yet. There's a question, this is just seven days ago... he had another question. Can you click on that real quick, to bring us back up to his comment?
D
Sounds like you need to answer his question about whether or not we need to append a trailing slash, and it sounds like...
F
I thought I had a solution, and then Liggitt brought up the whole thing around the RFC having a requirement for adding a trailing slash if there was no trailing slash seen. But I think I've gone down the wrong path with this change, and I just need someone with a little bit more of that networking understanding to put in a little bit more time, because I'm somewhat stuck on it, and I think...
F
Yeah, I think I just need to answer them back on why I need it; otherwise... I think the previous commit is actually closer to the solution, but I'm just not sure where to go.
D
...pair time. I wouldn't view it so much as pair time, but I would assign him, or CC him on the issue, and say "I need help with X; can you find somebody who can help me with X?", and that can help. But I also interpret Liggitt's last comment as: I think there's still an outstanding question. The last thing in his comment, about what the behavior is without the trailing slash, has not been answered; if you try both and see... yes.
C
That's it for the meeting, other than: I just dropped a link to something else that Caleb was working on, to my right. That might be good, just while we've got you on the phone, to hear your thoughts on.
G
I'll hop on camera. Hello; it's just...
D
Okay, so to answer the last question, about how credentials would get added to this if it was run as a prow job: this is what workload identity is for. You give the prow job a service account, and then the Kubernetes service account that that prow job runs as is bound, via workload identity, to a Google Cloud service account. I have one created; I'm pretty sure it already lives in the prow-build-trusted cluster, which has a service account that gives you permissions to run...
C
...and the Google secret there, for us to be able to do a git push and create the PR based on our, where is it, our GitHub token. Previously, we had written that out by using Google Secrets, I think.
D
To that: so, for the same service account that the job runs as, we would set up credentials so that that service account is allowed to read that secret, and then you can have the job use gcloud to read from that secret. Another option... well, yeah, I guess I kind of like that approach better, I'm not sure. Another option would be to add the token as a secret to the prow build cluster.
D
I kind of like the idea of using Secret Manager. It doesn't integrate as cleanly with prow jobs, and it does require that you write a little more script gunk to pull the secret out yourself. I'm wary of people accidentally dumping their secret into logs; that's also why I'm wary of the "cat github-token" thing, but I totally recognize that... Okay, so then: using hub, versus using something that is under our control.
C
I'm sure that the production build cluster, the privileged build cluster, already has these secrets there, I would assume.
D
We don't have that token available in this community's build cluster yet. But I do have a service account, it's called k8s-infra-gcp-auditor, that's already got the right credentials, so I would use that, and the prow job that I linked in the description should be basically copy-paste as an example of how to use that service account.
C
It will add our... because this thing at the end, where it has the cp, the touch, like that git diff where we see if there's a diff: that's the creating of the PR, and we append this, using the Google secret to create the GitHub token, or running and getting Google Secrets and passing it along.
D
Sure. I guess I'm kind of stuck on (we're over time, I'm sorry), I'm stuck on whether you should just use the thing you're already using, or whether you should use the thing that the Kubernetes community is already using.
C
You mean the Google create-or-push-PR Go tool?
D
You're gonna need an image that has hub pre-installed. Whatever; I don't have strong objections to it, whatever allows you to get it done the fastest. Just because I don't have objections doesn't mean other people won't insist that you use the PR creator, but you are welcome to try this approach. I use "hub pull-request" from my laptop all the time, so I certainly trust it.
C
So, I think... finding new images, or creating new images, takes time. Do we have hub and something already out there that we could use with it? No; so I'm reluctant to go through and create a new image, as far as the time to get this piece over the line, unless we want to start using that approach. This just uses git and is able to create the PR, I believe.
D
Yeah, which, like I said, I personally don't have a problem with. But yes, this looks good; thank you for working on this. I will check back in with y'all about pushing this forward tomorrow or Thursday. Sorry, tomorrow or two days from now, my Thursday.
C
Your Friday sounds good. I've got Caleb on it; I've got some health things I've got to do over the next couple of days. Great. But thanks for supporting us and showing up consistently; we appreciate your support. Let us know how we can help you with migrating all the jobs, or doing what we can in that regard.