From YouTube: 20210323 SIG Arch Conformance
B: It is funny, because I do feel guilty when I miss y'all. And, you know, everybody's kind of in the long phases of covid, and you can tell that meetings are one of those things where everybody's kind of like, yeah, I've got this other meeting on top of the other meeting on top of the other meeting. I can't blame people if they don't want to go to more meetings.
C: I think I'm gonna skip over the flaking thing for the moment, Stephen. So let's first get to this other one.
D: A side note, a very quick thing: thank you, Clayton, for the initial feedback on the flake stuff. They found an issue where the volume manager was hitting a really weird edge case, and there are some tickets in there. But the test also did actually have an issue: it was trying to start a container with a wrong command that didn't exist.
D: Yeah, it's good to see lots of people, and Michelle from SIG Storage, getting in and helping out. So that was awesome. Right, back to you.
C: First one, quickly running through it fast here, Clayton: the insights promotion, good thing, that merged. I don't know if you saw in the channel: no technical debt, lots of things that were old got new performance tests, old endpoints got new conformance. That's a really good run there, and that one we don't have to discuss, it already merged.
C: Some of the best endpoint coverage we've seen, and it really got done well. Okay, let's first go to this test, basically the DaemonSet status test. You want to discuss that, Stephen? Should I open the code for you?
D: The status endpoint for patching all the conditions. Sorry, it updates all the conditions and then it looks at patching. The problem is right at... oh, sorry, this is the DaemonSet one.
D: This is looking pretty good, so this one's on track, I think, without any real issues that I'm aware of. The only problem is the API registration one, which I really want to get some feedback on from Clayton, if it's possible.
C: Okay, so this one, Clayton: we followed much the same status arrangement as John proposed, and if you're good, if you don't see any obvious glaring reasons why we shouldn't, I'm gonna put in the PR and mark it for the next release.
B: I'm just looking through it. I think there's a few subtle things, but I think these are good tests and they cover it. The one thing that I was thinking about while I was looking at this: you said you copied this from another status test, or did we actually expand a test with status that was specific about feedback from DaemonSets?
D: The reason why is because it just means I've got a generic way of doing all of them for any resource, and also API regis... sorry, the APIService endpoints have nothing but conditions available under status, so.
B: The one thing that I would actually ask that we add to this, and potentially to APIService, would be patching, adding a label or an annotation, from the status endpoint. Status supports metadata updates as well, and there are a few components that depend on it. It'd be good to maybe just add a comment and get John to weigh in too. I can go add that, but that would be...
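As a rough sketch of the behavior Clayton is describing (names here are illustrative, not taken from the actual e2e test): the `/status` subresource also accepts metadata changes, so a conformant API server should apply a label sent through it. Via the raw API, with `kubectl proxy` running locally, that check could look like:

```shell
# Illustrative only: patch a label through the /status subresource of a
# DaemonSet using a JSON merge patch. The resource name "example-ds" and
# the label are made up for this sketch; run `kubectl proxy` first.
curl -X PATCH \
  -H "Content-Type: application/merge-patch+json" \
  --data '{"metadata":{"labels":{"patched-via-status":"true"}}}' \
  http://127.0.0.1:8001/apis/apps/v1/namespaces/default/daemonsets/example-ds/status
```

A GET on the object afterwards should show the label on the main resource, since `/status` writes to the same object's metadata.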
B: The only other thing, which is the condition testing, is absolutely great; the logic looks fine. My only one would be: we potentially should be verifying that metadata is mutable, on the off chance that somebody goes and shims it out and then doesn't do that right. But I don't think we have to block this test on that; we can always just add it afterwards.
D: Yeah, once we get through looking at the comments and stuff.
D: Yeah, that's the test for now. Sorry, if you scroll down a little bit further.
D: Okay, so the existing test above it was the extra three endpoints that I added under the last test, and here I'm looking at patching the APIService so that it's got a label, so I can then start a watch against it. I patch the service, sorry, they...
D: Then it goes back to the default test where, sorry, a little bit further down, Ryan, where it then starts to clean up the test. What happens if we go back to my test run?
D: If we scroll down a little bit in my test output, it works everything through, and then when it starts to delete it, it's coming back with "the server is currently unable to handle the request". From my research that links back to a 503, and something seems to just not be ready to be able to start doing the delete. But if I add in about a 600 millisecond sleep, it works absolutely perfect, not a problem, and I'm just wondering...
A: There's a watch or something missing, to wait until the API server exposes that new API service, for it to be able to understand how to delete it, and that's what you're hitting? Go ahead. No, yeah, go ahead, keep going. That's about all I had, my thoughts on it. I don't know the underlying details, but that's how it feels. So.
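The fixed 600 ms sleep Stephen describes can be sketched more robustly as a poll-until-ready loop. This is a generic illustration in Python; `flaky_delete` is a toy stand-in for the real delete call, not the actual e2e helper:

```python
import time

def wait_until(op, timeout=10.0, interval=0.2):
    """Retry `op` until it stops raising, instead of a fixed sleep.

    `op` is any callable that raises while the server answers 503
    (service unavailable) and returns normally once it is ready.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            return op()
        except Exception:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)

# Toy stand-in for the delete call: fails twice with a 503-like error,
# then succeeds, mimicking the aggregator briefly dropping the service.
attempts = {"n": 0}
def flaky_delete():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("503: the server is currently unable to handle the request")
    return "deleted"

print(wait_until(flaky_delete))  # retries past the 503s, then prints "deleted"
```

Unlike a flat sleep, this finishes as soon as the server recovers and still fails loudly (with the last error) if it never does.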
B: Actually, yep, that's probably it, and this is identical to another problem that we have. So, okay, for an API service to actually hit the underlying... where are you starting the server for it? Where's the code that runs the actual API server under the covers?
D: So the test has already created the resources for the v1alpha1 wardle.example.com; it's at 476.
B: So here's my working theory. The API service is created, you're able to create a resource, then you patch it by status and it blips. I bet you this is actually a kube bug.
B: So this is a SIG API Machinery bug that they need to look into. I'd probably say it's actually really good that we found it, if it's the simple thing; it could be other reasons. But if this is consistently flaking, or reasonably consistently, and you add a sleep and it passes, I think what's actually happening is: when you patch it, the code sees a change, and instead of ignoring the change, because the change doesn't alter behavior, I think it's taking the API server out and putting it back in, which is horrifically bad.
B: Let's get somebody from SIG API Machinery to look at it. There are some things we have to do, though. There's another thing going on right now: if you're running HA API servers, when you make a resource available, you have to wait until it's on all three API servers. That's actually very complex to do, because, you know, if you're talking to kube behind the load balancer, you can't actually say, "I want to talk to one, two, or three." Lukas is actually working on an improvement to the test, because we're putting this in conformance; like, this is in conformance, it's just a bug in the test framework. There may be something where it's also possible that on an HA server you'd get a different set of flakes that would show up like this, but I'll have him come along after you guys, and we're trying to get something generic. And that almost certainly has no impact on what this issue is.
D: That sounds cool. It was just interesting, because for each of my replace and patch calls I'm actually doing a watch afterwards to make sure that I do actually see those conditions, and they both work. But out of a run of about five runs with no sleep, it ends up killing the test. So I get about... sorry, two flakes out of about five runs, so yeah, it's pretty consistently flaking.
B: Yeah, I would just describe this scenario. Yeah, let's get a bug open, and I'm pretty sure we'll find something; it'll take some time to fix. Once we know what the issue is, I think we can decide whether to work around it with a sleep or not. But from a conformance perspective, I would say that if you update the status with an unrelated condition, it is a violation of conformance for your API server to stop being available. So we'll just have to sort through it, but yeah.