From YouTube: 20201203 SIG Arch Community Meeting
A
Hi everybody. Today is December 3rd, so I wasn't expecting this many people, because I was pushing people off, saying, "Hey, let's do it next week," since John and Derek are going to be out. But since we have a dozen people on the call, maybe we can just chat, even though we won't have quorum to make decisions or anything like that. We can just chat about stuff. So the first thing that we were chatting about was Wojtek's reliability proposal.
B
Sure. So basically, we as the Reliability Working Group are proposing to do something to raise the bar for reliability, or quality in general. The proposal consists of four main things, or four main phases, and ideally I would like the first one to target 1.21.
B
I don't think we'll be ready with the last three, but the first milestone is doing something to reduce the flakiness of our tests. In particular, what we are suggesting, at a high level, is to block individual SIGs from enhancements or feature contributions if their tests are above some flakiness threshold, and then to gradually decrease that threshold to get to a reasonable state.
B
But
we
will
we.
We
don't
want
to
block
everything.
We
just
want
to
start
roughly
where
we
are
and
try
to
decrease
the
threshold
over
time.
Obviously,
there
are
like
bunch
of
details
in
the
in
the
dock.
I'm
not
sure
like
how
deep
we
want
to
to
get
here.
I
think
the
overview
is
probably
more
important.
We
obviously
need
to
like
provide
a
visibility
into
what's
happening
and
stuff
like
that.
I
think
the
details
are
in
the
dock
and
probably
doesn't
make
much
sense
to
discuss
them
here.
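To make the mechanism concrete, here is a minimal sketch, in Go, of the kind of gate being described: compute a SIG's flake rate and compare it against a threshold that is ratcheted down over time. None of this is from the proposal doc; the names and the retry-based flake definition are assumptions for illustration.

```go
package main

import "fmt"

// TestRun is a single CI run of one test owned by a SIG.
type TestRun struct {
	SIG    string
	Flaked bool // failed, then passed on retry without a code change
}

// flakeRate returns the fraction of a SIG's runs that flaked.
func flakeRate(runs []TestRun, sig string) float64 {
	var total, flaked int
	for _, r := range runs {
		if r.SIG != sig {
			continue
		}
		total++
		if r.Flaked {
			flaked++
		}
	}
	if total == 0 {
		return 0
	}
	return float64(flaked) / float64(total)
}

// blockedFromFeatures reports whether a SIG's feature PRs should be
// held. The threshold starts near the status quo and is lowered a
// step at a time, as described in the meeting.
func blockedFromFeatures(runs []TestRun, sig string, threshold float64) bool {
	return flakeRate(runs, sig) > threshold
}

func main() {
	runs := []TestRun{{SIG: "sig-node", Flaked: true}, {SIG: "sig-node"}}
	fmt.Println(blockedFromFeatures(runs, "sig-node", 0.25)) // true: 50% > 25%
}
```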
B
Unless someone has specific questions, the details are described in the doc, so if someone is interested they should definitely look and provide feedback. That's basically the milestone that, ideally, I would like to start happening in 1.21.
B
I guess we won't be able to start at the beginning of 1.21, because we won't have all the tooling, but hopefully somewhere in the middle of 1.21 or something like that is possible. I don't think it needs to be directly attached to any specific release; we can potentially try starting it whenever we're ready. But that's also something we should probably discuss. Yeah.
C
So, from my perspective, big efforts like this need to happen at the beginning of a release, because they can be rocky. For 1.21, my suggestion would be: if we can get things in place to provide visibility, at the very least, with no enforcement but visibility, we get a solid... yeah.
B
Yeah, visibility is definitely a blocker for any enforcement. We definitely can't start enforcing something if people won't be able to see why it's happening. So yes, I think visibility, plus all the tooling for enforcement without actually enforcing, is probably a reasonable target for 1.21.
C
From the blocker portion, we have been kicking around the idea in SIG Release as well to modify the merge-blocking capability. Right now, I feel like the merge-blocking capability is not extensive enough: if the blocker issue is open, it's on, and if someone closes the issue, it's off. So we want to route some of that functionality into Prow instead, being able, the same way we do configuration of things like milestone maintainers, to configure which people have access to run a command to block a release. So, from the reliability and release perspective, let's work together on that piece.
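For context, the coarse on/off behavior being described is issue-driven: merges are held while an open issue carries a blocker label, and resume as soon as it is closed. A rough Go sketch of that logic (a simplified illustration, not Prow's actual implementation):

```go
package main

import "fmt"

// Issue is a minimal stand-in for a GitHub issue.
type Issue struct {
	Number int
	Open   bool
	Labels []string
}

// mergeBlocked reports whether merges should be held: true while any
// open issue carries the blocker label, false the moment it closes.
// This is the coarse switch the speaker wants to replace with
// finer-grained, command-driven blocking routed through Prow.
func mergeBlocked(issues []Issue, blockerLabel string) bool {
	for _, is := range issues {
		if !is.Open {
			continue
		}
		for _, l := range is.Labels {
			if l == blockerLabel {
				return true
			}
		}
	}
	return false
}

func main() {
	issues := []Issue{{Number: 42, Open: true, Labels: []string{"merge-blocker"}}}
	fmt.Println(mergeBlocked(issues, "merge-blocker")) // true until #42 closes
}
```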
B
Sure, absolutely. I'm definitely far from being an expert in all the tooling here, so we will definitely need your help, and by "your" I mean SIG Release and SIG Testing; probably those two groups. I think we are attached to the goal, but we are not attached to any specific tooling that I described in the doc. So if anyone has a better suggestion for enforcing this, we are obviously open to any changes there.
C
So I think there's also an opportunity for cleaning up some cruft. A while back... I think the board still works, because it's folded into the config-forker rotation logic in test-infra. There's a board called sig-release config errors, and it's basically a hand-wavy idea of things that I don't think are configured properly.
C
One
of
the
one
of
the
indicators
of
that
is
like
if
a
test
does
not
have
a
and
a
like
if
the
test
has
alert
options,
if
it's
on
a
sig
release
informing
our
blocking
board
and
it
has
alert
options
that
are
only
sig
release
right.
That
means
the
the
the
area
is
not
actually
looking
at
their
test
right.
C
So
I
would
consider
that
to
be
a
configurer,
so
I
think
taking
an
opportunity
to
go
through
some
of
those
and
identify
sketchy
tests
would
maybe
weed
out
some
of
the
the
work
that
we
have
to
do
in
terms
of
building
out
visibility.
B
Okay, so let's also think about that offline. Whoever ends up doing that will probably need to look into what already exists. I was relying in many cases on Steve Kuznetsov and his familiarity with the testing things, but yes, I definitely want to involve SIG Testing soon, pretty much now, more or less.
B
Yeah, I think my goal was to start with getting at least high-level buy-in on the idea and the high-level strategy here, namely the fact that we will start blocking some contributions, which we were never doing before. As soon as we have this buy-in, we should start discussing the specific tooling and try executing on that.
C
So I think a worry that someone pinged me about with the proposal is the blocking, more specifically for PRs that cross-cut multiple areas. How can we ensure that we block for one SIG but not everyone? If a PR has multiple SIG labels on it, what happens then?
B
Oh, let me start from the beginning. If a change is only boilerplate in the area of one SIG, people, or at least the leads, should be able to manually remove that SIG's label from the PR if that particular SIG is blocked. That is my high-level idea. Or, to put it in other words: there is usually one SIG driving a particular change, and the other SIGs are incidental; we shouldn't be blocking a change because some random SIG is blocked from feature contributions. So I'm assuming there should be some group of people who are able to remove the particular SIG labels from a PR and let it go. But that's something that, yes, we should probably discuss too.
C
I would maybe flip that and say: what about an override, overriding the context? Because you still want to identify that the PR is targeted at that SIG, even if the SIG is blocked. A reliability approver or someone like that could override the context and say that this should still merge, because we accept that it is primarily in one SIG's area versus the other. Yes.
B
That
that's
that's
pretty
much.
Yes,
I
think
it's
it's
clean!
B
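Prow does ship an override plugin, where sufficiently privileged users can comment `/override <context>` to force a failing required status context to success; a per-SIG blocking context could reuse that pattern. Below is a hedged sketch of the idea; the context name, allow list, and handler are all hypothetical, not existing configuration:

```go
package main

import (
	"fmt"
	"strings"
)

// allowedOverriders is a hypothetical allow list (e.g. reliability
// approvers), sized somewhere between three people and a hundred.
var allowedOverriders = map[string]bool{"reliability-approver-1": true}

// statusContext tracks a per-SIG merge-blocking context on a PR.
type statusContext struct {
	Name    string // e.g. "sig-foo-reliability-blocked"
	Success bool
}

// handleComment flips a blocking context to success when an
// authorized user comments "/override <context>", mirroring the
// pattern of Prow's override plugin.
func handleComment(user, comment string, ctxs []statusContext) {
	if !allowedOverriders[user] {
		return
	}
	const prefix = "/override "
	if !strings.HasPrefix(comment, prefix) {
		return
	}
	target := strings.TrimSpace(strings.TrimPrefix(comment, prefix))
	for i := range ctxs {
		if ctxs[i].Name == target {
			// The PR keeps its SIG label for tracking, but can merge.
			ctxs[i].Success = true
		}
	}
}

func main() {
	ctxs := []statusContext{{Name: "sig-foo-reliability-blocked"}}
	handleComment("reliability-approver-1", "/override sig-foo-reliability-blocked", ctxs)
	fmt.Println(ctxs[0].Success) // true
}
```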
C
Yes, because then (and I'm sure I'm talking about things I don't know exist) we could potentially key in on the people who run that command, get metrics on what gets overridden and why, and dig into the why. I think that's another data point for us to use in the next release, right?
B
Yep, absolutely. It will probably mean we need to add some access controls for a specific subset of labels to do that. Obviously, it can't be a group of three people; we need to make it, I don't know, 10 or 15 or something, but also not 100.
C
Yeah, it's like, I don't know, not milestone maintainers, but maybe release-managers sized. A little something.
D
I mean, I have not had a whole lot of time to follow this, but I think the proposal seemed pretty reasonable. I was glad to see this get sent.
A
Go for it, but let's do it async also, you know: send out information to the mailing list, please, like you mentioned before. Wojtek, maybe you should send a reminder.
B
If,
when
you
make
some
changes,
sure
yes,
I
will,
I
will
at
least
try
to
address
the
comments
that
already
were
added
and
steve's
comments.
Stephen's
comments
comments
from
today
and-
and
I
will
probably
ping
it
ping
that
fret,
like
early
next
week
or
something
okay
and
sasha.
Let's
do
the
same
for
the
architecture.
One.
A
Sorry, Hippie, we'll come back to you now.
F
Thanks. May I share the screen, please?
A
Yes, you may share the screen. And Rion, if you wouldn't mind dropping a link to the markdown, that'd be good.
F
I'm sharing my screen.
E
Beautiful. Here's our roll-up for the conformance subproject, for the OKRs for 1.20. We've got our final numbers in; no radical changes.
E
We've kept iterating, still trying to get things done with more velocity and stable output. Rough numbers since 1.14: I like this, this looks nice, and this is based on the conformance progress graph over to the right. I'm really proud of our group here. The final update for 1.20 is that there will be no more changes, because test freeze is here, and then I think our release will be out the door; yay for that. Of course, our main OKR every release is increasing stable test coverage.
E
We
had
a
goal
of
30
and
we
had
a
stretch
goal
of
40..
We
got
24
and
that's
still
good,
it's
80
and
from
from
a
okr
standard,
that's
still
success.
Let's
just
be
really
clear:
it's
just
not
the
100
of
what
our
goal
was
set
for.
These
are
all
the
very
specific
pr's
you
can
link
those
in
the
in
the
ocarina.
These
are
all
created
by
rion
rion's,
our
bot.
E
We
all
share
the
same
iii
branches
that
we
all
push
to,
so
we
can
all
push,
but
only
one
person
can
edit
the
content
of
a
whole
request
at
the
top.
So
rather
than
trying
to
get
a
bot,
we
asked
korean
to
do
that
for
us.
So
that's
how
you
can
identify
our
projects
work
pretty
easily
outside
of
the
work
that
we
did.
The
community
was
really
awesome
in
getting
eight
new
endpoints
promoted
to
ga
with
test,
and
we
we
always
want
to
see
that
every
release,
nothing
new
without
test.
E
We
had
13
node
proxy
endpoints
that
we
had
marked
as
ineligible,
because
we
are
unable
to
find
good
ways
to
test
those,
and
there
was
community
feedback
on
that.
I
won't
dig
into
the
details,
feel
free
to
follow
those
links.
Just
let
everybody
know
it's
getting
harder.
We've
kind
of
reached
the
60
mark
and
we
have
intentionally
reached
out
to
find
the
low-hanging
fruit.
E
We
had
lots
of
issues
around
trying
to
find
the
flaking
test
and
then
we
need,
for
the
most
part,
increasing
our
timeouts
to
reach
as
our
loads
get
higher
on
our
ci,
we're
finding
things
flake.
A
little
more
often
on
how
long
it
takes
things
to
be
like
pod,
ready
status
and
whatnot
the
kubernetes
are
for
for
most
of
our
ci
jobs
we
consume.
E
There
is
what
we
need
from
those
jobs
is
the
audit
logs
and
the
audit
policy
for
most
of
our
large
ci
jobs
excludes
anything
related
to
event,
which
means
we've
been
unable
to
to
log
and
count
all
of
our
event
related
endpoints.
E
Instead
of
trying
to
create
policy
changes,
we
ended
up
creating
new
ci
jobs
that
we
consume,
that
are
specific
to
a
conformance
and
in
those
jobs
we
allow
all
the
event
the
event
related
audit
logs
to
go
all
the
way
through
based
on
those
policies,
as
we
hit
endpoints
that
nobody's
tested
before
we
are
finding
several
upstream
bugs.
Each
of
those
words
is
a
different
bug,
I
think,
and
so
that
is
we'll
change.
E
The
velocity
slightly
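For context on the audit-policy point: a Kubernetes audit policy decides which requests are written to the audit log, and a policy that drops events hides event endpoints from tools that count endpoint hits from those logs. The two policies below are an illustrative sketch only; the schema follows the real audit.k8s.io/v1 API, but the actual rules used in the CI jobs are not quoted in the meeting.

```go
package main

import "fmt"

// droppingPolicy mimics a typical large-CI policy that records
// request metadata but drops events entirely, so event-related
// endpoints never appear in the audit log. Illustrative only.
const droppingPolicy = `
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: None          # drop events: they never reach the log
    resources:
      - group: ""
        resources: ["events"]
  - level: Metadata      # record everything else at metadata level
`

// conformancePolicy mimics the conformance-specific jobs that let
// event-related requests through at a level that records them.
const conformancePolicy = `
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
`

func main() {
	fmt.Println(droppingPolicy)
	fmt.Println(conformancePolicy)
}
```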
E
We've also had images that we needed to have built underneath us. I think agnhost is one that Stephen had worked on, in order to add support for looking at the other options we pass from the pod to that image, and we still don't have that released yet, so we're waiting for the next release; hopefully the tests that depend on those images will be in 1.21. We're obviously getting much more community interaction, and there's also some latency.
E
We
we
it
takes
something
everybody's,
so
busy,
and
particularly
this
year,
it's
hard
to
get
quick
responses
on
things
the
remaining
endpoints.
The
summary
here
is
the
remaining
endpoints
are
going
to
take
more
effort.
E
With regards to our releases, we did point out that we have a new Prow job running to catch new endpoints, and we're working with SIG Release to, at some point, progress that from being something that our team watches and then reaches out to SIG Release about, to hopefully a release-blocking job, so that new endpoints either come with conformance tests or are demoted back to beta before the release is triggered.
A
So,
do
you
want
to
put
it
for
ins
release
informing
in
121.
C
I
think
I
would
love
to
put
that
forward.
Was
it
was
it
not
already
informing?
I
think
it's
informing.
E
I'm for it, and people noted that there are some steps and protocols to follow, and I will put that through for 1.21. Further automation is in progress so that the main APISnoop website is actually pulling its data from some SQL dumps into JSON, so that when there are changes, there will be a place to discuss them as a PR. I'm not sure if that makes a lot of sense, but when we have a promotion into k/k that includes new endpoints...
E
We create a new PR to update the data underlying APISnoop, so that when we merge it, it's obvious which endpoints were part of that; that'll probably be more of the discussion around the release-informing job as it evolves during the 1.21 time frame. Some other important news: 1.20 wasn't a short release cycle overall, but the test freeze time frame was a little compressed, and per the link here, the time frames for 1.21 are still under discussion, whether we're sticking with a three-release-per-year cycle or a four-release-per-year cycle. I think those slides are from two or three weeks ago.
E
So if there are any updates, it'd be good to know.
C
Yeah, I can give a quick update if you want.
C
Or I can do it afterwards? This is the time for it: what are our timelines for next year? All right. So I think, you know, the issue was opened, and part of the reason that I did the keynote on the release cadence was to gather more feedback. I think we need to wait a little longer for that feedback.
C
I
have
a
lot
of
people
that
are
interested
in
the
three
releases
I
kind
of
restricted
the
options
given
what
we
have
to
already
consider
just
around,
like
our
api
guarantees
and
and
like
release
branch
management
and
like
maintaining
some
of
the
branches
and
releases
over
time.
So
three
seems
like
the
least
option,
but
given
that
we're
rolling
into
the
next
cycle,
I
think
we're
we're
gonna
stick
with
four
right
this
second,
and
if
we
need
to
slide
a
release
later
in
the
year
to
compress
into
three
for
the
year.
C
I think, overall, with you and me wearing the release hat and also the enhancements hat, since you want to start rolling some of the API conformance requirements into the KEP process, as well as getting a finer touch point on how and when test freeze happens, we can also have the conversation about whether we need to consider where those land to help you be successful.
A
I
still
haven't
but
stephen:
when
will
we
know
how
many
releases
are
we
going
to
have
next
year.
C
Can't
be
too
late,
yeah
yeah.
C
No, I know. It basically needs to be decided during 1.21, ideally close to the beginning of it. But I am sensitive to the fact that we're going into kind of sleep mode as a community right now, yeah.
C
We have tentative dates for 1.21 already, and those will be coming soon, as the leads for the team have pretty much landed. But we're going to play around with the three-versus-four dates and, I guess, propose a comparison, in addition to gathering the feedback.
A
So
only
other
question
I
have
is
like:
is
there
a
likelihood
that
they
might
be
of
different
lengths.
A
Yeah
we
have
to
accommodate
kubecon
eu
and
then
cubecon
na,
so
they
might
not
end
up
being
exactly
three
months
or
exactly
four
months.
E
You may not be aware, but Taylor Waggoner is the human who actually approves any of the companies that want to say that they do what we have defined Kubernetes to be, and that process being completely human makes it prone to missing some things. That's what we worked on.
E
I
think
this
was
in
the
in
the
last
release
was
the
ensuring
that
the
entire
list
of
defined
tests
that
we
require
are
actually
run
and
are
all
successful
within
the
submitted
results
towards
the
cncf
conformance
repository,
which
is
the
gatekeeper
for
saying
you
may
use
the
cncf
kubernetes
logo,
it's.
It
is
working,
and
I
just
met
with
taylor
yesterday
in
there,
and
they
are
happy
with
that.
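A minimal sketch of that check, assuming a required-test list and a set of passed test names already parsed out of a submission's results (the parsing itself is elided, and the names used are hypothetical):

```go
package main

import "fmt"

// verifyConformance checks that every required conformance test is
// present and passing in a submission. How the passed set is parsed
// from the submitted results (e.g. a junit file) is elided here.
func verifyConformance(required []string, passed map[string]bool) (missing []string) {
	for _, name := range required {
		if !passed[name] {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	required := []string{"example conformance test name [Conformance]"} // hypothetical
	passed := map[string]bool{}                                         // empty submission
	if missing := verifyConformance(required, passed); len(missing) > 0 {
		fmt.Println("submission incomplete; missing or failed:", missing)
	}
}
```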
E
So as a community, let's honor that and help us ensure it. I often equate what we're doing to this: we've had this kitchen operating for a while, and there were some dishes left unclean, so we worked through washing the dishes, but we've now put a policy in place that everybody has to wash their own dishes. So, I don't know.
E
I
talk
to
those
people
and
see
how
long
of
a
journey
they've
been
on,
like
I
think
this
last
thing
was
introduced
in
either
2017
or
even
20,
20
28
2016,
and
to
see
someone
on
a
journey
to
get
a
feature
in
the
kubernetes
that
that
long
and
be
you
know
getting
to
see
that
come
through
with
clean
dishes
is
exciting,
32,
new
conformant
endpoints
and
to
clarify
where
those
came
from
eight
of
those
were
from
our
new.
Our
new
endpoints
that
were
promoted
and
24
came
from
from
our
team.
E
Again
we
have
13
newly
ineligible,
which
reduces
the
surface
area
of
our
debt.
We
don't
have
a
way
to
really
highlight
that
other
than
there's
a
link
that
says
the
new
list.
It
doesn't
say
when
we
introduce
those,
so
you
can't
see
the
graph
reducing
in
size
they're
not
terribly
important.
I
don't
like
percentages
because
of
the
way
all
the
numbers
change,
but
in
general
we're
about
nine
percent
further,
along
on
our
journey
to
100
conformance
for
ga
en
points
and
getting
all
the
dishes
washed.
E
Looking
for
this,
thank
you
looking
forward
to
121
not
a
lot
of
radical
changes.
I
think
we're
doing
a
good
job.
We
may
be
we're
going
to
keep
the
same
goals.
I
think
30
endpoints
is
a
nice
solid
attempt
and
just
just
with
a
a
reminder
that
these
endpoints
are
getting
tougher
and
then
we
were
kind
of
digging
deeper
into
all
the
complexities
that
those
last,
I
think
we're
at
150,
somewhat
endpoints.
E
Yep,
that's
a
kind
of
a
summary
of
what
I
just
said:
we're
going
to
still
shoot
for
30
and
40
is
a
stretch
goal,
but
that'll
definitely
be
a
stretch
for
our
key
results
for
the
cleanup
we're
going
to
continue
to
go
back
in
history
and
hit
the
1.11
and
110
api
registration
and
the
api
registration
status.
That's
for
a
total
of
seven
endpoints.
We
do
try
to
engage
with
the
sigs.
In
this
case
it
is
apa
machinery
I
think,
to
engage
and
get
some
assistance
writing
those
tests.
E
So
if
that's
your
area
of
expertise,
we'd
love
to
connect
with
you
any
questions
or
feedbacks
things
that
the
ways
we
can
help
support
the
community
would
be
welcome.
A
How much is in the upcoming pipeline?
E
Yeah, that's the KEPs, and keeping track of which KEPs are ready for those next stages. There's some metadata that might be useful that I don't think exists: tying the OpenAPI endpoint names specifically into the KEPs, so that when they hit the transition point of creation into alpha, that's a huge milestone, and it should be celebrated as a huge checkpoint on that KEP. And then when they transition to beta...
E
It
should
be
the
pr
length
and
celebration
points
and
then
ga
same
thing,
and
without
that
metadata
it's
going
to
be
a
manual
process
of
going
through
the
caps
and
defining
those
times
and
dates
in
order
for
you
to
measure
a
type
of
velocity
and
and
historical.
Where
are
we
and
and
where
are
we
going?
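The metadata gap being described could look like one extra list in each KEP's kep.yaml naming the OpenAPI operations the KEP introduces. The sketch below follows the general shape of a real kep.yaml, but the api-endpoints field is purely hypothetical; it does not exist in the KEP template:

```go
package main

import "fmt"

// A real kep.yaml carries fields like title, stage, and milestone.
// The api-endpoints list is the hypothetical addition discussed here:
// naming the OpenAPI operation IDs a KEP introduces, so that alpha,
// beta, and GA transitions can be tracked per endpoint automatically.
const kepWithEndpoints = `
title: my-feature
stage: beta
milestone:
  alpha: "v1.19"
  beta: "v1.21"
api-endpoints:                      # hypothetical field
  - createCoreV1NamespacedWidget    # hypothetical operation IDs
  - readCoreV1NamespacedWidget
`

func main() {
	fmt.Println(kepWithEndpoints)
}
```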
E
If you look back, we have this data at an endpoint level, but we can't predict the future, because we don't know at what point in the KEP process they are, as far as whether they're intending to make that release. So I can only look at historical data.
E
But it does not give us an indication, for those beta things, of which ones we should be preparing for, because we don't know their velocity and we don't know their intent.
A
Right
then,
then,
the
other
question
is:
when
do
you
get
to
know?
When
do
you
get
the
signal
that
somebody
is
going
to
try
to
promote
something
before
the
release
yeah?
I
know.
C
Yeah, the OpenAPI piece is a good data point for us to include, because we're thinking through increasing validation on the KEP side, as well as building out some other tooling too, and that's a really good data point to have.
A
Okay, I've exhausted my questions. Thanks a lot, Hippie. Awesome.
A
Clayton, I think I saw one of your PRs about a single repository for conformance images, so it's kind of related. That's the only other thing I can think of.
C
What
what
specifically
is
that?
Do
you
have
a
link
or.
A
So
the
idea
was
right:
now
the
the
conformance
tests
go
pluck
images
from
multiple
repositories
if
there
was
a
way
to
specify
a
single
repository.
So
all
people
are
testing
in-house
yeah
so
that
that's
the
idea
there
so.
C
So
yeah
we're
building
out
on
the
release
side
we're
building
out
an
epic
right
now
for
artifact
management
overall,
and
that
includes
the
the
image
artifact
promotion,
as
well
as
the
devon
rpm
stuff.
So
thinking
through,
there
are
a
few
issues
that
were
opened
recently
about
consolidating
some
of
like
the
test,
utils
image
stuff,
all
the
various
registries
that
get
referenced
so
yeah.
If
you
can
link
me
that
pr,
so
I
can
kind
of
connect
the
dots
too,
because
maybe
this
is
something
we
can
tackle
as
we
do
that.
A
Stephen, okay. So if there's nothing else, let's wrap it up for today. Thanks a lot, everyone; see you all in two weeks.