From YouTube: 20190409 sig arch conformance
A: First up on the list for today, I wanted to discuss frequency and duration of this meeting. Currently we have it scheduled for an hour every week. In my personal opinion, that's probably a lot, given that, you know, SIG Architecture itself meets every two weeks. I do know that we wanted to get ourselves bootstrapped, so I wanted to poll the folks who are here real quick just to see what their thoughts are.
D: I think that the original intent to do it weekly was so that we could get people up to speed and then move to bi-weekly later. So I guess if people are up to speed, we can just assume that we can move to bi-weekly, right?
A: Right, any other last questions, comments, complaints, concerns on this administrivia? Going once, twice, three times... sold. So, I read through Patrick's PR. Patrick Lang... I do not see him. Yes, I do, yeah. Yeah, I think it's highly useful to have some minor updates on that PR with regards to, like, guidance for the rest of the group as we're reviewing, yeah.
C: I mean, one of the things, like I said, take it with a grain of salt... I haven't looked at this for a little while, but since then I believe we've merged this image that can be used to create tests that run and succeed on both environments, like we did for the DNS suffix list. It provides a command, if you want an outline. So the things would need work on Windows, but the method of evaluating them wasn't working before, which is why we should create an image.
G: Like, there wasn't really any functionality being tested on some of the images, and that just requires a manual audit. But in a lot of cases it was just a matter of deploying an image, then doing an update and making sure the image changed. There was really nothing about the image itself being tested, and there's a whole lot of cases like that. I could…
B: At the risk of asking a dumb question: is the intention here... I presume we still want to, you know, not have a kitchen-sink image, but rather have a sort of hierarchy of images where we reuse as many images as possible? So we might end up with, you know, a reasonably large number of images, but the levels will be as small as possible.
G: That's the biggest part. Well, it's two parts, really. One, I'd really like the size to be smaller, which helps anybody who wants to, like, test, and then the air-gapped consideration. But then also, like, if someone's writing a new test, I wish it was simpler to say, like, oh, I need an image... which one of these 73 am I supposed to choose? Do I just spin a wheel? You know, if there's one that's just, like, the default test image, that makes it a little bit more clear as well, if they really care about the functionality, but we…
I: So for sure, then, the absolute first thing that we did was do the audit and reduce the number of images by reusing the existing images in places where they're not really needed... like, a test is using a specific image, but it's not really needed and you can use one of the base images instead. So: do the audit first and try to reduce the number of images that we actually have to touch. That would be the first step, I think. Have you done that already?
I: That was the first thing we did, and the second thing we did was move the definition of the images to one file, so we could know that, like, this is the full bucket we are working with. That made things a lot easier, because then people could start picking off the images one by one.
I: The hardest problem you're going to have is getting the reviews for the images, because there are very few people who are doing that right now. So you need to make sure that you involve people like Manjunath Kumatagi, who has done this before; he's one of the owners and approvers in the test images directory. So talk to people and make sure that they are in on this. And the other thing would be: we really, really need to get the promotion of the images in place, otherwise we're going to have issues.
I: The way we set it up is: SIG Testing will have a GCR repository where some of the owners in SIG Testing can push to it... they don't have to be Googlers to push into it. Okay? And then we will have another process where a bot picks up the image and pushes it into the official test registry.
F: Okay, can you add a light note to this doc? Because I think we need to get a name down for building the Windows images, so that way we can add them to the multi-arch manifests as part of this process. So that way, what we could do is sort of use this as a forcing function to say: when someone's adding a new tag for one of these images, we should have the Windows one following, so…
I: The way it is going to work is: the first image that gets built needs to have... okay, let me start from the beginning. The GCR repository where SIG Testing... where the SIG Testing folks will push... mm-hmm... that should be complete: it should have the manifest list and the images for all the architectures, and the manifest list should include the images. So really, the only way to do that is to make sure that the scripts that are there for the test images take care of the Windows ones too.
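The manifest-list mechanism being discussed here (a single tag that fans out to per-OS/arch images) can be sketched roughly as follows; the digests, names, and platforms below are illustrative, not the actual registry contents:

```python
# Illustrative sketch of how a client resolves a manifest list to the
# image for a specific platform. Digests and platforms are made up.
MANIFEST_LIST = {
    "manifests": [
        {"digest": "sha256:aaa111", "platform": {"os": "linux", "architecture": "amd64"}},
        {"digest": "sha256:bbb222", "platform": {"os": "linux", "architecture": "arm64"}},
        {"digest": "sha256:ccc333", "platform": {"os": "windows", "architecture": "amd64"}},
    ]
}

def resolve(manifest_list, os_name, arch):
    """Pick the image digest matching the requested os/architecture."""
    for entry in manifest_list["manifests"]:
        plat = entry["platform"]
        if plat["os"] == os_name and plat["architecture"] == arch:
            return entry["digest"]
    raise KeyError(f"no image for {os_name}/{arch} in manifest list")

print(resolve(MANIFEST_LIST, "windows", "amd64"))
```

If the Windows image is missing from the list, resolution for a Windows node simply fails, which is why the build scripts need to produce the Windows entries alongside the Linux ones.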
A: Why don't we pause here real quick? Let's time-box this particular conversation. I'd ideally like to get this first PR for Windows guidance merged soon, then open up two separate PRs, one of which is reducing the image count, which John said he would do. But the third is the guidance and the pushing of Windows images, and the cross-OS image…
A: …construction could happen as a third separate piece. So we have the first one, which is the general guidance on Windows; the second one, which is reducing the overall image count; and the third one, which is, you know, how do we deal with this cross-OS image, as well as the ability to push the images into the appropriate location. Because that seemed like a reasonable way to structure and execute against this, you know?
H: Maybe... it depends; it might be a good idea to have semantics as well. There's one problem, for example: a lot of our images we are basing on a busybox image which we built ourselves, which is outside of Kubernetes. So we actually have a couple of images which are not based on anything under the Kubernetes repos.
H: Okay, so I was saying that we have a couple of images which are outside of the kubernetes/images repository... the busybox image is one of them, and we actually base a lot of our images on it. We build it ourselves, and it also contains a lot of things that reduce the differences between Linux and Windows.
H: For example, we have a symlink from the C: drive to the /bin directory, because our tests are executing commands from the /bin directory, and so on and so forth. And adding the nc command, which is more in line with how it works on Linux, is also done there. So the question is what we do with it... should we bring that Dockerfile into the Kubernetes repos as well? Yes…
A: Going once, twice, three times. Alright, the next one: I did walk through the current workflow that I believe Aaron had documented for how the conformance working group works. I'm looking for a volunteer who wants to potentially write some test-infra updates for doing project board maintenance. Is there anybody who would like to do that? Basically, when you open up an issue and you label it area/conformance, ideally what I'd like to do is just add it to the project board.
C: Right: pluggability... things that might behave differently in different distributions; that's sort of what it comes down to. So pluggability, things that might have been pulled out or disabled, alternate implementations. Those are the things to kind of keep in mind as we go through the backlog, as we triage and order the backlog, as a higher priority than other things. I would also argue that things that are related to the testing infrastructure are relatively high priority, because they probably block a number of different issues from proceeding.
C: The rest of that email talks about some specific areas where Brian, at least, wasn't clear whether they're covered or not in the existing tests. The top two are further guidance around general areas, but right, the sort of first thing to nail down there is pod spec. Let's make sure that we've got good coverage on pods and the API, because obviously that's so central. So that's kind of the guidance from Brian in there, anyway.
A: …to have everybody kind of get a feel for it. So, Shree?
J: Yeah, let's quickly do this. And then, there are some tests, we realized, that could be conflicts in the conformance.txt... I have seen particularly the tags, like [LinuxOnly] and [WindowsOnly]; the tool strips them out. So if the same test name appears under [WindowsOnly] and [LinuxOnly], they will go into conformance.txt as the same test name. So that's a problem: should we fix the tool, or should we have different names for the Windows tests, which are basically…
J: Do we run into that at all? That's one question we have. ...I didn't follow your description, I'm sorry. ...Essentially, take an example test from the Linux-only tests that we are porting to Windows: it will have the same test name. Then, when we generate the conformance.txt file that has the list of all the conformance tests, we strip out the tags, like the [LinuxOnly] tag or the [WindowsOnly] tag, and that will cause a test name conflict in the conformance.txt file.
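The collision being described can be illustrated with a small sketch; the test names are hypothetical, and the real stripping logic lives in the Kubernetes tooling, not here:

```python
import re

def strip_tags(test_name):
    """Remove bracketed tags such as [LinuxOnly] or [WindowsOnly],
    mimicking the normalization applied when generating conformance.txt."""
    return re.sub(r"\s*\[[^\]]+\]", "", test_name).strip()

# Two distinct tests that differ only in their OS tag (made-up names):
linux_test = "[sig-network] DNS should resolve cluster names [LinuxOnly]"
windows_test = "[sig-network] DNS should resolve cluster names [WindowsOnly]"

# After stripping, they collapse to the same conformance.txt entry,
# which is exactly the conflict raised in the meeting.
print(strip_tags(linux_test) == strip_tags(windows_test))
```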
C: So if we have a test that covers both... if we have two different tests, one for Linux and one for Windows, and they need to cover the same behavior, I think we should try to combine those into one. Now, if it's an image issue, we have this image that can help us do that. If the behavior is actually different, then…
H: There were a lot of these questions regarding a set of tests... for example, the SIG Network ones, you know, UDP and TCP connections, HTTP connections. Most of the pods created for those use hostNetwork set to true, but you cannot use host network on Windows, but those tests would basically pass on Windows, that particular part, right? So…
J: So yeah, there was always a question, like, you know: is the test name used anywhere else? That's why I added the second bullet point. Are we using test names... are any of these tags hard-coded into our CI process, like test-infra, or anywhere? I do not have any visibility on that, so it was a general question I have here, if…
A: If you take a look at testgrid... so there's policy, and if you look throughout the different suites that exist in prow, tags are exhaustively used, and they're used to filter jobs for the given suites that exist. So I would not start creating new tags without guidance from SIG Testing. So, like, the windows-only tag was shut down, because unless it's very explicit that this is a brand-new feature that exists only in the Windows domain, we shouldn't be doing something like this. And we actually…
A: …not Windows. So the standard operating procedure for adding things that are feature-option-enabled for any given environment... in this case, feature-option-enabled for Windows... would be [Feature:] whatever the feature is. But if it's core behavior, you should not be using [Feature:]. That's usually, like, standard practice. So if you have a Windows-only thing that's just core feature enablement, such as, like, pods will take an exec, right, that should not be [Feature:Windows]; that should be a core test that somehow makes it into the core suite, which, as the... you know, if the provider or whatever is Windows, runs on the intersection of it. So there's guidance, there's documentation on all of this, actually.
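The tag-based job filtering mentioned above works via focus/skip regular expressions over test names. A rough sketch of the mechanism, with hypothetical test names (the real suite definitions are in the prow/test-infra configs):

```python
import re

# Hypothetical test names; [Feature:...] marks optional functionality.
tests = [
    "[sig-node] Pods should support exec [Conformance]",
    "[sig-windows] GMSA support [Feature:Windows]",
    "[sig-node] Device plugins [Feature:GPUDevicePlugin]",
]

def select(tests, focus, skip):
    """Keep tests matching the focus regex and not matching the skip regex,
    roughly how e2e suites are carved out by tags."""
    return [t for t in tests
            if re.search(focus, t) and not re.search(skip, t)]

# A default suite typically skips anything behind a [Feature:...] tag.
default_suite = select(tests, focus=r".", skip=r"\[Feature:[^\]]+\]")
print(default_suite)
```

This is also why inventing a new tag without SIG Testing guidance is risky: every suite's focus/skip expressions would have to account for it.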
J: Okay, another question I had: in one of the PRs, Brian mentioned that some of these tests need to be release-noted, and I do not see a guideline. For all the new tests that we are writing now... conformance tests, or tests being promoted to conformance... should they be part of the release notes? I thought they would already be part of the conformance document that we generate for each of the releases. So I see that a…
A
Little
bad
I
did
I
did
I,
don't
know
what
release
artifacts.
We
are
publishing
as
part
of
the
release,
notes
for
conformance
I,
really,
don't
know
the
answer
to
that,
and
maybe
we
should
follow
up
with
both
sig
release
and
Brian,
and
we
get
that
we
can
touch
on
this
and
cigarettes
they're
in
this
week
about
like
what
are
the
readings
artifacts
we
want
to
publish
in
who
owns
those
released.
Artifacts
I
can.
A: Yeah, that's all that's really necessary, for instance. But let's sync with Brian and SIG Release about what we want to say, who would do this update, and whether or not it makes sense. I don't know... I don't have strong opinions on this; I'd rather solicit what the broader group of people want to accomplish as priorities.
J: The next item is: failed pods limit is some feature that pod spec would have had. I think this is a 2016 thing that Globant ran into... some of the folks, they were investigating all the features in pod spec, and this particular one is not implemented, so it's a functionality that does not exist. So when we uncover this kind of stuff, how do we proceed? I mean…
A: Still can't hear you. Let's take a look and see if we can make sense out of this: tests that do not have conformance... 72191. The answer was that anything that was pod-related, like pod spec additions, we did want to review within this group, in part because we wanted to make sure that we vetted it and that it would eventually go and become a conformance test, if that made sense. I do know that, like, part of the guidance…
A: Part of the guidance was that we need to fill out the pod spec coverage, and that's what I think Shree said he was going to do: create a generic umbrella issue. And, you know, we should probably cross-reference that generic umbrella issue with these new ones that are being made, so that way we have a clear understanding of where it came from.
C: Well, I'm doing a lot of those triage ones. There are things where it's, like, prerequisite things... you know, either it's a test that's not yet ready to go through this conformance process, but maybe it will be eventually, or it's a prerequisite to doing some conformance test. They're, like, a different category to me. They…
C: I'm just trying to... what I don't want is for things to get lost, right. Either we need to tag them conformance and just leave them in triage, or leave them in my sorted backlog even though they're not yet up for promotion to conformance. But how do we denote that this test has been looked at, or that it's waiting to get enough data around lack of flakiness and whatever to eventually move to conformance, but nobody needs to look at it right now?
A: But I think as long as it gets triaged appropriately, it's in here, and it's in the sorted backlog, then it doesn't matter. So as long as we have coverage and we're just executing against the backlog... I think as long as we prioritize it appropriately, that's the key. Okay, give it a lower priority. Great. Two other things that are trying…
A: You might get broader adoption, but if you want to get them promoted and have better review bandwidth and better review coverage... like, we are actually functioning as a sub-project now, which is good. So we have people who are signed up, and who are signing up, to do action items. So this is probably a better clearinghouse than most other areas.
L: Hey, so... are we talking about my issue? Okay, yeah. This issue had conformance tests and normal tests mixed. I asked the user to split them, because I saw that it was opened in December last year, and people were commenting on both the conformance side and the normal tests, and it was not going anywhere. I felt that if they split it, at least, maybe, you know, something would get merged.
L: And it wasn't a promotion; it was an existing conformance test, at first getting some cosmetic change in the description, and so I asked him to just move it, because that would go through faster than the test itself, yeah. And another one was: they wanted to add this test back into the queue, so I wanted to know where the queue is... should I put this in? Because maybe this test in the future would get…
A: I would say in progress... this is the, quote-unquote, sorted backlog, but it's kind of missing prioritization. Some of these things that are currently in progress also have no prioritization. Why don't we take an example of one and try to see if we can, as a group, ascertain its prioritization, given Brian's kind of coarse-grained rubric: test back-off, test flakiness.
A: A kanban that actually does the sorting for you is ideal, right, because then you'd be able to actually see... like, in the ideal world... I'm just a SQL guy, and I just like my SQL queries. So if I actually had SQL on GitHub, I'd be a happy person. But I don't have that, so I can't sort automatically with a SQL query. So if somebody wants to take that as an action item... add it to the current list of bot-enablement features... I think that would be highly useful.
L: I think, you know, there is a one-to-one mapping for many of the prioritizations around pod spec to the e2e tests. So for some of them it's very clear, at least the ones I reviewed. There may be some that might lie outside the scope of the priorities but still be good enough for conformance, I think.
A: Extra data... but what I'm trying to look for, as a group here, is a shared understanding of how we evaluate... given the information that we have, which is kind of limited, to be honest... how we sort of prioritize the backlog. Because currently we have 41 plus nine plus... that's, let's say, 51... 60 items that really need to be dealt with in some sorted order, right. We can't just, like, do them all; that's not a sane, rational way of doing it.
C: Well, given the guidance of pod spec, and that some of these are sort of kubelet-related... mostly kubelet-related... and we have things like how the container behaves, how containers behave inside... that seems like it's very relevant. It probably should be relatively high priority.
A: So the point is different, though: it's recommended to be serial because you could be running it on prod. But I do think that we should do an audit, and probably have an umbrella issue, for evaluating parallelization of tests, to make sure that we are doing our best effort to make tests parallelizable. Because a lot of the serial tests, I don't think, are necessary; I think they were just... not really badly written, but kind of conflating things, right.