From YouTube: 20200730 SIG Architecture Community Meeting
A: Hello everybody, this is the Kubernetes SIG Architecture meeting for July 30th, 2020. Thank you all for coming, and please abide by our code of conduct, which in brief means being respectful and kind to one another.
B: Yeah, thanks John. I wanted to discuss the aspect of reliability. I think we have basically started doing a couple of different things towards improving reliability, or the quality bar, in Kubernetes. Things like production readiness, a project we are driving, have exactly that goal. But I think there are still things we are not really investing in and not really doing much around, like different kinds of testing, in particular soak testing, which tends to uncover different things.
B
We
don't
pretty
much
have
any
soak
testing
like
the
longest
tests
that
are
currently
running
are
like
scalability
test
which
takes
12
hours
each
and
we
are
targeting
to
to
actually
shortening
them
significantly
and
speeding
them
up,
so
it
will
be
even
worse.
From
that
perspective,
we
don't
really
have
very
good
upgrade
testing.
B: As part of production readiness we were thinking about making an upgrade test or rollback test an actual prerequisite for graduating to either beta or GA. But given that we don't really have good infrastructure for upgrade testing, nor good examples that we can point people to when they try to come up with their own upgrade tests, we decided we are not ready for that.
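(For illustration only: a minimal Go sketch of the shape such an upgrade test could take, not the project's actual e2e upgrade framework. The namespace, Deployment name, and client-go usage here are assumptions, and the cluster upgrade itself is assumed to be driven externally between the two phases.)

```go
// Illustrative only: inspect a marker workload before the upgrade and
// re-check it afterwards; the upgrade itself happens outside this code.
package upgradecheck

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// PreUpgrade records how many replicas the hypothetical "marker" Deployment
// in the "upgrade-check" namespace reports as ready before the upgrade.
func PreUpgrade(ctx context.Context, cs kubernetes.Interface) (int32, error) {
	d, err := cs.AppsV1().Deployments("upgrade-check").Get(ctx, "marker", metav1.GetOptions{})
	if err != nil {
		return 0, err
	}
	return d.Status.ReadyReplicas, nil
}

// PostUpgrade re-reads the same Deployment after the control plane and/or
// nodes have been upgraded and fails if the API is unreachable or if
// readiness regressed across the upgrade.
func PostUpgrade(ctx context.Context, cs kubernetes.Interface, want int32) error {
	d, err := cs.AppsV1().Deployments("upgrade-check").Get(ctx, "marker", metav1.GetOptions{})
	if err != nil {
		return fmt.Errorf("API unreachable after upgrade: %w", err)
	}
	if d.Status.ReadyReplicas < want {
		return fmt.Errorf("ready replicas dropped from %d to %d across the upgrade", want, d.Status.ReadyReplicas)
	}
	return nil
}
```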
B: We don't really have good chaos testing either. There are some initiatives that help harden Kubernetes itself, for example priority and fairness, but there aren't that many examples of those things happening, and I'm pretty sure we could be doing more of them.
B: So what I started thinking about is: maybe we should have a SIG Reliability or a reliability working group, or something like that, that would focus more on those problems and produce a more holistic vision or strategy of what the things are that we should be investing in and what the relative priorities are. But obviously, as mentioned in the email thread by different people (I think I also mentioned some drawbacks in my initial email, and maybe more were mentioned later),
B: this has some drawbacks. So I generally wanted to start this discussion and hear your thoughts about what we can do: whether something like a dedicated group really makes sense, or whether we can do something to empower people to do that work somehow. Because, unfortunately, most of the examples I gave are not super fancy work; it's not super easy to attract people to look into soak tests and try to debug why they are failing. So yeah.
A: So anybody who has thoughts on this, please jump in. There has been some discussion on the mailing list and, like I said, there's concern over overhead, but a working group might be a better strategy to put together. The exit criteria would have to be something like: there's a framework in place for the upgrade tests, there's a framework in place for the chaos tests, and if anything, those would eventually be owned by SIG Testing.
C: What I am more interested in is who the set of people is. I don't care whether it is a SIG or a working group, because we can switch that around; it's just getting the set of people together, getting them working on the problem, and making sure that they have enough support from the rest of us. That is what worries me more than whether we have to write a charter or whether it's a SIG, that kind of stuff, Wojtek.
C: At this point the scalability team has been leading the charge in terms of making all these awesome things happen, and I'm a little bit worried that there isn't enough of us from the other vendors and other companies helping out with the scalability stuff. So yeah. I think... sorry, go ahead.
C
To
reliability,
then
what
kind
of
skills
are
you
looking
for
in
other
teams,
other
companies
and
how
would
they
be
able
to
help
with
the
initiative?
And
that
is
basically
what
we
need
to
articulate
and
we
need
to
get
people
to
sign
on
so
to
say
so
to
say:
okay,
yes,
I'm
interested
in
this
problem,
I'm
interested
in
working
this
with
with
you
all
that
is
basically
if
the
bootstrapping
is
a
problem.
At
this
point,
I
I
feel
like.
B: Yeah, I think that's a great point. I think people from very different skill sets would be needed here. On the one hand, ideally there should be people who are managing different, or ideally many different, clusters; in particular, people from cloud provider companies would probably be good, because they've seen a bunch of those different reliability issues and so can contribute to prioritizing what is important
B
What
is
probably
not
that
super
important
at
this
point
and
can
be
like
moved
to
the
future,
or
maybe
things
that
are
just
one-offs
and
aren't
worth
investing
at
all.
Ideally,
we
should
have
some
people
from
like,
with
like
more
reliability
or
site
like
sres
or
site
reliability.
Engineers
like
who
are
responsible
for
for
for
running
those
systems
in
production
or
for
managing
the
clusters
in
their
own
companies
or
whatever,
like
that,
are
that
reliability
issues
are
directly
hitting
them.
B
That
people
that
are
performing
upgrades,
for
example
and
like
are
dealing
with
the
issues
that
this
is
causing.
But
we
are.
We
also
need
like
people
from
different
cities,
in
particular
like
testy
yeah,
actually,
jordan,
just
pasted
it
to
the
chat,
a
list
of
six,
which
is
pretty
much
roughly.
What
I
had
on
my
mind
to
actually.
B
So
probably
we
need
like
we
probably
should
start
with
like
more
senior
people
to
come
up
with,
like
the
agenda
and
and
and
strategy
and
and
and
design
or
at
least
high
level
designs
for
for
or
high
level
requirements
stuff
like
that
for
for
different
things,
but
like
we
later
later,
then
we
definitely
would
need
more
people
actually
doing
the
coding
and
stuff
like
that.
B
But
yeah,
it's
it's
it's!
It's
not
super
well
fought
through,
at
least
by
me,
so
I
I
I
wanted
to
start
this
discussion.
A: Maybe there's more we could do to recruit people there too, but we started out in the beginning and there was some interest.
E: Oh, go ahead, Jordan, go ahead.
D: These are not new issues; these have been raised for years at this point, I think.
D: I also think that talking about adding more tests, when the tests we already have are not a good signal, is difficult to scope and plan and execute on when a lot of the people that would have input on that are trying to wrangle our current issues.
D: I feel like a sort of stack rank of quality issues that need addressing is maybe a first step, and we're sort of doing that already with the existing tests. Then putting these in the mix and saying: how do these compare to feature work? Is it reasonable to continue with feature work plans when we don't actually have things that demonstrate you can upgrade successfully? So, just very coarse-grained: we need reliable CI infrastructure.
D
Our
existing
tests
need
to
be
giving
us
good
signals
here
are
tests
we
don't
have
at
all
chaos
soak
upgrade
and
then
pulling
in
sig
leads
and
saying
look.
This
is
where
we're
at.
We
think.
Until
these
things
are
addressed,
we
should
not
be
doing
more
feature
work,
because
we
aren't
confident
in
the
system
we
already
have
that
at
least
sets
out.
D
We
have
chaos,
that's
great.
That
at
least
gives
us
sort
of
broad
goals.
Then
we
can
stack
rank
those
and
start
breaking
them
down
and
figuring
out
like
who,
which
sigs
are
involved
in
each
of
those.
D
So
the
list
of
cigs
that
I
put
at
the
top-
I
agree-
walter
like
there
are
lots
of
individual
component
sigs
that
are
gonna,
need
to
be
involved,
but
the
the
ones
I
called
off
just
at
the
top,
were
sort
of
the
ones
that
we
need
to
even
get
the
framework
and
the
enforcement
and
the
goals
stated
before
we
even
get
to
like
all
right.
This
we're
seeing
a
problem
with
this
component
and
that
sig
owns
it.
So,
let's
dig
in
with
them
and
figure
out
how
to
fix
it
all
right.
I'm.
B
Done
so,
can
I
have
quick
follow-up
to
what
you
said?
Jordan,
so
does
it
make
sense
to
like
park
it
for
like
couple
more
weeks
until
we
are
in
better
shape,
with
respect
to
like
existing
tests
and
stuff
like
that
and
get
read
and
get
back
to
what
you
suggested,
which
is
like
taking
together,
leads
of
those
six
and
try
to
come
up
with
the
the
list
of
things
that
are
missing
and
try
to
come
up
with
some
priorities
across
them
in
the
next?
B
D: Sorry, without going into too much detail, I would start the conversation now and just say: CI infrastructure stabilization and CI test stabilization are two efforts that are getting a lot of attention now, but we should also start the conversation about the soak, chaos, and upgrade tests that we don't have at all.
D: Of those, soak tests seem like the most reasonable ones to run, because you don't need a lot of complicated upgrade/downgrade logic, and you don't really need new frameworks; you just need a long-running test that takes a well-known thing that is known to behave well, runs it, and then checks something at the end. That's a pretty simple test to write.
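(For illustration only: a minimal Go sketch of the "simple long-running test" just described: run a well-known check repeatedly over a long window and verify at the end that nothing degraded. The kubeconfig handling, namespace, durations, and pass/fail rule are assumptions, not the project's actual soak framework.)

```go
// Illustrative soak-style check: poll a well-behaved, well-known thing for a
// long window and verify at the end that the control plane stayed healthy.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: run out-of-cluster against the local kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(24 * time.Hour) // soak window (illustrative)
	failures := 0
	for time.Now().Before(deadline) {
		// The "well-known thing": control-plane pods should stay Running.
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			failures++ // an unreachable API server counts against the soak
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
					failures++
				}
			}
		}
		time.Sleep(5 * time.Minute)
	}
	// The check at the end: the soak passes only if nothing degraded.
	if failures > 0 {
		panic(fmt.Sprintf("soak failed: %d degraded observations", failures))
	}
	fmt.Println("soak window completed cleanly")
}
```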
E: Yeah, just speaking in broad strokes, this is definitely an area we want to get involved in, in particular upgrade testing, because we have a lot of investment that, obviously, we want to make sure works well, just like everyone else. But organizationally, for us to get involved best, we will need help in terms of someone saying: hey, here's work to do. It's probably not going to be for this release, but we definitely want to be part of this conversation and be involved in that.
A: That's great, more people involved is better. I guess, Jordan, and I think Tim said something about this in the email chain too, it sounds like you're saying maybe not. Internally, we would do something like "oh, we're stopping feature work until we fix all this stuff", right, but we can't really do that in an open source project. We can encourage it, though.
D: I think that should be the top priority. Like Walter says, or Dims says, it's a short release anyway, right? And given the current state of CI, it seems entirely plausible to me that it could be consumed with test quality, stability, and bug fix improvements. Historically, the "let's have a stability release" idea got objected to because, well, what if some SIGs are in super great shape? And I don't know that any SIGs are in super great shape right now.
D
Maybe
there
are
some,
but
I
even
sigs
that
don't
have
flaky
tests
so
like
sigoth,
doesn't
have
flaky
tests
generally,
but
some
of
the
tests
categories
that
were
identified
as
being
missing
like
soap
tests,
scale
tests.
C
D
Know
of
that
don't
have
test
exercising
those
things.
So
I
rather
than
saying
120
is
a
stability
release
and
you
may
not
do
feature
work
in
it.
I
think
it's
more
reasonable
to
say
like
if
all
of
these
categories
you
have
coverage
for
and
like
good
signal
for
and
you
have
capacity,
then
it's
reasonable
to
do
future
work.
But
if
you
don't
then
work
on
those
things.
First,.
A: So what is that going to require? I mean, for soak, like you said, maybe you don't need a lot, but SIG Testing or whoever is going to have to run a soak test somewhere, so there's going to have to be at least some coordination. And if we're going to do chaos testing, there's going to have to be some mechanism; we're going to have to give SIGs enough guidance.
A
If
we
want
to
do
this,
and
the
concern
I
have
is
that
in
the
next
three
weeks,
are
we
going
to
have
enough
time
to
to
ramp
up
that
that
guidance
to
make
it
productive
for
people.
F: Ironically, the upgrade tests are actually one of the more reliable bits of useful testing. They're not very comprehensive, but we've been running them in OpenShift, we've been relying on them heavily, and I don't actually think I've seen a flake out of them. So I think it's more a coverage problem on upgrade, and we can debate how far we go.
A: I think, Walter, you had raised a hand; let's see, you're next.
G: Yeah, just a few things I wanted to follow up on. One, starting with what Jordan said: I think the earlier we discuss it, the sooner we get what we need, and I think we do need all the SIGs involved, because many of the SIGs have individual perspectives on their area of code and what's missing in terms of coverage, etc. As an example, I know the API server and API machinery code has some interesting problems.
G
You
know
we
tend
to
get
things
like
watch
storms
in
the
api
server
not
being
able
to
make
it
back
up
again
and
it's
probably
chaos
testing,
but
but
I
still
think
that
this
sort
of
indicates
that
we
should
be
involving
all
the
sigs
getting
them
involved
early
because
they
know
their
code.
They
know
the
problems
they
tend
to
see,
and
they
may
also
have
ideas
about
how
the
testing
and
how
the
fixes
should
work
and
the
sooner
we
do
that
the
better.
The
other
thing
I'll
say
is
when
we
talk
about.
G
So
if
we
want
to
do
feature
more
feature
work,
I
think
the
best
way
of
doing
more
feature
work
is
working
out
what
the
stumbling
blocks
are
and
removing
them,
and
that
comes
down
to
making
the
tests
reliable
and
adding
tests
for
things
that
we
end
up.
Spending
a
lot
of
time
on
and
most
of
the
individual
sigs
know.
Oh
yeah,
you
know
going
back
to
the
api
machinery,
like
probably
everyone
has
inc,
has
had
to
custer.
G
I
mean
on
the
cloud
provider
side
at
least
has
had
the
experience
of
debugging
and
working
out
how
to
move
forward
under
watch,
storms
or
under
something
like
you
know,
various
web
bad
web
hook
configurations
and
we
spend
multiple
sig
meetings,
doing
nothing
but
talking
about
those
things
but
not
moving
forwards,
and
so
I
think
getting
more
tests
in
those
areas
is
actually
going
to
allow
us
to
move
faster,
and
I
touched
enough
other
cigs
that
I
think
you
know
every
sig
has
that
list.
A: It sounds like there's agreement on the idea of improving reliability and, in particular, trying to focus on some of these issues Wojtek raised in the 1.20 time frame. So I guess what I would ask is what our next steps are here. Maybe, Wojtek, as the champion of this, would you like to send out a suggestion to the SIG leads mailing list, or how should we proceed?
C: So let's just start this as a working group, because we are talking about getting people from different SIGs; let's start with the working group model.
C: If we need to change, we'll change. And then we'll basically tell the SIGs that each of them will have to have a liaison to this working group; that's one way of trying to get people. I don't know how many people will actually show up, but then we have to say something like: each SIG should send somebody to this working group, because this working group is where we're going to try to fix some of these problems.
C: That's the only thing I can think of, other than the usual "send out the emails and see who shows up". But on a slightly tangential note: we haven't really had a strong version of what in OpenStack we used to call the operators. Operators, in OpenStack language, were the people who were running clouds. We don't have that here. We just have... hey, Brad, do you want to talk about operators?
C: Yeah, so we used to have a strong operators group; for example, the leads were from CERN, and they used to run big clouds, on their own hardware, and they would be advocating for the things that they would like to do. They would go to each SIG and say: oh, this is not working fine, you've got to do this better. So we don't have that.
C: We just have developers, and, like John was saying, we have volunteers who are trying to do some of this stuff. But the thing with the operators was that they would trade tips with each other, they would exchange patches and things like that too, and they would come up with these versions of things that they run, so they would compare versions, and they had their own meetups and things like that too.
A: Yeah, and there's a certain set of companies, probably mostly Kubernetes users, that at least are interested enough to show up there, and I wonder if there's representation on the governing board also. So I'm wondering if that would be a place to go and say, hey, we're trying to solicit more operators of clusters. Obviously there are the cloud providers represented on this call, and those of us who are in that space see thousands of clusters, but nonetheless it could be useful to get other customer input too, and I know we are certainly seeing more involvement from some users.
C: So, Wojtek, I'll help you find links or contacts or mailing lists, that kind of stuff, to spread the word. But then we need to get ready the information that we would like to send to them.
A: Okay, so to try and wrap this up, since we have some other agenda items: it sounds like there are two things in my mind, and see if this matches what other people think. One is the working group, and that's sort of how we organize the effort. The other is actually the 1.20 release, and encouraging, which I guess is the best we can do, SIGs to focus on test coverage, test flakiness, and potentially particular types of tests. So those are two kind of separate threads.
A: That sounds good, all right. Thank you. Let's wrap that one up, because we're already halfway through our time, and I believe the next thing is the conformance profiles topic. Did you or Brad want to jump in on that?
I: Sure, absolutely, John. This is in reference to some comments on community #4994, if people have access to that. We put in our comment, John, and then you put in your comment with regards to: yeah, maybe you're right, we've got to do a little more work here. Again, profiles are great, but there's still got to be that piece that says: hey, all you cloud providers, let's go see which of these profiles you can support.
I: So this was just a "hey, let's not forget about the real key to making this successful", which is: if you're going to start building profiles, we've got to have that process or mechanism to start seeing who can support which profiles, and maybe there's a provider that's like, well, I'm one short, but I think I can get there. That was sort of the spirit of what we started conformance with in 2017: driving everybody to supporting the same things, as opposed to potentially taking off the guard rails and people supporting any profile they want, and the next thing you know we have everybody's favorite flavor of Kubernetes that doesn't quite interoperate. So, John, you responded, and I thought your response was spot on.
I
You
know
it
recognized
the
issues
similar
to
what
dims
mentioned
about
having
you
know,
operator-based
stakeholders
having
a
good
understanding
of
who
thinks
they
need
what,
as
a
profile,
is
kind
of
a
good
start.
You
know:
do
we
have
people
knocking
down
on
our
doors,
saying
you
got
to
make
this
a
this,
this
a
profile.
I
I
I: Ideally, the cloud providers are an equivalence class that all provide the same groups of profiles, so that the end user still feels they're getting a wonderful, interoperable, portable workload experience. We just wanted to get that on the SIG agenda, to make sure people were aware of the concerns there from the original work.
A: What my plan was right now: I'm out next week, but before I go I plan to write out just a short document, thoughts on conformance, that lists some of the issues that have been raised, some of my thoughts, and, as best I can represent them, other people's thoughts. Then at the next conformance meeting, which is a week from this coming Tuesday, we can review that and try to come up with some decisions. What I don't want to happen is for us to just talk in circles, which is what's kind of happened in the past, because there are just a lot of different opinions and it's not always clear; I don't think we've really made the goal clear.
A
The
goal
for
profiles
was
to
be
able
to
increase
coverage
beyond
things
that
that
we
think
aren't
necessarily
baseline,
but
I
also
don't
want
to
create
a
posix
kubernetes
that
is
so
paltry
that
it's
used.
You
know
the
standard
is
so
useless,
but
it
there's
no
compatibility
at
all
guarantees
because
it's
so
minimal.
So
you
know
it's
difficult.
It's
a
difficult
challenge.
A: I guess, if anybody else has comments on that, please speak up.
A: Okay, then, Till, looks like you just added a question here.
J: Yes, this is mostly coming over from SIG Release, or from the illumos people, and it's mainly that I never saw any notes on it past July 2nd. So I think it got dropped, mostly because I wasn't here to answer questions, and the main question is: what does Kubernetes actually want in terms of hardware support?
A: I mean, strictly speaking, it would make sense to me if we didn't claim support for things that we don't have a good test signal on. Of course, as we discussed earlier, there are a lot of things we don't have good testing on that are fundamental. So I guess I'm not sure what we would do to get a clear signal, other than working with the infra working group and SIG Testing to get whatever infrastructure and test setup we need to exercise those platforms. But if people want support for those platforms, as an open source community it kind of falls on those wanting the support to provide the infrastructure we might need to do it.
J: So are you looking more at a support model like the Go team has, where the other OSes provide infrastructure so you can test with an automated tester, or more like the Rust team does it, with tiers, where the individual OSes basically have a lower tier, so people just know there is something around but that they need to contact the other OS team?
C: I think there's a larger conversation here, right? People are asking for changes in scripts and Makefiles and Dockerfiles, "can I add support for different architectures", and a while ago we made this effort to add PowerPC and arm64 and s390x and stuff like that, right?
C
So
what
we
ended
up
doing
was
I
don't
know
how
to
replicate
that
again
now
with
the
newer
architectures
that
are
coming
in,
but
it
it
basically
involves
making
sure
that
there
is
enough
stuff
in
there.
So
we
would
be
able
to
run
a
conformance
test
right.
That
was
the
outcome
that
we
marched
towards
when
we
did
that
work.
C
But
now,
if
we
have
to
do
the
same
work,
then
what
we
are
asking
people
to
do
is
write
a
cap
right,
so
we
would
have
to
have
a
cap
for
an
arch
to
say
these
are
the
things
that
we
would
test,
and
this
is
how
we
would
test
it
that
I
would
expect
to
be
part
of
a
cap
for
a
new
arch.
That's
coming
in
and
we
would
hold
it
to
the
same
standard
as
we
hold
other
things.
C
If
you
want
the
arch
to
be
officially
blessed,
then
you
know
there
are.
There
should
be
a
certain
set
of
checklist
that
they
need
to.
You
know
mark
on
including
they
should.
You
know
you
should
be
able
to
run
without
a
custom
patch
of
golang
or
you
know,
or
a
base
image
should
be
available
for
anybody
to
use,
or
you
know,
there's
a
bunch
of
things
that
are
involved
there.
So
this
comes
into.
C
Basically,
this
question
comes
back
to
that,
like
we
have
to
try
it
out
with
one
of
the
arches,
and
I
think
the
last
we
talked
about
was
with
the
arm
team
here.
Tina's
was
here,
so
it
will
come
out
of
that
discussion.
I
think.
A: Clearly, I mean, we can think of the different architecture support as different levels of support, in that if, as a Kubernetes project, we don't release blessed images or blessed binaries for a particular architecture, that doesn't preclude us from having Makefiles that theoretically support building on it. Some third party could build those, package them, and as long as they can pass the conformance tests, along with making their images publicly available so it's reproducible, then that would be able to be considered a conformant Kubernetes.
C: Right, the yardstick that we had from the trademark process was that somebody else other than the vendor should be able to replicate what you did, right? So that would be the same yardstick we would keep: as long as there is enough information for somebody to be able to run a conformance test and submit the results, then we would say thumbs up or thumbs down based on that, I think.
D: Building these in our scripts adds to release time. And then the last point is: if we have a platform that we don't have test coverage for, and we encounter some issue, like a build issue or a dependency issue or something like that, are we allowed to drop support for that untested, not officially supported platform in order to make progress on the platforms we do support and do test and do have to deliver?
E: Yeah, I just had a question on what you were saying about repeatability, and not just in terms of conformance; you were talking about builds too. Do they need to be, like, reproducible binaries, or...?
C: So the latest problem, just from the day before yesterday: we were trying to build with Go 1.15 and we ran into a problem where we were using a Docker setup which is inside Microsoft or some place, and it had filled up, the file system was full, so basically we had to wait for somebody to come and clean it up. That's just an example of it hurting the release process. And given a choice, Jordan, I would dump Windows, but right now they don't even do plenty of the conformance tests that we have.
F: This has been a struggle for a lot of the different things, because the implications are not obvious unless you can really stress them. So I think, just based on past experience, I would encourage us to have a higher bar for new things coming in. It doesn't have to be an overwhelmingly high bar; you don't have to run a test on every PR.
F
If
it's
an
unsupported
architecture
and
your
periodic
doesn't
run
fast
enough
and
you
care
about
security,
you
should
pony
up
for
testing
this.
This
will
be
different
for
every
architect
or
every
variant,
so.
D
So
I'm
clear
are
we:
are
we
describing
the
kubernetes
project
distributing
binaries,
or
are
we
just
saying
the
build
scripts?
Would
if
you
built
on,
like
my
architecture,
acme
whatever
like
the
build
scripts,
won't
choke.
You
can
run
those
yourself,
but
we're
not
going
to
build
and
distribute
those
binaries.
I.
F
Mean
it
might
be,
maybe
we
build
and
don't
distribute.
I
do
feel
that
the
bar
for
distributing
should
be
pretty
high.
I
think
that's
completely
reasonable.
Nothing
should
nothing
should
block
a
release
that
has
not
ponied
up
for
like
that.
You
bring
effort
or
you
don't
block
the
release.
It's
pretty.
I
think
that
could
be
like
a
rule
or
something
or
the
guideline.
A
I
mean,
I
think,
that
the
for,
if
we're
distributing
it,
the
project
is
distributing
event.
We
need
test
signal
right
and
we
need
pretty
good
testing,
but
what
I
was
understanding
earlier
was
that
is
it
possible
to
add.
We
just
said
it:
is
it
possible
to
just
add
things
to
the
build
scripts
that
we're
not
even
necessarily
building,
but
it's
up
to
some
third
party
to
build
and
distribute
in
their
architecture
but
yeah?
Then
we
we
certainly
can't
block
the
release,
but
if
it's
not
part.
A
Not
going
to
do
that
anyway,
but
my
experience
with
with
like
in
the
openstack
project
that
didn't
brought
up
like
there
was
a
third-party
ci
and
it
was
not
a
great
experience
from
my
point
of
view.
As
one
of
those
third
parties,
I
mean
you
had
a
lot
of
so
every
every
pr
you'd
get
these
third
third
parties
yeah
I
figured
and
they
would.
You
know:
they'd
go
offline,
they'd
get
flaky,
so
you
ended
up
making
them
non-voting
anyway,
and
then
they
would
just
go
red
and
they
would
stay
red
forever.
A
I
don't
know
if
other
people
had
other
experience
with
that,
but
that
would
be
know.
That
would
be
my
concern.
We,
I
think
walter-
and
I
talked
before
about
the
similar
issue
with
cloud
provider
right
as
cloud
provider
spins
out
of
tree.
Then
how
do
you
do
the
ci
and
all
those
different
environments?
It's
not
necessarily
that
easy.
C: Right, so, John, yeah, we do have a similar thing where, in testgrid, people can report results, which is what Arm uses for it, for example. So just going back a little bit to the three architectures that we added: I think the Arm one turned out okay, the PowerPC one turned out okay, because those people are actually engaging in many of the conversations around the things we are doing here. But the s390x folks never showed up, and, based on this conversation with Clayton, I would just tell the release team to stop shipping s390x stuff from our official releases and let it live in the Makefile and things like that.
F
If
you
can't
even
do
that
much,
I
don't
think
you
should
be
part
of
the
release.
The
ideal
outcome
is
that
people
step
up,
and
then
we
have
some
like
working
groups
right.
We
we
had
working
groups,
we
came
in,
we
assigned
a
garbage
collection
policy
for
working
groups,
let's
have
a
garbage
collection
policy
for
supported
architectures,
absolutely.
C
Yeah,
okay,
mika
you
had
a
question:
do
you
want
to.
E: What about, say, a specific dependent project, not Kubernetes but something like Docker, etcd, or containerd: what if there's an OS-specific bug in one of those that requires moving forward on that dependency or that other project, so that it diverges from the standard for just that OS or architecture? How do you think about that with conformance, and then also, I don't know, testing?
F: And we've pushed back pretty strongly even against, like, minor versions, you know, if you regress in a minor version. Conformance is about trademark and meeting a bar and being able to keep it up at a periodic interval of three months or so, whatever the release cadence of Kubernetes is. It is not a more diligent, aggressive thing. That said, when you're no longer conforming the next time around because you let your stuff atrophy, you'll make that mistake once.
C
Right
but
then
what
is
end
up
happening
is
so
nobody
people
are
using
to
test
if
their
clusters
are
healthy.
A
There
are
people
who
use
it
who
misuse
it.
It's
not
a
quality
measure,
it's
not
a
it's.
It's
simply
wait.
You
can.
You
can
say
that
our
tests
are
not
a
quality
measure.
Does
that
go
back?
No.
F
John,
he
got
you
I
do.
I
do
think
that
the
the
meta
problem
we
are
talking
about
is
the
is
the
improving
the
raising
the
bar
on
quality
testing,
as
distinct
from
the
other
types
of
testing
that
we
allegedly
do
in
a
more
holistic
manner,
and
that's
the
initiative
that
we're
trying
to
drive
up
across
all
these
fronts.
A
Okay,
so
it
sounds
like
if
I
can
summarize
that
the
well
I'm
not
sure
what
it
sounds
like
so
maybe
within
the
last
couple
of
minutes.
Somebody
else
can
summarize
what
I
heard
is
that
we
clearly
we
need
a
solid
test
signal.
A
If
we're
going
to
distribute
any
binaries,
we
need
a
garbage
collection
process
for
architectures
so
that,
if
you're,
not
maintaining
your
third-party
eci,
you're
booted,
I
didn't
get
any
clarity
on
whether
we
would
accept
a
patch
to
our
build
script,
for
an
architecture
that
we
don't
have
any
way
to
build
and
or
test
against.
That's
the
one.
I
still
wasn't
clear
on.
A
Okay,
yeah
that
just
just
trying
to
summarize
what
I
understood,
the
discussion
today
make
sure
that
we're
on
the
same
page.
So
then
last
call
for
any
comments
in
the
last
two
three
minutes
here.
Otherwise,
I'll
give
you
back
all
that
extra
time.
C
Yeah,
we
usually
have
a
set
of
pages
in
community
that
we
update.
You
know,
for
example,
our
deprecation
policy
is
there,
so
that's
where
we
would
add
it
when
we
come
up
with
one
okay,
so
we'll
watch
that
space.
Okay,
thank
you.
A
Okay,
thank
you,
everybody
interesting
meeting
and
have
a
great
couple
of
weeks.
We'll
see
you
in
two
weeks.