From YouTube: SIG Architecture 2018-08-16
Description
Agenda/notes: http://bit.ly/sig-architecture
A: All right, hi everybody, I'm Aaron Crickenberger. Today is Thursday, August 16th, and this is the weekly SIG Architecture meeting. This is being publicly recorded and will be posted to YouTube, so please keep in mind that what you say will be set in stone forever and ever. I'm going to run us through our usual housekeeping, starting by sharing my screen and walking us through the boards that we have set up for this purpose. So we're going to start with the conformance testing review board, and Brian, I'm gathering...
B: I did. So, first of all, thanks Aaron for pushing a bunch of these issues forward for other folks. I just sent out an email this morning about conformance testing. We are trying to improve test coverage of important, widely used, existing features that are portable and non-optional and have stable APIs, especially prioritizing cases where the implementation is pluggable or has plugins, like the container runtime interface, or where there are demonstrated multiple implementations. Since the goal is portability, we want to ensure that in all these cases, all these alternate implementations are compatible for users.

B: So we have been focusing... pods are very under-tested, and that's the most used feature of Kubernetes, with a pretty rich set of functionality, so we have been prioritizing more pod tests. In particular, a lot of the pod functionality was tested by node conformance tests, which were not actually part of the conformance suite, despite the similarity of the name, because they didn't run as part of the same test framework, and so on.
B: So, in terms of my own time in SIG Architecture, I am prioritizing conformance testing in general over things like API reviews; that's a general topic I want to get to later. But I reviewed all of the open conformance test PRs that I'm aware of, all the ones that were in the dashboard. If there are ones that were not labeled area/conformance in the Kubernetes repos, and/or were not in the dashboard, then I didn't get to them. Please fix both.
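[Editor's note: a minimal sketch of how an e2e test is typically marked for conformance in the kubernetes/kubernetes test suite, for readers unfamiliar with the promotion mechanics being discussed. The helper names (framework.ConformanceIt, framework.NewDefaultFramework) are recalled from the e2e framework; the Describe block and test bodies are placeholders, not tests reviewed in this meeting.]

```go
package e2e

import (
	. "github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
)

var _ = Describe("[sig-node] Pods", func() {
	f := framework.NewDefaultFramework("pods")

	// A regular e2e test: runs in the usual suites but is not conformance.
	It("should report a status phase", func() {
		_ = f // ... assertions against f.ClientSet ...
	})

	// A conformance test: ConformanceIt appends the [Conformance] tag to the
	// spec name, so the conformance suite picks it up via
	// --ginkgo.focus=\[Conformance\]. Only stable, portable, non-optional
	// behavior should be asserted here.
	framework.ConformanceIt("should be created and reach Running", func() {
		_ = f // ... assertions that rely only on stable API behavior ...
	})
})
```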
B: Exec also came up. There have been stability problems in the past, but those seem to be resolved, and exec is required to be implemented by all CRI implementations, so exec-based functionality, such as in lifecycle hooks and probes, will be in conformance. The remaining tests that are in review have some issues; the aggregator tests have...
A: [inaudible]

B: I think another good thing to explore would be how we can get more subject matter experts involved in the process. So one general thing that would help is to understand what surface area is being tested now, right, and then we can determine whether we're happy with that or not. And then, for any new tests... you know, part of the point is adding new surface area, so I'm not entirely sure how to...
A: So I have some thoughts on that, which I'm planning on putting together in a more fully formed presentation to the CNCF Conformance Working Group, because I'd really like to make sure I have buy-in from everybody collectively. But the TL;DR is: I think that API coverage is a poor proxy for behavior-based coverage.

A: It will not show us what the corner cases look like. I have somebody who's putting together a proposal to do some evil hackery to get us line-based coverage, which will be far more illustrative of which corner cases we are actually catching in pieces of code. It may break certain performance-based thresholds, but I think it will definitely inform the direction we should be going for writing different test cases.

A: So I don't know that it's super necessary to hash that out here, other than to send a signal that I really would like to get more active discussion going when it comes time to iterate through "is this conformance, is this not conformance". Right now I'm finding, Brian, that you go through the PRs in a batch, like one hour a week, and then sometimes the discussion gets kicked up to email. Is there a better escalation mechanism for this?
C: [inaudible]

A: 100%, and I really appreciate your flexibility there. You know, we're still at the point where we're trying to build momentum, so I'm just at the point where I need to sort of break the work down into: what are the corner cases we need to cover, and then what is the most appropriate way of testing that functionality, as Brian is discovering things like logs versus termination messages, or "is exec okay, is it not okay", what are the assumptions we can have for storage, blah blah blah. I think...
B: [inaudible]

D: We can say: where do we have anecdotal reports of things that look like they diverge? You know, and I think one example here, and this one would be very difficult to test, but maybe it's an aspect of conformance: when a node disappears with a cloud provider, does the node actually get removed from the API server, versus just gone? You see this being different across cloud providers, and it's something that, maybe working with the new cloud SIG, we might be able to tighten down.
D: [inaudible]

A: And so, sorry to interject there: I have also talked to Andrew from SIG Cloud Provider about having the creation of additional conformance tests be part of the runway for onboarding new cloud providers, as some projects there. So, to be clear, I'm really focused on, let's use the word "core", but just the barest minimums of functionality to call something Kubernetes.
D: I'm gonna disagree with you a little bit. The purpose of conformance is to make sure that, as implementations diverge, we actually get predictable behavior. Now, the end-to-end tests: the purpose of those is to actually ensure that we have a well-working system. We're reusing the end-to-end tests for conformance, but I think it's important to recognize that those really are slightly different purposes, and the places where we're gonna see the most issues around conformance are gonna be the places where we do have pluggability, which will be outside...

D: Of course. So I think conformance around CRI, conformance around CNI, around CSI, around cloud providers: those are the places where, if we can start locking things down, even if they look a little bit harder to test, I think we're gonna provide the most value to users in terms of what conformance means, moving forward.
A: [inaudible]

B: [inaudible]

G: [inaudible]

B: I think another issue is that most of the tests haven't been vetted for the criteria that we're iteratively developing, right? So they test things that aren't actually guaranteed to be stable. For example, one test I just looked at was matching an exact value of a reason string, which is not a valid thing to do according to the API definition, right? It's not defined as part of the API; the contents of an event that is generated are not part of the API. So we had some end-to-end tests...
B: ...that were testing based on that in the past, and that was a problem for version-skew testing as part of the release process, much less conformance. So, you know, over time, I think a good goal would be to have all e2e tests either in conformance, or in staging areas to be promoted to conformance when they qualify, but we're nowhere near there yet.
D: Why does that... I think there is a ton of overlap here, but if we only have so many, you know, so many tokens to spend on improving these tests, then for the purpose of conformance, where are we gonna get the most bang for the buck in terms of improving tests? And those are the places where we do have variability, even if it fast-forwards us ahead around...
G: [inaudible]

A: Okay, yeah. So I really apologize if it seems as though I'm a bottleneck. What I really need to be honest about is: should we be focusing on the things that are pluggable and can change, or should we be focusing just on end-to-end tests? I expect there's a different level of fidelity for things like CRI, CNI, CSI, cloud providers.
A: Further, the conformance test doc that Jordan linked is a really good start, but I think that needs to be tightened up quite a bit to help, as Brian is iterating on these really fine-grained technical details about what is and is not acceptable. I want to come up with a super concrete list, so it's really clear to both reviewers, subject matter experts, and people who are potentially writing tests, what is and is not considered conformance. It can depend.
I: So, I think, once the deployment PR merges we have conformance for the workloads API, and we did this effort as part of taking it to v1. One of the things we found is that the big issue was that we had a lot of e2e tests that really should have been medium tests. So what we did there was move them into the integration framework as integration tests. We didn't have appropriate small tests, and I really don't think you want to try to get that kind of coverage from your conformance tests; that's gonna be painful.

I: You should probably focus on lines-of-code coverage in your unit tests, then ensure that each module has appropriate coverage from small, medium, and large tests, and then look at it and say: okay, well, for my e2e tests, I really want these to be black-box and to exercise system behavior that we care about, that actually defines what it is, and then mark those as conformance. That was doable in a fixed timeframe, in order to get some value out of the process.
B: Yeah, I have a comment on that. So somewhere I have a list of test issues, and that's one of the major issues: really, to get better test coverage and more stable testing and more efficient testing and so on, there are lots of reasons why having more narrowly scoped tests is desirable; you know, they're easier to debug and so on. But the challenge is...
B: We also want to make sure that user-facing features are covered by conformance. So what Borg had was actually a framework where tests could be run in multiple modes: the test execution was abstracted, so you could actually run them either as, you know, unit tests or integration tests. So I think something like that is ultimately going to be necessary for different parts of the system, so that we can run tests in a sort of synchronous, direct unit- or integration-testing mode, and also run them against the real API, against a real cluster.
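[Editor's note: a minimal sketch of the idea described above, not the framework being referred to. The test body is written against an interface, so it can be driven either by an in-process integration environment or by a client for a real cluster. All names here (TestEnv, RunPodLifecycleTest) are hypothetical.]

```go
package conformance

import (
	"testing"

	"k8s.io/client-go/kubernetes"
)

// TestEnv hides how the client was obtained: an in-process apiserver for
// integration-style runs, or a kubeconfig pointing at a real cluster for
// e2e-style runs.
type TestEnv interface {
	Client() kubernetes.Interface
	Cleanup()
}

// RunPodLifecycleTest holds the actual assertions. Because it only talks to
// the API through the client, the same body is valid in either mode.
func RunPodLifecycleTest(t *testing.T, env TestEnv) {
	defer env.Cleanup()
	c := env.Client()
	_ = c // create a pod, wait for it to reach Running, assert on status, etc.
}
```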
I: [inaudible]

B: So, KEP tracking, which I want to cover quickly. Mainly, there are a ton of KEPs in the dashboard, and a ton of issues that have been filed against architecture tracking, and I've looked at almost none of them, because I've been focusing on conformance testing. So I think the meta-issue there is: what do we want to do about that? I mean, obviously there should be people other than just me looking at them as well, and that relates to the prioritization topic I added to the agenda, coming up.
D: [inaudible]

A: [inaudible]

B: And that's part of the KEP process that we need to evolve: how and where should SIG Architecture get involved? The first thing Jaice did was just chase down the KEPs that were in flight, so we could start figuring out that process, and it was pretty messy, because some are open PRs and some are merged, and things like that. So that flagged a need for more automation to help track those things. Maybe, you know, when a KEP goes to implementable is when a PR is opened for it.
B: You know, a few of those have popped up. Maybe that's the point to get involved, unless people have explicit concerns that get escalated earlier. But yeah, that's stuff that we need to figure out, and someone needs to own figuring that out, and it probably can't be me, because I'm focusing on conformance testing.
A: [inaudible]

B: [inaudible]

D: It's on the screen, by the way... we can talk later about the other options. Okay.
J: [inaudible]

B: So, on the "alpha features on stable APIs" point, I'm gonna move down the agenda, because I want to talk about priorities first. In general, as I think I've mentioned before, I feel that the project is at a point where we need to focus more on stability, and that comes in various flavors. One is prioritizing...
B: That's really something I put in the future agenda items list, which is how to move to more secure defaults without continuously breaking users. This is an issue that's come up with some of the patches that have been made. We don't have time to discuss that now, but I do want to schedule it for a future meeting. I'm pretty sure we can't, and shouldn't, be doing that in patch releases, and I'm not even sure that there's a sane way to do it in minor releases.
B: But that's worth a discussion, to see if we can figure it out. In terms of areas of somewhat new functionality that I want to talk about: I think one of the roles of SIG Architecture is to sort of provide technical guidance for the entire project, and it's been a while since I sent out emails telling people what I think they should focus on, and that probably also shouldn't just be me.
B: So I want to get some agreement amongst SIG Architecture stakeholders about it. But, similar to release bugs and backward-compatibility breaks and things like that, I'm concerned about the reliability of the system, especially under load or under intensive workloads. I no longer have the ability to monitor GitHub issues, sadly, so I don't know if there's some sort of triage we can do to get a better sense of what the problems are...

B: ...you know, architectural improvements and things like that which have sort of stagnated for a long time, and SIG Architecture's role in pushing those things across multiple SIGs. So do people have thoughts about whether we should be doing that, how we should be doing that, whether we should come up with some sort of statement of priorities that spans multiple releases, or something like that, and work with the SIGs to prioritize those things? I think a lever we do potentially have is in approving APIs and KEPs and things like that where SIG Architecture is involved.
B: [inaudible]

A: I agree with that assertion, certainly, but that does not seem to, I guess, force the solution. But again, we should chat about that some other time, especially given the release threads that, you know, we've both had or been a part of in other conversations. Yeah, I think we're still lacking good data on, like, Daniel's experience with the feature branch that he's working on. First, well...
B: [inaudible]

A: ...with that. But I'm just trying to make the point that I think we all got in a room, what is it, a year or two ago, and we were like: yes, stability is the thing. Tim put up that awesome slide with all those big numbers, and we were like, yeah, we made it, but we incurred all sorts of debt getting here, and now we're gonna pay it down. And then we still ended up shipping out a whole bunch of features as a community. So, yeah, the...
H: The only lever we've got is our ability to say no to things, and as the sort of architectural leadership of the project we're sort of in the best position to say no. But the community is naturally a delegated thing, right? So how do we, or can we, impose our will on other SIGs and say, you know, networking, storage, apps, whatever: thou shalt not merge features? And can we get buy-in across the board on that?
F: We made some progress, I think. The problem is maybe it's just the communication; I know we need a lot of communication. If we want to do a stable release, where instead of new features we, say, designate every other release to be the stable release in which we fix the backlog or something, we have to signal that. For the release to really succeed, I think we should send that signal in advance, and that way, I think, everyone will know that for that release we don't want new features. I remember...
F: ...we only approved one new feature in that release, and most of the effort focused on promoting alpha features to beta and beta features to GA. So I do think each release we could do that, communicate it to the different SIGs, and go through SIG Architecture approval for any stated exception; then, for the next release, those would go through that process, and...
H: My concern there is that time is not a really great metric for this. What happens for a lot of outside contributors is they just take their thing and treat it like a six-month dev cycle, and they go away and they don't fix bugs in the interim; they just focus on their thing, and they show up at the end and push their changes in at the end, and nobody works on the ticket, right? So if we're gonna do something here, we need concrete metrics... yeah.
D: So I think, I think this might be akin to what Caleb was trying to say: there's the motion of making sure that the stuff that does go in, even if it is a new feature, is really solid, well tested, documented. That will naturally slow things down, it'll change the culture around this, and it'll mean that the destabilizing stuff will naturally tend to happen outside of the core.
D: That's a different motion from actually paying down debt, but I think it's necessary to first stop incurring new debt before we actually start to try and pay down the old debt. And so I think that's the same "no", you know, or at least making sure that we have more process, so that there is some pain around KEPs and, and, you know, feature flags and API reviews and that type of thing. So, two...
H: [inaudible]

K: [inaudible]

A: My proposition was that somebody from SIG Architecture should participate in the next release and be the hand of SIG Architecture, saying no and pushing back against features. I know that from a technical perspective that might appear inadequate, and things might still sneak in, but I think it's the higher-level, top-down message that you're actually taking this seriously, and that going forward you actually have to convince someone who has real technical depth across the project why this feature is worth including in the release, as opposed to something that pays down the stability debt.
L: To Ken's point, it's not just how many features you have in beta; it's not just "are you moving them from beta to GA?" It's also how many you are moving from alpha to beta. Really, it's the total number of beta features at any given time... So, so maybe we could limit the number of beta features; that was the implication.
K: [inaudible]

H: [inaudible]

F: [inaudible]

D: Does this feature really need to be part of core? Do we need to accept this? And I felt like I was going out on a limb, and I probably pissed some people off, and I was really coming at it from the point of view of: we need to actually, sort of, you know, be very, very thoughtful about this. I don't feel like, even amongst this group...
B: [inaudible]

I: [inaudible]

M: ...just by choosing to promote my thing, regardless of how stable the actual code behind it is. There is value in keeping something in alpha or beta until you've had experience and feedback, and if we get graded for stability based on how quickly or how many things we have moved to GA or to beta, we are encouraged not to gather that feedback during the alpha and beta stages. I would...
A: [inaudible]

L: [inaudible]

I: [inaudible]

M: Maybe it's because I sit outside of Google, but what I see a lot of is that things that are alpha are treated as though they do not exist, because they are not in GKE, because you do not turn on alpha by default. And so I hear a lot of "I need to get it from alpha to beta so that I can see people try it", and I think that is an example of how we create a perverse incentive to quickly move things from one stage to another without gaining feedback.
B: I agree with that. In general, moving a feature from one stage to the next in back-to-back releases, whether it's alpha to beta or beta to GA, doesn't really make any sense, because zero people will have tried that feature, given how our release cycles work and how long it takes people to pick up and try Kubernetes, with alpha features in particular. The reason they're disabled in GKE is because compatibility can be broken and...
M: [inaudible]

K: ...that you could still, like, identify someone who's going to give you feedback from consuming the feature, whether it's a customer or a vendor or whoever is building on it. I also want to say that I don't think it's reasonable... I don't think Google is the only one who is cautious about using a feature that might break; I think lots of Kubernetes users might want to avoid using a feature that's early, so it's important that we label those. And I also think other projects, you know, don't take...
K: [inaudible]

H: We have a perverse incentive to accelerate it to beta, as David was noting, for very good reasons and very bad reasons. Everything that is out of core is not a problem, and by "out of core" I mean you can, you know, build it as an extension. I feel like what happens is that the only alternatives we typically present are "out" or "in", and when we're in, we follow one set of rules, and when we're out, we follow a different set of rules.
H: When features go alpha... figuring out how that helps us move this across the project is tough, because the SIGs are putting up the resources to do this, but the problem is everyone feels that they don't have enough time to properly review the things that are changing Kubernetes. How do we put incentives in place, into the SIGs, into the development process, and into the review process, that actually play that out? Because I think Joe usually has to be the bad guy asking the question: can we slow this down?
H: That's not fair to Joe, because Joe is absolutely one hundred percent correct that we drop massive changes into the fundamentals of Kubernetes that we can then never change, or, practically speaking, can never change. How do we do better at balancing this? What are the concrete things that we're gonna do, as part of the lifecycle, that will make this better?
I: One thing that Joe and Caleb already did, that I liked a lot, is that the KEP process builds the promotion plan and the graduation criteria into the design process, right? So when you think about shipping the feature, you should already be thinking about the graduation criteria throughout the lifecycle, about stability, and about what your plan is to promote it through the API stages. Having that discussion at least up front might help the process along a little bit. I mean, it's not a big solve, but it's a start.
E: Yeah, we talked about gaining experience with alpha features, and I think we put very little thought into how we're going to gain that experience. So it's kind of this amorphous "well, it's checked in, and if you turn on this gate you can use it", and then maybe a few people go do that, and we don't hear from anybody, and after a while we kind of think, well, I guess it was okay. Part of the graduation criteria should probably be: what is the plan to gather feedback?
A: [inaudible]

B: I did want to time-check. I think we're not going to get to alpha features on stable APIs; I think there's an email thread, which I've not caught up on, you neither, so let's punt that back to the email trail and I will try to get to it. But, like I said, I'm prioritizing things like conformance tests over new alpha features. As far as getting Windows to GA, yeah, we could talk about that.
J: Okay, thank you. I'll go ahead and share this; these documents I have linked as well. So, just a bit of background here: the Windows work... it's kind of funny how someone brought up an example of how long it takes to get something from alpha to GA; well, we're currently at two-plus years right now. So, you know, we started the work with Windows...
J: ...back in 2016, and that was when the Windows container support had just sort of materialized. At that point in time it worked well with Docker, and there was no orchestration story, and so I basically worked, you know, within the Windows team to help get a lot of those gaps filled, so that we could do things like networking namespaces and get all the OS functionality that was needed there.
J: You know, deploy it from kubetest, get results on TestGrid... we've got all these PRs open, which are all part of what I believe is necessary to get to GA, but I haven't been able to find any clear criteria on what it actually means to take a completely new feature like this from beta to GA. And so I basically just started writing, and that's really what this doc I have is, you know, a lot of...
J: There has been a discussion around scoping, because one of the things that's different is that, you know, Windows is not UNIX, but people do want to use it to orchestrate containers. And so, for some of this, I've gone through and clarified, you know, what kinds of features are supported, and we've been evaluating the conformance tests, and we've got a list attached in the doc here of what the current pass rates are. But I basically need to understand...
B: [inaudible]

E: [inaudible]

F: On the node side... and, you know, we need to extend that collaboration to the networking, because a lot of the networking is also totally different. So that's why we would suggest SIG Windows also cover, likewise, the storage. The only reason is that new platform support is different from the other things we mentioned earlier; it's different from feature support. So I also want to ask SIG Architecture...
F: ...whether it has the same sign-off for those kinds of things: that Kubernetes, in the future, is adding Windows as the new platform, and at the same time we want to have, like, a definition of what conformance is and how we're going to pursue it for Windows. We need help from SIG Architecture. This is what I thought. Okay.
B: So I guess there are a couple of issues. One is conformance testing specifically; I think we need to answer how that should work. It has also come up for different architectures, for example ARM in particular, so we will need to figure out how to tackle that. ARM is easier, because I think the behavior is expected to be the same, and multi-architecture images may be sufficient to make that work. What would really help me right now is to understand what state this is really in: looking at the SIG Windows GA plan, it only mentions a couple of features that are not supported. Is that really the full list of things that are different or not supported, or is there such a list somewhere, so we can really understand what the delta is?
J: In the community folder over on the GitHub repo I've got, like, some annotations on "this works, this doesn't work", but probably the best feedback I've had so far is that, as part of this process, I need to get some docs together that show what the common use cases are and what the common values are for Windows, because you are going to see some things, you know, like different paths for exec commands, different container names, and some things like that.
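[Editor's note: a minimal sketch of the kind of "different common values for Windows" mentioned above. The node selector key, image, and command are illustrative assumptions, not values from the meeting or from the SIG Windows docs.]

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// windowsPod shows the same Pod shape used on Linux, but with the values
// that change on Windows: an OS node selector, a Windows base image, and a
// cmd.exe-style command instead of a /bin/sh path.
func windowsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "win-demo"},
		Spec: corev1.PodSpec{
			// The OS label key has changed across releases
			// (beta.kubernetes.io/os vs kubernetes.io/os); treat this value
			// as an assumption rather than one discussed here.
			NodeSelector: map[string]string{"kubernetes.io/os": "windows"},
			Containers: []corev1.Container{{
				Name:    "demo",
				Image:   "mcr.microsoft.com/windows/nanoserver:1809", // illustrative
				Command: []string{"cmd", "/c", "ping -t localhost"},  // not /bin/sh -c
			}},
		},
	}
}
```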
B: [inaudible]

H: [inaudible]

J: You know, one will be "linux", you know, whatever the name of the property is, and another one will be "windows", whatever the property or structure is. So, like, one example is that there are windows security options and there are security options; they're two different things in the OCI spec, and, to date, what I've tried to do is mirror that in the container runtime spec where it's necessary, so that it's more explicit.
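[Editor's note: a hypothetical illustration of the mirroring pattern described above, not the actual OCI or CRI types. The struct and field names below are invented to show the shape of the idea: platform-specific settings live in an explicitly named, optional block next to the common one.]

```go
package spec

// SecurityOptions holds the settings that make sense on every platform.
type SecurityOptions struct {
	RunAsUser  string
	ReadOnlyFS bool
}

// WindowsSecurityOptions holds Windows-only settings. It sits next to
// SecurityOptions instead of being folded into it, so a reader of the spec
// can see explicitly which knobs apply on which OS.
type WindowsSecurityOptions struct {
	RunAsUserName      string
	GMSACredentialSpec string
}

// ContainerConfig mirrors the split: common options plus an optional
// Windows-specific block, nil on non-Windows platforms.
type ContainerConfig struct {
	Security        SecurityOptions
	WindowsSecurity *WindowsSecurityOptions
}
```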
H: That's probably reasonable, I think. In terms of accumulating fix-it comments, that would be great. Something that... yeah, a pattern that I liked, and I don't know if everybody else feels the same, is the API accumulator bug, where you have one issue open and you just continue to file comments: you know, this thing is wrong, and this thing is wrong, and this thing should be fixed, and that thing is specific... and then I have one place where we can keep track of it. It's not a discussion bug; it's a laundry list. That's...