From YouTube: 20210114 SIG Architecture Community Meeting
A
All right, hello, everybody! This is the Kubernetes SIG Architecture meeting for January 14th, 2021.
A
Everybody, it's the first meeting of the year and we have a pretty short agenda. So let's just get started. It looks like first up we have Daniel, with a discussion around formalizing supported architectures.
C
Thanks, John. As John said, my name is Daniel Mangum. I'm a tech lead for SIG Release, and this cycle I'm going to be leading some of the work around formalizing supported platforms, meaning the operating system and architecture tuples, for Kubernetes. We've traditionally had a number of different supported architectures for the different artifacts we release, but we've never really formalized that. And as different folks have come around and wanted to add support for things like RISC-V, or the M1 Macs, which came up recently, we don't really have a well-defined process for that. So we're going to work on defining one, but obviously, as SIG Architecture, you all would have a large say in that. So as we start in on this journey, we wanted to make sure that we're aligned with y'all on some of our goals.
C
So I have a short presentation here that I'll give real quick, and this is also shared in the agenda doc if anyone wants to review it afterwards. It's not real complex or anything like that, but it should give a good overview. Looks like I need screen sharing permissions.
C
Okay, so, like I said, we're going to talk about supported platform formalization, and we have four main goals in this effort. The first is to define the official artifacts that we're currently releasing and the platforms that we support them on, and I'll give you a matrix in just a minute of what we as SIG Release consider the officially supported architectures and artifacts.
C
And finally, we want to define a formalized process for anyone who comes along and wants to add support for a new architecture for one of the artifacts we release, or, if we're adding a new artifact at some point in the future, for the architectures we're going to support for it. All right, so here's a matrix of the different artifacts we release.
C
As you can see, most folks are primarily interested in the binaries or container images that are released for each of these artifacts, but there are also tarballs with the source code, different packages, et cetera. You can always look at the changelog and the linked release artifacts there if you'd like to see all the different components. But you can see that, generally, a given grouping of artifacts, whether we're talking about node artifacts, server artifacts, or client artifacts, has the same supported operating system and architecture tuples, and those correspond directly to the ones supported by the Go compiler. And you can see that, for instance, on the client side we support far more architectures, at least double the number, which makes sense, because more folks are going to be executing things on their own machines with the client side than they're going to need to support on the server side.
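To make the tuple idea concrete, here is a minimal Go sketch (not from the meeting itself): every Go binary is compiled for exactly one GOOS/GOARCH pair, cross-compiling is just a matter of setting those variables at build time, and `go tool dist list` enumerates every pair the toolchain can target, which bounds what Kubernetes could ever support.

```go
// Minimal sketch: report the operating-system/architecture tuple this
// binary was compiled for. Cross-compiling for another tuple only needs
// the GOOS/GOARCH environment variables at build time, for example:
//   GOOS=darwin GOARCH=arm64 go build .
package main

import (
	"fmt"
	"runtime"
)

func main() {
	fmt.Printf("built for %s/%s\n", runtime.GOOS, runtime.GOARCH)
}
```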
C
So this is kind of an overview of what we have right now, and I would specifically look at a lot of these platforms that you don't think folks really use that much, or that you don't think are well tested, because we'll get into that in a little bit. But you could consider these, at least from the SIG Release perspective, the official artifacts that we release.
C
In the past there have been artifacts that we've dropped support for, notably hyperkube and the cloud controller manager, and you'll see those not mentioned here. There could be ones we drop in the future.
C
Probably the only one on this list where that's been considered would be mounter, down there at the bottom, which, in my opinion, doesn't really make sense to be in the release bundle, but that can be debated. Looking at the tiers of support as we've broadly defined them, you can see there's a pull request open that lays this out in a more formal manner, but it's similar to what you'd see for a given compiler toolchain, and then we've customized what it means to be in each tier from a Kubernetes perspective.
C
So, looking at fully supported, we're saying it's able to produce all applicable artifacts, so the binaries, images, and packages. Obviously, not all of them will be relevant for every artifact, but all of those that are relevant, we do produce and release them in a Kubernetes release.
C
We have full test coverage. That is a little bit fluid right now in terms of what full test coverage would mean, but in a moment I'll talk about what our current metric is and why it's maybe not very good. And then the other component of testing would be that a failing test will block a release. So we have the SIG Release Testgrid dashboards, which we'll look at in a moment, and if a blocking job fails on there, then we're not going to run a release.
C
The other thing would be fully documented. This is also a little bit fluid, and there isn't much difference between different architectures that you need to specifically have documentation for, but we do want to call that out as a component. Tier two would be that we're producing artifacts, so we produce all applicable artifacts as part of the Kubernetes release: all of those official artifacts that we looked at already.
C
Those would naturally fall into tier two, because we are producing those artifacts, and we would like to have baseline test coverage, once again, however that ends up being defined, basically saying this thing can actually be built and run. And the last tier would be possible to build. This is a very nebulous tier; you could pretty much put anything that the Go compiler can compile into tier three, though obviously there could be some cases, especially on the node side, where that isn't applicable. All right.
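As a rough illustration of the tiering just described, the support matrix could be modeled per artifact and per platform as in the Go sketch below; every name in it is invented for illustration and is not part of any Kubernetes API or of the SIG Release proposal.

```go
// Hypothetical model of the three support tiers described above.
package support

// Tier is a support level; lower numbers mean stronger guarantees.
type Tier int

const (
	Tier1 Tier = 1 // all artifacts, full tests, release blocking, documented
	Tier2 Tier = 2 // artifacts produced, baseline test coverage
	Tier3 Tier = 3 // merely possible to build with a stock Go toolchain
)

// Platform is an operating-system/architecture tuple such as linux/amd64.
type Platform struct {
	OS   string
	Arch string
}

// Matrix records a tier per artifact per platform, matching the
// "each architecture per artifact" granularity discussed in the meeting.
type Matrix map[string]map[Platform]Tier
```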
A
Let me ask a question: are you going to differentiate between platforms for which maybe we only produce client tools? I know you said the tiers are per artifact per platform, but I think conceptually people think about it as: there's the server set, and there should potentially be a lot more client platforms than server platforms, or at least it's a broader set. Do we want to differentiate somewhere in this scheme as far as tiers are concerned?
C
Yeah, I think so. At the beginning there we saw that, obviously, the client does have a lot more supported architectures. As far as the tiers here, we're going to say each architecture per artifact, so that's without doing any of the groupings in terms of node, client, and server. But at the end here I have a list of open questions, and one of them is, basically: are we open to supporting a platform for the kube-apiserver that's not supported for the controller manager?
C
That really wouldn't make much sense, so that would be something where we'd say these are tightly coupled groups of artifacts, and we want to add support for all of them at the same time. Looking at the current support levels that we have, by the definitions I just gave for the different tiers, we would really only say that linux/amd64 is fully tier one supported. That's because it's really the only thing that's blocking a release for us at this time.
C
There is definitely testing done for other platforms, specifically for ARM and Windows, but we don't block a release if those are failing. So if you look at the master-blocking board, you're not going to see anything for those architectures. And so one of the things that we'd like to do as part of this effort is to have a single pane of glass for test coverage for an architecture.
C
So if a SIG adds a job for the thing that they own, the artifact that they own, on a specific architecture, we'd like to surface that on a broader board for that architecture, so that the full test coverage we have for a given platform is clear, which helps us categorize things into tiers.
C
So what does someone need to do to make that happen? A big part of this is establishing someone, or a group of people, who over the long term are going to support that platform.
C
I look at this a lot like how we interact with the Go team, in that we have liaisons between Kubernetes and Go and a strong relationship there that makes sure we're going to be able to support things going into the future. And this is obviously going to have high overlap with that, because we are directly reliant on the architectures and operating systems that the Go compiler supports.
C
If it's a net new architecture, as in the case of RISC-V, we'd like to have some relationship with folks who are working on that part of the Go compiler, or who are part of the RISC-V community and have a pulse on what's happening there, so that when things come up related to that architecture, they're the point person who makes sure we have good communication. If it's a new platform just for a specific artifact, so let's say we already produce the artifact in question for other platforms:
C
The next thing you need to be able to do is demonstrate the ability to build and run the artifact on the platform; we don't want to be producing artifacts that are not able to be run. This is a bit of a hazy step. In fact, all of these steps, I think, need much more tangible guidelines, but as an overview, we'd like to see that there's something backing up the motivation to do this.
C
The next thing you would do, which you could consider moving into tier two, is actually start producing the binaries or images or other artifacts, and this typically doesn't require a lot of work. If you look back through some of the PRs that have added support for different platforms, you can see that it's frequently a very small change, but it can affect other parts of the release pipeline.
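For a sense of why these changes are often small: much of the per-platform divergence in a Go codebase is handled with build constraints rather than new code paths. The snippet below is a generic illustration of that mechanism, not a file from the Kubernetes tree; the package and constant are invented.

```go
//go:build linux && (amd64 || arm64)

// Generic illustration: this file is only included in builds targeting
// linux/amd64 or linux/arm64. Adding a platform is often a matter of
// widening constraints like this one and updating the release tooling
// that drives the cross-compilation.
package nodeutil

// defaultPageSize is an invented example of a platform-dependent value.
const defaultPageSize = 4096
```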
C
So you definitely want input from SIG Architecture folks, as well as SIG Release and release engineering, but the first step in terms of supporting something is producing the artifacts, so we can start testing them. The next thing you'd want to do is start running periodic jobs. If it's for a specific artifact, this may be very apparent, and it may go through the SIG that owns that artifact.
C
But you need to be testing something to start to progress towards tier one, and tier two also has a requirement of some baseline testing, essentially showing that the artifact is able to perform the baseline functionality it's built for. The next thing you'd want to do is meet the bar of sufficient test coverage. Right now, I'd say what we do for linux/amd64 is what we're implicitly treating as the bar for sufficient test coverage. Whether we have really good insight into what that actually means is another question I think should be addressed through this process, but we do need to have a line that says: okay, this has all of the conformance tests running against it, or all the different API tests, et cetera, so we can be sure this is something we feel confident putting out into the community and saying, hey, this will work in mission-critical scenarios.
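For reference, Kubernetes conformance coverage comes from e2e tests whose names carry a [Conformance] tag so a runner can focus on exactly that set; the snippet below is a loose Ginkgo-style sketch of the convention, not an actual test from the repository.

```go
// Loose sketch of the [Conformance] tagging convention used by e2e
// suites; illustrative only, not a real Kubernetes test.
package e2e

import (
	. "github.com/onsi/ginkgo"
)

var _ = Describe("Pods", func() {
	// The [Conformance] tag in the name lets a runner select only the
	// conformance set, e.g. via a focus expression like \[Conformance\].
	It("should run a pod to completion [Conformance]", func() {
		// A real test would drive the cluster through client-go and
		// assert on the observed pod lifecycle.
	})
})
```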
C
And the last thing we'd want to do, which is what makes something meet the definition of tier one, would be to promote that to a release-blocking job, so a failure of that job is going to keep us from actually putting out artifacts to the community. That would be the final step, and then we would go on maintaining it, just as we do for the linux/amd64 artifacts right now.
C
Also, at the bottom here are links to a few recent PRs where someone wanted to add support for a platform, and we kind of didn't know what to tell them, or we didn't have a process for them to take steps to make that happen. So even if they were someone really reliable, someone we could count on to support the thing, there just weren't good steps for getting that reviewed and getting that supported. All right.
C
So, as I mentioned numerous times throughout this, there are some open questions and things that we will define as we go through this process. But overall, the number one question, especially talking to SIG Architecture today, is: does this effort feel worthwhile? Do you feel like this is something that's important?
C
Obviously, adding new architectures and platforms is not something that happens extremely regularly, but it happens with enough regularity that it makes sense to have a process for doing it. It also just adds more reliability when you have a process for the changes that are happening to the things we're releasing to the community.
C
Is there a desire for more official artifacts to reach tier 1 support? Right now, as you saw in that matrix earlier, we have a number of different platforms that we support, but for the most part there hasn't been a strong push for things to become tier one, or become release blocking. Hopefully, as Kubernetes continues to proliferate on other platforms, there could be a larger push for this, but I could also see a lot of folks saying:
C
well, we're getting the binaries released and they're working for our use cases, so is there really a point in adding more test coverage here? Do I really want to put in the effort? Next, is our current level of testing for tier 1 artifacts sufficient? That's what I alluded to earlier: we have this bar for reaching tier 1 that's going to say you have sufficient test coverage, but what does that really mean right now?
C
Personally, as someone who's done a lot of CI signal work, I don't feel like we have a great story of what sufficient testing means for all of Kubernetes, so I'd like to drive a more tangible, more measurable definition for that, which is required for progressing things through these stages. And then the last one here, which I think John was alluding to earlier, is: is it acceptable to support a platform for only one of a common group of artifacts?
C
So that's pretty much all I have for you all today. Like I said, there are a lot of open questions here, but before we get really far down this path, we definitely want to solicit input and advice from SIG Architecture.
B
Okay, yeah, hi Daniel. This is awesome, a really, really good presentation; thanks for putting it together. Can you leave the questions up, please? Okay, so it's easier to talk about them. So, definitely, this has been bugging a bunch of us for a while now; we've had fits and starts, and we ended up checking in at least one document to guide the work. So yes, thanks for doing this, and hopefully this will help a bunch more people.
B
So, is there a desire for more official artifacts? There are a couple of ways to answer this question. One is that, for example, there are other projects that are able to say they support other platforms, like arm32 and arm64, those kinds of platforms, and we don't do that, even though they are using our code and they are essentially doing the testing for us. So the recent bug that you saw about golang: it started with k3s, then it came in as a k8s issue, and then we went back to golang. I think that we could have caught that earlier if we had better testing for arm32.
B
So I think we should own this a little bit, and not end up having people go test in k3s and then come back to us saying, oh, k8s is broken, because that's what they end up saying. So I would like to see both the edge device scenarios as well as the server arm64 scenarios move forward, and I think there will be enough people interested in those two areas to consider that. Then, for example, the PowerPC folks have been really good at working with us, especially in taking care of all the images and stuff like that. They don't really show up as much, because they are not end users who use PowerPC; it's more like one single vendor taking care of one architecture, so you don't see too much traffic from an end user perspective.
B
But it's been solid and it's been green for a while, so I know that they would like to apply for a specific tier and check the boxes that we tell them to check. Is the current level of testing for tier one artifacts sufficient? Sometimes I feel even the amd64 testing is not.
B
It probably never will be, right? But if we have to draw a line, then we would point to the presubmit job that we have, the CI presubmit job that runs a whole bunch of tests, then the conformance test, and the node conformance test. Those would be like three checkboxes there where we would need to say: yeah, you need to have all three of these.
B
That would be good. Then, for the fourth one, I think we might have three ways of looking at it, three buckets, so to say. One is the client, like darwin/arm64. The other one would be a node, so this could be a Windows node, for example; some bits of our stuff run on Windows, so that would be a second bucket. And the third bucket would be a full server.
B
So I think we would say there are three buckets, and the people who are looking at different architectures would say: oh, I want to target one of these buckets, or all three of these buckets. They could pick and choose. And I would say that we don't want to certify just one component; you would have to conform to the whole bucket. Why I'm saying that is that then we could have three different variations of tests.
B
We have the node conformance test for the node, so that will take care of that bucket. We have conformance tests or something like that for the server bucket, and we would definitely have a client bucket. I don't think we have a standalone test suite for the client-side stuff, but we could.
B
There is enough material that we can carve into a client-side test, I think. So that would be my quick feedback on these questions. Awesome, thank you.
D
Hey, Dims, maybe I'll just chime in, so we can have maybe some consensus: definitely a plus one on this effort being worthwhile.
D
I'd like tier two and tier three to continue compiling, particularly on architectures that have been good actors thus far, like s390x and Power. So I like Dims's representation of that testing for tier one. I mean, I think, like everything, we always want testing to be better everywhere, so I don't think that's specific to this problem. And then I like the breakdown, Dims, that you tried to identify, which is basically a grouping of components by where they are in our client-server architecture.
D
So, breaking out client components like kubectl separately from, say, what typically runs on a control plane, versus what's needed to get a worker. And so I think, if we have those three buckets, then the Windows example is the one that is very clear, as in many cases only needing a worker bucket to be satisfied, and probably a client bucket, but not the control plane one. So, basically, I'm very pleased about the way this is presented and the outcome. This all looks great to me.
A
And I'll third that. Yeah, I think this is definitely worthwhile. Yes, we don't onboard platforms often, but even for the existing platforms we have, being able to articulate the different levels of support to customers and the community is, I think, really important. So I'm thumbs up on all of that.
A
I would have a question: what is the downside from a qualification standpoint? Practically, there may be communication downsides, but if we didn't bother with the buckets, and we simply said this artifact is at this tier for this platform, are there testing implications? I want to make sure that our test boards and our signal are as clear as possible. Have you thought through the implications of that, I guess?
C
Yeah, I think Dims gave a good perspective on that at the beginning: right now, the three critical groups of testing that we do already exercise the different components for the different groups. So the downside would obviously be that we'd have to figure out how to test an artifact for a specific platform without some of its counterparts in that specific group.
C
If we look back at the matrix, you can see we've already implicitly done exactly what's been articulated here. As you can see, kube-proxy, kubeadm, and the kubelet all have the same set, kubectl has the same, and then the server components have the same. So in practice I don't see anyone ever really making an effort to split those up, and I think formalizing it, saying you do have to support an entire group if you're going to add a platform, just reiterates that. So I think that is a good path forward, I guess.
C
When you say it's a delta, I mean, it crosses two different groups here, I'd say, right?
C
But not control plane, right? And then darwin and linux/386, of course, would be client, and not node and server. So yeah, right now I'd say that we do honor the grouping in how we do things.
B
The one other thing that is not captured here, Daniel, would be the entry criteria for tier 3. The problem with RISC-V, for example, was that you either needed a custom build of the golang compiler, or some things were not available and people could only build it behind the scenes with patches. So I would say that the bar for tier three would be: anybody should be able to build the components, and we should be able to document what versions of things are needed to build the artifacts. So we would have some entry criteria, and going from tier to tier we'll need some guidance as well, beyond just getting into the whole bucket in the first place. Then an additional twist that we haven't captured here, and we should, is what it takes to go to tier one from tier two.
B
There is a whole bunch of things that we need to do to make sure that all the images we need for conformance testing are enabled for that architecture and continue to work. Like we found out yesterday: one of that bunch of images, the conformance image, wasn't working, right? So we would need to make sure that people understand that it is not just the components that you can build, but also the additional supporting things that would be needed.
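One way to sanity-check that kind of supporting image is to list the platforms its published manifest list actually carries. The sketch below assumes the go-containerregistry crane package is available, and the image reference is only an example.

```go
// Sketch: print the os/arch pairs present in an image's manifest list,
// assuming github.com/google/go-containerregistry is available.
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/google/go-containerregistry/pkg/crane"
)

// manifestList models just the fields we need from an OCI image index.
type manifestList struct {
	Manifests []struct {
		Platform struct {
			OS           string `json:"os"`
			Architecture string `json:"architecture"`
		} `json:"platform"`
	} `json:"manifests"`
}

func main() {
	raw, err := crane.Manifest("k8s.gcr.io/pause:3.2") // example image
	if err != nil {
		log.Fatal(err)
	}
	var ml manifestList
	if err := json.Unmarshal(raw, &ml); err != nil {
		log.Fatal(err)
	}
	for _, m := range ml.Manifests {
		fmt.Printf("%s/%s\n", m.Platform.OS, m.Platform.Architecture)
	}
}
```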
C
All right, well, I appreciate you all taking the time to hear the presentation today. Obviously, as we go through the different steps for this, I'll continue to communicate back with folks here, and I'm sure there will be opportunities for us to ask for your input on some of these things, especially on the more granular requirements. But thank you all for taking the time today.
B
Daniel, one request to SIG Release: can we add some verification steps, so we don't break the manifests next time when we make a release? Let's do that for 1.21, please.

C
Absolutely.
A
All right, great. Thank you, Daniel, that's great stuff. Next item on the agenda, the last item: a production readiness review update. That's the subproject here in SIG Architecture, and for those of you who've been in the couple of leads meetings, I'm repeating myself over and over.
A
Some of you will have heard this already, but, as folks here know, we've been working on production readiness reviews for a few releases, and in December we sent out an email asking for leads' lazy consensus on merging enforcement of production readiness review approval within KEPs that are targeted to a release in an implementable state. And we got the feedback that, hey, it was December, and there were a lot of people who maybe didn't see that lazy consensus.
A
So we disabled the enforcement of that policy, and we've had a number of discussions over the last couple of days around it. The plan right now is for me to write a PR that re-enables it but adds some additional context.
A
Some explanation, some information about how new people can be added, a little more detail on the process, and some explanation of why we don't expect any sort of bottleneck, given that we've done a pilot of this for the last couple of releases. So that's on me; I will get that out. I was supposed to get it out by today, but it didn't happen; I will get it out tomorrow at the latest. And then, for that PR, I would ask those folks who are interested:
A
please take a look at it early next week. It will be on a three-day lazy consensus, so it will merge back in Wednesday unless the consensus is otherwise. That would enable enforcement of that process for KEPs going into enhancements freeze for 1.21.
A
Okay, awesome, all right. Well, that's the end of our agenda, so I guess let's give back a little bit of time, and everybody have a great weekend coming up. Thank you very much.