From YouTube: 20200924 SIG Architecture Community Meeting
A
All right, hello, everybody! This is the Kubernetes architecture meeting for September 24th. Thank you all for coming. Please remember our code of conduct and treat each other with respect and kindness. Today we have just a couple of things on the agenda: we'll start with a presentation from OpenKruise, then we have a discussion of ephemeral containers, and then some project readout on conformance.
A
So the choppiness is from my end — awesome. Well then, I won't talk much, so let's just jump right in and hear from OpenKruise. Thank you. Yeah.
B
Thank you. Can I please share my screen?
B
Are you able to see this? Cool. Thank you. So good morning, folks — I'm Andy Shi from Alibaba Cloud. A couple of weeks ago we presented this project, OpenKruise, to the CNCF TOC.
B
Our goal was to enter as a sandbox project, and the TOC had some concerns. They would like to know whether this project fits better within upstream Kubernetes, as a SIG-sponsored project, or as a CNCF sandbox project.
B
So for that we are doing the due diligence. I reached out to SIG Apps and the steering committee, and the steering committee said that I should come here to SIG Architecture first. I hope that after this meeting we won't have to go back to the steering committee and can just reach agreement on where this project should sit. This past Monday I already had a meeting with SIG Apps, and we reached agreement on where it should be, in their opinion.
B
So now I'm here to gather your professional opinion as to where this project should sit. A little background on OpenKruise: at Alibaba Cloud we have Kubernetes offerings both for external customers and for internal customers, meaning that our large e-commerce website actually runs on top of our Kubernetes clusters.
B
During that process we got a lot of complaints about the limitations of the upstream workload management, especially when people are trying to do rolling updates. What they wanted were two things: one, more combinations of update strategies, and two, efficiency in the updating process.
B
So we had something working already in production, and we are open sourcing these technologies into a project called OpenKruise.
B
Currently we have only the workload management controllers, and we intend to have an operator framework wrapped around these controllers, to be released later. Right now we are still planning and developing those features, but both of them are open source components of the OpenKruise project, and this is the whole project we are trying to donate to the CNCF.
B
In terms of the Kruise workload management controllers, there are currently five of them, and we plan to add a new one — we'll see. The beginning of all this came from people who were complaining about StatefulSets: when they're doing an update it takes a very long time, because it's done one pod at a time. They wanted to have in-place update, meaning that they want to bypass the API server. We implemented that, and then they said:
B
"Okay, now we want that same user experience to apply to stateless containers as well." So now we have CloneSet. First we had Advanced StatefulSet.
B
Now we have CloneSet, and while building CloneSet more requirements kept coming in — "we want this and that update strategy" — and people want the capability of mixing and matching them in different combinations. That's what CloneSet is about, and it's still under active development. Then there's another controller called SidecarSet.
B
Basically, SidecarSet is used to manage sidecars — say you have monitoring or logging sidecars — and it's a centralized place to manage those sidecars. It injects the sidecars, and it updates the sidecars using in-place update, meaning that you don't have to restart or roll out your workloads once you have a new version of the sidecar. And we have UnitedDeployment.
B
This is for the use case where your cluster spans different data centers and you want to split it into different zones — like what AWS calls an availability zone; we call them subsets. It's about segregating your whole deployment, or your cluster, into different zones and managing pods in the different subsets.
B
And lastly we have BroadcastJob, which is a specific type of job where you can select the nodes that the job will run on. So instead of running it on all nodes, or on just one, you have more granular control in picking which nodes you want to run it on.
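The node-targeting idea described above can be sketched as a simple label-selector filter. This is an illustrative sketch only, not the actual BroadcastJob implementation — the `nodes` and `selector` shapes are simplified stand-ins for the real API objects:

```python
# Sketch: decide which nodes a broadcast-style job should run on.
# An empty selector means "run on every node".
def select_nodes(nodes, selector):
    """nodes: list of {"name": str, "labels": dict}; selector: required labels."""
    return [
        n["name"]
        for n in nodes
        if all(n["labels"].get(k) == v for k, v in selector.items())
    ]

nodes = [
    {"name": "node-a", "labels": {"zone": "us-east-1a", "gpu": "true"}},
    {"name": "node-b", "labels": {"zone": "us-east-1b"}},
    {"name": "node-c", "labels": {"zone": "us-east-1a"}},
]

print(select_nodes(nodes, {}))                      # every node
print(select_nodes(nodes, {"zone": "us-east-1a"}))  # only the matching subset
```

This is the "more granularity than all-or-one" point: instead of a fixed completion count, the job fans out to exactly the nodes the selector matches.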
B
So that's basically the current status and design of what we have. A couple of things, I think, stood out in the CNCF presentation. I don't know exactly why they had these questions, because it was a closed-door discussion, but I was wondering why. One of them, I think, is the names: you have Advanced StatefulSet and UnitedDeployment, and they think it's probably a fork of the upstream code.
B
So
we
don't
fork
upstream
code
because
we
basically
design
all
those
features
from
scratch.
So
we're
not
adding
a
wrapper
on
top
of
like
deployment
or
replica
set.
We
just
yeah,
so
we
use
the
atomic
part
and-
and
we
are
not
forking
the
the
logic
and
also
in
terms
of
use
cases
we
are
complementing
like
these
features,
are
more
useful
for
large-scale
production
uses
like
if
you
have
10
or
20
parts.
B
Probably
you
wouldn't
see
any
differences,
but
if
you
have
like
a
couple
hundred
part
running
like
four
e-commerce
websites
and
you
need
to
update
them,
yes,
you
will
see
the
differences
and
both
for
the
for
the
responding
time
of
your
pause
and
also
the
load
onto
your
api
server,
because
so
many
requests
definitely
would
be
be
a
drag
on
your
performance
of
the
api
server.
B
So
we,
these
features
are
mainly
designed
for
large-scale
production
users,
but
that
being
said,
even
though
it
is
being
used,
I
think
we
are
still
under
active
development,
especially
clone
set
and
other
new
new
controllers.
That's
being
discussed
right
now,
and
also
the
the
operator
part
of
it.
So
here
is
a
more
detailed,
more
detailed
example
on
the
film
set
like
it
has
different
types
of
update
strategies
available
and
things
like
maximum
available
max
search
and
stuff.
B
I
think
some
of
them
came
from
other
workloads
on
in
kubernetes,
but
also
we
have
our
own
invention
as
well.
We
have
the
priority
we
have
the
scatter
and,
of
course,
in
place
is
a
big
big
feature,
big
feature,
that's
being
requested
by
the
community
by
our
users.
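The maxUnavailable / maxSurge fields mentioned above accept either an absolute number or a percentage. Here is a small sketch of how a rolling-update controller might resolve such a field against the replica count, mirroring the upstream convention that surge rounds up while unavailability rounds down — treat the rounding rule and function shape as assumptions, not OpenKruise's actual code:

```python
import math

# Resolve an "int or string" style field (e.g. maxUnavailable / maxSurge)
# against the current replica count. Percentage strings are converted to
# pod counts; round_up=True mimics surge (round up), round_up=False mimics
# unavailability (round down).
def resolve(value, replicas, round_up):
    if isinstance(value, str) and value.endswith("%"):
        fraction = int(value[:-1]) / 100.0
        n = fraction * replicas
        return math.ceil(n) if round_up else math.floor(n)
    return int(value)

replicas = 10
print(resolve("25%", replicas, round_up=False))  # maxUnavailable "25%" -> 2 pods
print(resolve("25%", replicas, round_up=True))   # maxSurge "25%" -> 3 pods
print(resolve(4, replicas, round_up=False))      # an absolute value passes through: 4
```

The controller then batches the rollout so that at most `maxUnavailable` pods are down and at most `maxSurge` extra pods exist at any moment.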
B
So
here
is
a
simple
comparison
of
what
we
have
in
terms
of
updates,
updating
strategies
of
consent,
which
is
the
existing
upstream
workloads
and
again
these
these
features
can
be
combined
and
they
can
be
mixed
and
matched
so
you
have
way
more
choices
in
in
your
updating,
in
controlling
your
updating
strategies,
so
clone
set
really
is
one
of
the
most
advanced
or
most
feature-rich
controllers
in
all
of
them.
B
All
the
others
probably
don't
have
that
many
many
features
because
they
were
not
actively
developed
like
advanced
staple
set.
We
I
think
we
are
fixing
bugs,
but
we
are
not
adding
new
features
to
it,
but
the
concept
we
are
still
adding
new
features.
Another
another
example
would
be
sidecar
set.
Sidecar
set
is
centralized
place
to
manage
all
the
sidecars
in
your
in
your
deployment,
so
it
uses
a
mission
control.
B
It
decouples
your
workload
from
the
from
the
side
cars,
so
you
don't
have
to
worry
about
them
and
also,
even
though,
in
this
diagram
we
say
clone
side,
it
also
works
on
deployment
or
other
standard
kubernetes
upstream
workload.
So
basically,
these
things
can
be
used
with
with
each
other
or
they
can
be
used
with
kubernetes
upstream
workload,
management
as
well
like
they
are
not
conflicting,
each
other
that
that's
for
sure.
Like
you,
you
install
these
controllers.
You
can
still
use
deployments
and
and
stifle
sets,
that's
not
a
problem.
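The injection flow described here — match a pod by label, then append the centrally managed sidecars — can be sketched roughly as follows. This is an illustrative reconstruction of the idea, not OpenKruise's actual admission logic; the dict shapes are simplified stand-ins for the real objects:

```python
# Sketch of label-selector-driven sidecar injection at admission time.
def inject_sidecars(pod, sidecar_set):
    if all(pod["labels"].get(k) == v for k, v in sidecar_set["selector"].items()):
        existing = {c["name"] for c in pod["containers"]}
        for sc in sidecar_set["containers"]:
            if sc["name"] not in existing:  # idempotent: never inject twice
                pod["containers"].append(sc)
    return pod

pod = {"labels": {"app": "web"},
       "containers": [{"name": "app", "image": "web:v1"}]}
sidecar_set = {"selector": {"app": "web"},
               "containers": [{"name": "log-agent", "image": "fluentd:v1"}]}

print([c["name"] for c in inject_sidecars(pod, sidecar_set)["containers"]])
# -> ['app', 'log-agent']
```

Because the sidecar list lives in one place, bumping the sidecar image in that one object is enough to roll it out everywhere, which is what makes the in-place update path attractive for this controller.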
B
So
it's
not
that
either
all
situation
as
well.
So,
like
I
said,
it's
complementing
the
features
and
I
think
that's
pretty
much
the
typical
things
that
I
would
like
to
introduce.
The
other
three
controllers
are
pretty
straightforward,
like
they
serve
single
purposes,
and
they
are
mostly
already
done
in
terms
of
development
and
feature
requests,
so
that's
not
much
to
talk
about,
but
they
are
useful
to
certain
use
cases,
so
we
hope
to
include
them
all
into
into
these
this
open
source
project
on
open
cruise.
So
that's
my
presentation.
B
Hopefully it answers some of the questions, and I think that's the end of the screen sharing from me. Thank you.
A
Okay, thank you. So we have a lot of people on the call — I don't know if people have questions or comments; maybe they can put a comment in the chat or raise their hand, unless there aren't that many. Let's see how that goes.
A
Certainly, from a sort of architectural standpoint, different workload controllers don't need to be in the Kubernetes project. You can certainly create your own set of workload controllers that do whatever you want, and have your own project do whatever it does. But if you're interested in that, then talking to SIG Apps — as it sounds like you did — that would be where they would live within this project.
A
You
know
not
talking
specifically
about
tactical
marriage
or
lack
of
I
don't
know
enough
about
it,
but
certainly
from
a
scope
perspective,
which
is
kind
of
what
I
would
think
would
be
interested
in.
It's
an
appropriate
set
of
projects
instead
of
controllers,
because
the
gaps
has
the
capacity
in
the.
A
C
So can I ask one question — can you guys hear me? Okay, yes, there you go. It's interesting to look at how people in the real world are slicing and dicing their workloads.
C
I think there are actually a couple of concerns in what you were showing. The idea of sidecar as a resource is interesting, independent from the workload concepts. There are some things that seem like they would be generally useful features — like, why doesn't Deployment have them? What I really want to avoid is a situation where users have to sort of go shopping for which workload controller is right for them.
B
Controllers — yes, I agree. So there are a couple of things. One, like I said, there are certain things that are for large production use cases, especially things like in-place update, where you bypass the API server. So what happens is, it's not a generic use case: it's really meant for a controlled environment where you know what you're doing, because you're not getting a new node, you're not getting allocated new resources.
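The trade-off being described — in-place update is fine when only the image changes, but risky when the resource footprint changes, since the scheduler never re-evaluates placement — can be captured in a tiny check. This is a hypothetical sketch of the judgment call, not the real CloneSet logic, and the spec fields are simplified:

```python
# In-place update patches the running pod; delete-and-recreate goes back
# through the scheduler. The judgment call: in-place is only reasonable
# when the resource footprint is unchanged.
def in_place_safe(old_spec, new_spec):
    image_changed = old_spec["image"] != new_spec["image"]
    resources_unchanged = old_spec["resources"] == new_spec["resources"]
    return image_changed and resources_unchanged

v1 = {"image": "shop:v1", "resources": {"cpu": "2", "memory": "4Gi"}}
v2 = {"image": "shop:v2", "resources": {"cpu": "2", "memory": "4Gi"}}
v3 = {"image": "shop:v3", "resources": {"cpu": "8", "memory": "32Gi"}}

print(in_place_safe(v1, v2))  # True: only the image changed
print(in_place_safe(v1, v3))  # False: new resource needs -> recreate via the scheduler
```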
B
The purpose of going through the API server is that the scheduler would basically destroy and create a new pod for you. If, for example, you have totally different CPU or memory usage in different versions, you should avoid using in-place update. But in a production environment — a controlled environment — I know what I'm doing, I know the differences between these images, and I can make a judgment call, basically saying: okay, I would like to take the risk, because I know it's not going to
B
Take
that
much
more
resources.
That
being
said,
so
it's
really
as
like,
I
would
say,
a
practical
use
cases
than
a
generic
one,
and
the
second
part
is
since
these
things
are
still
in
in.
B
I want to say "experimental," because they are already in use, but they are still undergoing active development. The release cycles are much faster and more frequent than the upstream release cycles. In terms of features, I would feel more comfortable merging them — merging the majority of them — after they're more mature. Right now I see a couple of features that could be donated upstream, like the ones we have already frozen development on, which have been tested for a while. Those could be upstreamed, because there wouldn't be many changes; the others probably need more time to mature.
C
Oh yeah, sorry — for me the audio is really bad on the receiving end too. So, Andy, I heard what you were saying, and I didn't mean to start a side conversation. Maybe at this point we should take it back to the mailing list and discuss the path forward: what do we ultimately want, and what's preventing us from getting there?
B
That makes sense, yeah. Also, I would like to confirm that I think this will be the last SIG that I need to go to before going back to the CNCF and handing over our verdict, because I think the steering committee meets only once a month, so I'm not sure that's the proper forum for such discussions.
B
Also, they wouldn't sponsor projects, so I would assume that the SIGs would be the right place to discuss these detailed projects and technical details. Does that make sense?
A
As far as your donation to the CNCF versus Kubernetes — whether to take it wholesale — it's really going to be up to SIG Apps, I would say. I'm curious how that conversation went, but certainly another approach is to be an independent project and then take your features back upstream one at a time, as Tim is suggesting. It really comes down to whether SIG Apps can accept the additional commitment to support and develop these. That's my opinion.
B
Thank you. I think SIG Apps was saying the same thing, basically. They suggested that we stay as a separate project for now and keep upstreaming back once certain features are stable, or when upstream is actually looking for those features — then it's a good time to contribute back and eventually be absorbed. Not that there are going to be huge differences,
B
But
I
think
there
are
a
lot
of
common
use.
Cases
that
can
be
can
be
contributed
back
to
upstream,
and
we
certainly
do
want
to
have
to
have
our
workloads
or
our
code
be
part
of
the
official
release
of
kubernetes.
That
will
be
ideal.
It's
just
right
now.
We,
our
features,
are
still
under
development,
and
we
don't
have
much
we.
We
are
not
sure
about
how
these
things
will
go
eventually
or
will
look
like
eventually
after
runs
of
testings
by
by
the
community.
B
No, I just would like to ask you: please, please reply to that email on the mailing list.
H
Hey everyone. Can you hear me?
H
Okay, great. Hey everyone — I'm verb on GitHub, and I wanted to reach out and talk a bit about ephemeral containers today with SIG Architecture, because it touches several parts of the system.
H
So I do have some specific questions, but I thought I might start with a brief reminder of what ephemeral containers are and how they work. Ephemeral containers are an alpha feature that allows adding containers to a pod that's already running. It's intended for interactive troubleshooting of things like distroless containers, where you might not have a shell. Ephemeral containers have many restrictions beyond normal containers — for example, they don't have guaranteed resources.
H
Right now the feature is in alpha, and we have an alpha command in kubectl to allow creating them — and I guess I'll pause here to see if there are any questions.
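For reference, the shape of the request behind that kubectl command: the client patches the pod's `ephemeralcontainers` subresource with a body along these lines. This is a hand-assembled sketch of the alpha-era format — check the API reference for your cluster version before relying on the exact schema, and the `targetContainerName` value here is a hypothetical example:

```python
import json

# Build the body for a pod's "ephemeralcontainers" subresource to add an
# interactive debugging container to a running pod.
def debug_container_body(pod_name, image="busybox", target="app"):
    return {
        "apiVersion": "v1",
        "kind": "EphemeralContainers",
        "metadata": {"name": pod_name},
        "ephemeralContainers": [{
            "name": "debugger",
            "image": image,
            "stdin": True,
            "tty": True,
            # share the target container's process namespace for troubleshooting
            "targetContainerName": target,
        }],
    }

body = debug_container_body("my-distroless-pod")
print(json.dumps(body, indent=2))  # the payload a client would send
```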
H
Nope? Okay, cool. So I wanted to talk about a couple of issues that were a bit contentious when we first started discussing ephemeral containers years ago. Those are, specifically, the ability to remove ephemeral containers — right now you can only add them; you cannot change or remove them — and the ability to configure a security context.
H
These weren't fundamental features necessary to get the feature started, but they are now our most popular feature requests. So I've written up an update to the KEP, which I've linked in the meeting notes, and I'd like to get some more eyes on it.
H
So,
for
example,
I
I
just
learned
that
yesterday
that
I
should
not
be
building
on
top
of
pod
security
policy,
so
I
need
to
go
back
and
revise
the
kept
and
to
not
do
that,
but
but
I
would
like
to
be
able
to
make
these
two
things
configurable,
somehow
on
a
cluster
basis,
so
I'm
not
familiar
with
config
architecture.
I
I
don't
know:
do
we
it's
discussing
caps
and
update
the
caps,
something
that
we
do
here.
H
Okay. So on the main architectural input: I think the strongest guidance that I've gotten so far was from the API review, and we may want to chat about the API with some experts to see if we want to change anything before graduating to beta. But these two specific things are sort of far-reaching.
H
I
think
there's
there's
a
lot
of
opinions
about
what
should
be
allowed
by
auth
and
and
how
we
should
be
able
to
configure
policy.
But
I
don't
know
if
they're
architecture.
C
Sorry — there are a lot of audio issues going on. Okay. I paid close attention to this at the very beginning, and then it seemed like it was under control, so I shifted to other things and haven't really come back to it. If you think it's useful, I'm happy to discuss it — to talk about what the bumps are that you're currently experiencing from the API point of view — if you just want to talk with somebody from this group.
H
Yes, okay. So being able to tell whether a pod is tainted is also included in my update to the KEP, and my idea there is to have the kubelet set a —
C
Well,
the
new
format
can
be
an
easy
change.
That's
that's
just
administrator.
H
Production
readiness
questions
great
yeah,
okay,
so
that
was
the
other
thing
that
I
wanted
to
discuss
with.
Is
I
get
some
feedback
through
through
github
issues,
but
but
not
a
lot,
so
I
don't
know.
I
would
like
to
be
able
to
get
more
feedback
and
more
opinions
from
the
community
about
whether
we're
you
know
we're
ready
to
to
move
to
beta,
whether
it's
something
that
we
we
want
to
pursue,
but
it
sounds
like
maybe
production
readiness
is
a
venue
to
do
that.
C
Production
readiness
is
definitely
something
that
we
should
pay
extra
attention
to.
I
think
I
believe
this
falls
into
the
case
of
everybody
who
wants
it,
but
it's
complicated
enough
that
few
people
are
able
to
make
the
time
to
think
about
it.
Deeply
enough.
I
don't
think
it's.
I
don't
think
you
should
interpret
silence
as
people
don't
want
this
or
aren't
excited
by
it,
mostly
as
everybody
looks
at
it
and
says,
oh,
my
god,
that's
complicated.
I
don't
have
time.
H
Okay,
great,
that's
great,
that's
good
feedback.
I
can.
I
can
work
with
that.
I
can
I'm
I'm
not
above
pestering.
H
Okay, so I think that actually gives me good direction; that answers the questions I had coming into this. Does anyone have any questions for me, or any concerns or things that I should look at?
H
Okay, sounds good. Thanks for your time — I'll follow up on these items.
D
It's actually a pool — we moved neighborhoods this week and I haven't yet figured out a place to work this bright and early in the morning, so here we are. I'm going to share my screen, but I'm not able to yet — can I get the permissions to do so? In a moment: some exciting stuff is on the way. We will finally, as SIG Architecture, own a release-blocking job, and as soon as the process is complete, we're definitely kicking it off.
D
Hey
see,
I
think
it
is
john
or
them
may
need
to
give
me
permission
to
share
screen.
D
All
right
so
main
news
that
I'm
super
excited
about.
It's
been
several
years
in
the
making
is
making
sure
we
don't
create
any
more
technical
debt
and
we
are
in
the
process
of
creating
that
a
release
blocking
job
here.
D
Take
a
look
at
that
if
you're
interested
the
actual
pr
to
get
the
job
in
for
now
that
job,
when
it
fails
we'll
email,
our
team
until
we
ensure
that
the
signal-to-noise
ratio
is
correct
at
this
time,
we'd
actually
do
have
a
failure
occurring,
but
it's
a
a
true
negative,
and
if
we
just
if
you
went
to
the
top
of
like
I
just
reloaded
the
page
api
snoop
dot
io,
I
think
that's
cncf,
that
I
owe
we've
added
at
the
bottom.
D
Anything
new
that
occurs
during
a
release
cycle
and
we
can
see
that
there's
a
entire
new
category
called
internal
api
server,
which
is
mainly
an
alpha,
but
whenever
we
enable
that
from
the
generated,
swagger
json
that
ends
up
in
the
kubernetes
repository,
which
is
our
source
of
truth.
Currently,
we
can
talk
about
whether
that's
the
right
source,
truth
or
not.
It
becomes
something
that
we
need
to
filter
out
as
either
writing
a
test
for
or
putting
in
our
list
of
endpoints.
D
We
have
an
area
over
here
on
our
conformance
or
list
of
ineligible
endpoints.
It
has
specific
reasons.
Each
of
these
stable
apis
are
not
part
of
conformance
yet,
for
example,
like
storage
and
a
few
few
other
things.
D
We can see that this has a list of all those new internal API server endpoints. But if we back out all the way and go to stable: this stable API endpoint is currently not tested and not conformance-tested. I think we're likely just going to create a — well, I want to get a little bit of feedback on the process for what happens when we have promotions to GA, which are pretty normal. Any time we get a new API group,
D
We're
gonna
need
to
get
that
api
group
and
that's
part
of
the
stable,
but
that's
on
a
as
far
as
when
we
do
our
spinning
up
to
generate
the
swagger
json.
I
think
it's
part
of
the
release
process,
where
we
spin
up
an
api
server
with
all
alpha
and
beta
api
endpoints
enabled
and
do
a
quick
query
to
the
living
swagger
json,
endpoint
and
dump
it
into
the
kubernetes
repository.
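The filtering step described above — dump the merged swagger, then flag stable operations that are neither tested nor on the ineligible list — could be sketched like this. The path-based stability check and the operation IDs are illustrative simplifications of what the tooling actually does:

```python
# Given a swagger/OpenAPI "paths" dict, report the stable operations that
# still need conformance tests: not alpha/beta, not already tested, and not
# on the ineligible-endpoints list.
def untested_stable(swagger_paths, tested, ineligible):
    pending = []
    for path, verbs in swagger_paths.items():
        for verb, op in verbs.items():
            stable = "alpha" not in path and "beta" not in path
            if stable and op["operationId"] not in tested | ineligible:
                pending.append(op["operationId"])
    return sorted(pending)

paths = {
    "/api/v1/namespaces": {"get": {"operationId": "listNamespace"}},
    "/api/v1/pods": {"get": {"operationId": "listPod"}},
    "/apis/storage.k8s.io/v1/volumeattachments": {
        "get": {"operationId": "listVolumeAttachment"}},
    "/apis/events.k8s.io/v1beta1/events": {
        "get": {"operationId": "listEventV1beta1"}},
}

print(untested_stable(paths,
                      tested={"listNamespace"},
                      ineligible={"listVolumeAttachment"}))
# -> ['listPod']  (the beta endpoint is skipped; the rest are covered or excluded)
```

A release-blocking gate then simply returns non-zero whenever this list is non-empty.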
D
To take that and kind of table it for an hour — we can discuss it later. This is the actual gate itself when it runs. This is on prow.cncf.io, primarily used by my team right now, but we're looking to expand that out to more projects. At the very bottom,
D
It
listed
the
currently
untested
endpoints
and
returns
non-zero
so
that
the
job
fails,
and
this
is
where
we
will
get
an
email
notifying,
currently
our
team
and
then
I'd
love
to
get
some
feedback
on
where
that
should
go
longer
term.
Whether
we
create
a
conformance
mailing
list
that
interacts
with
sig
release
to
ensure
that
the
responsible
party
for
or
sig
for
creating
those
new
endpoints
covers
their
own
debt
before
the
release
is
made
or
they
have
to
revert
those
promotions.
D
We're happy to keep chipping away at this. This thing about "no more debt" is also huge for us.
D
When we get this up, we can revisit it next time we do a readout, to see how it's been performing — as far as what we do with the emails and how we engage with SIG Release.
D
As I continue on: we saw a great increase in our conformance coverage in 1.19, and we have a little conformance-progress area where you can see how far we've come. The red stuff is not always our team completely — but pretty much — and the light blue area is new endpoints that were promoted with tests. We're looking forward to that being the only way to get new endpoints in, by the time we get rid of that gray area.
D
We have cleared all debt back to 1.15 — yay! There used to be gray areas here. This is coverage by release; you can see what we've erased, all the way back, and we're working on this 1.14 area. If you look up here, you can see we used to have endpoints up here — this one here — and now we're working on 1.14 to clear it all back to the beginning. Exciting. And someone on the call is actually one of the few people who's contributed who's not on the team!
D
Thank you for that. So let's LGTM and approve this — any feedback we have for this PR would be great. This is one of the APIs, I think, that we caught, where we reached out to the SIG and said: would you mind cleaning up the mess with us? Thanks heaps.
D
I won't go into the details of it, but we do OKR-style goal-setting based on the release cycle, and we have a presentation that we give during our bi-weeklies every once in a while; I also roll it up to the CNCF for feedback.
D
So if you're interested in what our specific milestones are — the velocity we have, or the steps we're taking — please take a look at what we did and what we're up to. In addition to the work that ii does writing tests and blocking people from adding new APIs without tests, we have also had a lot of discussion around conformance profiles, and I encourage you to take a look at John's KEP and give feedback on it. It's been a lively discussion, and that's it.
C
Thank you. Looking at the agenda, that seems like it. Were there other things that people want to bring up today? There was some good discussion on the mailing list this week.
C
That sounds like consensual silence.