From YouTube: 20220310 SIG Arch Community Meeting
A
Hi everybody, today is March 10th, and this is the meeting for SIG Architecture. Please be kind to each other. We are recording this meeting, so please be nice.
A
So, let's get started on what is in the agenda. Let me share my screen, and when I do I won't be able to see if you guys say something on video, so, you know, yell, please, okay? Where am I... oops?
A
This is... is this the one to share? Yes. So, I don't know who... let's start with the easy one, right? Easy as in we already talked about it on the mailing list. Clayton, why don't you please set the stage?
B
Sure. So Marek from Google shared an email with Steering talking about the concern about the health of the etcd project in terms of contribution and reviewers. So there's a discussion that, I think, is a little outside what SIG Architecture's normal role would be, but it's something we should be aware of and participate in the discussions on, in terms of etcd being a fundamental component of the Kubernetes architecture. For many people, obviously, you know, the staffing of the project...
B
The health of that project is a CNCF issue — and, in some sense, a Kubernetes issue — and it really is an architectural problem for the project to have a component that we fundamentally depend on. And so there may be some actions for us to consider: potentially supporting a larger working group, making sure that we represent both problems and issues in the current project, as well as an architectural perspective on what some longer-term plans and challenges are.
B
We have SIG API Machinery, which typically is the one most connected to the support of it, and so generally I would say SIG API Machinery is the one that would directly be represented there, but we may have some broader topics. There's a number of groups that have raised this historically over the last couple of years. Certainly we've talked multiple times about the possibility of alternative stores under Kubernetes. The kine project started with k3s and explored a number of trade-offs.
B
A more direct database integration with something like CockroachDB, which offers more compatible core semantics — Steve Kuznetsov has been working on that for other ecosystem-related projects, not directly for the core, but he's got most of the e2e tests running in kine. And there's some broader discussion that, I think, as part of this effort we'll want to have, just to make sure that as a community and as an architecture we're healthy, and maybe look for requirements and summarize some of the state of the ecosystem: what are the gaps in the current project, what objectives do SIGs have that can't be accomplished as-is today. It's a good time to reflect on those. So, anything I missed there, Dims? Anything you can think of?
A
No, I think you covered it pretty well. There was one aspect that keeps coming up, related to, you know, pluggability of the KV store. I know that SIG API Machinery has talked a few times about it and has ruled it out, but I don't think we've documented that "no, we are not, we'll—"
A
—"never do this kind of thing" anywhere that I can point people to, because it came up in the CNCF TOC as well. Some folks were asking: hey, why do we need to staff this, or why do we need to continue this? Why is it not pluggable, since everything in Kubernetes is pluggable — so why are you all not?
B
Pluggability is not the primary goal of the system, but it is a strong principle that we adhere to — that one's a big one. Because we said SIG API Machinery did not have the bandwidth to truly explore alternative stores, and up until kine no one publicly had made a strong effort to address that. And I would say my perspective is: there's lots of things we can do.
B
What is going to deliver the most success? Which I think we all would agree on — there's nothing controversial about it. We shouldn't just randomly change something, because we still have a long tail of people who will have security issues. We made the choice to use etcd and to co-develop etcd alongside Kubernetes.
B
We are accountable for the shape of etcd and the ecosystem of etcd to some degree, and so we should... that's a factor. So I agree with you, but I've also caught people saying things like that that I don't think are completely true.
A
Yeah, it slipped, you know. Since we are all friends here, I'll state it bluntly — you all know what we are doing here. So what it looks like is: two people from the etcd maintainers ended up in a company called Ava Labs.
A
Fusion... I forget. So those two ended up in Ava Labs, and, you know, the same thing happened once before. At that time the CNCF TOC ended up reaching out to the CNCF GB, and somehow or other Amazon ended up hiring Gyuho and allowing Gyuho to work on etcd.
A
So this time as well we need to raise the flag, and that is happening. We need to do it from multiple sides. So all of us should talk to our company representatives and raise it through that channel, and some of us are trying to put this on the GB calendar agenda for the next GB meeting — I think it is the second week of April.
A
We need to make sure that it is on the agenda, and, you know, if any of you have some more background information that you can share with me, hit me up on Slack. Wojtek, I know some folks on the Poland team are also working on etcd now — how has their progress been?
C
Yeah, so Marek in particular — Marek, who was escalating that, is one of those. We also have Piotr, who has been working on that for the last two years, and he is still helping with that, but he is now focused on other stuff. So it's more like helping and ensuring that Marek has reviews and stuff like that, and consulting — but it's more or less mostly Marek on our side. So yeah, we are...
C
I think we are missing both manpower and some diversity, so that it's not only Google, and it's not perceived as, like, we are doing whatever we want, and so on.
D
I was gonna add here — this is a really interesting point. My perception is that Kubernetes is by far the dominant consumer of etcd in the space; it'd be good to—
B
There is a lack of diversity of etcd consumers, which is itself a problem from an architectural perspective. One of the reasons why — and, you know, Jordan and I have talked about this for at least, going back, six, seven years, talking about alternatives. Derek and I went through this early in Kube: we went through a set of assessments about whether there was something better. etcd at the time was willing to design itself to solve Kube's problem.
B
It's an open question whether, seven years later, all of the same assumptions that led to that decision still hold, and that's partially why — through some of the prototyping work that Steve's doing, and stuff people have done with kine — defining the formal semantics that we actually expect from our store, as clients see it, is a fundamental part of the health and stability of the project. Jordan and I had a back channel where some of Steve's PRs were asking questions.
B
Jordan, I'll use your sentence publicly, which was: we're all terrified of changing things at the fundamental level, because they have so many implications. Over the years the watch cache evolution has exposed semantic challenges. We have a surface area that's somewhat weak. Getting much better at formally defining the semantics we expose to customers and end users — you know, the existing production applications through our APIs — would benefit us no matter what, given some of the regressions that we've seen.
B
We don't test enough, and we rely on etcd to do the testing. So we have kind of a weird split: we ask etcd to provide something, we use it, and we trust it — but then we are sometimes surprised with regressions. That was Marek's point. So there is something architectural here — clearly within the scope of SIG Architecture — there's some smoke here, and I don't want to just fixate on solving the etcd problem as a project.
B
I also want us to have some reasonable discussions: reliability, conformance, understanding the guarantees, documenting what we're actually providing to clients — which Steve and I have been enjoying figuring out: which guarantees we're actually providing, and what people should be allowed to do with regards to the APIs. Some of that all ties together, and I'd like to see more investment there, and drive more investment there, not just in etcd.
E
Yeah, I wanted to sort of split this into three streams. One is what Clayton's talking about: formalizing stuff, figuring out where we have gaps. From the things that Marek raised in the email, it seemed like there were sort of two aspects. One is that there are bugs we know exist that haven't gotten traction getting fixed — so, in the immediate term, if we could list those and say: why are our tests green if these are critical bugs that aren't solved?
E
So we can at least know when we've sort of stabilized the current state, right. So that was the immediate part — part of his email sounded like something is on fire and we don't have visibility into that. So I'm really interested in getting that stable, because that's apparently a version that we already released, like 1.23.
E
The email said we're using a version that's buggy. The next one is sort of longer term: features don't have good diversity of reviewers, and reviewers with context.
E
Maybe we do; maybe there are gaps and problems. But change for change's sake — I don't know that we necessarily want that. So that's all I was going to say.
A
Yeah, one thing I had there was, you know: traditionally etcd hasn't performed well on slow disks, right, and we always throw SSDs at it.
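For reference, etcd's own hardware guidance suggests measuring fdatasync latency on the disk that backs the data directory; a minimal sketch of that check is below (the fio parameters follow etcd's published guidance, and the temp directory stands in for the real etcd data disk).

```shell
# Measure fdatasync latency on the disk etcd would use; etcd's guidance
# is that the 99th percentile should stay under roughly 10ms.
dir=$(mktemp -d)
if command -v fio >/dev/null 2>&1; then
  fio --rw=write --ioengine=sync --fdatasync=1 \
      --directory="$dir" --size=22m --bs=2300 --name=etcd-disk-check
else
  echo "fio not installed; skipping etcd disk latency check"
fi
rm -rf "$dir"
```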
A
So, at this point we've talked about it in Steering private a little bit; we'll probably talk about it more in the next Steering meeting.
A
We already talked about it in the TOC meeting, and we're going to take it to the GB as well. So getting manpower, and trying to find the right set of people to work on this, is definitely a top priority. But then, when people do show up, there needs to be some welcoming committee of some kind that'll put them to use and get them started off. I don't know how to do that with etcd.
A
You know, Sahdev has been there for a while on the etcd side, but I think Sahdev will probably need help if we try to get some new people in. So the only other thing I can think of is...
A
Some of the companies have in-house etcd folks, and getting some amount of bandwidth on their side to show up in the community — and, you know, see how they can help with some of the things Jordan was talking about: enumerating bugs and going around trying to figure out how to get them fixed.
B
3.5 went on a long time, if I remember correctly, and that was also a problem — I should have recognized early on that 3.5 was... it's been a while since I looked at this, so I might be misinterpreting some of it, but I was already a little nervous about the time between the previous version and this one. We've shifted cadences and approaches.
C
Yeah, I think I mostly wanted to say what Jordan and Clayton are saying: I think we are missing a lot of tests on our side. In particular, the couple of bugs that I'm aware were in etcd — although I don't know the details — are all about membership changes. We are not really reliably testing the HA setups of Kubernetes at all, not even talking about upgrades of HA setups, and upgrades together with membership changes of etcd clusters.
B
That is an area that Red Hat has invested in a lot — we're always running HA, we're always doing in-place upgrades, and we have invested in that. That might be an area where we can do more to bring that aspect upstream, and it might involve more test infrastructure. The upgrade tests upstream, honestly, are another long-running architectural gap.
C
Yeah, the upgrade tests. I think one of the things I was also mentioning — although it's in the "increasing the reliability bar" proposal — is upgrade tests and upgrades in general, or other types of tests like chaos tests and so on. In general, testing is something that we need to invest in more, because there are too many gaps, and there are bugs that may slip through across our releases.
B
With OpenShift, we ended up, over the last two years or so, doing a fairly sophisticated set of invariant-testing changes — taking the upgrade tests as a source, fixing bugs. I was fairly remiss in bringing that infrastructure back, mostly because we were lacking a test environment upstream, and I think actually maybe the right way to go about that would be to fix the test infrastructure upstream and then move many of those components over, because I'm fairly proud of the upgrade testing we do.
B
I haven't seen as many of the issues, but also Sam, on our side, was controlling the pace of etcd upgrades, and I don't know that we actually got to that 3.5 point, because he was a lot more conservative than others were. So maybe this is a place where getting more alignment on what we actually recommend, and putting more weight behind one arrow, would actually benefit all of us.
B
Everybody is approaching etcd versioning and upgrades a little bit differently. Can we bring a little bit more weight to bear on all of us doing the same thing? And I mean all of us in the vendor space, but also in the service space — making sure that we're really aligned and we're all seeing the benefits of having shared investment.
A
So, one question there: I remember there was an external vendor that ended up kicking the tires of etcd, and they added Jepsen tests, I believe. I don't even know where they are, or whether anybody has run them recently.
A
So maybe, you know, that might be another thing to poke at. But there is another problem that you brought up, which is SIG Testing. We are underwater there with SIG Testing in general — there aren't that many people; everybody's doing little pieces of testing. SIG Testing — you know what it takes, and we are waiting for Aaron, one of the chairs, to come back as well. So SIG Testing, I think, will need a lot of help.
E
Yeah, so, to be clear: SIG API Machinery is probably the closest owner of the use of etcd, and so I'd expect API Machinery to be making sure the versions that get pulled in are suitable. So for anyone on SIG Testing hearing that and thinking "wait, what, more work for us?" — that's not really what Dims and I are saying. It's more a question of: if API Machinery wanted to close this test gap and needed to do an upgrade test, or some sort of test...
A
I guess the leadership for this effort should be with SIG API Machinery, because they need to add the conformance tests that are missing, and things like that. And if they need help — saying the current things that are in Prow are not enough, we need something new — then it goes to SIG Testing. And if SIG Testing wants additional infrastructure set up, whether in GCP or outside, then we bring in the K8s Infra folks. But those three are definitely the SIGs that'll participate in the effort.
B
Yeah, and as Jordan noted, this is a SIG API Machinery problem. Our elements here would be: are we doing enough to support SIG API Machinery? Are we doing enough on the formal side? Jordan breaking it into three problems was really good. The problems that SIG API Machinery doesn't clearly own—
B
Do we have good owners for those? We've got reasonably good scalability testing around etcd. The chaos testing aspect, probably to some degree, is SIG Testing, but it also raises the issue of making sure that we just have good owners for those two problems. Jordan, was there another SIG you can think of that was in your list? We have scalability, testing, API Machinery, and K8s Infra.
A
So one of the red flags for me was, you know, the dependency updates for etcd. SIG API Machinery wasn't driving it, so it ended up with the code organization folks helping out with filing the go.mod updates and things like that, and getting people through updating the vendor directory, updating the test infrastructure — you know, all the YAMLs. Yeah, I think that's—
E
—more a sign that we have test gaps, though. We should be able to bump a version that claims to be compatible, and if our tests pass, we should have confidence that we're good to go. And so I don't know that I would necessarily expect API Machinery to focus on every pull request that touches the etcd client; I would expect us to say: the way we're using it, we're exercising it with tests, so we're confident that it works, right?
B
And the reliability aspects of some of that stuff — like leadership changes and membership changes — that is a fundamental etcd problem. Our job would be to understand that upgrades work, but I don't know that it necessarily fits within the remit of Kube to do deeper testing on the coherency of how etcd is upgraded, at least based on the way that we recommend things. We have kubeadm, but, as far as I know, most of the people providing distros are doing different things with etcd.
B
Okay, yeah — anything client-related is totally within our remit. I was thinking of some of the membership corrections: anything membership-related that doesn't surface to us. There we should be advocating for investment in the etcd project, probably, even if that's us — you know, folks who care about it from a Kube perspective. It seems so, yeah.
E
It's like suitability for use, right? If we're planning on using the client in an HA way, or a failover way, that requires failover to work, then we need to make sure we either have test coverage or that there's test coverage upstream — and there was neither. So I think either of those places is a fine place to add that coverage; selfishly, I'd want it upstream, so that... how do we—
E
Anyway, I commented on the thread, particularly about that issue, just because I want to stabilize the current state. The long-term conversations about diversity and future stuff and features and alternative stores — all of those are good conversations to have, but I want to make sure we sort of make users whole for 1.23.
E
I think we add it to the agenda and tag them into the thread. I would probably add SIG Arch and SIG API Machinery into the thread, add it to the API Machinery agenda, and talk through some of those things there.
A
Thank you, Andrew. I see the Noogler hat — nice.
A
Thanks, yeah. So, anybody else have any thoughts about this? Derek, Christian, Sergey?
F
I would echo everything Clayton said. He and I were chatting before this, and it's just about ensuring that we're doing the right things to support the broader community.
E
I mean, I think the short version of this is that server-side apply allows upsert, so that is probably the only thing I would expect to change here. So if we document that as part of server-side apply, that's a good step to take.
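A minimal sketch of the upsert behavior being described (the ConfigMap and field-manager name are illustrative, and a reachable cluster is assumed for the kubectl step):

```shell
# Server-side apply is an upsert: the same request creates the object
# if it does not exist and updates it if it does.
cat > demo-cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  greeting: hello
EOF
# Running this twice is safe: the first call creates, the second updates.
if command -v kubectl >/dev/null 2>&1; then
  kubectl apply --server-side --field-manager=demo -f demo-cm.yaml
else
  echo "kubectl not available; apply demo-cm.yaml with --server-side against a cluster"
fi
```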
A
Okay, so that is one thing that we can add. Do you want to type what you wrote there, Jordan? Yeah, I mean, he—
A
Consumer — I remember at one point we did write down the list of things that the kubelet needs to be present.
B
And that one was actually really important for distributions, and maybe we're even under-tested there — I actually don't know what our vulnerability to that is as a project if someone changes subtle things. A lot of people tend to try to be very careful about that stuff; we haven't had a significant regression. As we've moved more to gRPC plug-ins, some of the new things that might have introduced more churn have not had as much of that. But yeah.
A
Yeah, I'll comment on it later. Oh, this is from you, Jordan.
E
This was mostly a scratch pad for topics that weren't currently fleshed out, or that we ran across in API reviews. I have not reviewed this recently, so it's possible some of the stuff got addressed in the meantime. It's still relevant-ish, but it's probably not actionable or specific enough for someone to assign and resolve. Yes.
A
Yeah, so let's take a look at our annual reports, and if folks want to drop off, feel free to. But let me throw the URL in the chat, and I'm gonna stop my screen share because I can't see your faces.
A
Okay, so let's start from the beginning, right: what did we do in the last year that we were proud of?
H
For enhancements, I'm working on a KEP survey, just to get some community feedback, so that we can kind of guide some of the work that we want to do this year and be in line with what people need. So that's something that is being worked on now.
E
One thing that comes to mind is working with a lot of our dependencies — not all, but a lot of them — to cut down transitive stuff. So something just landed today that I'm super excited about: we got an update to the cobra library that dropped the viper library as a dependency, which means that we can drop, I think, 12 percent of Kubernetes' transitive dependencies with no code changes. So that's not represented in a KEP, because it's a change upstream, but we're working on it and it's helping.
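The kind of win being described can be sanity-checked from `go mod graph` output; a small sketch is below (the inlined two-edge sample is made up so the snippet runs without a Go toolchain — in a real checkout you would pipe `go mod graph` itself).

```shell
# Count the unique modules appearing in a `go mod graph` edge list.
count_modules() {
  awk '{print $2}' | cut -d@ -f1 | sort -u | wc -l
}
# Hypothetical two-edge graph: cobra pulling in viper transitively.
sample='k8s.io/kubernetes github.com/spf13/cobra@v1.3.0
github.com/spf13/cobra@v1.3.0 github.com/spf13/viper@v1.10.1'
printf '%s\n' "$sample" | count_modules   # unique dependency modules
printf '%s\n' "$sample" | grep -c viper   # edges mentioning viper
```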
A
Okay, let's go to the next one. Again, you filled this one in, right? You found some of the old KEPs: the warning API from Jordan; the clarification of status — I think, Clayton, you were involved in that; then the release cadence stuff that we ended up doing, four to three. I think there was a survey out from the SIG Release folks on how we are doing with it now — do people want to go back to four, or something like that.
E
The things I had most top of mind were actually things that are going into 1.24, so I got confused — I was like, wait, we're doing more than that. So, like, making beta APIs non-breaking by not having them on by default, and lengthening, or strengthening, the GA deprecation policy.
A
Didn't we... how — did I see a job where we switched off alpha or something like that, Jordan? That was last year? You know, making sure that we can run conformance tests without alpha. That was 1.19. That was all 1.19, okay, yeah.
F
I was just in the background — I hope my camera's on in the background. I always think a lot of times this group is looked to for providing broad conventional guidance, and so I always think of the API conventions document as the clear system of record that a lot of people use. So I was just curious: hey, how many changes did we make to that in 2021? The rough count on commits was 23, but I didn't go deeper than that.
E
I actually pulled that out in the sub-project stuff below. There were really only two key updates: one was around object references and one was around spec and status. So the object references one had a lot of commits stacked, and there were a couple — like two or three — PRs. What about—
E
I think 2020 — I think 1.19 was when all the core ones hit GA.
B
v1beta1 of CRD is still rippling through the ecosystem, so I do feel that — I would say a significant, a subtle negative perceptional thing — is that the Kube ecosystem went to v1 CRD, which was the last of the significant ones, right? Or was ingress after?
B
Maybe that's something I can draft up — like a quick: the ecosystem moved to v1 APIs in 2021, and we managed it reasonably, and we learned some things that will improve future changes, but we are now on a stable foundation of APIs in a broad sense, across the whole ecosystem, across the deployed fleets. Leading-edge deployments have passed through that threshold, and 2022 will test how well we bring the long tail over.
A
I'm trying to write a thing, just thinking of what else we need to do, right? You said something, right: we haven't had to update the architecture very much — I don't know if Derek said that, or somewhere like that. But one thing that I see more and more right now is: how do you have a bunch of Kubernetes clusters act as one, right? You know, whether you want to call it multi-cluster, or multi-cluster across multiple cloud providers, or things like that.
A
A bunch of things — I know Red Hat is doing a bunch of things — but we as a community haven't had that many KEPs or anything. I remember the unique-ID-for-cluster kind of KEPs are there, but other than that we didn't really do anything.
B
What are the things that the Kubernetes ecosystem can do to assist in bringing that innovation to everyone equitably — or to bring it in a way that amplifies, multiplies the ecosystem, right? I think that's the role of the community: how do we make everybody as successful as possible, or allow people to be successful and support them in that mission?
A
I kind of liked how the Working Group Batch stuff went. You know, a lot of people were doing some scheduling stuff, and they went away and did their own projects — Volcano or whatever, right — and then folks said: okay, we did a bunch of things, but some of those things can be common, so let's push that back into Kubernetes. And now they've started working on it both at the CNCF level and as a working group in Kubernetes.
A
So I like that pattern; we probably need to do something similar for multi-cluster work, I think, also going forward.
A
But there is no section here where we can write this, so let me just scroll back down. Okay, so let's go back to: where do we need help the most? So, do we need help? We have four or five sub-projects, right — do you all feel that any of them are not doing enough, or that they need more people?
E
I've got a group of five folks — I think Elena and Abdullah and Aldo and Michelle — that have been picking stuff up, so they're doing first passes on things. So that's kind of a nice cross-section across SIGs.
E
That's going okay; throughput is pretty solid. I'm not sure about the other API reviewers — I'll let Clayton weigh in. Tim seems to be pretty solid, so yeah. I'm gonna turn off my camera before I start eating, but I appreciate the transparency and openness. Yeah, Tim seems to be pretty quick on turnarounds; I'm not sure if he's pulling in other folks or not. I'm trying pretty hard to involve other people when I'm doing reviews. But, Jordan—
A
You know, the feature freeze — it's always, like, two weeks before feature freeze you have to do a whole bunch of things, right? Everybody comes to you for reviews, especially in the last week before the feature freeze.
E
It's been better the last couple of releases, actually — specifically for API stuff. I mean, it's always crazy before code freeze, but the expectation that people are going to open a big API change, like, three days before — I'm just not seeing that; people have those open further in advance. Yeah.
B
And there is — right, what I see is it's significantly better. Peak bad was like 1.13 or 1.14, where it would be the day before the rush. We're definitely down in terms of volume, and then I maybe see a few that kind of get lost in the shuffle and bubble up, but they're not bubbling up on the last day, and people don't... the mindset has changed a little bit, which is—
A
So, how about conformance, Rian — are we doing okay with the speed at which you are getting reviews?
I
Honestly, no. We have one sitting now for, I think, three weeks, if not four, where we're just looking for a review. So there is not enough bandwidth there. It's a SIG Apps one — are they aware of it? We discussed it with them; they know about it. But they're totally snowed under; they don't have capacity. And I have another one that I'm going to put in today, also requiring Apps, because we're really trying to... the last two releases we've been hacking at this stuff to get it cleared out. So, Job APIs.
A
A question here to the leads: Clayton, I know Maciej got tagged with SIG Apps work, right, in addition to... and Jordan, is there anybody on the Google side that is interested in SIG Apps? There are a bunch of people listed, but they are not doing reviews.
I
Okay — Maciej actually took that on, and I've been to SIG Apps this week, and he said he'll take it on, but I know I'm going to throw him another one this week. So yeah, for him there is some—
A
—chatter on the SIG Apps Slack channel as well, saying that people are trying to get into the SIG, but they are not, you know, able to talk to anybody who is still doing SIG Apps stuff.
I
So I think for the annual report you can say — and you can put it mildly — it is a little hard, and I think part of the reason is all the nice stuff is gone. So it's all the things that require a lot of discussion and thinking. A good example: I watch the cookie jar, and the kids take all the nice ones and leave the ones that are slightly burnt in the oven for last. So we've got the burnt cookies for last.
E
Number of dependencies — those are the types of things I'm looking at, but I'm not tracking those in a disciplined way. It's more like, release over release, right, we care about it.
A
"We don't have special training." Yes, no, maybe?
E
I mean, for the special training, I think PRR and API reviews both have a dedicated doc for that. Okay.
A
That can help folks get started. What we need is for SIG leads to send people our way, right.
A
Yeah, okay: "there's a group of contributors from multiple companies." Yes.
I
And number three, for the contributing markdown: I started just the first lines for it, but I'll look at some other SIGs and wrestle up something. It's kind of a modified README, actually, looking at what everybody else did. Yeah, I'll put your name there.
A
Yeah, okay: "are there ways end users / companies can contribute if they are not full-time support? What would they work on?" Well, we need help across all the sub-projects.
I
The numbers that are on there I took off Slack, and automating that... so I kind of made that up. And then, if you look, there are some comments showing where I got the people that I've got there — I said there are 20 reviewers and approvers, and there are some comments on where I got that. I took it from the OWNERS files.
A
Thank you. Thank you, Rian. So, sub-projects: I think, Jordan, you filled this one up — architecture and API reviews. I think for conformance, Rian, you filled this up. Did you... you can add something for enhancements here.
A
Then production readiness — John was going to take a look tomorrow. Code organization — I'll add some stuff here. So, in terms of working groups, we haven't really talked to WG API Expression in a while.
E
There are working groups that we're supposedly participating in, or sponsoring, or whatever. So what I would probably do is start a thread, copying SIG Arch and WG API Expression, and say: hey, we were doing our annual review — what's up with you all, and is there anything we should know about, or anything you want to fold back into the API stuff? What's going on? And so they have the same thing on their end when they're doing their annual report, like, hey.
A
Yeah, and we need to get back to the way we used to do it, where each sub-project would give an update every so often, so we have, you know, something on the agenda, right. And we would also have the working groups come and talk to us as well.
A
Okay, I think time's up. Thanks a lot, everyone, and, you know, see you next time. Bye.