From YouTube: SIG Cluster Lifecycle 2021-01-12
A
We should have the SIG name, which is always "SIG Cluster Lifecycle", then the name of the meeting, for instance Kubernetes office hours, and after that the year, month, and day. I'm not going to show the playlist right now, but it's all over the place; pretty much everybody is doing their own thing, since we don't have that much presence. Today, if you participate in some of our sub projects, you can at least notify your sub project in the appropriate meeting and say hey.
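[Editor's note: the naming convention described above can be sketched as a title template. Only the field order (SIG name, then meeting name, then year-month-day) comes from the discussion; the separator and the exact date formatting below are assumptions.]

```shell
# Build a video title following the proposed convention:
#   <SIG name> - <meeting name> - <YYYYMMDD>
# The " - " separator is illustrative, not mandated in the meeting.
sig="SIG Cluster Lifecycle"
meeting="Office Hours"
date="20210112"   # year, month, day
title="${sig} - ${meeting} - ${date}"
echo "${title}"
```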
A
We should start doing that in the meeting. I'm going to send the email to the mailing list, and as an uploader I'm going to start doing this myself. Jason?
B
Yeah, I just wanted to mention: should we reach out to the folks that manage those guidelines to see about getting them adjusted, specifically for things like sub projects and other kinds of meetings, to help differentiate them from just the playlist group itself?
A
Well, I could send that email, that's a good point. I can send the email first to ContribEx and see what they think, whether the guide is up to date, because the sub projects' names are unique in Kubernetes.
A
Okay, so I guess we are deciding that we should ask first before taking any action. Does anybody else have anything to add?
A
Okay, the next item is KubeCon EU 2021. You may have seen the email already; it feels like one KubeCon keeps coming after the other. Basically, the submissions for the maintainer track are due by February the 7th. Again, the procedure is similar: we have to fill in a form. For history, in the KubeCons from last year we had the SIG intro and a couple of project highlights in separate sessions; last time we highlighted Cluster API and Image Builder. I brought this up because, I think, Tim wanted to discuss this topic.
C
Do we know if anything happened about creating a different format for these sorts of tracks, right, a more bi-directional format, I guess?
A
Sorry, you mean whether the KubeCon format is going to change?
C
More that I think we've talked in the past about how the talks at the virtual KubeCons are very much broadcast, right? You don't get a lot of feedback from the audience, the talks are pre-recorded, and the question medium is not great for high bandwidth; it's hard for the audience to participate. And certainly for the sub projects I'm most involved in, I think most of the value comes from interaction with the audience, sort of a bidirectional thing, and I think we've talked about that.
C
Is there going to be more of an open forum type platform, in addition to the talks? To me, the maintainer track sessions, as we've been doing them virtually, don't have as much value as when we were in person and were able to have more of a discussion.
A
I'm not aware of any pending changes. I also agree that the maintainer track is not very efficient this way. I guess we should just follow what's going to happen.
B
Yeah, I think this goes back a little bit to what Justin mentioned as well. In the past they gave us a lot of leeway to have a lot of maintainer track sessions for the various sub projects that this group has, and they've recently tightened that up a bit and brought in more restrictions. And it's created this kind of weird tension about which projects we highlight and which projects we don't highlight during these sessions, and that led to some vague discussions.
B
But it would be good to at least try to start those conversations up again, whether it's with the program committee or the CNCF or whatever, about how we do a better job of doing this, both equitably and also meeting what the attendees and the users expect to have presented to them.
B
It's more of these kind of cross-cutting SIGs, like SIG Cluster Lifecycle or SIG Release or something like that, that have a much broader set of sub projects that fall underneath that umbrella.
C
Actually, I mean, we have the intro sessions uploaded, right? They're there for anyone; anybody who wants an intro to Cluster Lifecycle can go and watch those videos. I don't think a ton has changed in terms of our mission or the projects that we have. Maybe what we could do, therefore, is say: look, if you want an intro, go watch a video on YouTube from the last six months, and then we can devote a couple of minutes to each project to just describe what they did that is new.
A
Yeah, the SIG intro itself is not changing; the core of the presentation is the same. Last time we just changed the sub projects we are highlighting.
A
I honestly have no idea what some of the sub projects are doing. For instance, we have bootkube, so...
C
I mean, I'm happy to collate, to chase people. I'm not going to chase them too much; in other words, if they don't send an update, they won't have a slot, but I'm happy to do that. And, you know, what you and I did, Lubomir, where we uploaded our recordings, I think worked well. Otherwise I can just read out what someone wrote if they don't want to make a recording.
A
Yeah, but you and me have become very efficient in audio and video editing, so we didn't even have to speak that much in the second KubeCon last year, I think.
A
So maybe this should be a mailing list discussion and we should just continue there, but we can make the decision on whether we should change the format. Highlighting all the sub projects is a lot of work; talking about all the sub projects, I mean, not just highlighting some.
C
And the other problem is that in the last KubeCon, the virtual KubeCon, we had one in-depth question about a topic I didn't know anything about, and it's difficult, because then we're effectively opening ourselves up to being experts on all these things. I guess we just have to try to ask the people attending in the Slack chat and see what we can cover.
F
If I may chime in on this discussion: I'm fine with trying to change the format a little bit and trying to collect feedback from the projects; maybe it will be interesting.
F
My background question is that when I read this communication, it seemed that we can basically send only one session for our SIG. Is that right, or are we trying to push and have more sessions, given the number of projects?
A
It's vague, as Jason explained earlier. We went back and forth with the CNCF deciding how many sub projects we can send. Ultimately, I think the wording nowadays, like it says here, was to send one for each project. But do we want to communicate all that, or how should we proceed? I can just send the email to the mailing list and allow our sub projects to submit transparently, without the SIG needing to know.
F
But to be honest, they made this vague. If you click through to the submission form, there is a note that is pretty clear that you have only one. If you scroll down a little bit, okay, before the red lines.
C
The interpretation in the past has been, I think, what Fabrizio is saying: there is one per SIG, so Cluster Lifecycle gets one. I think the wording was the same last time as well.
A
I think we should just interpret the wording here as: all the sub projects are able to submit if they wish. I am going to send an email to the mailing list and pretty much delegate to the sub projects to submit if they want to, and, you know, the SIG chairs and leads should not care about who is submitting; eventually we will see who has submitted.
A
I mean, this removes the pressure from the SIG leads to track who is submitting or not, and we can care about the intro session.
A
The session I'm going to propose goes to somebody else. I don't know about you, Justin, but I think I will ask if somebody else wants to present, and of course I can have a Zoom call with them to, you know, explain everything.
A
Yes, I think we have a to-do here. Does anybody else have comments about this?
F
I guess so. I saw your email on the mailing list saying that, basically, the scalability tests for Kubernetes are being migrated to kubeadm. I remember our own discussion that I took part in, but today I didn't manage to go through the thread and get back to the decision on why we decided to propose kubeadm.
F
But my consideration is: should we possibly try to move them to Cluster API instead of kubeadm, given especially the work that was done recently on the kubetest provider and the kubeadm provider for Cluster API? I don't have the full context, because I didn't have time, but I think we should consider this carefully, because this is a big opportunity for Cluster API to be used by the scalability tests.
A
...slowly start phasing out kube-up. And when we talk about kube-up, we should note that kube-up is not just a deployer; it also provisions infrastructure. So kube-up by itself is not a replacement for kubeadm; it's already more like a replacement for Cluster API.
A
It
does
all
the
things
the
the
problem
with
the
gcp
provider.
So,
first
of
all,
six
scalability
once
you
want
to
use
gce
for
scalability
testing,
which
means
that
we
have
to
use
the
gcp
provider,
which
is
unmaintained
mostly
right,
it's
a
great
opportunity
for
constant
api,
but
we
have
to
convince
safe,
scalability
somebody
else
to
start
contributing
to
the
gcp
provider.
A
It's just a matter of more discussion and people investing time into this, and of those people deciding whether they want to replace it, sorry, to consider it.
C
I mean, yeah, I was going to say: I think it's often hard for Googlers to justify, you know, what they work on if we're not directly using it, that type of thing. And I think I'd rather that we had the Googlers work on cluster-api-provider-gcp, which is generally valuable, than on something like some other script, right? We're basically creating another deployer; either way they're going to have to maintain something.
C
Why don't they maintain the thing that people want to use? I've actually started gradually sending some PRs to cluster-api-provider-gcp from some of my stuff, and thank you for the reviews. So people did that, and yeah, I think it might not be supported, but it's in reasonable shape, and it's close enough that we can probably get it there. I'd be in favor of effectively saying: look, we're not going to build another deployer or another kube-up.
C
So let's just use cluster-api-provider-gcp, and if you want to test on GCE, you've got to support it.
B
Yeah, so I think one of the challenges is that I don't think anybody has actually done a decent gap analysis between what's possible to do with kube-up right now, and is required by the tests, and what we can do in Cluster API today, specifically around bootstrapping things and setting up the cluster for particular configurations, to work around some of the particular types of end-to-end testing that we have in place, especially when we start looking at more niche configurations like some of the node tests around kubelet runtime tests and things like that.
G
Hi, let me chime in. So we don't have to ensure that every CI job can be migrated right away, right? That's number one. So I would focus on what we have in presubmit, and what we have, say, in release-blocking and scalability. Those are where I would basically look first for guidance.
G
That's why we started with cluster-api-provider-gcp for all the Cluster API jobs. We got that done first, right, and now they are in decent shape for figuring out what we need to do to adapt them to k/k.
G
So let's focus on the presubmit jobs and see how far we get, and in parallel approach the scalability folks. If we get these two use cases going, then sooner or later we'll get to the rest of the things.
F
Yeah, I agree. At least in my mind, I had this kind of roadmap for Cluster API being used for testing upstream Kubernetes.
F
So I think a possible easy way is to use Cluster API for testing cloud providers, because there is no clear coverage there; then there is also the autoscaler, which is another opportunity, and then basically try to widen the other use cases, starting with conformance and moving on. The scalability opportunity is interesting, and I raised the point, but I agree we should start scoped and then grow.
B
Yeah, so the other concern that I would have is: should we get feedback from the previous couple of release teams for a feel of how they felt about the signal that we're providing out of the current conformance tests, and whether there are issues they potentially have with those being moved to release-blocking as opposed to release-informing? That would help with making the case for moving additional jobs over there, and for the work and support required.
B
You
know,
there's
there's
some
level
of
effort
that
we
can
do
as
a
say
to
help
move
some
of
these
tests,
but
at
the
end
of
the
day,
we're
not
the
ones
relying
them
relying
on
them
for
the
work
that
we're
doing
and
we're
not
the
ones
necessarily
supporting
the
underlying
fixes
for
those
tests.
So
we
we
also
do
need
to
make
a
case
to
the
folks
that
are,
you
know,
on
the
supporting
end
of
this.
G
Yeah, typically what we've done is just submit a PR to switch from one dashboard to the other dashboard and argue on that. So if you want to do this right away, then we should just file a PR and, you know, get on the SIG Release agenda and advocate for it.
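[Editor's note: in kubernetes/test-infra, the dashboard switch described here is typically a small change to a Prow job's TestGrid annotations. The job name and the "was" comment below are illustrative, not taken from an actual PR.]

```yaml
# Hypothetical Prow job fragment: promoting a job between TestGrid
# dashboards is a one-line annotation change, which is what the PR
# and the SIG Release discussion would argue over.
- name: ci-kubernetes-e2e-kubeadm-gce   # illustrative job name
  annotations:
    testgrid-dashboards: sig-release-master-blocking  # was: sig-release-master-informing
```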
A
Yeah, I've seen how the process works. Basically, SIG Release will look at the history of your job; if it's failing, it's unlikely that it's going to be promoted to release-blocking.
A
And
that
is
one
of
the
reasons
we
haven't
promoted
the
keyboardim
jobs
to
release
blocking,
because
we
had
a
strange
failure
where
docker
is
not
writing
properly.
Some
of
the
settings
on
the
nodes
or
it's
flaky,
and
this
alone
was
a
novel
subject
to
decline.
The
request
to
promote
them
to
blocking
it's
just
a
random
flick
that
we
cannot
explain
in
the
currently
in
the
gcp
job.
A
One
of
the
jobs
is
failing
completely
with
a
crd
problem
that
I
don't
understand,
but
you
can
you
know,
whoever
is
the
cap
g
maintainer
can
push
from
for
that
again.
One
problem
is
that
I
don't
think
we
have
official
statement
of
who
is
the
cap
g
maintainer.
B
Yeah, I think the difficulty there is that, right now, it's a whole bunch of folks that are generally oversubscribed with a lot of other work. Right now it's, I think, myself, Justin, Vince, and recently Carlos has also become a maintainer, and it's just a matter of folks currently being spread too thin. If we had somebody who had more time dedicated to the project, it would be a lot easier, I think.
A
Yeah, for me that's a blocker. Until we have somebody that is dedicated, I don't know, at least 20 percent of their work day to CAPG, I don't think we should push for even the scalability...
A
...tests. And going back to the topic of creating a new provider: anecdotally, the new provider is 200 lines of code and resembles what people actually do to deploy kubeadm on GCE.
A
Okay, we are writing kubeadm beta and kubelet beta configuration inside of this, so it will require version branching eventually, and this is something that we try to remove from Kubernetes everywhere, but it was unavoidable. All the projects, you know, kind, minikube, everybody is doing the version branching because of changes in the API.
A
We are still discussing this, but if you have comments, please chime in. At this point I'm in a position where I'm going to do whatever they want. If they say, okay, we are going to drop this idea, I will happily close the PRs. If somebody says, okay, we shouldn't use this, we should use CAPG instead, then again I will present my argument there that we shouldn't propose something that is not maintained, and I guess that is the summary that I have for...
A
...this. And to be clear again, this is just experimental. I don't think this particular job that will eventually run these scripts will replace the scalability suite.
A
We have a number of different tests there, and this is just experimental at this point.
A
And also, again, whoever has comments, just drop them on this PR. There's a big chance that this PR can be blocked, because there are also comments that we shouldn't use kubeadm for...
A
...scalability testing, due to some complexities around how you configure nodes. But I'm not sure kubeadm is exactly inapplicable, because I think kubeadm will still be able to do some of these things that kube-up is doing. But yeah, again, everything is experimental here at this point.
A
All
right,
let's
move
to
the
subproject
updates
for
cube
idm.
We
have
a
planning
session
for
1
21
that
is
going
to
be
on
wednesday.
The
20th
of
january.
This
is
during
the
office.
Hours
for
brits
was
something
that
I
realized
after
we
decided
on
the
date
is
that
I
think
the
deadline
for
caps
is
the
end
of
january,
which
leaves
whoever
is
going
to
want
to
write
some
caps,
something
like
10
days
to
prepare
the
cap
after
we
complete
the
planning
yeah,
it's
a
bit
of
a
limitation
that
we
imposed.
A
Potentially, yes, let's see. Related to the KEP discussion, we are also discussing the kubeadm operator; me and Fabrizio had some meetings already about this, and we are going to continue the discussions about it in the kubeadm office hours, I guess. And also we are discussing the topic of exposing parts of the kubeadm library, which I think the operator is going to depend on, and also Cluster API too.
A
We
I
I
mean,
for
we
are
also
going
to
deprecate
and
remove
some
alpha
features.
I
think
some
of
the
pr's
are
already
ready
for
that.
We
graduated
some
alpha
commands
to
ga
which
pretty
much
moves
them
in
a
top
level
command.
For
instance,
we
had
cubed
mouthful
certs,
which
is
now
cubed
inserts
and
the
cubed
m
alpha
come
out
will
be
empty
after
121,
but
we
are
still
living
it,
leaving
it
for
the
future.
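[Editor's note: a sketch of the command graduation being described, assuming a 1.20+ kubeadm binary; these invocations need a kubeadm-managed node to actually run against.]

```shell
# Before graduation (deprecated alpha path):
kubeadm alpha certs renew all
# After graduation to a top-level GA command:
kubeadm certs renew all
```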
A
All
right,
justin,
okay,
just
dropped.
I
guess
they're
focusing
on
the
new
release
falling
a
bit
behind,
but
that's
normal,
because
it's
after
the
holidays,
128
is
going
to
be
a
shorter
release
cycle.
H
Yeah, I missed a couple of meetings. So we released minikube back before the freeze in December; it was actually a huge release.
H
We added a bunch of stuff: we upgraded to containerd v2 and Kubernetes 1.20 and a bunch of other things, multi-node is GA now, and so is scheduled stop. So if you want to clean up your mini cluster in the future, if you're, say, in an embedded system, then you can do that. By contrast, this release coming up is going to be a lot smaller. It should be either next week or the last week of January.
A
So
it
takes
something
related
to
container
d.
We
I
already
started
discussing
how
users
can
migrate
to
from
docker
to
container
the
on
the
miracle
mini
cube
sides.
I
guess
you
you
will
not
have
that
much
problem,
but
if
somebody
is
persisting
a
mini,
cube
question:
are
you
looking
at
giving
them
a
guide
how
to
migrate?
Somehow.
H
We're
so
right
now
the
big
issue
is
that
we
default
to
the
docker
runtime,
because
a
lot
of
users
use
the
internal
daemon
for
building
images
and
stuff.
So
as
long
as
we
when
we
default,
when
we
move
over
defaulting
to
container
d,
we
just
need
to
give
them
a
way
to
do
that.
So
we're
going
to
like
probably
write
a
shim
to
to
imitate
building
docker
images
and
container
dating,
and
then
it
should
be
transparent
to
them.
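[Editor's note: for users who want to opt in today rather than wait for the default to change, minikube already exposes the runtime as a start flag. A sketch, assuming a local minikube and kubectl install.]

```shell
# Start (or recreate) a cluster on containerd instead of the Docker runtime.
minikube start --container-runtime=containerd

# Verify which runtime the node is actually using.
kubectl get node minikube \
  -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
```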
H
Oh, the other big thing we're worried about is ARM, because the new Macs are going to be ARM, so we're scrambling to get that to work as well.
A
Yeah, I saw a compiler bug for ARM the other day. It was reported in the kubernetes/kubernetes issue tracker; somebody found a bug on ARM 32-bit, and I eventually reduced it to a bug that also shows up on i386 and, finally, on amd64. So it was pretty much a full-blown compiler bug in Go, but it originated from the ARM community.
A
But
you
know
that's
that's
what
that's
that
was
mostly
raspberry,
pi
users.
It
wasn't
really,
you
know
a
mac
64-bit
arm,
but
yeah
you
will
have
to
adapt
to
the
changes.
Apol
are
doing
so
yeah.
I
Yeah, nothing too new to report on etcdadm. The etcd-manager work that Justin has had in his own repo has been merged, I think for the most part, into etcdadm. So I think that's good progress toward the long-term goal where etcdadm and etcd-manager kind of work together.
A
Right, thanks, happy to see this. Do you have any questions or anything for this group with respect to etcdadm development or anything?
I
No, I mean, you know, it would be great to... I guess I was always looking for more contributors. There are, you know, a handful; we have PRs now and again from people that are, I guess, kind of outside the community, and yeah, I'm always looking for more contributors.
I
Yeah, for something like what Justin was, I think, proposing, I'd be happy to do that. I'm not sure... if there's one sort of big chunk that's supposed to go to one project, then, you know, something like Cluster API is probably, I would say, maybe the better candidate, but yeah.
I
If there's an overview, I'd be happy to do it.
A
Basically, we're going to let all the sub projects submit a session, and if the CNCF feels like we have way too many projects, they can block us. But yeah, I think if you think that it's beneficial, you should check this email from the CNCF and just submit the session, or maybe you can also ask some of your contributors to do it instead of you, if you want to.
I
Okay, yeah, I'll do that. I'll look at the email that we were reviewing earlier.
A
Let me share that in the chat quickly... this is dropping off. Okay, thank you everybody, see you in a couple of weeks, bye.