A
Good morning, good afternoon, good evening. Today is Wednesday, September 6th, and we are meeting on behalf of the Cluster API project for our weekly office hours. Cluster API is a subproject of SIG Cluster Lifecycle, and we're part of the CNCF. Please be kind and respectful to one another, and raise your hand if you would like to speak; we'd love to hear from everyone. Without further ado, let's pause briefly and allow any newcomers to say hi and introduce themselves: who you are, who you work for, anything you'd like to say.
A
All right, so let's get to today's agenda. Well, first thing, open proposals: do we have any open proposals?
B
Yeah, so the first item here follows on from our discussion over the last couple of weeks. I've created a poll that I'm going to leave open for a week, and then I'll close it off before next Wednesday, just to figure out when we could have a meeting for people who are interested in Karpenter. I know, Jack, this came up while you were away, but the point of the meeting would be to get together with people who are interested in Karpenter and Cluster API and figure out what next steps we might want to take: do we have enough people to form a feature group?
B
Is this something where we might want to write a collaborative Cluster API provider for Karpenter, or do we want to figure out whether Karpenter maybe fits in with some of the managed-service work that we've been doing? So really, this will just be a one-hour meeting for us to get together and talk about those initial things.
B
So, like I said (I should probably put a note there in the meeting notes), I'll leave the poll open until next Tuesday, and then next Wednesday I'll announce at the meeting what we figured out.
A
What worked well for the managed Kubernetes feature group previously was meeting right before this meeting. I think that allowed easier calendar maintenance for folks, because Wednesday CAPI is already on people's minds, and it also enabled discussion during this meeting with the larger CAPI group while it was fresh: you're just coming off of the more focused discussions, and that made the general summary process a little easier, in my experience.
A
The other thing is that going a little earlier than this meeting makes including the European folks a little easier, because they've already willingly sacrificed their evenings for this meeting, so you're not asking them to sacrifice another evening.
B
Yeah, totally. I guess we'll just have to see who shows up and who wants to talk about it; that may end up being a great way to hold it. I figured a Doodle poll might be easiest, first just to see who's interested and what kind of group we have there, and then figure out a time after that.
B
Okay, if there are no other comments or questions about that one, I'll go on to the next topic I had up here. Okay, cool. So the next topic: Cameron McAvoy put up a great patch a while ago to the cluster autoscaler, to deal with the situation we have in clusters where the bearer tokens get refreshed on a more frequent basis, and this PR takes care of refreshing the tokens inside the autoscaler.
B
It
looks
pretty
good
to
me,
but
I
just
wanted
to
bring
it
here
to
see.
If
perhaps
we
could
get
someone
who
knows
the
client
go
like
authentication
mechanisms
a
little
bit
better
than
me
to
take
a
look
if,
if
we
don't
get
any
reviews
by
like
end
of
the
week,
I'll
probably
just
end
up
merging
this,
you
know
Cameron's
been
working
on
it
for
a
long
time
and
I
I
guess
they've
been
running
it
in
production
at
indeed,
which
is
the
company
he
works
for
so
I.
B
I don't know; it looks pretty good to me, but we went back and forth on these mechanisms in the past, with the bearer tokens and whatnot, and I just want to make sure we don't mess something up here. So yeah, that's about it for me.
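For context, the mechanism under discussion is re-reading a rotated bearer token instead of caching it forever. The actual patch is Go code in the cluster autoscaler against client-go; the Python sketch below only illustrates the general pattern of a TTL-cached token file, and the `CachedTokenSource` class and every name in it are invented for illustration, not taken from the patch:

```python
import os
import tempfile
import time


class CachedTokenSource:
    """Re-reads a bearer token file once a cache TTL elapses,
    so tokens rotated on disk (e.g. projected ServiceAccount
    tokens) are eventually picked up instead of cached forever."""

    def __init__(self, path, ttl_seconds):
        self.path = path
        self.ttl = ttl_seconds
        self._token = None
        self._expires = 0.0

    def token(self):
        now = time.monotonic()
        if self._token is not None and now < self._expires:
            return self._token  # cached token is still fresh
        with open(self.path) as f:  # re-read the (possibly rotated) file
            self._token = f.read().strip()
        self._expires = now + self.ttl
        return self._token


# Simulate the kubelet rotating the token file under the client.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "token")
    with open(path, "w") as f:
        f.write("token-v1")
    src = CachedTokenSource(path, ttl_seconds=0.05)
    first = src.token()
    with open(path, "w") as f:
        f.write("token-v2")  # rotation happens on disk
    time.sleep(0.1)          # let the cache expire
    second = src.token()
    print(first, second)     # token-v1 token-v2
```

In the real stack, client-go handles this caching internally; the sketch just shows why a bounded cache lifetime makes rotated tokens take effect.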
A
Yeah, that's a loaded statement, unfortunately, with the various autoscaling providers, because we don't have great test coverage. Although, you know, I do have some test scaffolding in CAPZ, so I can wrangle some tests in that branch and see if I can break it. Cool, awesome, thank you. A lot of providers out there.
A
Okay, cool. Any further commentary on the previous two agenda items for Mike?
E
Time check. This is more of a question to the maintainers, but the gist is that a couple of weeks ago we migrated the release-1.3 jobs to the EKS community cluster, as part of utilizing the AWS credits. We then ran into some GitHub rate-limiting issues, but that conversation led to a change on the EKS community cluster side regarding the test pods that get spun up.
E
They now have public IP addresses, instead of all the traffic being routed via a NAT gateway, and that did help the release-1.3 jobs' performance on the cluster side. More or less, they have been a little green, which is better than being completely red or being throttled because of the GitHub rate limit. So we were wondering if this would be a good time to migrate the release-1.4 jobs to the EKS community cluster as well, because September 26th is our upcoming release.
E
It'd be better to have a green signal for the upcoming release, at least for those two weeks. So we could shift over now, and if we notice any problems because of the underlying infra, we can always revert back. This was more of a question to the maintainers out there.
A
Thank you, cool, thanks. You raised your hand while I was speaking, so I was wondering whether that was for the previous topic or this topic.
F
I just read it ahead of time, so yeah, I knew this was coming. I think there's no rush: we don't necessarily have to move the release-1.4 tests over to that cluster before the release. I think it's also fine.
F
It's also fine if you do it afterwards. What I would prefer, to be honest, is that we figure out why those pods are getting deleted and fix it, because essentially the problem is the following: we're getting this flakiness on release-1.4 as well, and what to do depends on what the actual problem is.
F
It could be that at some point it just occurs way more frequently. My guess, to be honest, would be that we're getting out-of-memory kills. We now have resource requests and limits on the test infra, and my prior experience with Prow is that, essentially, at some point, if you have the wrong requests and limits, your Prow job pods just get OOM-killed, and then you end up with something like this. I'm not 100% sure, but there's also a dashboard for this.
F
So
we
can
take
a
look
at
the
corresponding
dashboard
and
just
see
if
basically,
our
memory,
the
actual
usage
of
memory
is
close
to
the
limits
and
then
just
basically
bump
the
limits
by
a
bit.
I
think
we
have
a
lot
of
spare
room
and
then
see
if
it
gets
more
stable
and
basically,
if
that's
fine
and
and
we're
getting
rid
of
those
problems,
we
can
move
on
and
I
mean
should
be
quick.
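The kind of change being described would live in the test-infra job configuration. A hypothetical sketch of what bumping a job's memory limit looks like; the job name, image, and numbers here are invented for illustration, not the project's actual configuration:

```yaml
# Hypothetical Prow presubmit entry; names and values are illustrative only.
presubmits:
  - name: pull-cluster-api-e2e
    spec:
      containers:
        - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest
          resources:
            requests:
              memory: "4Gi"   # what the scheduler reserves for the pod
              cpu: "2"
            limits:
              memory: "6Gi"   # raise this if actual usage sits near the limit
```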
F
It's just: bump the limits and wait, I don't know, a few days, and if that doesn't fix it, it would be good to trigger some sort of investigation to figure out what's going on there. I don't know how much we can do ourselves and how much falls to the folks from test-infra. Okay, if it's in our infra, I can take a look there.
F
I just want to avoid that we import the flakiness onto other branches, and then we have some other change that we make and it just fails, like, 50 or 80% of the time, because you just don't know where it's coming from. Sorry, I probably talk too much; that's my two cents.
A
Yep, that definitely makes sense, and thanks, Christian, for posting a link to the dashboard to help us debug. Go ahead; you had a question?
G
I want to add: I know there is some artifact which also contains the events with regard to that Prow job pod. Maybe there's some information there; I didn't take a look yet, but let's see.
E
I appreciate the headway; at least I have a pointer to go look into. Thank you.
H
Yeah, a quick update: last week we released our v1.5.0 release, and it picks up CAPI v1.5.1 and Kubernetes 1.28.1. I just wanted to share that with the community; we didn't face any issues with CAPI v1.5.1. Thank you.
C
I had one question: I was just trying to find whether there is any reference documentation around using secure secret-rotation options with CAPI. Obviously it's just using Kubernetes Secrets, and there are lots of general options, but I wonder if there's anything specific for people using them with CAPI.
A
Did you take a look through the CAPI book, David T.?
F
Sorry, I just missed part of the question because I was writing a comment; was that already answering it?
F
You can rotate control plane machines before their certificates run out, because they are created by kubeadm, and kubeadm generates them with an expiry of a year. So usually, if you don't upgrade within a year, you run into problems. Apart from that, I think we only have the one piece of documentation that basically says: hey, if you want to generate the CAs yourself, you can deploy those secrets under those names. That's definitely documented, but I'm pretty sure that at this time we don't really support rotation of the CA secrets, for example.
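The documented mechanism being referred to is, as far as I recall, the CAPI book's page on custom certificates: the controllers look up cluster certificates in Secrets that follow a `<cluster-name>-<purpose>` naming convention. A hypothetical sketch for a cluster named `my-cluster` (the name is invented, and the key layout should be verified against the current CAPI book):

```yaml
# Hypothetical pre-created cluster CA for a cluster named "my-cluster".
# CAPI finds it via the "<cluster-name>-ca" naming convention.
apiVersion: v1
kind: Secret
metadata:
  name: my-cluster-ca              # <cluster-name>-ca
  namespace: default
  labels:
    cluster.x-k8s.io/cluster-name: my-cluster
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded CA certificate>
  tls.key: <base64-encoded CA private key>
```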
C
Okay, yeah, I did see an issue on that, specifically on the secrets, or the certificates. I'm just trying to get an idea of whether that's something that is maybe better suited for the operator to tackle, or whether it's a core Cluster API thing, and then also just general enterprise best practice if you're maintaining a long-running cluster.
F
I think it really depends on which secrets exactly. Basically, we have the CA secrets that we generate on cluster creation, or someone can create them beforehand, and then when we create control plane or worker machines, kubeadm generates certificates based on those CAs. Basically the story is that, for control plane machines, those certificates run out after a year, which is addressed by an opt-in feature that recreates the machines before the year is up, if you configure it; and on worker machines, per default, we are generating certificates as well.
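If I recall correctly, the opt-in recreate-before-expiry feature mentioned here is the KubeadmControlPlane `rolloutBefore.certificatesExpiryDays` field in recent CAPI releases. A hypothetical fragment (names and values are invented, and other required fields such as `kubeadmConfigSpec` are omitted):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane    # hypothetical name
spec:
  replicas: 3
  version: v1.28.1
  rolloutBefore:
    certificatesExpiryDays: 30      # roll machines ~30 days before certs expire
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: DockerMachineTemplate   # illustrative infra provider
      name: my-cluster-control-plane
```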
F
So,
for
example,
the
I
mean
qubiting
is
generating
it
the
serving
certificate
of
the
cubelet,
for
example,
but
we
don't
enable
CA.
So
we
don't
enable
on
the
cube
API
server
side
that
the
serving
certificate
of
the
keyword
is
actually
verified.
That
means
the
worker.
Sorry,
the
serving
certificate
of
cubelet
also
expires
after
a
year,
but
it
doesn't
matter
we're
talking
via
https,
but
we
don't
do
actual
any
sort
of
certificate
validation.
F
So it doesn't matter that it expires. The good solution for that one is that we get automated certificate rotation working, which is a feature of Kubernetes, but it basically requires, I think it was called, machine attestation or something. So there's a proposal lying around somewhere in core Cluster API that never got implemented, to implement some sort of, I don't remember what the components are called, but you need some sort of controller which attests that that kubelet, that node, is actually a node.
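The Kubernetes feature alluded to here is kubelet serving-certificate bootstrapping and rotation. A minimal sketch of the kubeadm-level knob (this is plain KubeletConfiguration, not CAPI-specific, and something still has to approve the resulting CSRs, which is exactly the attestation gap described above):

```yaml
# KubeletConfiguration fragment consumed by kubeadm. With serverTLSBootstrap
# enabled, the kubelet requests its serving certificate via a CSR and rotates
# it before expiry, instead of using a self-signed one-year certificate.
# The serving CSRs are not auto-approved; a controller has to approve them.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
rotateCertificates: true   # also rotate the kubelet client certificate
```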