A
All right, hello everyone. Today is Wednesday, May 26th, and this is the Cluster API office hours. Cluster API is a sub-project of SIG Cluster Lifecycle. As always, please follow the meeting etiquette: use the raised-hand feature of Zoom if you'd like to speak up, add any items that you want to bring up to the agenda, and please follow the CNCF code of conduct: be respectful of everyone else on this call.
A
Let's get started. If you haven't already, please add your name to the attendee list, and we'll start with the PSAs. Sagar, you have the first one, go ahead.
B
Yeah, so last week during the office hours we talked about the ClusterClass proposal, which was still in Google Doc format. Over the last week I saw a few people go through it, and I got some comments.
B
I tried to address all of them, and since I did not see a lot of new changes, I went ahead and created a PR for the ClusterClass proposal. I also tried to copy over a couple of comments which I thought were still useful and not yet addressed in the doc, so I basically copied those over, and I think I'll be copying over more one by one later. I've attached the PR to the agenda.
A
Okay, next: Alberto becoming a reviewer. So yeah, I'll open the PR to add Alberto as a reviewer. I think it's gotten lazy consensus; we're just holding until the meeting to merge it. Alberto has made really good contributions to the project over the last several months and has been very active as a reviewer.
A
So
if
anyone
has
any
objections
or
anything,
please
speak
up,
but
otherwise
I
think
we'll
plan
to
merge
the
pr
by
the
end
of
the
day,
vince
rubizzo
alberto,
anything
you
want
to.
C
Not much to add. For those who don't know me: I'm Alberto. I used to work for CoreOS; now I work for Red Hat. I've been involved with these projects since the very beginning, before we even had CRDs, when we had aggregated API servers. Fortunately, I now have the time to contribute more actively, so I'll be happy to become a reviewer.
A
Awesome, welcome! Really glad to have you on board, and really glad to see us expanding our reviewer circles. I think we need more active reviewers on the project, so this is a good step in the right direction.
D
Yeah, first of all, thank you to Alberto for stepping in. I would also like to remind everyone that we are looking for more reviewers. More and more people are playing with Cluster API, and what we are trying to convey is that, given that the Cluster API code base spans several areas, such as the tests, clusterctl, or CAPD, it is not required for people to know everything; they can focus on a single area and become a reviewer of a specific subtree of the code base.
D
And this will really help us in reducing review time, issue triage, and things like that.
A
Yeah, that's a really good point, thanks Fabrizio. Any other comments or general questions about the becoming-a-reviewer process, or anything like that?
A
Awesome, let's move on. All right, discussion topics: a reminder to add any other topics that you want to discuss to the agenda. It's looking pretty empty right now, but let's get started with Fabrizio about the release.
D
Yeah, thank you, Cecile. So we are at the end of the month, and there are some fixes in the 0.3 branch, mostly regarding KCP remediation: KCP remediation handling an etcd node not properly joining, and KCP remediation extended support for scaling up from zero to three. So these are good things, and also we have a...
D
And
and
yeah,
if
there
are
objection
to
these,
as
far
as
I
know,
there
is
only
one
pr
that
still
have
to
merge
into
our
problem.
Maybe
it
is
measured
because
I
wrote
this
comment.
Yes,
okay,
this
pr
is
merged,
so
I
think
that
we
are
going
ready
to
go.
A
Okay,
one
question
I
have,
I
guess
is
like:
are
we
trying
to
get
the
zero
four
release
out
asap
or
because
I
know
there's
a
few
bug
fixes
that
have
gone
in
recently?
So
should
we
try
to
backport
those
for
that
release,
or
should
we
not
do
that
and
instead
focus
on
getting
alpha
4
out
with
everything.
E
We can release the RC using a beta tag, and then once the controller-runtime release happens, which should also be by the end of the month, we should be good to go ahead and release 0.4.0 and just see what bug fixes we need to do in 0.4.
A
One,
do
you
have
any
idea
on
the
timeline
of
controller
runtime
release.
E
It
should
be
by
the
end
of
week
at
least
ready
at
an
rc
there's
one
big
m
test
change
that
needs
to
get
merged
so
yeah
by
next
week.
We
should
have
the
disabled
release
out.
A
Okay
sounds
good,
any
other
questions
or
comments
from
anyone.
F
Yeah
I
I
heard
about
there
was
some
discussion
last
week
and
I
didn't
see
any
comments
on
the
issue
so
just
been
reading
through
it.
I
apparently
there
was
some
discussion.
If
I
understand
correctly,
there
was
some
discussion
last
week
about
we're
moving
kubal
back
party
completely.
F
It's
definitely
causing
us
issues
in
terms
of
like
supply,
chain
and
multi-arch,
and
I
think
there's
a
question
of
like.
Does
it
matter
that
slash
metrics
is
authenticated
or
not
done,
some
cursive
research,
it
does
seem
like
there
is
a
strong
requirement
to
have
metrics,
endpoints,
secured
and
authenticated
and
in
a
number
of
frameworks,
and
now
it's
it's
come
up
in
the
context
of
cloud
foundry.
F
It's
also,
more
importantly,
come
up
in
the
original
in
the
first
cncf
security
review
of
prometheus
itself,
where
it
was
stated
that,
although
it's
a
low
risk
that
client
and
metrics
endpoints
should
be
authenticated,
you
don't
want
to
leak
metrics
data,
so
it
looks
like
this
might
mean
we
would
want
to
add
move
that
functionality
to
controller
runtime
and
then
do
it
there.
The
subject
access.
E
Runtime
that
be
great
like
especially
if
we
can
have
it
as
part
of
the
like
server
functionality,
that's
inside
controller
runtime.
E
That
said
like,
if
that
just
takes
too
much
time
like
we
could
also
like
just
document
like
a
way
to
do
it,
and
then
folks
can,
because,
like
our
publish
components,
are
just
like
a
baseline
right,
like
you
could
still
add
more
patches
on
top
of
it,
and
one
of
which
could
be
hey.
E
I
was
actually
going
to
suggest,
given
there
are
like
some
concerns
about
keeping
it
to
maybe
also
that
our
back
proxy
from
mobile
203,
it's
a
breaking
change,
so
we
need
to
be
careful
about
it.
That
said,
like
the
image
that
we
were
using
is
still
old
and
we
have
we
updated
zero
eight,
but
going
forward
if
we
still
need
to
keep
publishing
some
maintenance
releases
for
zero
three.
While
we
just
declare
like
out
of
support,
we
might
wanna
consider
removing
it
there
as
well.
E
We
don't
have
to
do
another
release,
that's
happening
this
week.
We
can,
we
might
cut
zero
318
sooner,
but
just
the
top.
I
just
want
to
see
folks
what
they
think.
E
F
So, just to clarify: we would remove kube-rbac-proxy, but also not expose the metrics endpoint on 0.0.0.0 by default, right? We would keep it on 127.0.0.1, but just remove the kube-rbac-proxy. I think that's fine; I think we're not introducing a security risk where there isn't one already, then.
A
Makes sense. Any questions or comments about this topic?
G
Just
question
for
me:
just
I
guess,
based
on
this,
the
the
pr
I
have
open,
I
saw
some
comments
made
in
the
last
week
since
I
updated
it.
I
think
about
just
just
some
some
edits,
but
are
we
feeling
like?
G
We
still
want
that
the
consensus
is
we
still
want
to
have
that
get
that
in,
but
it's
just
just
flag
it
appropriately.
Is
it
breaking
change
and
make
a
few
more
modifications
to
the
open?
Pr?
Is
that
the
stat,
the
the
status
right
now.
G
A
Oh, I think for now we just want to get this PR merged, and then we can discuss backporting it separately. Okay, great. Is that what you were also thinking? That's all.
E
Just
to
clarify
the
one
bit
like
nadir
was
saying
like
we
should
not
expose
on
all
ports,
sorry
all
addresses
or
interfaces.
So
maybe
we
could
change
this
to
just
be
localhost
right
and
then,
if
you
want
to
expose
that
port
in
the
container,
you
could
just
do
that
with
like
exposing
that
board
to
a
service
or
something.
E
I
don't
know
appear,
there
is
not
clear
to
that,
so
it
just
says
like
8080
and
maybe
that's
my
fault
actually,
but
are
we
exposing
on
local
hosts?
I
think
so
right.
G
So
previously
it
was
just
listening
on
localhost
so
that
keyboard
proxy
could
communicate
over
localhost
and
then
proxy,
the
metrics
with
with,
like
other
authorization,
this,
the
the
pr
I
have
open,
now
changes
it
to
all
interfaces
on
the
pod
so
that
it
can
be
accessed
by
you,
know
prometheus
or
something
directly,
but
there's
the
service
itself
doesn't
like
it
doesn't
route
to
that
port.
G
It's
it
only
routes
to
the
primary
traffic
port,
but
if
you
were
to
you,
know,
scrape
the
end
points
and
then
communicate
with
the
pot
on
that
addition
on
the
secondary
port,
because
I
added
a
secondary
port
on
the
on
the
pod
spec.
It
could
be
like
the
metrics
address
to
be
accessed
is
that
is
that
what
we're
saying
we
do
or
do
not
want.
F
I would be tempted to say we don't want that; we want to leave it on localhost, because we would otherwise be introducing an unauthenticated endpoint. We would document that you should change the bind address manually, but in our default shipped infrastructure components leave it closed completely.
A
We should probably get in the habit of doing this either at the end of every meeting or at the start over the next few weeks: just check in and see where we're at on release-blocking stuff, especially as we get closer. We sort of started covering that, but let's just do a quick check now, to make sure we're all on the same page.
A
So
we
said
controller,
one
runtime
is
like
one
blocker
right
now,
but
besides
that,
let's
just
take
a
look.
A
Yeah, so there are five issues that are still open with release-blocking, and there's that PR as well, which I should probably remove.
A
This
okay,
so
let's
just
so
load
bouncer
provider
that
should
not
be
released
blocking
anymore
nope,
so
I'm
gonna
remove
that.
A
Okay,
I'm
going
from
bottom
to
top
by
the
way,
adapt
clusters
gotta
move
to
the
new
multi-tenancy
model,
there's
a
pr
open
fabric.
So
what's
the
status
on
that
one.
D
There
are
two
pr
out
waiting
for
the
first
one
and
the
second
one
that
built
on
top
of
the
first
one
they
are
waiting
for
for
review.
So.
A
Okay,
thank
you.
So
we.
A
I
think
there's
only
one
pr
linked
to
this
issue,
so
I'm
not
sure
where
the
second
one
is,
but
if
you
can
link
it
yeah,
I
will
link
it
okay,
good
and
then
we
can
prioritize
review
on
those
next.
One.
A
D
Yeah, I have a question here. If you look at this issue, it basically has a checklist of things to implement, and the PR is only taking care of the first one.
A
It's additive, yeah. Okay, let's just kick it out then.
A
Cool
all
right,
adapt
cluster
cuddle
to
managers
watching
all
name
spaces
for
each
provider
for
pizza,
you're
assigned
that
one
doesn't
have
a
pr
linked.
D
Right, yeah. Because if we don't have this, users will basically be able to create clusters with multiple credentials, which is not supported anymore by our manifests.
A
Yeah,
okay
and
then
the
less
okay
I
lost
the
filter.
A
Last
one
is
implement
the
cubanium
types
that
one
was
also
in
progress
right
or
I
guess
shank
is
assigned.
I.
A
Cool. Is there anything that anyone is tracking that should be release-blocking that is not in here, besides controller-runtime and the metrics server thing we just talked about?
E
So
there
is
a
pr
two
that
actually
introduce
a
dependency
on
docker
as
a
library
directly
and
that
depends
on
is,
is
being
pulled
into
the
main
goal,
mod,
which
I'm
not
not
really
a
fan
of
that
so
right
now
like
today,
in
the
test
folder,
there
is
the
docker
provider
in
there
and
that
has
a
go
mod,
which
we
don't
release,
which
is
we'll
just
keep
that,
as
is,
and
it's
just
a
way
to
segregate
capti.
E
But
given
the
frame,
there
is
the
framework
in
there
there's
some
other
like
folders
like
packages
I
was
thinking.
Maybe
we
could.
This
should
be
a
decision
for
all
of
us
to
do,
because
it
includes
like
a
lot
of
changes
to
just
release
the
test.
Folder
separately.
E
As
you
know,
kubernetes
releases
test
framework.
What
what
this
means
is
we
will
remove
the
go
mod
from
the
capti,
bring
it
up
a
couple.
Folders
such
as
in
the
test
folder,
the
m-test
helper
should
just
move
away
from
there
like
it
shouldn't
just
it
shouldn't
be
in
there.
E
Otherwise,
it
will
cause
a
cycle
dependency
with
cluster
api
and
it
does
require
changes
to
how
we
tag
during
a
release,
because,
right
now
we
just
tag
zero,
four
zero
and
that's
it,
but
to
kind
of
go
mod,
we
would
need
to
cut
another
tag
extra
every
time,
which
is
test,
slash,
zero,
four
zero,
which
will
tag
the
inner
module.
E
I
don't
know
if
there's
any
alternative
folks,
no
I'd
like
to
to
this.
I
don't
think
so.
This
is
what
gomod
requires,
but
I
haven't
looked
if
there
have
been
any
update
to
go
mod
in
a
while.
D
E
So
for
go
mod
like
when,
if
you
want
to
import
the
past
framework
late
like
right
now,
you
just
import
last
api
and
you
have
the
test
framework
with
it
later.
If
you
want
to
import
the
test
framework,
you
would
have
to
import
the
test,
sub
module
and
just
say:
zero.
Four:
zero,
like
that's.
How
go
mod
will
parse
that.
D
Yeah, but how can this work if we are tagging with something different? Maybe we can take this offline, but what I don't understand is how go mod understands which tags are for the submodule and which tags are for the top-level module, because the tags apply to the entire repository.
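For readers following the tagging discussion: Go resolves versions for a module that lives in a subdirectory by prefixing the tag with that directory, which is the mechanism behind the extra test/v0.4.0 tag mentioned above. A sketch (the module paths mirror the discussion and are assumptions, not necessarily the repository's final layout):

```
go.mod        module sigs.k8s.io/cluster-api        released with tag  v0.4.0
test/go.mod   module sigs.k8s.io/cluster-api/test   released with tag  test/v0.4.0

# a consumer of only the test framework would then write, in its own go.mod:
#   require sigs.k8s.io/cluster-api/test v0.4.0
# go resolves that version from the test/v0.4.0 tag, so the Docker library
# dependency stays out of the root module's (and shipped binaries') graph
```

The directory prefix is how go mod tells the submodule's tags apart from the top-level module's tags, even though both live in the same repository.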
A
In terms of the timeline and the urgency of putting it in 0.4 or not: what's the rush? Why does it make sense to put it in 0.4?
E
If we want to remove the dependency on the Docker CLI and instead have CAPD and the test framework use the Docker library directly, I would rather not have that dependency pulled into all the binaries that we ship as well, and the only way to do that is to have the framework tagged and be a separate module directly.
E
That
said,
like
the
current
approach
to
just
shelling
out
to
docker
is
working
like
it's
not
ideal,
so
we
don't
have
to
do
this
right
this
second,
this
is
just
an
idea.
We
could
do
this
later
as
well,
which
probably
makes
sense,
but
then
that
means
also.
We
cannot
merge
those
pr's
that
are
in
flight.
A
Okay,
so
maybe
it
sounds
like
we,
I
don't
think
there's
an
issue
for
this
one
right
yet
so
maybe
we
need
to
start
with
an
issue
and
then
maybe
link
it
to
those
existing
prs
and
either
do
it
together
with
this
or
just
hold
on
the
other
pr's.
A
A
Yeah, sounds good. Okay, all right. Any other questions on this topic, or any general comments on the release updates? Is this helpful to folks? Do you need more context, less detail, more detail? Zack, or anyone else, have thoughts?
A
Okay, cool, we'll try to keep that up in other meetings as well. Cool. Any other last-minute topics or general questions from anyone about the project?
G
On the kube-rbac-proxy one: sorry, one last follow-up question there, as I'm looking at the PR feedback. I totally understand if we want to set the localhost address as the default address for the metrics endpoint. Can I go ahead and change that in the flag in the Go file to be a safe default, so that we don't have the situation where the YAML files use a safer default than the code itself? That way we don't have to express it in the YAML; it's just the safe default if you're running the application, and if you want to opt out and have a separate metrics port, you need to explicitly say which interface other than localhost you want it on.
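The proposal above, a safe compiled-in default so the YAML does not need to override the flag, can be sketched with the stdlib flag package (the flag name, port, and function are illustrative assumptions, not the real manager's flags):

```go
package main

import (
	"flag"
	"fmt"
)

// parseMetricsAddr mimics (not copies) a manager's flag setup. The point
// under discussion is making the compiled-in default the safe, loopback-only
// address: a deployment that says nothing gets "127.0.0.1:8080", and anyone
// who wants Prometheus to scrape the pod directly must opt in explicitly.
func parseMetricsAddr(args []string) string {
	fs := flag.NewFlagSet("manager", flag.ContinueOnError)
	addr := fs.String("metrics-bind-addr", "127.0.0.1:8080",
		"address the metrics endpoint binds to; set an interface address to expose it")
	fs.Parse(args)
	return *addr
}

func main() {
	// No flags passed: the safe default applies, no YAML override needed.
	fmt.Println(parseMetricsAddr(nil))
	// Explicit opt-out: the operator deliberately exposes the endpoint.
	fmt.Println(parseMetricsAddr([]string{"--metrics-bind-addr", "0.0.0.0:8080"}))
}
```

With this shape, the shipped manifests and the binary agree on the safe default, and exposure is always an explicit choice.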
E
I
see
but
yeah
and.
G
F
Secondly, I think maybe there's an action item that I will take: find the history of that default. Maybe there is some historical context that we're missing about why it was set like that in the first place in controller-runtime, and maybe there was a decision made that it was an acceptable default, and therefore don't make changes to your PR at all. So I'll try and do that today, so that you've got an answer.
A
All
right
all
right
before
we
end-
I
guess
I'm
doing
this
sort
of
backwards,
but
we
usually
have
a
little
welcome
at
the
beginning
for
anyone
who's
new.
So
I
see
a
few
new
names
in
the
attendee
list.
I
Hey
guys,
here's
the
enders,
so
I
came
from
databricks
previously.
I
was
in
aks
in
azure
work
with
ccl
closely.
I
guess,
but
right
now,
I'm
in
databricks.
I
came
here
to
learn
about
the
community,
we're
also
very
interested
in
class
api
and
the
recent
cluster
class
proposal.
I
So
that's
why
I'm
here
so
I'll,
be
here
more.
I
guess
in
the
future.
H
I'm
currently
writing
my
master
thesis
not
yet
specified
the
topic
yet
in
general,
about
infrastructure
as
code
we'll
see
where
I
will
get
there
in
the
future,
and
I
was
invited
by
stefan
buringer
who
I
think
joined
you
or
will
join
you
in
the
future,
and
he
told
me
a
lot
about
cluster
api
and
welcomed
me
to
the
weekly
meetings
here.
A
All
right
welcome
bob
great
to
have
you
here
and
I
guess
this
concludes
our
meeting.
See
you
all
on
slack
and
github.