Description
Meeting notes https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit
A
Welcome everyone. Today is Wednesday, February the 9th, 2022, and this is the Cluster API community meeting. Cluster API is a SIG Cluster Lifecycle subproject, and as such we are following the Kubernetes community guidelines. Generally that means: treat everyone the way you would expect to be treated, and let's keep things positive and fun. So, starting off with our agenda, we'll start with the open proposal readouts. Usually what we've been doing is giving folks a chance to talk here.

A
So if anyone would like to report on any of these, I'll just take a few minutes here and see if anyone raises their hand. Matt, go ahead.
B
Thanks, Mike. Yeah, we finally got around to creating a PR for the proposal for MachinePool Machines, and so that's out there as of this morning. There have been some changes, but it's not radically different from the doc. I haven't linked to it yet from this, but there's also a proof-of-concept implementation for CAPI and for the Docker provider that I just opened.

B
So there's still a little work to be done, but if anybody has any feedback at all, we would love it on the proposal. We want to get this done; it's been out there for almost a year.
A
All right, great, thanks for the update. And yeah, I guess if anybody out there could take a look at the PRs and the open issues, that'd be awesome. Any other updates on the open proposals?

A

C

A
Okay, awesome. Yeah, looking forward to that one. Rohit, why don't you go next?
D
Thank you, Mike. Allow me to introduce myself: I'm Rohit from the Apache CloudStack project, and we have actually added an agenda item to talk about a Cluster API provider for CloudStack. We are new to this SIG, so we just look forward to getting guidance on what the process is for getting the provider accepted in the SIG. We put together a doc to go through this.

D
This is still a work in progress, so we can talk in more detail when we go into our specific agenda item, but we're here to build consensus and some kind of support with the SIG, and just to say hi. Joining me are my colleagues from AWS. I work for ShapeBlue, and this work is a joint collaboration between AWS and ShapeBlue; from the AWS side we have Peter and Vipin on this call.
A
Yeah, awesome. Thank you, Rohit. This proposal looks really exciting, so it's pretty cool. We usually take a break after the readouts to let the new attendees introduce themselves, so I'll just give it one more second to see if there are any other readouts, and then maybe, Rohit, some of your colleagues can jump on and say hi.

A
All right, I'm not seeing any more hands raised, although, Rohit, you still have your hand up; I don't know if you had something else to add. So I guess now we'll go to our welcome for new attendees.

A
So if anyone would like to introduce themselves and say hey, please feel free to unmute and introduce yourself.
E
Great, I'll go. Hey everyone, my name is Vipin Mohan. I'm a principal product manager at AWS working on some of the Kubernetes initiatives. Great to be on the stage; I've been on some SIGs before, but great to be on this one, and great to meet you all.

F
Hey there, I'll go ahead: Peter Matakowski, senior software development manager at AWS, supporting some of the Kubernetes efforts that Vipin mentioned. First time joining this session; nice to meet everyone.
G

H
So I'll continue next. Hi, I'm Johannes. I'm a software engineer at Daimler TSS, where we leverage CAPI for our own Kubernetes-as-a-service offering, and the experience we've had so far has really been great. So, first of all, thanks for all your hard work on CAPI and all the providers. We are really interested in all that's planned for CAPI in the future and are looking forward to contributing to the project as well as we can. You'll probably see me around here more frequently, so hi, everyone.
A
Welcome, Johannes, that's awesome to hear. We love hearing more from users who are actually putting Cluster API to use.

A
All right, perhaps we've rounded up everybody, so let's move on to the discussion topics. Looks like first up is Fabrizio. You've got a bunch of stuff to go over here, so why don't you take it away?
C
Yeah, I'll try to keep it short. So, first of all, let's try to upgrade to v1.1.

C
Second, we have all the releases that are slowly going end of support. The first one going end of support will be v1alpha3, at the end of February.

C
What I'm proposing is that we issue an additional patch release in April, so we go one month after, and after that date we will basically stop issuing regular patch releases; any remaining patch releases will be up to the maintainers' discretion.

C

C
What we will continue to support, until a date that we all have to agree upon, is the conversion between v1alpha3 and v1beta1, but the branch itself is going to be stopped, and people should not expect that the old releases of CAPI, like v1alpha3, will be tested against, or implemented to support, new Kubernetes releases. That's the concept, more or less. The same applies to the v1alpha4 and 0.4 branch.

C
The only difference is that this branch is planned to go end of support in April. Similar goes for the v1.0 branch, which technically is already end of support, given that we have v1.1. So yeah, that's the news. If you keep up with releases, you don't have a problem; otherwise, please let us know. Those are basically the dates that we shared in the community.
A
Okay, great. Cigar, I see you have your hand raised.
C
All right. So these dates are basically a consequence of the API guarantees: one API version gets deprecated after a certain time.

C

J
Go ahead. Yeah, in general, providers can't really make guarantees of support if the foundation that they're based upon isn't actually supportable. So if CAPI is dropping some support, then providers should follow along within roughly the same timelines, I think.
A
Okay, I'm not seeing any more hands, so I guess we'll move on to the next topic. Stefan, you've got a couple here. Why don't you take it away?
K
Can you give me co-host? Just a sec.

K
Mike, can you give me co-host so I can share?

K
Yeah, it should be just right-click and give co-host, or something.
K
Okay, yep, you should see it, I hope. Yep, I can see it, okay, good. So the first topic is the JSON log format. Essentially, we've had an issue for a while, which I created, that we want to support a JSON log format. Currently in Cluster API, and I think in the other providers too, we are using klog, and klog only supports the text format by itself, and that issue is about supporting a JSON log format.
K
Essentially, if you try to parse those logs in some tools like Kibana or Loki or somewhere else, usually you have to figure out a way to parse our regular logs. Let me just open some random test job for a moment.
K
So we already have structured logging in our controllers, so we have key-value pairs and all that fun stuff, but our logs look like this. You get a text format: you have some header, some controller, and then you have this non-ideal key-value format. So if someone wants to use those logs and filter in UIs, they actually have to parse that format, which is kind of hard.
K
The proposal is essentially to just log in JSON format, and then you already have a structure and can just consume it. So, for example, I have a PR for exploration. If you take the PR and enable the JSON log format, your logs will all look like this, and you can just send them to Loki without any additional parsing. Let's drop that for now.
K

K
So when you enable the JSON log format, per default you get this. There is some metadata which you get from Promtail, and we have some additional fields which are automatically detected by Loki. I have no idea how other tools are handling this, but probably similarly. You can filter on the key-value pairs that are already in the structure, so you can filter on those detected fields, and I assume it's probably similar in other tools.
K
So usually you have to create some kind of regex to parse the text format. If you have JSON format, at least in Loki (I'm not sure how other tools are doing it; maybe they're just detecting JSON and parsing automatically, or maybe you have to configure some kind of "hey, please JSON-parse that thing"), it's much simpler.
K

K
I don't want to go into a lot of further detail, but then you can just highlight the messages and drill down to a specific cluster or machine or whatever you want. So, TL;DR: you could already do this today by parsing our text format, but it would be much easier to avoid that parse configuration for everyone by just exposing a JSON log format.
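To make the parsing difference discussed here concrete, the following is a small Python sketch. The text layout and field names are illustrative (this is not klog's real implementation): it renders the same structured entry as a klog-style text line and as JSON, and shows that the text form needs a custom regex while the JSON form round-trips with a plain parser.

```python
import json
import re

# Hypothetical log entry with the kind of key-value pairs a controller emits.
# This illustrates the text-vs-JSON trade-off; it is not klog's actual format.
ENTRY = {"msg": "Reconciling", "controller": "machine",
         "cluster": "test-1", "machine": "test-1-md-0"}

def render_text(entry):
    """Render a klog-style single-line text record: message plus key="value" pairs."""
    pairs = " ".join(f'{k}="{v}"' for k, v in entry.items() if k != "msg")
    return f'I0209 10:00:00.000000 1 machine_controller.go:42] "{entry["msg"]}" {pairs}'

def render_json(entry):
    """Render the same record as one JSON object per line."""
    return json.dumps(entry)

def parse_text(line):
    """Recovering the fields from the text format needs a hand-written regex."""
    fields = dict(re.findall(r'(\w+)="([^"]*)"', line))
    msg = re.search(r'\] "([^"]*)"', line)
    fields["msg"] = msg.group(1) if msg else ""
    return fields

def parse_json(line):
    """The JSON format round-trips with a plain parser, no regex required."""
    return json.loads(line)

assert parse_text(render_text(ENTRY)) == ENTRY
assert parse_json(render_json(ENTRY)) == ENTRY
```

This is the "parse configuration for everyone" cost in miniature: every consumer of the text format must maintain something like `parse_text`, while JSON output makes `json.loads` (or Loki's built-in JSON detection) sufficient.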
K
Okay, coming back to the issue. I just want to, let's say, tease the issue; I think we have to have a larger discussion, but we can discuss after the demo, and probably a lot of folks want to read all that stuff first. So, some considerations around it: as I said, klog doesn't support the JSON log format, but what we can use is something called component-base/logs. component-base/logs is what the core Kubernetes components are using for logging.
K
So what we can do is configure our logging with component-base/logs. What we get is the JSON log format and a few other things, and that's where it starts to get complicated, because we have to figure out if it's fine for us to take a dependency on component-base/logs, or if we say, oh, maybe let's do our own thing.
K

K
The current state, or let's say the state in Kubernetes before 1.23, is that they are essentially leaking klog flags. Every one of their components has all the klog flags, and they have now decided that they don't want to keep all of them, because some of them are not ideal (more details in the KEP), but essentially there are a bunch of flags for file logging.

K
So you can decide: please log to that specific file, and do log rotation, and all that stuff. As you probably all know, Kubernetes actually just recommends logging to standard out and standard error, so they want to get rid of those flags, and that deprecation is already implemented in component-base/logs. So we wouldn't have to do it ourselves; we could just use the library. Then we get a deprecation, and with Kubernetes 1.26 those flags would just vanish. Yeah, so that was a lot of information.
K
I hope it made sense. It would be great if folks have an opinion on this; take a look and post comments.

K
Yeah, in my opinion, we should use component-base/logs, because it has a bunch of advantages and we can align with the upstream components, but maybe there's a certain danger in depending on that dependency. Yeah, comments welcome. That's it.
A
Very cool, Stefan. Yeah, so I guess if anyone here is kind of interested in this or curious about it, please go and leave some comments on the PRs and whatnot, just to kind of help the discussion move forward. And you've got the next topic too, Stefan, so go ahead.
K
I'll just switch to the channel. Yeah, so for the next topic, I'm not sure what we called it; I just called it

K
"Let's chat about". So we'll do a session tomorrow in roughly EU/Asia-friendly time zones, so that's 11:00 Central European Time, and it's about code structure in Cluster API, Makefile targets, and how to develop locally, so that's around Tilt and how to debug with Tilt. The idea is that Fabrizio and I will just show it, more or less like a hacking session or something, and if folks have questions, please, please, please bring us some questions. We want to make it interactive, not just a boring presentation. So that's the idea.
K
I said tomorrow, 11:00 CET; we're assuming we will record it, and we will schedule another one in a US-friendly time zone over time. So yeah.
A

A
All righty. So let me share my screen again here. It looks like, Rohit, you are up next with CAPC, so if you want to go ahead and start talking about that, please feel free.
D
Sure, thank you, Mike. I think we have sort of introduced ourselves. Joining on this call are colleagues from AWS, Vipin, Peter, and Vignesh, and I'm from ShapeBlue; it's a joint collaboration between AWS and ShapeBlue. We have created a CAPI provider called CAPC for CloudStack.
D
Apache CloudStack is an infrastructure-as-a-service cloud computing platform used by a lot of companies and users worldwide, and we think this provider will give Kubernetes users an opportunity to use Cluster API with CloudStack via CAPC. So that's the project. What we have done is put together a proposal document, and this is the first time a couple of us are actually joining the SIG, so we don't know exactly what the process is. So we started by
D
starting a mail thread on the mailing list to seek some guidance. From that we discovered that you have this weekly call, and we put together a Google doc to say what the project is about. What we intend to find out is: are we on the right track? And to get guidance from the SIG, specifically the people on this call.
D
How do we proceed next? The ultimate goal is to build a small community within the SIG around CloudStack and CAPC, and we have also proposed a new Slack channel to be created. I think I was advised to get support, like someone from the SIG adding a plus-one or an approval on the pull request. So I would be thankful if we can get support for that also. And that's pretty much it: what is the process?
D

E

D
Let me just add one more update before we get the guidance, which is: where does it stand right now? This is functional right now. Before we came to the SIG, we thought we should do a round of testing so we could say whether it works or not. So, just to give you a brief summary: we have tested with the latest 4.16 version of CloudStack, as well as the slightly older 4.14 version of CloudStack.
D
We have tested this against a couple of templates of different Linux distros, including Ubuntu 20.04, Rocky Linux 8, and CentOS 7. We actually added support for a couple of those distros in the image-builder project; my colleagues and I were actually the ones who added support for EL8 and Rocky for the QEMU builders. We've already gone through that process, some of us have already signed the CLA as well, and we have tested this against the three main hypervisors.
A
That sounds really great, Rohit and team; it sounds like you're really well prepared and you've kind of tested this stuff out. I don't know, Fabrizio, or Vince, or maybe Cecile: did you want to chime in on what the next step to getting this graduated into the Kubernetes SIG is, or maybe do we have a link that would describe some of these things?
B

C
First of all, welcome, and congrats on the great work that you have been doing so far. I think, if I remember well, in the email thread you already got a couple of links where the process of donating repositories is described.

C
So, briefly: you need a plus-one from the SIG leads. In this meeting there is Vince.
C
You can join the SIG Cluster Lifecycle meeting on Tuesday, at this hour, to basically make your case with the other leads. Usually this is a formality, because we like the community growing. Then, basically, you have to open up a ticket to get a repo, you have to fix up stuff about the license in your repo, and you also have to check and make sure that the dependencies that you are importing are compliant with the CNCF guidelines.
C
And yeah, that's it. As soon as you are in the repo, you can start leveraging test-infra and all the CNCF infrastructure.
C
This gets a little bit complicated, since you basically need some backing infrastructure, and I'm not sure, or I don't believe, that much of it is available as offered by the CNCF.
C
So you can either start a discussion with the CNCF on how to get this type of infrastructure managed, or you can test on your own infrastructure and then upload the test results into Kubernetes. That's great, because it's something that you can take on over time. And yeah, that's it. I don't know if Yesin or someone else wants to add something.
J
Yeah, plus one to what Fabrizio said. Eventually, over time, once you have the repo in there, you can take a look at the existing providers.
J
On test-infra you're going to see a bunch of the configuration and the ProwJobs that you'll likely want to have, things like e2e integration, linters, and all of these kinds of things, so that the provider is on par with the others. There's also, if you're not leveraging it already, the e2e framework that is provided by Cluster API in order to automate your end-to-end tests. There are a bunch that are already predefined and that you'll likely want to run against your provider.
D
Right, thank you, Fabrizio and Yesin. Just one more bit of an update: while we were developing this, we were referencing a couple of other providers, just as guidance for the structure of the code base and whatnot. So from the beginning we have used the Apache License, version 2, and I think I didn't mention it, but yes, we have actually run the e2e tests, and the provider passed the conformance tests as well.
A

A

D
Yeah, I think we'll take the guidance. First of all, thank you, Fabrizio, Yesin, and Mike for all the guidance. As you advised, we'll join the next call on Tuesday to get support with Vince. One thing that is still not clear to me is: is the document draft acceptable as is, or do we need to make changes to the wording?
D
That's the Google doc we've shared. And the second question is about getting a new Slack channel set up for this project: or does that come, let's say, afterwards, after we get support from the SIG leads?
C

A
Awesome. And I guess it'll be great to have another cloud provider here, so very cool. All right, so, Fabrizio, you're next up with the Cluster API Runtime SDK proposal.
C
Thank you. So, I'm really happy to talk about this proposal. This is something that, in my opinion, will be the next big thing in Cluster API. A quick introduction: basically, the idea is to implement in Cluster API a mechanism that is inspired by Kubernetes admission webhooks, and this will basically allow applications built on top of Cluster API to hook into the workload cluster lifecycle. This will allow you, for instance, to block an upgrade before
C
you do something, or to postpone a machine deletion because you want to clean up services, or to do something before a machine gets remediated, or stuff like that. This PR is called Cluster API Runtime SDK because, in order to get there, we need some foundation in place. The document is in the end quite complex, or long, because it is divided into two main parts: the first one is a kind of developer guide for people writing a runtime extension, and the second part is
C
basically the implementation guide for Cluster API maintainers who want to offer new runtime hooks. Like with Kubernetes extensions, there are two actors, two personas, involved in developing extensions, and this document tries to address concerns and things for both. It also takes a stab at how we are going to do runtime extension versioning, what the deprecation policy will be, and stuff like that. So there is a lot to it.
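The admission-webhook-inspired shape of such a lifecycle hook can be sketched as follows. This is a minimal Python illustration of the idea described above (block an upgrade, ask the caller to retry); the hook name, field names like "retryAfterSeconds", and the annotation key are assumptions for illustration, not the proposal's final API.

```python
import json

def before_cluster_upgrade_hook(request: dict) -> dict:
    """Example extension: block upgrades of clusters still marked as draining
    (via a hypothetical annotation) by asking the caller to retry later;
    otherwise allow the upgrade to proceed."""
    annotations = request.get("cluster", {}).get("annotations", {})
    if annotations.get("example.io/draining") == "true":
        return {"status": "Failure", "retryAfterSeconds": 60,
                "message": "cluster is draining; retry later"}
    return {"status": "Success", "retryAfterSeconds": 0}

def call_hook(hook, payload: str) -> str:
    """The runtime would deliver the request (e.g. over HTTPS) as JSON and act
    on the JSON response; here we just round-trip through strings."""
    return json.dumps(hook(json.loads(payload)))

blocked = json.loads(call_hook(
    before_cluster_upgrade_hook,
    json.dumps({"cluster": {"name": "c1",
                            "annotations": {"example.io/draining": "true"}}})))
allowed = json.loads(call_hook(
    before_cluster_upgrade_hook,
    json.dumps({"cluster": {"name": "c2"}})))
assert blocked["retryAfterSeconds"] == 60
assert allowed["status"] == "Success"
```

The design choice mirrored here is the one the proposal borrows from admission webhooks: the core controller owns the lifecycle transition, and extensions only get a bounded yes/no/retry-later vote at well-defined points.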
A
All right, so there's a lot to go through here; I know I certainly have not read it all yet. Does anybody have questions, though, or any comments people wanted to make about the Cluster API Runtime SDK?
L
Hey, yeah, thanks a lot for putting that together. I think it's a great proposal. I'm just curious about what use cases we have out there that would benefit from this, so, just to encourage people to share in that doc, or maybe on the Slack channel, the particular challenges and use cases people have that you are going to be able to solve. And also, do you see this superseding the existing hooks that we have with the annotations, or do you see it as something different?
A
Yeah, go ahead, Fabrizio. Oh, I thought you were responding; I was going to let you respond to Alberto. I thought Yesin had a different question, if I'm reading it right. Okay.
C
Okay, so let me answer Alberto. First of all, deprecation: it is stated as a future goal. I hope that runtime extensions will provide better semantics for the machine deletion hooks, so they could provide, I don't know, higher-level information to decide what to do, and could allow finer control over deletion. So I hope that in the future we will delete them, but it's too early to state. About use cases:
C

C
ClusterClass patches. We want to use runtime extensions to implement patches in code, instead of declaring them in YAML in the ClusterClass objects, so you can get them versioned and stuff like that. And there are others, but I agree with you; we should start collecting them in a more systematic way.
J
Yeah, to add to what Fabrizio just said, Alberto: one of the other likely use cases is for providers in the CPI and CSI migration. As these are going to come up, this can be a powerful hook to actually get our users from in-tree to out-of-tree cloud providers and CSI.
A
And apologies, Yesin, I didn't realize you were responding to Alberto there. Okay, Matt, you're next.
G
I started reading this earlier, and I had an extremely high-level

G
query, which I'm not sure how I'd fit into a document comment. The API seems to be effectively event-based, layered on top of a declarative API. Are there any concerns around an impedance mismatch there, for example, missed events, etc.?
C
Yeah, that's the right question, and a really deep question. I like it, because it means that the document is driving in the right direction. I think that, yeah, Cluster API is a declarative system, but at the same time we are driving infrastructure through imperative steps.

C
So we basically control when the cluster has been provisioned, or the first machine has been provisioned. So I'm pretty confident that we can have reliable moments; we can identify moments, reliably, in between the transitions from one state to the other. I hope this answered the question.
G

A
Okay, cool. Any other questions or comments around this proposal?
C
Yeah, one quick one: as usual, after each main release we do a check and update on our OWNERS files. If you are interested in stepping up, take a look at our contribution ladder and reach out, and hopefully we can have some new contributors, new maintainers, or reviewers this cycle as well.
A
Yeah, and we've got all these new people here today too, so here's a great way to get involved. All right, Yesin, you're up next.
M

J

J
So today, providers are either relying on a field that wasn't actually supposed to be a way to set the API server port, which is cluster.spec.clusterNetwork.apiServerPort, or they're using it to set the load balancer port instead of the API server port, or they're just ignoring it. This issue, basically: if you scroll down to the bottom, we started a brainstorming with Fabrizio and we started adding some ideas.
J
If you scroll a bit more: one of the issues that we have today is at two levels. The first one is that we don't have a reliable way to set the local API server port, and we don't have an official way to set the port on the load balancer, on the control plane endpoint. So those are the two problems. The first one regards the local API server port.
J
Since providers are already tied to what they're doing with the current field, what we're proposing is introducing, within this release, a new field on the same struct in the cluster network, called something like localAPIServerPort, or a binding port.
J
This field would specifically be part of the contract for bootstrap providers to set the local API server port. For example, CABPK would look into this field and insert the binding port directly into the init and join configurations, and eventually we'd deprecate

J
the first field, which was used inconsistently across providers, and as the community starts looking at the next API version, we can then start discussing the removal of the field. Another thing probably worth documenting is that cluster.spec.controlPlaneEndpoint host and port are likely the places where users should set the port and the host if they want to bring their own control plane.
J
So that's a second step. For providers that are already offering a way to set the port for their load balancer, we need to provide them with an alternative, and in my opinion the natural alternative, in terms of user experience, is using what we already have for the control plane endpoint. So yeah, please feel free to chime in on the issue and add any ideas, thoughts, or anything that we might be missing.
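The fallback order a bootstrap provider would follow under this proposal can be sketched as below. This is a conceptual Python sketch, not Cluster API code: the `local_api_server_port` field mirrors the proposed (not yet merged) binding-port field and is an assumption, while `apiServerPort` and the 6443 default reflect the existing API discussed above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ClusterNetwork:
    api_server_port: Optional[int] = None        # existing, inconsistently used field
    local_api_server_port: Optional[int] = None  # proposed explicit binding-port field

DEFAULT_PORT = 6443  # kubeadm's default API server bind port

def effective_binding_port(net: ClusterNetwork) -> int:
    """Port a bootstrap provider would template into the init/join
    configuration: the new explicit binding port wins, then the legacy
    field, then the default."""
    if net.local_api_server_port is not None:
        return net.local_api_server_port
    if net.api_server_port is not None:
        return net.api_server_port
    return DEFAULT_PORT

# Legacy behavior is preserved until the new field is set explicitly.
assert effective_binding_port(ClusterNetwork()) == 6443
assert effective_binding_port(ClusterNetwork(api_server_port=443)) == 443
assert effective_binding_port(
    ClusterNetwork(api_server_port=443, local_api_server_port=6443)) == 6443
```

The last case is the one motivating the proposal: a provider exposing the API server on port 443 through its load balancer while the API server process itself still binds 6443 locally.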
A
Okay, cool, yeah. So if anybody is curious about that, please take a look at the issue and leave a comment, or leave your opinion. Yesin, it looks like you've got the next one as well.
J
Yeah, so this one is more of a follow-up on the discussions that we've had in the last office hours, regarding issues with kubeadm and the control plane endpoint pointing to the actual load balancer. In the case of upgrades, as you can see, it can break existing setups, and there are issues for some public cloud providers.
J
When you're using internal load balancers, you can end up in a hairpin issue, where a machine that is part of the control plane, part of the backend of the load balancer, cannot reach out through the load balancer itself. So far there seem to be a few suggestions here and there, but this is mainly to confirm that this is still an issue that we probably need to address, especially as we're rolling out private clusters in the various providers.
A

J
This is mainly for input, to see: as of now, the providers that I know of that can be affected by this are probably AWS and Azure. If you're on other providers, please feel free to add anything that might be related to that.
A
Cheyenne, I see you have your hand up.

N
Yeah, just to add on that: so I've been testing this feature for private clusters in Azure. In Azure we have a workaround where we map the control plane endpoint to localhost, so that it doesn't get routed through the load balancer.
N
So I tried removing that hack and testing it with just plain kubeadm. What I observed was that kubeadm was still using the control plane endpoint for the kubelet config, kubelet.conf, which is the kubeconfig for the kubelet, and which uses the control plane endpoint to basically register the node, and it fails because of the hairpin issue. So we still have to figure out a way to not use the control plane endpoint for those kubeconfigs. Yesin, go ahead.
J
Yeah, and there's also the issue that Lubomir is pointing out on the kubeadm issue, which is: initially, if you scroll, you would end up in a situation where basically your kubelet would need to do the CSR, and everything would work eventually if the local API server is the one the KCM leader is talking to, and if that's not the case, then it wouldn't work.
J
So it's kind of a dice-roll situation, where sometimes it can work, but sometimes it wouldn't.
A
Okay. And, you know, Cheyenne, I don't know if you're following this issue or not, but it sounds like it might be worth adding your comments there as well, just so people kind of know what you're experiencing.
N

A
I guess, anything else? We're at the end of the meeting now, so if nobody has any ad hoc or last-minute topics, we could probably take ten minutes of our day back here.