Description
Meeting notes https://docs.google.com/document/d/1ushaVqAKYnZ2VN_aa3GyKlS4kEd6bSug13xaXOakAQI/edit
A: All right, welcome everyone. Today is Wednesday the 14th of December 2022, and this is the Cluster API project meeting. Cluster API is a sub-project of Kubernetes SIGs, and as such we follow their meeting etiquette, which essentially means that if you'd like to speak, please raise your hand and I will call on you. And please treat each other kindly and with respect. Sorry, I'm getting my screen set up here a little bit for this.
A: So at the beginning of our meetings we like to give an opportunity for new members of the community, or people who are attending for the first time, to introduce themselves or say hey. So I'm going to be quiet for a few moments, and if anyone would like to unmute or raise your hand, please feel free to welcome yourself to the community.
A: All right, I am not seeing any hands go up or anyone unmuting, so I'm going to guess that we will pass on that part for now. Next we'll go to the open proposals readout, and I think we only have a couple, well, one here: the label sync. Is there anyone who'd like to speak on that topic?
A: Okay, let me rewind a little bit here. So the first discussion topic I have up for today is about the Kubemark provider.
A: So I am looking to do another release of the Kubemark provider soon, and we could use a few more reviewers there. I have a PR up that could use a little bit of looking at, and I think in general it's just the cadence there: when Cluster API updates, we want to be updating that provider more frequently, but it's moving a little slow on the reviews, so I just wanted to bring it to the community here. If people are interested, you can definitely reach out to me; I'm more than happy to do deep dives or share examples of how the provider works. It's fairly simple in comparison to some of the other providers, it doesn't have quite as much code, but yeah, I just wanted to bring it up here in case people are interested, or perhaps looking for a project to do some work on.
A: Does anybody know, in the wider kind of Kubernetes SIG land... I realize it's probably poor practice to do self-approvals, but I'm just wondering, is there any guidance around that in the wider community?
E: Okay, so usually the guidance, for example in the kubeadm code, what we do is: if I'm the author of the PR, I automatically get an approval by the bots and the approved label is applied. Then I can put a hold, so that somebody else can still review, and eventually, once we have consensus on the PR, we can remove the hold. That's what we do.
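For readers unfamiliar with the bot mechanics being described: these are the standard Prow review commands used across Kubernetes repos. Assuming the repo's approve plugin auto-applies the `approved` label for PR authors who are in OWNERS, the comment sequence on the PR might look like this (a sketch, not a transcript of an actual PR):

```
/hold          # author: keep the auto-approved PR from merging yet
# ... another contributor reviews and comments /lgtm ...
/hold cancel   # once there is consensus, release the hold and let it merge
```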
E: Not advisable in general, but you know, for a small change like a typo fix or something like that, I think it's okay.
E: Yeah, the bots also support the lgtm mechanic, but like I said, it's not really advisable unless you have a problem with getting reviews or something.
A: It's just that sometimes we get small changes there. I'm trying to put up changes, and what I'm finding is that I'm having trouble getting reviewers. That's why I'm trying to reach out to the community here and see if maybe there are people who would be interested in looking at that code and whatnot.
E: So what people usually do, for example if they cannot get an lgtm from anyone, is they just ping a bigger selection of people and somebody will do a pass-by review. I can lgtm for you.
B: So, first of all, you can consider me as a reviewer, though I'm probably already in there through the SIG-level ownership, so feel free to ping me if you need help to unblock stuff. Second, I think this is a super interesting project. You know my ideas, and I would like to bring to this audience two use cases where this project could help this community.
B: The first one is that we'd like a test for the autoscaler; we really need a test for the autoscaler in main Cluster API, and Kubemark was kind of designed to help test this. We just need that signal in main Cluster API. The second one is something I'm discussing with a couple of folks: we need some sort of scale testing for Cluster API, and Kubemark could really help here.
B: I've played around with Kubemark a little bit recently and shared with Mike a little prototype where we basically run both the control plane and the kubemark pods together, to try to make things as thin as possible so that we can scale. So I not only plan to review, but I also plan to do some work soon to get this project to cover these use cases.
A: Yeah, I'm pretty sure you're already on it as a maintainer through the CAPI stuff, Fabrizio, but thank you for volunteering. And Killian, thank you for speaking up in chat; I will look to reach out to you too, probably in the next week, because I'll try to get the release PR together and then I'll ping you guys. So thank you, I think that's good for me. Did anybody else have questions or comments about the... oh, I see your hand is up, please go ahead.
G: Yeah, I agree that it's dangerous, but I know that you can set up Prow so that whenever you open a PR, if you are an approver in the repo, it will automatically approve that PR when it's opened. So it will tag it as approved and you just need the lgtm. In case that helps, that should be doable.
A: Okay, I'm not seeing anything, so Mike, you're on: you've got the next topic, about Kubernetes events when the cluster is ready.
C: Yeah, so someone opened (I forget who exactly, I could look it up) a ticket, or an issue, a couple of weeks back, or months I guess at this point, about emitting Kubernetes events when a cluster is ready. They put in some suggestions of what they felt was most appropriate for when they thought an event should be emitted.
C: There was some back and forth about what others thought it meant for a cluster to become ready, and some discussion around, well, perhaps we should just emit events based on cluster status, or on the various status conditions. It's kind of come down, at this point, to where I think we need to poll the team as to what the definition of ready for emitting this event should be, because the consensus kind of seems to be a combination of status conditions, not just a single status condition. And I wasn't sure whether this is the right forum right now to just kind of poll, or whether there's a proper way to poll, to get a consensus around what that definition should be for when to emit the event of a cluster being ready.
A: Yeah, good question, Mike, and I see this has been triage-accepted. So I don't know, Fabrizio, or maybe Vince, do you have any comments about the next steps for this? Yeah, Fabrizio, I see you have your hand up.
B: Yeah, I think that most of all it boils down to what problem we are solving. Are we trying to get information about when we can install something on the clusters, or about when the control plane is ready? And what does control plane ready even mean: when the API server answers, or when all the control plane machines are up? So, let me say, first of all we should clarify what problem we are solving. Second, I think that we already have almost all of this signal, either in conditions or in status fields.
B: Now, we don't have events for all of them, so in my opinion it would be much simpler to just look at what we have and see if we can instrument it. So when we have a transition, we don't only flip the flag or the condition, but also raise an event, like we already do in some places in the code that might not be well known.
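To make the suggestion concrete, here is a minimal sketch (not actual Cluster API code; the helper and the reason string are hypothetical) of raising an event alongside a condition transition, using the standard client-go EventRecorder and the CAPI condition utilities:

```go
import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"

	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/cluster-api/util/conditions"
)

// emitReadyEvent raises an event when the cluster's Ready condition
// transitions to true, so external tooling (e.g. Argo Events) can react
// without polling status. Hypothetical helper, not actual CAPI code.
func emitReadyEvent(recorder record.EventRecorder, cluster *clusterv1.Cluster, wasReady bool) {
	isReady := conditions.IsTrue(cluster, clusterv1.ReadyCondition)
	if !wasReady && isReady {
		recorder.Event(cluster, corev1.EventTypeNormal, "ClusterReady",
			"Cluster infrastructure and control plane are ready")
	}
}
```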
A: All right, thanks, Fabrizio. Vince, you had your hand up next.
D: I read the use case, and I think it's pretty clear to me what this is trying to do: it's kind of trying to wire up Cluster API with Argo Events. I'm curious to understand, though, because events are just ephemeral in a lot of ways, and there were a lot of discussions back a few years ago when we would rely on events for some things.
D: But then you'd see events go missing, depending on how many events were being received in the cluster, and that was causing some issues. So I was wondering: would Argo be able to trigger a self-event of some sort (I've never used it) when something changes in the status instead? Because in that case the information is already there.
C: I can add two points on that. One, for Argo: Argo Events can handle update, add, delete (or rather patch, delete, add, and I think there's a fourth one), but it doesn't handle transitional states for the conditions.
C: If Argo Events would handle those condition changes, that would be perfect for me. But there are kind of two conditions that I'm looking for. One is, when the event is emitted of a cluster becoming ready, I want to trigger an Argo workflow to do things like adding the cluster into Argo CD, or to trigger some things in our change management system.
C: Going back to what Fabrizio was saying, I do think that just emitting events for each of those status conditions would be sufficient for my use case. I don't know if that's sufficient for the user that opened the ticket, but for me that would be sufficient.
A: Okay, I guess... yeah, Fabrizio, yeah.
B: I think that if we can basically go and extend the state transitions that we have, reflecting transitions with events, that is kind of non-controversial; we already do it in a couple of places, and it is just raising more events.
B: If we have to introduce something new instead, it will require some discussion, just in order to avoid, I don't know, overloading a term that already has many meanings, or stuff like that. So my suggestion is: simplify, try to make the request as simple as possible, and, if possible, stick to what we have, because it will make it faster to get to a solution.
B: No, it's not necessary to have a proposal for this, I think. For instance, Mike just described: I need these two moments to be instrumented. Okay, let's try to enumerate them, let's try to define them. Because at a certain point, if you look into the thread of the issue, there was someone commenting, oh, we have to instrument all the transitions. I understand, but that is also a bigger effort.
B: So let's try to... I think that if we figure out what we need and we make a small, targeted change, we keep going in that direction without boiling the ocean. That's what I'm trying to advocate for: if we keep it simple and make one step at a time, we can get there faster than by aiming too high.
D: I would agree with that. Just to add to that a little bit: we already have this transition logic in place. There are a lot of places, like in the cluster controller code, that basically check if the whole state is ready and then transition that condition to ready, and I feel like...
C: Yeah, I would agree with that, and there already are some places where we do emit events, like in the kubeadm provider, which did something similar to that, so that would work already.
A: So I guess, as action items out of this: Mike, I saw you had a lot of comments on that issue. Would it be appropriate for you to go back there and put another comment on with a summary of what's being asked here, like the use cases you want and then the related changes, and, you know, focus on keeping the initial cut small, and then maybe we could grow it later?
A: Yeah, and PRs are always accepted, right.
A: All right, I'm not seeing any other hands go up, so Fabrizio, you've got the next topic, with some new versions being cut.
B: Yes, so in the last couple of days we were kind of busy trying to nail down a couple of issues that we discovered when people use longer names for clusters, machine deployments, and so on. We now have a fix for those issues, and we are cutting 1.3.1 and 1.2.8 right now. Most notably, we added a validation on cluster and machine deployment names.
B: So we prevent the issue on creation, and everything else is managed in the code. Basically, those are not breaking changes, because if you were using a cluster or machine deployment name longer than 63 characters today, things were already failing, just not in an explicit way; we are only making the failure happen in a visible way.
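For context, the 63-character limit comes from the DNS-1123 label restriction that Kubernetes applies to label values and generated DNS names. A hedged sketch of the kind of explicit check being described (not the actual CAPI webhook code):

```go
import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateName rejects names that cannot be used as DNS-1123 labels
// (at most 63 characters, lowercase alphanumerics and '-'), since
// generated resources embed the name in labels and DNS records.
func validateName(name string) error {
	if errs := validation.IsDNS1123Label(name); len(errs) > 0 {
		return fmt.Errorf("invalid name %q: %v", name, errs)
	}
	return nil
}
```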
B: So, information number one: we are cutting a release, it fixes those bugs, and that's nice. Second, I will really take the opportunity to give kudos to the release team. They just started, they began work one or two weeks ago, and they are already taking care of cutting the release and cooperating very well among themselves. So I'm super happy with how things are going; this is a really great improvement for us. And that's it for the release; I don't know if there are questions.
A: All right, I'm not seeing any hands go up, so Fabrizio, you've got the next one, with a status check.
B: Yeah, talking about the release: when we discussed our release calendar, we kind of pointed out that the key point for our release cadence is the capability of providers to pick up the new release somewhat timely. So I would like to make our first status check on provider migration to 1.3, just to get a pulse on where we are. I don't know if there is someone from the providers who can tell us if this is already happening, planned, or whatever.
B: Yeah, if possible, yes; otherwise I will open a thread in this channel. Okay.
A: I see Jack has his hand up, so go ahead, Jack.
A: And I don't know about the Kubemark provider; I don't know if we're calling that CAPK or what. Killian just put up a PR to upgrade to 1.2.5, I thought, but I don't know if we went to 1.3. Killian, do you know, did we take that all the...
I: ...way? Yeah, it was 1.3 eventually, when the thing merged, I'm pretty sure.
A: Cool, and it looks like from chat that CAPA is also on 1.3.
B: Also CAPOCI, okay, that's great. So it seems that maybe we just have to check on CAPV, but yeah, things are happening, so this is a great answer from the community. Thank you very much to everyone doing this.
A: All right, so feel free, if other providers want to put their information in chat; I'll try to keep watching that. Vince, you've got the next topic, with management controller ideas.
D: All right, thank you. So this is just a very early thought on something that I've been prototyping over the past couple of weeks. To expand on this: right now there are a couple of problems in general in the ecosystem. When we build a controller, controllers are usually built for a single cluster, and when we have to expand to another cluster, Cluster API has a lot of hacks to do so.
D: So what I've been thinking, and this is kind of a combination of Cluster API and controller-runtime, is to add better support for watching many clusters, but also maybe add more native support for watching Cluster API workload clusters from a management cluster. The goal here, or at least what I have in mind so far, is to basically improve what it means to be a management controller.
D: So: define what a management controller is. It could be a controller that runs in the management cluster, but ultimately this controller has to run actions on the workload clusters, and maybe on the management cluster as well. Just an example that I have been thinking about is IPAM: if I want to have IPAM coordination across many clusters (this is a very small and simple example, and it's also niche, but it gives the idea), I need to coordinate IP leases and address space across every cluster.
D: But I may also want to say the cluster itself will register a CRD, and it will allocate through the CRD, so that within the cluster I know what IP space I have. So I might want the controller to watch all the clusters that the management cluster has under its management and act on those, but ultimately coordinate within the management cluster's purview.
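For a sense of the building blocks involved, controller-runtime already lets a manager run a second cluster's cache and watch it. A rough sketch follows; the API names match roughly controller-runtime v0.14 (e.g. `cluster.New` and `source.NewKindWithCache`), and signatures have shifted between versions, so treat this as illustrative only:

```go
import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/rest"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/cluster"
	"sigs.k8s.io/controller-runtime/pkg/handler"
	"sigs.k8s.io/controller-runtime/pkg/reconcile"
	"sigs.k8s.io/controller-runtime/pkg/source"
)

// setupWatch wires a reconciler in the management cluster to events
// coming from one workload cluster. A real management controller would
// do this dynamically for every Cluster object it discovers.
func setupWatch(mgr ctrl.Manager, workloadCfg *rest.Config, r reconcile.Reconciler) error {
	// cluster.New builds a cache+client pair for the workload cluster;
	// adding it to the manager starts and stops its cache with the rest.
	workload, err := cluster.New(workloadCfg)
	if err != nil {
		return err
	}
	if err := mgr.Add(workload); err != nil {
		return err
	}

	return ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}). // stand-in for a management-side CRD, e.g. an IP pool
		Watches(
			source.NewKindWithCache(&corev1.Node{}, workload.GetCache()),
			&handler.EnqueueRequestForObject{},
		).
		Complete(r)
}
```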
D: That's just the basic idea. If you have more use cases, please reach out; I'm starting to put this all together, and there will be a couple of efforts that we'll need to go through, one of which is in controller-runtime specifically, and then integrating it with Cluster API later.
D: I actually have a hard stop, so I have to drop, but feel free to reach out on Slack, and we can have follow-up meetings and discuss this another time as well.
A: All right, cool. Thanks, Vince, and thank you, Jacob, for setting me straight there in the meeting notes. That sounds like a really exciting topic to me; please reach out to Vince if you have questions or ideas. And Fabrizio, did you have your hand up for this topic?
B: Yeah, I just wanted to say that this solves the problem that things like the ClusterCacheTracker solve today, but in a, you know, more structured way, so I'm definitely plus one. And also, let's not forget the discussion that we had last week about inverting the communication pattern between the management cluster and the workload cluster; that kind of fits into the picture as well.
A: Cool, yeah, thank you for the addition there. Okay, now, Jack, you'd like to go back to the status check.
F: Cool. So I agree that it's really great to see all that integration. I wanted to maybe pose the question of what we do about backward-compatibility integration, because, strictly speaking, for providers, when you adopt 1.3 that doesn't mean you don't work with 1.2, just using those two as an example. For CAPZ in particular, from a practical point of view:
F: I don't believe we still test against 1.2 in CAPZ. At some point we release a new minor which moves to 1.3 as its tested, validated CAPI pairing, and then we sort of stop testing the previous versions. That's a sort of practical contract, and I think it's worth calling it a practical contract rather than a strict contract, because for folks who want to continue consuming future versions of CAPZ, we're not requiring that they also go through the process of updating to CAPI 1.3.
F: And, using this example again, I wonder: do we want to make this contractual? Do we want to define this more formally? What do other providers think? Any thoughts?
B: Go ahead... I'm just trying to understand what you mean by backward compatibility. Currently we have different guarantees: one for the API, one for the contract, and one for the Go types. For instance, with 1.3.0 the guarantee is only broken for the Go types, but the API types are compatible and the contract is compatible.
F: Yeah, I'm talking about the latter, the API stuff. I'm talking about the practical fact that folks running, say, CAPA may have an operational scenario where they prefer to update CAPA continuously as new releases of CAPA land, but update their management fleet less frequently, for whatever reason. And at some point they may be running a version of CAPA that's not actively tested against the version of CAPI that's on their management clusters.
F: I don't mean to pick on CAPA, because this might not actually be true from CAPA's perspective.
F: Maybe they test n-minus-three versions of CAPI against every version of CAPA, I'm not sure. But in CAPZ's scenario, what we do is sort of methodically move forward, and at some point we stop testing older versions against newer versions of CAPZ. So I think we're in a situation where there are probably folks in the wild running CAPZ at a particular version, with a particular version of CAPI, that we're no longer testing; and because, as you say, Fabrizio, those API contracts are backwards compatible moving forward, they can continue to do that safely.
B: Yeah, so let me say: given that the API and the contract are compatible, I think it is just fine to not test them. On the other side, if I look at what we do in Cluster API itself: in Cluster API we have two main dependencies, which are controller-runtime and Kubernetes, and whenever we bump them, basically we have a new branch and we keep testing both the old combination and the new combination, because we prefer doing it that way. So I think that it is not necessary, but on the other side...
B: I would like to leave it to each provider to choose how much to invest in their branching, in maintaining different branches, cherry-picks, and different CI scenarios, because it is an effort, I understand. Yeah, we feel the pain in main Cluster API, but we kind of think it is necessary there in order to have a proper signal, especially against Kubernetes.
B: Just one clarification on why: Cluster API has a specific contract and specific guarantees for the providers, whereas Kubernetes does not have such guarantees for us, so we are kind of required to test, and the same goes for controller-runtime; they can break something. Between Cluster API and the providers, instead, there is better communication; this meeting and this group exist for that. So I think it is not really the same situation, but yeah, I just wanted to point out the context.
F: Cool, that makes sense. I'm not hearing any feedback from other providers right now, so I'll probably take this into the CAPZ community and see what folks think, just in terms of whether we can improve our advocacy for moving forward with the intended version of CAPI when you move forward with CAPZ, if at all possible.
A: I think there are a couple of good comments in the chat here as well. From Richard Case, I see that CAPA does not specifically test older versions of CAPI. And then there's an interesting point in chat too, which says: I wonder when or if SIG K8s Infra will message us with "hey, CAP* has too many test jobs and burns the CNCF credits."
A: If that happens, then we might have to discuss provider sponsorship, so I think that's a great point as well. When we're running on this kind of public infrastructure, every additional test case adds to the cost and whatnot. From a Kubemark perspective, since that provider is mainly made for testing, I wouldn't have any problem with doing that kind of matrix testing, but I think CAPK is kind of off on its own a little bit in some of these respects.
A: And Fabrizio says we're dropping test frequency for old branches. So, I'm not seeing any other hands go up; would anyone like to make a comment on this?
B: I have this prototype, and something is already migrated: if you go into the documentation, for instance under the user guide, or under reference, there is the glossary and the CRD reference. So the TL;DR is that, in my opinion (and I'm just playing around here; I'm not a tech writer and I'm not a web developer), it seems really simple, and in the end we get a better organization of the documents because of how Hugo manages them. What we have today is everything in one single index menu, which is just exploding.
B: If you look at the current book here, we can organize, and I like the idea that we can organize, stuff around what is relevant for the users, what is relevant for the providers, and what is relevant for the CAPI developers, which are kind of the three main personas that use and follow this project. In the current book we don't have such a good separation of concerns.
B: So what I'm looking for with this call is some feedback. First, I don't know if someone has already used Hugo and has experience with whether it is a bad tool, a good tool, or whatever. Second, whether what I'm describing, separating the documents and trying to give them an organization, is a good idea or not; what do you think about it?
B: And thirdly, whether there is someone interested in this effort, so we can nail down how to make this happen. Because, yeah, it will be interesting to figure out how to make this happen without being stuck revising and maintaining two books for a long time, or stuff like that. It is a nice idea, but there are a lot of details to be figured out, and if people are interested we can try to get this moving; otherwise I will continue my research.
A: I don't have an option to raise my hand... oh, I see David raised his, so go ahead, you go first.
I: Yeah, I was just going to say that I'm generally interested in helping, but I think there are a lot of different documentation issues, and I know we have a couple of people on the documentation approvers list. So I'd really like to see kind of a plan of what that list of work, or migration, is going to be. I think the template is good, and I do know that that combination of tech, Netlify and Hugo, is one that the CNCF folks utilize. So multiple things, I guess: one, seeing if we can get some help from CNCF, and two, maybe talking with some of the docs folks about the bigger plan and the issues that are there that we could scrub, and things like that.
I: For instance, there's one issue on just one part of the docs that says "hey, refactor this," and I commented on it and made some suggestions, and then it just sat there. So it'd be great to prioritize and have a bigger plan for doing the migration amidst a lot of the issues that are there, before we just dive in.
I: Yeah, and I think that would also help, because we did some of this work on the Notary project a while back, and it helps as well if we get someone from CNCF to have a plan of, hey, we want to do this, and they can obviously provide their own input too. But even if you kind of just start with "here's my site" (and it might take a while, and you have to do that work anyway), it helps to iterate on what your goals are, and on what you want to try to migrate and what not. So just having some more of that written out would, I think, be really helpful all around, even just for whoever wants to contribute, and even if we don't get help from CNCF.
A: Great points. Fabrizio, you've got your hand up.
B: Yeah, I kind of agree that having a plan is necessary, especially for figuring out the transition. On the other side, at this moment this idea is simply understaffed, so I'm just figuring out if Hugo is a good target.
B: There should be a team of volunteers that plans to make this happen in a reasonable time, because we don't want to end up doing double maintenance, basically. So right now I'm kind of playing with the technology; I'm not really at the point of planning the activity yet, and this is why I'm trying to raise awareness: if we get a critical mass, we can make this happen, and I think it will be valuable for the project. I just looked at it in Netlify; there are...
I: Yeah, no, I think there could be value in cleaning up and streamlining. The current Cluster API book just has so much there that it's hard to follow everything, just as a rough feel, so I think the refactoring could be really helpful. I know Killian is somebody who's been pretty responsive on some of the issues I've had, and he's one of the two docs approvers. Have you talked about this with him? And there's one other person that I believe is on the docs approvers owners list.
B: Yeah, I talked with Killian and Oscar about the idea, and we are kind of brainstorming together, but...
A: So I'll just add my comment here. I'm happy to help out with this; I've done a lot of docs site building as well. I'm not a huge Hugo fan; I think MkDocs would be another interesting technology for us to look at. I basically like MkDocs a little more because I think it's easier for people who aren't necessarily used to more complicated docs toolchains to get involved.
A: It has kind of a flatter file structure and pretty much just uses markdown files as the mainstay. But aside from that, I'd be happy to help out with this too, if we need some more organizing, or even just moving docs and whatnot. So I'm happy to get involved if we start to put together a working group to look into this.
I: Yep, that's cool. Yeah, I'd be happy to help too. The only thing I think would be good is to sanity-check with the CNCF folks again, because they likely have a set of tools that they work with, and maybe ones that are outside of that bubble.
I: So I think it would be good to just know that if we use MkDocs, or whatever else, it's not some offshoot where we're going to be outside of the happy CNCF-supported path, if we ever wanted to get help from those folks.
B: Okay, I will see if I can find someone. Yeah, it would be nice to talk with an actual person; a ticket is not always a nice human interface, but we will figure it out. Okay.
A: All right, cool, good discussion there. I just wanted to call out a couple of comments in chat here. Yuvaraj says... I mean, I'm not quite following that comment, so yeah, go ahead, Yuvaraj.
L: I mistyped that, sorry. I was just saying that at the last KubeCon there was a contributor summit talk about the various technologies used for creating docs, and it did a pretty good evaluation of the pros and cons of each of the setups, including MkDocs, and I believe it also covered Hugo. Maybe we can take a look at that.
L: I don't know if contributor summit talks are recorded, but if they are, maybe we can take a look at that to help guide us, if we can get inspiration on what the right things to do are, and so on. Because I remember it also covered topics like how you handle separate docs for different versions of the product, which is important for this. So maybe we can take a look at that.
A: I'm not sure if this is the same one, but I will put its link in the docs, just so we can review it later.
A: All right, are there any more comments or questions about this topic? We're getting down to the last eight minutes here, so I'm probably going to try and move things along a little bit.
A: Jonathan, you've got the next topic, working on a k8s repo for Helm. Take it away.
J: So this should be pretty quick; I'm just giving an update on the Helm project. Right now I'm working on getting my repo migrated into a k8s repo. I'm pretty sure I have everything set to go; I just need approval from, I guess, whoever's in charge of that. I'm not really sure what the actual migration is going to look like: do you know if I would do that myself, or would somebody create the repo and then I can just add stuff to it?
J: And yeah, besides that, we were also thinking about getting a meeting set up for that as well, and there are kind of two ways I was thinking we could go with this. One is to just make it a part of this meeting, if we don't really need that much time; that way everybody in CAPI can see it.
H: I personally, perhaps selfishly, would prefer doing it in this meeting, to avoid having yet another community meeting to attend, especially at the beginning while we're ramping up and there isn't much happening. If we do see that there is a need for a separate meeting, then I think that would be fine to evolve into.
J: Okay, yeah, I think until we have critical mass it would make sense to put it in this meeting, and also keep everybody else in the loop that way.
A: Yes, does anyone have any objections to doing it in this meeting? And I see Fabrizio says in the comments that we can also do it after this meeting finishes, right at the end of the agenda, so that if people aren't interested in staying for that portion, they can leave.
J: Yeah, and we don't have to commit to this decision now; I'm just trying to get an idea of what people think, and then when the New Year wraps around (since I know a lot of people aren't around right now) we can set it in stone and decide what we want to do. As far as the next update goes, I cut another patch release of the visualizer app and fixed a couple of bugs with how clusterctl worked with it.
A: All right, cool, thanks, Jonathan. I'm going to push us along here to the provider updates. Jack, you're up first with CAPZ.
F: Okay, having a little AirPod drama here locally, so if you can't hear me, I apologize in advance, but I think we're good now. So yeah, I wanted to add, for CAPZ, that the managed Kubernetes effort, graduating AzureManagedCluster and the other related CRDs from experimental, is nearing the finish line, which is great news.
F: There are a lot of folks using that, so we're planning to add test coverage and then, with community approval, move the APIs from experimental to v1beta1 sometime in early 2023. We're trying to pin that to the 1.8 release development cycle, so right after we cut 1.7. That's just an FYI, and we'll talk about it more in our office hours.
K: I had just a very quick one, really just what's written there: we're trying to resurrect the CAPG office hours again in the new year. So if anyone's interested, just come along to the channel and say you're interested, so that we can accommodate times and frequency. That's it.
F: Yep, so we met this morning, and I think we've got decent momentum for continuing to do this indefinitely, so that's great. Thank you for opening up that link; the PR to define this in the CAPI repo is, I think, nearing merging. So yeah, come join us if you're interested in managed Kubernetes in Cluster API.
F: All right, and sorry, I'll add this important bit too: we meet an hour prior to this meeting every Wednesday, that's the cadence we've established, and we're going to take a break for the remainder of 2022 as folks go on holiday. But it's essentially 9 A.M. Pacific Standard Time; I think standard is the correct one of the back-and-forth in the spring and fall between daylight savings. So...