A
Hello, everyone. This is the SIG Cluster Lifecycle Cluster API office hours on Wednesday, December 7, 2022. We follow the CNCF code of conduct, which basically means be nice to each other. In the meeting we have a meeting etiquette: if you wish to speak, please use the raise hand feature in Zoom; it's under Reactions.

Let me drop a link to the agenda. If you want edit access to the agenda meeting notes, you have to join the SIG Cluster Lifecycle mailing list; it's linked at the top of the document. Feel free to add your names to the attendee list, and let's get started.
A
Are there any new folks in the meeting who would like to speak up and say hi? It's generally traditional here to let the new folks go first, so if there are any new people, please use the raise hand feature.

I don't see any hands, so moving on to open proposals. Okay, we have two open proposals, and both of them are already merged. Does anyone have any points on this? Please go ahead.
C
Yeah, I just wanted to highlight an issue which was reported today, I think, which points out that the latest released clusterctl version is not able to read provider versions which have a major version bigger than v1. In this case it's CAPA, which is the example. The reason for that is the goproxy client we introduced to reduce the number of API calls to GitHub.
C
The TL;DR is there would be a fix by also requesting the other major versions, like major v2 and v3 and so on, but it was also pointed out that this may have an issue in general when a provider does not follow the Go module rules for major versions; one of the other providers comes to mind that would have an issue with that.
C
The
go
proxy
will
not
be
able
to
list
even
the
V2
or
V3
text.
If
we
modify
the
method
here,
slower
yeah
I
just
wanted
to
highlight
it
here
that
it
gets
attention
yeah,
it's
okay
for
me
to
move
discussion
then
asyn
to
the
issue.
Maybe
yeah.
A
Sure. The issue is already linked in the agenda notes, so folks who are interested, please feel free to drop your thoughts there and continue the discussion on the GitHub issue.
C
Yeah, maybe the last important point: there is a workaround. The workaround, as listed in the agenda, would be to export the GOPROXY=off environment variable, which disables the go proxy. So yeah.
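For context on why a goproxy-based lookup misses newer major versions: the Go module proxy protocol lists versions per module path, and under semantic import versioning a major version of 2 or higher lives under a separate /v2 (or /v3, ...) path suffix, so querying only the base path returns at most v0/v1 tags. Below is a minimal sketch of that behavior, using CAPA's module path purely as an illustration; it is not the actual clusterctl code.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

// listVersions asks a Go module proxy for the known versions of a module path
// via the "@v/list" endpoint defined by the module proxy protocol.
func listVersions(proxyBase, modulePath string) (string, error) {
	resp, err := http.Get(fmt.Sprintf("%s/%s/@v/list", proxyBase, modulePath))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}

func main() {
	// Illustrative module path (CAPA); not the actual clusterctl lookup code.
	base := "sigs.k8s.io/cluster-api-provider-aws"
	for _, path := range []string{base, base + "/v2"} {
		versions, err := listVersions("https://proxy.golang.org", path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		// The plain path only ever lists v0/v1 tags; v2 and later tags are
		// only visible under the "/v2" (or "/v3", ...) suffixed path.
		fmt.Printf("%s ->\n%s\n", path, versions)
	}
}
```

Setting GOPROXY=off makes clusterctl skip this proxy lookup entirely, which is why it works as a temporary workaround.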
A
So in the meantime, what do you think about adding this to our CAPI book, the release-1.3 CAPI book, while we work on a fix? Because right now it's broken, right?
C
Yeah, happy to do that.
A
Awesome. Okay, moving on. Joe, you have the next one.
D
Yeah, I've got two. The first one I think is pretty quick: the state of the MachinePool Machines proposal. It looks like it's still sitting in lazy consensus, so I wasn't sure if that's moving forward out of the proposal stage.
E
Would
be
here
to
speak
to
us
since
he
authored,
The
Proposal,
but
he's
not
so
I
would
recommend
starting
a
slack
on
sorry,
a
thread
on
dislike,
so
he
can
chime
in,
but
I
don't
want
to
speak
for
him,
but
I
think
from
what
I
know
the
proposal
merged
it
took
a
while
and
I
think
then
he
opened
the
pr
of
the
implementation
that
was
work
in
progress.
E
So that's still in the PR queue, and I think that work kind of got put to the side for a bit because of other priorities that intervened, so I believe he's planning on revisiting it after the holidays. But he might gladly accept some help if you're looking to make that move faster. So that's what I know; I'll let you follow up with him to get more info.
D
Perfect, that's awesome, thank you. And then my one other thing, and I think we've got a few people who've talked about it or helped out a little bit on it: on the Cluster API side of things we have an issue where the machine pools and the instances cannot atomically update, because our machine pool has the concept of a Kubernetes version, and then the instance itself has the image that it's going to use, which is going to have the Kubernetes version in it. I'm not sure if anybody else is running into this, or if anybody has a possible suggestion on how to approach it, but I think we're going to get stuck in a really bad place if we can't upgrade these atomically.
F
I don't know if I can easily repro that exact thing, but we probably can, because that sounds familiar. So, Joe, I would say hook up with Matt and myself, and maybe this is just a call out to others as well, and we can maybe hop on a Zoom at some point, kind of walk through this and see it happen in real time, and then maybe start making some plans for how we might make this better. Okay.
A
Thank you. Are there any more thoughts on this one? Any more questions, comments, concerns? Okay, moving on to the next one. Florian, you have the next one.
G
Yeah, hey, thank you. There were some discussions lately about a reverse communication pattern from the workload clusters to CAPI, so essentially establishing the connection from the workload cluster side, so that we can operate clusters in such networks. I wanted to ask if there is interest in scheduling a meeting to discuss the possible approaches, so that we can move forward with this topic.
B
Oh yeah, definitely. It sort of feels like this would be good for a new feature group. It seems like a significant change, and potentially a lot of people are interested, so I wonder if we take the same approach that we did with the managed Kubernetes group, so that we formally meet to discuss it.
G
Yeah, I think that would sound good to me. I'm not experienced with how those groups are run and managed.
A
I think the first thing probably would be to open an issue to track this, and then start a Slack thread referencing that issue, so that people have a common place to go back and look at, and we can continue the discussion from there. Basically, use the issue as the tracking mechanism. Go ahead.
H
Sorry, yeah, there is an issue open on this. Let me find it and I'll add it into the Google Doc.
I
Thank you. So this is the issue which we have been facing: when we enable the autoscaler with a ClusterClass-based cluster, the replicas field that gets set kind of conflicts with the reconciliation; the cluster topology reconciler goes and reverts it back to the default replica count which is set in the ClusterClass. I had discussed this on the autoscaling SIG as well; Michael is aware of it.
I
So
the
one
of
the
ways
to
fix
this
is
that
bringing
in
the
or
making
the
classic
cluster
reconciliation
Loop,
however,
of
these
annotations
cluster
Auto
scale
annotations,
so
that
it
skips
the
replica
part
when
it's
reconciling.
So
this
is
one
of
the
options
I
wanted
to
know
on.
There
was
something
similar
done
for
the
machine
pool,
also
where
a
new
annotation
wasn't
introduced.
So
I
wanted
to
know
the
thoughts
about
the
from
The
Forum
that
what
would
be
the
right
way
to
fix
this.
J
Yeah, I think what you're proposing sounds good to me. If we see the annotations for the minimum and maximum size for the autoscaler on the cluster class, we need to make sure we set the replicas to the minimum to start with, and then, yeah, I think what you're saying: if those annotations are there, we need to not manage the replica count for those machine deployments or whatever. I don't know how tricky that is, but that sounds like the right approach to me.
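Here is a minimal sketch of the kind of check being discussed, assuming the annotation keys used by the cluster-autoscaler's Cluster API provider for node group sizing; the helper name and the exact hook point in the topology reconciler are hypothetical, not existing Cluster API code.

```go
package topology

// Annotation keys used by the cluster-autoscaler Cluster API provider to mark
// a node group's size range. Assumed here; check the autoscaler docs for the
// authoritative keys.
const (
	autoscalerMinSizeAnnotation = "cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size"
	autoscalerMaxSizeAnnotation = "cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size"
)

// replicasManagedByAutoscaler reports whether the autoscaler owns the replica
// count for a MachineDeployment topology, in which case the topology
// reconciler would skip defaulting or reverting .spec.replicas.
// (Illustrative helper, not existing Cluster API code.)
func replicasManagedByAutoscaler(annotations map[string]string) bool {
	if annotations == nil {
		return false
	}
	_, hasMin := annotations[autoscalerMinSizeAnnotation]
	_, hasMax := annotations[autoscalerMaxSizeAnnotation]
	return hasMin && hasMax
}
```

The idea is that when both annotations are present on a MachineDeployment topology, the topology reconciler would leave the replica count alone instead of reverting it to the ClusterClass default.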
I
Okay, cool. Actually, I experimented with changes like that and got very favorable results, so I'll post that set of changes for review, and then we can take it further from there.
A
Yeah, sounds good, thanks for that. Are there any more questions, comments, or concerns on the autoscaler with ClusterClass-based clusters?
H
Yeah, so I just wanted to highlight this in case anybody has come into contact with it. I don't know what the extent of this issue is, but we had a couple of reports in Slack, on that thread, of issues with the ClusterCacheTracker in 1.2.7 and 1.3.0. I haven't been able to replicate it yet, and I haven't managed to get a reproduction case from the people who reported it just yet.
H
So if anybody has seen these errors, particularly with larger clusters or when creating a large number of clusters at once, it would be great if you could respond to that thread or drop it in Slack, along with anything that can actually reproduce it, because I haven't been able to get to grips with this, and it could be a very critical bug, because the ClusterCacheTracker is kind of used for everything. So yeah, I just wanted to highlight that, but I don't really have the details on it yet.
A
Are there any questions, concerns, or thoughts on the ClusterCacheTracker issue?

I don't see any hands. Moving on to provider updates: I don't see any entries under provider updates, so let's move on to feature group updates. Jack, you have the first one.
F
Right, so this is a totally new thing: feature groups. Hopefully folks are okay with feature groups as a concept. We started with "working group", but didn't want to collide with the Kubernetes concept of a working group, so we're kind of piloting "feature groups". Feel free to add any thoughts if you think that's a misleading moniker or something like that. We had our first discussion around managed Kubernetes; there were a lot of folks present, and I'm super optimistic about where this is going. We'll be hanging out there for a long time.
F
So
hit
me
up
on
slack
I'll,
add
some
links
there
as
well
to
help
get
involved
and
I
also
plan
to
send
out
a
blast
in
the
mailing
list
today,
which
I
haven't
done
so
just
wanted
to
announce
that
it
seems
like
we're
official
now.
A
Awesome,
that's
great
that
any
more
points
that
anyone
wants
to
add
to
either
the
discussion
topics
or
provide
updates
of
future
group
updates,
because
we
are
at
the
end
of
our
agenda
and
if
there's
nothing
else
in
get
back
a
few
minutes.