A
We follow the CNCF code of conduct during this meeting, which basically means be nice to each other. If you wish to speak, please use the raise hand feature in Zoom (it's generally under Reactions), and please feel free to add your name to the attendee list. It's tradition around here for new folks to get a chance to introduce themselves, so if anyone wants to speak up and introduce themselves, please unmute and say hi.
B
Hi, I'm a member of the Kubernetes cluster team at Indeed, so I'd like to talk about some Cluster Autoscaler stuff at some point during this meeting. Thanks for having me.
A
I don't see anyone else unmuting, so let's get started with our agenda: open proposal readouts. Okay, there are no open proposal readouts. Does anyone want to give any updates on any of the other proposals?
A
No? Okay, I don't see any hands, so let's move on to the discussion topics. Stephanie, you have the first one.
C
Yep, can you open the issue that I linked here as part of the PR description? Yeah, okay. So we've had this issue open for roughly a few weeks now, which is about which Kubernetes versions we are testing, supporting, and actively maintaining in Cluster API, and I have now opened this PR.
C
So
my
impression
was
about
a
discussion
that
we
had
on
the
issue
that
there's
basically
agreement
on
diversity
on
the
support,
and
then
there
were
some
I.
Don't
know,
I
don't
know
if
it
was
even
diverging
opinions,
but
then
there
was
a
discussion
about
how
would
we
Implement
some
safeguards
that
I
don't
know?
C
Do
we
want
to
block
certain
versions
on
a
web
hook
on
a
controller
in
cluster
card
I,
don't
know
what
I
would
want
to
do
is
the
following:
I
would
essentially
split
the
two
discussions,
so
one
thing
is
like
what
is
the
policy?
C
What
are
we
testing?
What
are
we
actively
looking
out
for
that?
That
version
actually
works
in
a
sense
of
test
coverage.
We're
accepting
bug
fixes
all
that
kind
of
stuff,
and
the
other
part
is:
when
are
we
implementing
some
sort
of
safeguards
and
how
strict
are
they
and
can
I
opt
out
and
all
that
sort
of
stuff?
We
have
image?
C
We
have
an
issue
for
that
since
at
least
a
few
months,
so
I
was
more
like
linked
into
that
one
and
I
essentially
just
opened
a
PR
for
documenting
the
policy,
and
once
we
can
once
we
have
that
merged.
The
only
thing
that
I
would
do
is
I
would
ensure
that
we
are
only
testing
what
we
actually
want
to
support
and
we
can
also
go
over
our
existing
code
and
essentially
drop
code
powers
for
old
releases.
C
D
C
Do we have general consensus on what the policy should be, and also on reducing test coverage? You had a point last week about resource usage in the Kubernetes project, saving money, and not using the old registry, so let's check that. I think it would be good if there's basically agreement that we can get this in for 1.4, so we can already reduce our test footprint. The main thing I want to ask is: do we have general agreement? Any objections?
A
I,
don't
see
any
High
state,
so
I
guess
folks
can
always
go
on
the
pr
and
as
well
as
the
issue
and
comment
there
about
their
thoughts
on
the
new
proposed
doc
update
for
the
kubernetes
version
support
and
if
they
have
any
concerns
like
you're
sticking.
Also
there
is
it
there.
C
Yep
then
I
will
say:
let's
set
the
latest
consensus
for
a
week
and
I
mean,
of
course,
if
there
are
any
objections
which
are
more
like
minor
documents,
I
mean
the
only
merge
if
agreement,
of
course,
as
usual,.
A
Any
more
thoughts
on
this
I
I,
don't
see
any
hand
stays
moving
on
to
the
next
one
Matt.
You
have
the
next
one.
B
Thanks
so
a
few
weeks
ago,
we
ran
into
a
pretty
like
large
production
outage,
while
trying
to
like
do
like
a
note
of
node
update.
We
use
machine
deployments
with
Cappy
and
when
this
occurred,
we
cluster
Auto
scalar
wound
up
setting
the
our
total,
like
node
count
in
the
cluster
from
down
from
800
down
to
about
400,
which
is
pretty
scary.
About
two
weeks
later,
we've
been
digging
through
the
code
trying
to
understand
how
this
could
even
happen.
B
We
have
no
Max
unavailable
set
on
these
node
groups,
so
we
were
a
bit
surprised,
but
really
it
comes
down
to
the
way
that
cluster
Auto
scaler
tries
to
handle
nodes
that
fail
to
provision
within
a
certain
window
of
time.
It
will
effectively
mark
them
with
an
annotation
saying,
like
hey,
prefer
to
delete
this
please
and
then
it'll
also
drop.
The
replica
count
of
a
node
group
I
think
that
this
behavior
is
probably
not
healthy.
Given
that's
the
node,
the
replica
count
should
probably
stay
fixed
and
cluster
out.
B
Escalade,
probably
shouldn't
be
manipulating
that,
in
order
to
try
to
delete
nodes
effectively
and
I,
guess
I'd
like
to
get
thoughts
on
whether
we
could
shift
the
the
deletion
of
nodes
away
from
cluster
being
something
that
cluster
Auto
scalar
is
responsible
for
to
something
that's
Kathy
or
like
the
the
actual
providers
are
handling
such
as
with
these
example
PR's.
B
That
kind
of
show
that
we
could
basically
like
take
out
the
the
logic
that
does
the
downscaling
in
this
situation,
whenever
a
node
is
failing
to
to
become
healthy
from
coast,
article's
point
of
view
allow
it
to
still
set
the
The
annotation
and
then
automatically
and
then
rely
on
cluster
on
gappy
to
actually
do
the
Deep
provisioning
of
any
nodes
that
are
marked
for
deletion.
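
For context, the mechanism being described is roughly the following: the autoscaler annotates the Machine it wants removed and lowers the owning resource's replica count in the same pass. A minimal sketch of that state, with the annotation key assumed from the Cluster API provider's conventions rather than quoted from the meeting:

```yaml
# Illustrative only: what Cluster Autoscaler leaves behind today when a node
# fails to provision in time. The Machine is marked as preferred for deletion,
# and the owning MachineDeployment's replica count is decremented at the same time.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  name: worker-pool-a-xxxxx        # placeholder name
  annotations:
    cluster.x-k8s.io/delete-machine: "true"   # "prefer to delete this" marker (key assumed)
---
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: worker-pool-a
spec:
  replicas: 399                     # decremented from 400 by the autoscaler
```

The proposal above would keep the annotation but move responsibility for the actual deprovisioning to CAPI or the infrastructure provider.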
E
Yeah, I really appreciate all the detail that went into that issue. This is complicated, and I think the proposal that you're putting forth does make sense to me. One thing we have to keep in mind, though, is that the Cluster API provider implementation that we've put forward descended from an earlier time in development, and so Cluster API isn't the only consumer of that provider right now; we also use the Machine API against it.
E
So I think we'd have to really think about adding this change, because it would require anyone who's using that to make the same change to their providers as well. I think what you're proposing makes good sense; I would just like some time to check the details and make sure that we could actually do this change for everyone.
B
Yeah, I think what we're trying to propose here is that we don't change the default behavior of Cluster Autoscaler or of CAPI. We would basically be setting up flags that can be enabled to turn on this functionality, and I think that would effectively allow providers to opt into supporting this.
E
F
B
And what would be the difference? So, to go into the issue in a little bit more detail: the reason that Cluster Autoscaler is marking these nodes for deletion is because they're failing to join within a time window.
B
So effectively, Cluster Autoscaler has upscaled the node group and is waiting for the new nodes to be provisioned and join, and then after 10 or 15 minutes, if a node has failed to join, or rather has not joined yet, Cluster Autoscaler will say this one failed for whatever reason, we're going to mark it for deletion, downscale the group, and then try again. But then, if you have enough of those nodes stuck in provisioning status...
B
There is no restriction on how much Cluster Autoscaler can downscale simultaneously while this is occurring, and so even if you have a minimum node group size set, this bug still bypasses the hard minimum set on the node group, due to how Cluster Autoscaler tries to cancel a node that's stuck in provisioning status. Sorry, go ahead.
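
For readers following along, the "hard minimum" mentioned here is normally expressed as size annotations on the scalable Cluster API resource. A minimal, trimmed sketch, with the annotation keys assumed from the autoscaler's Cluster API provider documentation and the values purely illustrative:

```yaml
# Hypothetical MachineDeployment with autoscaler bounds. The report above is that
# scale-down of nodes stuck in provisioning can take replicas below min-size.
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: worker-pool-a
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "400"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "800"
spec:
  clusterName: prod          # rest of the spec omitted for brevity
  replicas: 400
```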
E
Oh okay, cool, I didn't know if Fabrizio had a follow-up there. I'm guessing that you tried this, Matt, but did you do anything around extending the time that the autoscaler will allow for a node to join? I forget what it's called; it's not the "unneeded" one.
E
B
Yeah, we've definitely been trying to lower that, because we're trying to support fallback instance types, and also...
B
So
we
can
only
do
one
node
group
per
AZ
we're
working
on
AWS,
so
we
have
to
have
we
end
up
using
three
node
groups
in
order
to
support
like
a
particular
instance
type
and
each
AZ,
and
then
also
if
we
want
to
support
like
three
different
types
of
incidents
fallbacks,
we
have
to
have
the
maximum
provision
time
low
enough
for
cluster
Auto
Scala
to
actually
fall
back
to
other
instance
types,
and
it
also
like
falls
back
to
each
AZ
first,
and
so,
even
if
you
have
a
five
minute,
Max
node
provision
time
you
have
to
it
takes
15
minutes
for
it
to
try
the
next
instant
type.
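
The knob being referred to here is the autoscaler's maximum node provision time. A minimal sketch of where it is typically set, as an excerpt of a cluster-autoscaler Deployment manifest (flag spelling taken from the upstream defaults, values illustrative):

```yaml
# Illustrative excerpt: only the relevant container args are shown.
# A low max-node-provision-time speeds up fallback to other node groups, but it
# also means slow-joining nodes get marked for deletion sooner (the issue above).
containers:
  - name: cluster-autoscaler
    image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.26.1   # example tag
    command:
      - ./cluster-autoscaler
      - --cloud-provider=clusterapi
      - --max-node-provision-time=5m
```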
E
F
Yeah, I also have a follow-up question. It seems, if I got it right, that your autoscaler in your case is targeting MachineSets directly. If instead you go through Cluster API MachineDeployments, we have rollout strategies, with maxUnavailable and so on, that would help to prevent this type of drop. So, let me say, I'm trying to understand the issue; what I want to make sure is that...
F
B
Yep. I think letting Cluster API do its thing correctly is really what we're trying to do, or to propose, with this approach, because to us the problem is that Cluster Autoscaler shouldn't really be manipulating the replica count in order to try and cancel nodes; it should be setting the replica count to the right amount and then letting Cluster API do the rest. Right now it works this way due to legacy reasons of what interfaces were available.
B
This
is
basically
the
only
way
that
cluster
Auto
scaler
was
allowed
to
to
cancel
nodes
by
giving
cluster
API
a
preference
or
a
preferred
annotation
to
say,
hey,
please.
This
prefer
this
one
forever
for
deletion
whenever
you
do
and
then
telling
it
to
downscale,
but
that's
a
I,
don't
think
that
the
down
or
that
the
downscaling
behavior
is
a
good
thing
at
all,
even
in
like
normal
behavior,
when
there's
not
a
note,
allow
it
occurring.
D
F
Thank you, I will take a look. My first impression is that if we try to fix this in Cluster API, the MachineSet is a pretty dumb abstraction: it does not have rollout policies, it does not have maxUnavailable, etc. So if you go to a MachineSet and set the replicas to 400, it will just delete down to 400 replicas, and the same will happen if you apply the annotation to 400 of the machines: the next reconcile will delete all 400 replicas in one shot. Yeah.
B
Sorry,
the
distinction
here
is
that
the
the
machines
that
are
being
deleted
are
different,
because,
whenever
you
have
two
different
machine
sets
active
during
like
a
node
rollout
from
like
a
new
spec
to
or
from
one
spec
to
another,
cluster
API
is
going
to
like
start
to
downscale
the
old
ones.
That's
kind
of
the
Crux
of
this
issue,
and
so
in
the
case
you're
correct,
saying
that,
like
if
we
have
400
annotations
that
are
saying
hey,
please
delete
me.
B
...400 nodes are going to be deleted by CAPI with this proposal, but these 400 nodes would be the nodes that are stuck in provisioning status, which actually haven't come up or entered the cluster yet; thus we're not going to be kicking off any workloads and causing issues.
A
B
I guess my final question: it's seeming like it's not unreasonable to do this, and if it's properly feature-gated, then it seems like we could just proceed with the PRs and continue the discussion on those in detail.
G
A
Looks good, great. Thanks, folks, thanks for the great discussion. If there are no follow-ups, let's move to the next item. Okay, I don't see any hands. Mike, you have the next one.
E
Yeah, so this PR has been open for a while now. It's basically about adding some annotations so that in scale-from-zero scenarios users can specify the labels and taints manually. We had talked about this several weeks ago, and I know there have been a few reviews. Fabrizio, you had a couple open; do you have an open question here?
E
I thought about this part here, and I just wanted to make sure... I'm keen to merge this, but I wanted to make sure you didn't have any follow-up questions or objections to us merging it.
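
For context, scale-from-zero support works by declaring what a would-be node looks like as annotations on the scalable resource, since no real node exists yet to inspect. A minimal sketch, with the label/taint annotation keys assumed from this discussion rather than quoted from the merged PR:

```yaml
# Hypothetical MachineDeployment annotated for scale-from-zero. The CPU/memory
# capacity annotations were already supported; the labels/taints keys are the
# ones being added by the PR discussed here (names assumed, verify in the docs).
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: gpu-pool
  annotations:
    capacity.cluster-autoscaler.kubernetes.io/cpu: "16"
    capacity.cluster-autoscaler.kubernetes.io/memory: "64G"
    capacity.cluster-autoscaler.kubernetes.io/labels: "node-role.example.com/gpu=true"
    capacity.cluster-autoscaler.kubernetes.io/taints: "gpu=true:NoSchedule"
```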
F
E
A
Great,
are
there
any
questions,
concerns
on
this
I,
don't
see
any
hands
going
on,
I
have
the
next
one,
which
is
I
believe
last
week
or
the
week
before,
that
we
had
this
discussion
about
doing
a
four
month:
release
Cadence
for
Cappy
for
the
rest
of
2023
and
then
going
back
and
revisiting
how
we
want
to
do
how
often
we
want
to
do
releases
in
2024.
So
with
that
I
created
a
PR
that
released.
That
gives
us,
like
the
preliminary
release,
dates
for
the
1.5
release
cycle.
A
With the four-month release cadence as discussed, the PR is open, so please take a look at the dates. It roughly follows the same release frequency that we had for 1.4 itself, close to 17 weeks, with just one note on it: the dates have been calculated based on when 1.4 ends and when we want 1.5 to release, and then we filled in the dates in between. So folks, please take a look and provide feedback on that. Does anyone have any questions or concerns on this?
A
I,
don't
see
any
hands
faced
moving
on
Joe,
you
have
the
next
one.
You
want
to
share
screen.
A
D
Okay, let me make sure... okay, cool. Oops, sorry, hold on, I wanted to share a screen, not the entire screen... where is it?
H
Share
you're
right
I
forget
that
every
time
thank
you,
okay,
so
real,
quick,
I
just
wanted
to
give
just
a
quick
demo.
So
the
cluster
API
for
oci
we
released
last
week
had
the
support
for
managed
kubernetes
using
the
new
kind
of
proposed
managed
solution.
H
So
I
just
wanted
to
kind
of
quickly
cover
that
and
see.
If
anybody
had
you
know,
questions
come
with
concerns
over
that,
so
I'm
using
open
lens,
I'm.
Sorry
I'm,
not
a
CLI
purist,
I
apologize!
This
is
just
quicker
to
show
I
think
so
anyway,
I've
already
spun
up
my
cluster.
H
It
takes
a
little
bit
to
spin
that
cluster
up
and
I
am
going
to.
Let's
see
if
I
can
grab
this
here
so
using
food
cuddle,
get
the
information
I
need
for
our
cluster
to
be
able
to
use
the
ocl
oci
CLI
here
to
set
up
our
kubeconfig.
H
H
But we can see down here in... let's see, where is it... the managed cluster. I'll go into the template here in just a second, I apologize, but we have our OCIManagedCluster, we're also going to have our OCIManagedControlPlane, and then, lastly, our... shoot, where is it... oh sorry, our OCIManagedMachinePool.
H
So those are our three new kinds of data types that we're going to use, and we can see that all of those are running and managing our current nodes.
H
H
You can see it? Yeah, okay, cool, I just wanted to make sure I shared the right thing. So this was designed around the option-three proposal from the managed Kubernetes proposal, and we kind of took that and ran with it.
H
So
we
have
our
our
manage
cluster,
which
is
going
to
kind
of
represent
the
infrastructure.
So
like
setting
up
a
network
and
and
things
of
that
nature,
then
we
have
the
oci
managed
control
plane,
which
is
going
to
be
the
oci
like
control,
plane
but
kind
of
the
cloud
provided
cluster
and
then.
H
Lastly,
we
have
the
manage
machine
pools
which
is
going
to
be
that
that
node
pool
I
believe
is
what
OTE
calls
it,
and
so
one
of
the
things
and
we
kind
of
ran
into
a
couple
of
quick
short
drawbacks
and
I-
think
one
of
them
I
brought
up
actually
in
the
community
already
was
we
didn't,
have
an
atomic
way
to
modify
the
machine
pool
from
the
oci
side
of
things.
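
As a rough illustration of how the three new kinds line up (not shown verbatim in the demo; the group/version and field names are assumed, so check the provider's templates for the real spec):

```yaml
# Hypothetical, heavily trimmed example of the three managed kinds demoed here.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1    # group/version assumed
kind: OCIManagedCluster            # infrastructure: VCN, subnets, and so on
metadata:
  name: demo
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OCIManagedControlPlane       # the cloud-provided (OKE) control plane
metadata:
  name: demo
spec:
  version: v1.26.2                 # illustrative Kubernetes version
---
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OCIManagedMachinePool        # maps to an OKE node pool
metadata:
  name: demo-pool-0
spec:
  version: v1.26.2                 # version and node image go together (see the note below)
```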
H
So we have to provide our image ID and, at the same time, also provide the Kubernetes version; that needed to be in our OCI machine pool spec, because otherwise we couldn't upgrade the machine pool. The other thing, let me see here... oh, the other thing that we have, and I think everybody's aware of this, is that machine pools are still in an experimental phase, so we have to ask users to enable the feature flag not only for machine pools but also for managed clusters. And then right now, and maybe others have already worked through some of this, but I think right now...
H
It might not be ClusterClass compatible given the current setup, and this is something we can talk about offline or next time. But those are the three main things we ran into going this route: one of them has kind of been mitigated, and the other two we'll continue to work towards in the future. One last thing: currently we are supporting all of our current OKE APIs through this provider. So that was kind of a quick dump there, but yeah.
A
That
was
nice.
Let
me
shed
my
screen
back.
A
I
H
That'd be awesome, that'd be awesome. And I do want to make a real quick call-out: it's my colleague Sean who has done this work, but the time didn't work for him, so I'm just demoing his hard work.
A
I,
don't
see
any
hands
race,
don't
have
anything
on
the
provider
updates
you
see,
Jonathan
is
adding
under
something
under
the
feature
groups.
G
A
Does anyone have any questions about the Cluster API add-ons for Helm? Yes, that's the one. I don't see any hands raised, so I guess we are at the end of our agenda then. Thanks, folks, thanks everyone for joining, see you next week... oh, Deepak, sorry, yeah.
J
G
J
Got it, okay, okay, that helps, yeah, because we definitely have the requirement of installing custom operators as well. Currently we use ClusterResourceSet, which is still an experimental feature, and we want to basically also start adding CSI and a bunch of other components as a pre-package through this whole Cluster API add-on framework. So maybe I'll get in touch with you, if you can provide some more details on how to use it.
C
G
The main idea is that we have a CRD that specifies a Helm chart, and you can put a label selector on your clusters, and it will install the chart on any cluster selected by the label. You can also configure the values of the Helm chart with values that come from your cluster as well.
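
A rough sketch of what that might look like in practice; the kind and field names below are assumed from the add-on provider's early design, so treat them as illustrative rather than authoritative:

```yaml
# Hypothetical HelmChartProxy: install a chart on every Cluster matching the
# selector, templating chart values from fields of the selected Cluster.
apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: HelmChartProxy
metadata:
  name: calico-cni
spec:
  clusterSelector:
    matchLabels:
      cni: calico                  # any Cluster labeled cni=calico gets the chart
  repoURL: https://docs.tigera.io/calico/charts
  chartName: tigera-operator
  releaseName: calico
  valuesTemplate: |
    installation:
      calicoNetwork:
        ipPools:
          - cidr: {{ index .Cluster.spec.clusterNetwork.pods.cidrBlocks 0 }}
```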
G
Okay, cool. Also, I was thinking about getting some testing set up so we can publish an initial image, just so people can try it out if they want to, without having to set up Tilt and everything. That's something I'll start working on this week as well.
A
Does
anyone
have
any
last-minute
questions,
concerns
comments?
Any
discussion
topics.
A
I,
don't
see
any
answers,
they
don't
see
anyone
adding
anything
to
the
agenda
thanks
everyone
for
joining
see
you
all
next
week
have
a
great
week.
Hi
thanks.
You
barrage.