A
Hello everyone, today is January 4, 2023. Happy New Year to everyone. This is the SIG Cluster Lifecycle Cluster API office hours. Just a quick reminder about how this meeting works: this meeting works according to the CNCF code of conduct, so be kind with each other. We have a meeting agenda document, which I'm already sharing; to add edits to this document you need to subscribe to the SIG Cluster Lifecycle mailing list. I've shared the link to the document in the chat.
A
If you have any topics, please feel free to go down in the agenda and add your topic to the agenda, and also kindly add your name to the list of people attending, so we can keep track of the people interested in the project. So, let's get this started. As usual, at the beginning of the meeting we give some time for new attendees to introduce themselves, so I'll pause for a second; please use the raise hand feature if you want to speak and introduce yourself.
B
Yeah, so hi everyone, I'm Laksh and I'm a computer science undergraduate student. I'm learning Kubernetes these days and I came across Cluster API, so yeah, I'll be here to see what the project is all about. That's it.
A
Happy to have you here. We recently had a lot of graduates looking at Cluster API, and someone also wrote a thesis around it, so it's really nice to see people at university taking a look at the project. It's really interesting. Happy to have you here.
A
Before moving on, as was being written in the chat, if anyone is interested in hosting one of these meetings, feel free to speak up at the beginning or reach out. It is always nice to have new folks hosting the meeting. And there is also another interesting question: should we create a new doc for the new year? Yes, I will take care of it after the meeting, good point.
D
One comment about that: I think it would be good, if it's possible, to keep this document the same and use it for '23, and move the other stuff into another document, so that we don't have to change the links in, like, two or three places across Kubernetes, yeah.
A
Yes, we learned that the hard way last year, and this is what we are going to do, yeah. Okay, thank you for the reminder.
A
Okay, moving on: open proposal readout. If I remember well, we don't have open proposals at the moment, so we can move to the next topic in the agenda, which is discussion topics, and the first one is yours, Mike.
E
Yeah, so we had a community member, and I'm blanking on the name at the moment, who posted a proposal to the cluster autoscaler so that we can add labels and taints for the scale-up operations when scaling from zero. I'll give a little background here, and I want to say first thank you to Cameron for making this PR. I don't know if Cameron is at the meeting today, but this is kind of an awesome drive-by and great to see.
E
But when scaling from zero, and we talked about this in the past, the cluster autoscaler needs to understand the shape of the machines that it's creating, so that it can properly predict where pods will be scheduled and knows which nodes to create when scaling up. What we've implemented for Cluster API so far is the ability to expose the size of the machines that will be created in a scale-from-zero scenario.
E
But what we didn't do was expose a way to share what labels and taints will be present on the nodes that are created. The cluster autoscaler uses this because it looks at pending pods and tries to match those pending pods to nodes that it could create to fit them, so knowing what labels and taints are on the nodes is useful, because some pods have label selectors or taint tolerations.
E
So we had not come up with a good way to do this, and although there's some really good work going on around syncing labels from machines to nodes, Cameron proposed a way to add a few more annotations to the scalable resource, which would be the MachineDeployment or the MachineSet. Through those annotations, a user would be able to add the labels and the taints that they expect nodes created by that MachineDeployment will have. I thought this was a really novel solution.
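For reference, here is a minimal sketch of what such an annotated scalable resource could look like. The CPU and memory capacity annotations reflect the scale-from-zero support that already exists in the cluster autoscaler's Cluster API provider; the labels and taints keys shown are illustrative stand-ins for what the PR under discussion proposes, so check the PR and the provider documentation for the exact names and value formats:

apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: md-0
  annotations:
    # Existing scale-from-zero hints: the resource shape of machines this MachineDeployment creates.
    capacity.cluster-autoscaler.kubernetes.io/cpu: "4"
    capacity.cluster-autoscaler.kubernetes.io/memory: "16Gi"
    # Proposed additions (illustrative keys): labels and taints the resulting nodes are expected
    # to carry, so the autoscaler can match pending pods that use node selectors or tolerations
    # even while the MachineDeployment is at zero replicas.
    capacity.cluster-autoscaler.kubernetes.io/labels: "gpu=true"
    capacity.cluster-autoscaler.kubernetes.io/taints: "dedicated=gpu:NoSchedule"
spec:
  clusterName: my-cluster
  replicas: 0
  # Remaining MachineDeployment spec (selector, template, etc.) omitted for brevity.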
E
I'm surprised we didn't think about it before, but it would solve one of the problems we're having and it fits in well with the work that we've already done. So that's kind of my bias here, but I wanted to bring this to the community because it does touch a component that we all use, and I wanted to see if anybody else had thoughts or objections about this approach.
E
If nobody has any thoughts here, hopefully we could at least get some comments on the PR, and if there are no objections, probably by next week I'll remove my hold on this PR and we'll continue reviewing. So yeah, that's about it. I don't know if anybody has questions or anything; happy to answer or provide more context if needed.
A
To comment on this: I will take a look at it, thank you for the heads up. My main concern is to make sure that this proposal fits well with the work that we are starting for label and annotation propagation.
E
Yeah, thanks Fabrizio. And speaking of the work we're doing for label propagation now, my hope is that what Cameron is proposing here we would use in the same way that we use the overrides for, like, the memory and CPU and whatnot. So a user could use those annotations to specify the labels and the taints, but hopefully in the future, once we've got the syncing mechanisms defined a little better...
E
That's kind of my concern and why I wanted to bring it to the community, but yeah, I appreciate any reviews, so thank you, Fabrizio.
E
The one he mentioned, I don't know if it's related, but, okay, so yes, it is related, but not directly; it is kind of tangentially related. One of the things that we're talking about with this syncing between machines and nodes is that we want to have a way to expose what labels would go from a MachineDeployment all the way to a node, because the cluster autoscaler only knows about MachineDeployments or MachineSets at this point.
E
So it's kind of tangentially related to this, in that if we have a way to determine what labels will be on a node that gets created from a MachineDeployment, then the cluster autoscaler can use that information as well. So, yes, it's related, but not directly.
C
Oh, thank you. Yes, I asked that because I recently opened a similar PR which implements the label propagation, so I just had a concern about whether or not we have to be on the same page about it, I mean, implementing the label propagation in the same manner.
E
Yeah, that's a great question, Hiromo. No, I don't think there will need to be any changes to the work that's being done in the synchronization part of this. Everything that's being proposed in the autoscaler right now is very much separated from that, so it shouldn't affect the work that's going on currently to sync the labels.
A
Great, thank you for the question. Yeah, this was my concern as well. We have a couple of activities which are only loosely related, but they touch the same concept, which is label and annotation propagation, so the more eyes we can get to check that everything works well, the better. Okay, moving on: Stefan, the next one is yours.
D
Okay, so I think a lot of us noticed last year that we had some problems with the registry handling. Essentially, kubeadm changed the default registry and we modified KCP accordingly in 1.3.0 and 1.2.8 so that everything works.
D
Unfortunately, that fix wasn't, let's say, 100% perfect. But let's start from our current state. The current state is essentially that when KCP triggers a rollout, it modifies the registry of a cluster, in the sense that kubeadm join would use that registry, so every subsequent join would use the registry that KCP sets. And what we changed in 1.2.8 and 1.3.0 is that for all those versions, essentially for 1.22 until 1.25, KCP is setting registry.k8s.io.
D
So our assumption was essentially that all Kubernetes versions in that range had changed the default registry to the new one, so we could set the new registry for all of those versions. Unfortunately, what we noticed is that certain kubeadm patch versions still use the old registry.
D
I won't mention all the details, feel free to read the entire issue, but essentially, because of the way the kubeadm preflight check works, we have to make sure that the default registry in kubeadm is in sync with the registry that KCP writes into the workload cluster. As you can see with the green check marks, for all of those versions we have the same default registries, so everything is fine. So all the new kubeadm patch versions will work, but we have old ones, and those were the tests that were failing in core Cluster API, and in CAPC, CAPA and CAPM3 at least, probably more. Those old versions were all failing, and essentially we fixed all the end-to-end tests by just using newer versions, but that's just fixing a test, that's not really fixing the root issue. What I would suggest is essentially that we modify the KCP behavior.
D
I'm not sure if that was understandable; feel free to ask questions and follow up on the issue. Yeah, that's the suggestion.
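As a point of reference for the workaround side of this, a cluster operator can already pin the registry explicitly in the KubeadmControlPlane rather than relying on the default that KCP injects; when the field is set explicitly, KCP's version-based defaulting should not apply. A minimal sketch, with the surrounding spec omitted:

apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane
spec:
  kubeadmConfigSpec:
    clusterConfiguration:
      # Explicitly pin the registry kubeadm pulls control plane images from,
      # so old and new kubeadm patch versions agree on a single registry.
      imageRepository: registry.k8s.io
  # Remaining KubeadmControlPlane spec (replicas, version, machineTemplate, etc.) omitted.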
F
Yeah, so this is regarding Kubernetes 1.26 support. With Kubernetes 1.26 out, we are adding support to main and to the 1.3 and 1.2 release branches. Main is already updated with support for 1.26, so the quickstart and the clusterctl upgrade tests are all now running on 1.26.
A
Thank you. It is great to see the release team moving on, and all the coordination that is happening behind the scenes to move the release forward. So, if there are no other comments, I have the next topic.
A
Okay, so I have opened a PR with a small set of improvements to our backport policies. The TL;DR is the following. Why I did this: since basically 1.3 we are now supporting two release branches, currently 1.2 and 1.3, and during December this caused a little bit of noise with a lot of backport PRs.
A
So I took a look at it, and basically what I'm suggesting is the following: we have two branches under support, 1.2 and 1.3, and we are going to allow backports to both of them for the following changes: bug fixes, dependency bumps for CVE resolution, cert-manager version bumps, because we don't want our users to use a cert-manager version which is out of support, and, if possible, also changes required to support a new Kubernetes version, like the one that was just described.
A
Anything else, I don't know, a GitHub Action bump for instance, usually does not address a CVE, so it is not really required to backport it back to 1.2.
A
It is just noise. Doc improvements: we are backporting those because this is the current version of the book that the user sees. And we would also like to backport improvements to the CI signal and to the test framework: the CI signal because it helps the project's general health, and the test framework because sometimes there are asks from providers to improve something in the test framework. I hope this is clear, and I'll just stop here for questions.
E
So I just wanted to get a little clarification on that second part: this is talking about backporting from main to, like, a patch version on the currently supported release branch, is that correct? Okay.
E
Okay, yeah. I guess my only feedback might be the documentation improvements: I don't know if I would necessarily backport those unless they were related to a change that was in that version as well, but that might make the burden of review a little tough, I don't know. But that's really my only feedback; otherwise it looks pretty good. Thank you, Fabrizio.
D
I might be misinterpreting stuff, but I'm not sure I see that separation between all the supported branches and only the latest branch in the current PR. I thought it's more like the separation between...
D
Okay, I'm not actually sure. We have two lists in the PR: we have one for what we backport in order to keep the project up to date, and we have one for what we additionally backport in order to improve the user and developer experience.
A
Yeah, because the doc before was, let me say, writing the same rule, but what we have here, for instance, is dependencies, usually limited to CVE resolution, with backports of non-CVE-related bumps considered case by case.
A
It is written in parentheses, but yeah, we can go back over it. I think this is the main goal: the main goal is to keep the burden of backports manageable.
A
This kind of thing is important to have in both, while this one is something that we do in order to make the life of our users simpler, or our life simpler; but given that it is a trade-off between what we can do and so on, we have kind of chosen to limit it only to the latest version.
H
I was going to say something similar, yeah, that the PR didn't seem to match what is in the doc. But what was in the doc seemed a bit clearer to me, so I think it's a lot easier to understand.
H
The only thing I was unsure about is, for the latest supported branch, dependency bumps not for CVE resolution: that seems a bit wide to me.
H
Don't
think
that
we
should
like
I,
don't
know
that
we
should
backboard
every
dependency
bump,
especially
since
some
dependency
bumps
are
related
to
like
kubernetes
version
or
controller
runtime
version
and
can
have
breaking
changes
and
they're
more
like
features
than
bug
fixes
to
me
so
I
don't
know
I
feel
like
it
should
still
be
Case
by
case
for
those
or
we
should
have
some
sort
of
restriction.
D
Yeah, and I think for all of those dependency bumps we always have certain limitations. Like, I don't know, if we would have to bump client-go a minor version, that's definitely not something that we would just do; that would be a huge discussion. So I think a basic sanity check: don't bump major versions, don't bump minor versions of dependencies that you know are problematic.
A
Okay, let's keep it feasible for both, and thank you for the comments. I will try to rewrite the PR in that shape if it's simpler for the reader. One last note: keeping the old branches in good shape is also work.
A
Okay, moving on. Furkat?
G
Hey
folks
yeah,
it
would
be
quick
just
wanted
to
bring
up
the
copies
yeah,
all
that
mailing
list,
that
you
see
that's
used
for
core
copy
and
other
providers
that
I
mentioned
there.
This
is
the
lotion
caption
couple,
meaning
that
the
people
on
that
car
and
all
at
least
will
be
receiving
the
emails
for
all
the
CIA
issues,
which
is
a
bit
annoying.
D
I think what we should do is open an issue in the core CAPI repo and mention the maintainers of all of those providers, just so that they are aware, because I'm pretty sure not everyone is here. My suggestion would be essentially that we ask those three providers if they want to have an alert mailing list, and then they have the option to essentially drop our mail or move to their own one. I think that's fair, considering that, yeah, as I said, I'm pretty sure nobody from those providers is on our alert list, and we have a lot of other providers who have their own mailing lists.
A
Thank you, Furkat and Stefan. I also agree that those providers should move to their own list, and so it is up to them to choose to move to a separate alert list or to simply drop it because they don't use the mailing list. Yeah, let's open an issue, loop them in, and see how it goes. Otherwise, we will take action and open a PR basically removing the mailing list from their jobs. But yeah, thank you for highlighting this problem.
A
Okay,
moving
on
provider
updates
the
first
one
is
kubemark
provider
with
Mike.
E
Yeah, thanks Fabrizio. So over the holiday time we released version 0.5.0.
E
It mainly contains some bug fixes and then updates so that it works with the latest versions of Cluster API, and I want to give a special call out to Killian Muldoon: thanks for all the help you did there, you really helped us get to the next level.
E
We've probably got another release coming, I'm guessing maybe next month; Fabrizio has some really exciting patches up there to help the testing framework, so I'm looking forward to that. And I think, probably at some point in the first half of this year, if we can get a lot of these testing changes that Fabrizio is proposing in place, we might want to get to the point where we could declare a 1.0 version of the kubemark provider and then kind of start from there.
E
You know, once we have a really solid testing framework in place. We do have a little bit of end-to-end testing now, but the work that Fabrizio is doing is really great, so when we get to the point where we have a really solid CI that we know is easy to update and to change versions and everything, I'll probably look to the community to see if we can rally around getting a 1.0 version released at that point. So anyway, yep, that's it.
A
Thank you. Just a side comment on this work on the end-to-end tests: if everything turns out well, basically we will have in Cluster API an autoscaler test that can be run with any provider, not only with kubemark, and yeah, this will be nice for everyone.
I
Hey everybody, thanks Fabrizio. A real quick shout out that we'll be cutting patch releases in CAPZ basically after this meeting, I think, so for folks who've been waiting for that, it's been a month or so, expect that later today. That's really it. I'll do the increment math at some point, so 1.5.(existing plus one) and 1.6.(existing plus one).
A
Thank you, great to hear this. Then we have feature group updates.
I
So for folks who haven't heard, it's Wednesdays at 9:00 a.m. Pacific. I know that we're scattered across lots of time zones; basically, the easiest math is that it's an hour before this meeting, on this same channel.
A
Okay, we are at the end of our agenda. Is there some last-minute topic that we want to talk about?