A
If you haven't joined us before, we usually do a little introduction at the beginning of the meeting, so we'll get to that in a second. In the meantime, if you have any agenda items, feel free to add them to the agenda under discussion topics, and if you'd like, you can put your name under the attendee list. That helps us keep track of who's attending these meetings and know how community health is doing. All right, so let's get started.
A
First off, if you are new to this meeting or to the Cluster API community, or you haven't said hi before, now is your chance to introduce yourself and tell us a bit about why you're here. I will mute for a second, and anyone who wants to say hello, feel free to unmute and say hi.

But otherwise, let's get right into discussion topics. We have a couple of PSAs, and I think, Oscar, you have the first one. Go ahead.
B
Thank you, Cecile. On behalf of the release team, I would like to give a small warning that next week on Tuesday we will create the 1.4 release branch, and with that we will also have a feature freeze for 1.4. So if you have anything that you would like to get into 1.4, it's urgent to get that merged as soon as possible. I will send some links in the chat if you would like to read more about the release cycle, etc. Thank you.
A
Thanks, Oscar. Any questions for the release team about the process, the code freeze, or anything like that?
C
Yeah, I have a PR that I just got a little bit of feedback on, but I'm trying to get it in for this release. It's been taking a little bit of time to get a review on it. If I get those changes in today, is there a way I can get help to try to get it into this release? It's a pretty minor change. Is there someone who can help me get that in, or should I just wait for potentially the next one?
C
I'm not sure; I haven't contributed to core CAPI before, so I'm not sure about the turnaround time on things. I don't want to pressure anybody, but at the same time I don't want to accidentally miss it just because I didn't ask.
A
For sure, yeah. I think lots of people are probably running into the same thing as you, so that's a totally valid concern. I think we tend to have more contributors than reviewers, so reviewers are sometimes spread a little thin. What you can do is try to get attention on your PR when it's ready, so that reviewers don't have to keep checking back on it to see if you pushed the changes or not, and hopefully we can help you, especially if it's minor. I think smaller PRs are easier to review, so the smaller the better.
A
Totally, yeah. And I see a few people volunteering to help in chat as well, so cool. Fabrizio, you had your hand up?
D
Yeah, sorry, I joined a little bit late, so I hope I'm not saying the wrong stuff, but I have two comments on the release. First of all, I really appreciate the work of the release team: it's giving us a cadence, and it's helping us get to a reasonable level of quality. So I would like to give a shout-out to Yuvaraj, Oscar, and Furkat, and to all three sub-teams of the release team. I really appreciate their work.
D
They are freeing up resources for us to get better CI signal and to improve stuff, and I'm seeing results; I'm really happy about the experiment. That's the first consideration: a great shout-out to those folks. The second is a call-out for providers, because according to our release schedule, now is the time for providers to start looking at this release.
D
To talk about some of the changes: what is the big difference between this release and the previous one? In the previous one we added the usual huge amount of PRs, but we mostly worked on specific features like ClusterClass or Runtime Extensions, so the changes were focused and impacted one area. In this release the work is a little bit different, because we worked more or less across the entire code base.
D
To make an example, we did KCP remediation, which affects KCP. We did label propagation, which affects MachineDeployments, KCP, and machines through to nodes, so it touches most parts of the code base. And we also did work on Runtime Extensions. I think that we have good signal, since all the PRs that merged are covered by unit or end-to-end testing, etc. But you know better than me: the devil is in the details.
A
Thanks, Fabrizio. Those are really important call-outs. I wonder if we should open issues in some of the main provider repos that we know about to encourage them to test the RC; maybe that could be something we do. I can follow up on that. Awesome. And then, just to remind everyone, for those who don't know the dates by heart: the next release, v1.4.0, the next minor release of CAPI, is planned for March 28th.
A
So that's two weeks after the code freeze starts, just FYI. And then we'll be starting the 1.5 release cycle, with a release planned, I believe, four months later, on Tuesday the 25th of July.
A
All right, if there are no more questions on this topic, let's keep going. Yuvaraj, you have the next item.
E
Thank you. So, with in-place propagation in place for MachineDeployments, MachineSets, and Machines, we are now introducing rolloutAfter support in MachineDeployments, which is a way for users to manually trigger a rollout. As part of that implementation, this PR basically modifies the semantics of the machine template hash value that is stored as part of a label.
E
This is because, when we manually trigger a rollout, the new MachineSet that's created is essentially equal to the existing MachineSet: the entire spec and everything is the same. But you need a way to distinguish them, so that the MachineDeployment controller can identify which MachineSet it should scale up and which MachineSets it should scale down.
E
The comment that is highlighted right now gives a brief description of exactly what I'm explaining, and also notes that we are adding this random string at the end of the label value. I just wanted to call this out because we had opinions on how the label value should look when we were working on in-place propagation for MachineDeployments.
E
So I wanted to call this out and see if there are any other opinions on the way we are doing it here, or if this is all good. And just to add to that: even though we call it the machine template hash, we never actually treat it as a hash, in the sense that we never compare it against a freshly computed hash of the machine template, and so on.
E
Yeah, so I wanted to call it out so that folks who are interested can take a look at this and comment if they have any thoughts. And again, to wrap up: this is important for us to get in, because with in-place propagation, users cannot trigger a rollout on a MachineDeployment by just changing an annotation anymore; that change would just be propagated in place and would not trigger a rollout.
E
So this is important for users to have, so that they can manually trigger a rollout on MachineDeployments. Yeah.
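To illustrate the mechanism described above, here is a minimal Go sketch. The function names and the suffix format are illustrative assumptions, not the exact Cluster API implementation; the point is that the label value is a template hash plus a random suffix, and controllers only ever compare label values for equality rather than recomputing the hash.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// computeTemplateHash hashes the machine template. Illustrative only: the
// real controller hashes the template object, not a plain string.
func computeTemplateHash(templateSpec string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(templateSpec))
	return h.Sum32()
}

// newHashLabelValue appends a random suffix so that two MachineSets created
// from an identical template (e.g. after a manual rollout via rolloutAfter)
// still get distinct label values.
func newHashLabelValue(templateSpec string) string {
	const charset = "abcdefghijklmnopqrstuvwxyz0123456789"
	suffix := make([]byte, 5)
	for i := range suffix {
		suffix[i] = charset[rand.Intn(len(charset))]
	}
	return fmt.Sprintf("%d-%s", computeTemplateHash(templateSpec), suffix)
}

func main() {
	// The same template yields two different label values, which is how the
	// MachineDeployment controller tells the old and new MachineSets apart.
	fmt.Println(newHashLabelValue("machine-template-v1"))
	fmt.Println(newHashLabelValue("machine-template-v1"))
}
```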
A
Okay, thank you very much. I owe you a review on this; we'll take a look today. Anyone have any questions on this PR?
A
All right. If not, Mike, you have the next one.
C
Yeah, so we're interested in somehow getting Karpenter support into CAPA and CAPI, and I know some of this is better brought up in the CAPA session, which I think is Monday or the following Monday.
C
But I was curious about pulling the idea of autoscaler support beyond just the generic Cluster Autoscaler from the SIGs, and whether there has been any sort of discussion or effort already underway for something like Karpenter, because I know there's a lot of excitement and interest in the community around it. So I just wanted to poll and see if anybody knew anything about that before I went too far down the rabbit hole.
A
All right, we got a few hands. I think, other Mike, you were first.
F
Cool, thanks. So I think this is a really interesting topic; thanks for bringing it up, Mike. I had investigated about a year ago, or maybe a year and a half ago, trying to write a Cluster API back end for Karpenter, but I ran into a bunch of problems. It wasn't quite right in terms of what I was trying to do.
F
Maybe look at it the other way around, though: what if a Cluster API user could run Karpenter against their CAPA deployment, and then CAPA could somehow be aware of that? I'm kind of curious if there's some crossover into the Managed Kubernetes feature group kind of stuff, because I'm not totally convinced that it would be appropriate to write a Cluster API backend for Karpenter, but I could totally see someone who's using CAPA wanting to run Karpenter on the back end.
F
And then it would just be a matter of Cluster API somehow being able to be aware of the resource changes that Karpenter is making, because it brings in all sorts of different-sized nodes and everything, and I'm not sure what it would take to get CAPI configured properly to be able to see what Karpenter wants to do. So yeah, that's my take on it.
G
Yeah, those are good remarks, Mike. So I've actually looked into Karpenter more recently than that, very superficially, but enough to get a feel for how it works, and I share Mike's observations about the difficulty of integrating it into CAPI.
G
If it's okay, I'll go into a little bit more detail. I'm not sure exactly how it is in CAPA, but I would guess that there's a sort of integral relationship between a MachineDeployment in CAPA and a particular flavor of underlying compute in AWS, and Karpenter really doesn't use that model; it extends it.
G
So in order to get it to work in a CAPI provider, I think you could massage the way a MachineDeployment looks to have a more flexible configuration interface. Rather than declaring "this MachineDeployment is going to create machines using this SKU, which has this CPU profile and this memory profile," you could maybe imagine a new type of MachineDeployment definition in CAPA that, rather than defining those, just says: I'm a Karpenter machine deployment.
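Purely as a thought experiment following this description (none of these Go types exist in CAPI or CAPA today; every field name here is invented for illustration), such a definition might swap the fixed instance recipe for constraints that Karpenter resolves at provisioning time:

```go
// Hypothetical API sketch; none of these types exist in CAPA today.
package sketch

// A conventional MachineDeployment pins one instance recipe and scales it
// horizontally.
type ConventionalMachineSpec struct {
	InstanceType string // e.g. "m5.large": one SKU, one CPU/memory profile
	Replicas     int32  // number of identical copies of that recipe
}

// A "Karpenter machine deployment" would instead express constraints and let
// the autoscaler pick heterogeneous instance types to fit pending workloads.
type KarpenterMachineSpec struct {
	InstanceFamilies []string // allowed families, e.g. ["m5", "c5"]
	MinVCPUs         int      // smallest acceptable node size
	MaxVCPUs         int      // largest acceptable node size
	// Note: no Replicas field; node count and sizing are driven by demand.
}
```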
G
It would sort of be a two-headed beast that would require evolution, probably not in CAPI per se, but certainly, I think, in every provider that I can imagine, because the Karpenter approach is just different from the expectations the CAPI providers are used to, which is to say that you define a kind of recipe for a VM and then you horizontally scale those as replicas.
G
So
in
Carpenter
there
is
horizontal
scaling,
but
but
it's
not
the
each
thing
is
a
sort
of
snowflake
that
Carpenter
is
trying
to
sort
of
heterogeneously
manage
for
purposes
of
scaling
density
and
all
that
kind
of
thing.
Oh
Michael,
you're
raising
your
hand,
I'll
pass
it
back
to
you
and
Cecil
for
over
the
time
boxes.
G
Okay, well, I mean, one more level of detail: I'm pretty sure that a novel provider would have to be written. I know folks in AKS, I work with the folks in AKS, and I know there are investigations happening there about how to integrate Karpenter into AKS, and that would require novel providers. Just because Karpenter gives you sort of AWS for free now, it's not going to work in CAPA as is, so it would probably require a new back end, like Mike was saying.
G
Potentially you could evolve the current AWS default implementation to include CAPA support, but my read on that, based on sort of circling that project, is that that would not be the most likely outcome.
F
So what I was going to say was: one of the really cool things about Karpenter, for people who haven't looked at it very much, is that it can make all sorts of really cool decisions, like changing instance sizes and whatnot, trying to give you the best fit from what's in that pool, as it were. And it can also do things like reflow pods and move them.
F
You know, to different places and whatnot. So I think one of the difficulties about bringing it to Cluster API is that a MachineDeployment probably is not sufficient for describing the type of grouping that Karpenter is going to want to do, and I'm wondering (mainly this is just my own ignorance about the code) whether something like a MachinePool isn't closer to what Karpenter would be doing. In that respect, my understanding is that MachinePools can have heterogeneous machine types in them, and given that, a MachinePool is probably a more applicable reference for what Karpenter is doing. But we'd still need some way to bring the information back and make sure that there's a reconciliation between the two. The other thing about Karpenter, too, is the way it works internally.
F
It has kind of a lot of callback mechanisms for when it sets off activity in the cloud, and when I was looking at that in terms of writing a Cluster API back end, it didn't seem to comport well with what we were doing in Cluster API; I felt it would be a really slow approach to autoscaling through Cluster API. That's why I say maybe inverting it is the other way to do it.
C
Yeah, my last question was: I did see in CAPZ there was the notion of an external autoscaler, because of AKS. Does anything like that play into it, as we talk about the machine pool aspect of this, or does that only work with the traditional autoscaler? Just out of curiosity, because I know CAPZ has it.
A
I can answer that question. All right, go ahead. No, no, go ahead. So, yes and no. What the external autoscaler support essentially allows you to do is delegate the management of the replica count to some external process: CAPZ and CAPI just stop managing replicas, and all they do is observe replicas and reflect that state back into the status.
A
So the replica count is no longer driven by the MachinePool spec; it's driven by some external thing, which could be the Azure autoscaling feature, it could be some other autoscaler you're using, it could be the Azure cluster autoscaler, anything like that.
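As a rough sketch of the observe-don't-enforce pattern described here: the annotation key below is modeled on the marker Cluster API uses for externally managed replicas, but verify the exact key against the version you run, and the types are simplified stand-ins, not the real API.

```go
package main

import "fmt"

// Simplified stand-ins for the real CAPI types; illustration only.
type MachinePool struct {
	Annotations  map[string]string
	SpecReplicas int32 // desired replicas from the spec
}

type InfraPool struct {
	ObservedReplicas int32 // what the cloud / external autoscaler actually runs
}

// Assumed annotation key, modeled on CAPI's externally-managed-replicas
// marker; check the exact key in the CAPI version you use.
const replicasManagedBy = "cluster.x-k8s.io/replicas-managed-by"

// reconcileReplicas mirrors the "yes and no" answer above: when an external
// autoscaler owns replicas, the controller stops enforcing spec.replicas and
// only reflects the observed count back into status.
func reconcileReplicas(mp MachinePool, infra InfraPool) (statusReplicas int32, enforceSpec bool) {
	if _, external := mp.Annotations[replicasManagedBy]; external {
		return infra.ObservedReplicas, false // observe only
	}
	return mp.SpecReplicas, true // normal path: drive the infra toward spec
}

func main() {
	mp := MachinePool{
		Annotations:  map[string]string{replicasManagedBy: "external-autoscaler"},
		SpecReplicas: 3,
	}
	status, enforce := reconcileReplicas(mp, InfraPool{ObservedReplicas: 7})
	fmt.Println(status, enforce) // 7 false: status follows the cloud; nothing is scaled
}
```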
A
I don't know that it would just work out of the box with Karpenter, though, because of that concept of different VM sizes and different node pool types, of adding new nodes versus just scaling up an existing pool of nodes. And what Mike was saying about having heterogeneous instances inside machine pools: that's not totally true today in CAPI, at least. I don't know about CAPA, but CAPZ does have one VM size per MachinePool, or per AzureMachinePool, even though there is support for heterogeneous VMSS in Azure. So Azure supports it, but CAPZ right now only does one instance type per pool. And I see new hands coming up; I'm not sure who was first. I can potentially check.
G
Yeah, just a quick response to your comment about the sort of passive external autoscalers, Cecile: in fact, I think something like that could work. It would just be very clumsy if you simply allowed the current MachineDeployment configuration spec to sort of wrongly reflect data that is actually no longer under the enforcement of CAPI at all. So I bet you could prototype something super quickly which essentially short-circuits all the MachineDeployment reconciliation and simply calls Karpenter, or some other source of authority, that says how many machines are in this MachineDeployment right now, and just updates that. But the point is that the MachineDeployment definition right now assumes that every machine can generally be expressed with a single recipe, and then the replica count just says how many of those there are, whereas in Karpenter every machine is going to have a slightly different profile.
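A minimal sketch of the prototype described above, with hypothetical names throughout (ExternalAuthority and its CountMachines method are inventions for illustration): the reconciler skips the normal rollout and scaling logic entirely and just mirrors whatever the external system reports.

```go
package main

import (
	"context"
	"fmt"
)

// ExternalAuthority is a hypothetical stand-in for Karpenter or any other
// system that actually decides how many machines exist right now.
type ExternalAuthority interface {
	CountMachines(ctx context.Context, deployment string) (int32, error)
}

// fakeKarpenter simulates such a source of authority for the demo.
type fakeKarpenter struct{}

func (fakeKarpenter) CountMachines(_ context.Context, _ string) (int32, error) {
	return 11, nil // nodes may be heterogeneous; the count is all we mirror
}

// reconcile short-circuits normal MachineDeployment reconciliation: no
// template hashing, no scale-up or scale-down, just ask and record.
func reconcile(ctx context.Context, authority ExternalAuthority, deployment string) error {
	replicas, err := authority.CountMachines(ctx, deployment)
	if err != nil {
		return err
	}
	// In a real controller this would be a status update on the resource.
	fmt.Printf("status.replicas for %s set to %d\n", deployment, replicas)
	return nil
}

func main() {
	_ = reconcile(context.Background(), fakeKarpenter{}, "md-0")
}
```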
F
The thing we'd have to accept is this notion of heterogeneous node pools. Karpenter is awesome because it can do all these really cool calculations: it has AWS's inventory of machines to pick from, and so it can pick the appropriate ones to fit the workload you're running. But I think that's a concept that we would have to kind of accept in some ways. And I think everything Jack said is absolutely right.
A
I'm going to wrap this discussion up just in the interest of time, but to answer your question: it sounds like there is lots of interest, just based on how much time we spent talking about this. I also saw that Jonathan Innis, who I believe is an AWS Karpenter maintainer, is on this call and commented about potentially working together and aligning closer with CAPI, and I think that would be great. So I think the next step is probably for the folks here who were interested (me, Mike, Jack, elmiko, and Jonathan) to connect and see what we can do. All right.
A
Thanks, everyone, let's move on. Could I get a bit of a scroll, Jack, please?
A
Awesome. So, moving on to provider updates: CAPZ. Actually, Jack, you have this one; go ahead.
G
Okay, cool. So just a couple of PSAs. One: 1.8.0 (this uppercase B is driving my OCD; let me correct that) is on track to release today. We're really excited about this; it's a pretty big release. The key thing is that we're graduating managed Kubernetes from experimental, so grab your 1.8.0 bits soon and test them out in your staging environments if you're using AKS with CAPZ. We'll have more details in the release notes about that.

It's also worth shouting out that Cecile has done a ton of work to transfer all of our reference and test templates to use the out-of-tree cloud provider by default. We're still going to be testing the in-tree provider for legacy Kubernetes scenarios. If you don't know what I'm talking about, feel free to hit me up on Slack and I can explain, but this is something that I think all providers should be doing, so I wanted to nudge the other provider maintainers.
A
Thanks, Jack. Any questions? All right, Jonathan Tong, I believe you have the Helm charts updates, or, sorry, the Helm provider updates.
H
Yeah, I just wanted to follow up a bit more about the CAAPH image publishing. I got the staging repo set up, and I still have a testing PR for the actual job, but that should be good to go as well. So I cut a release tag, yeah, we have two Jonathans now. We have a 0.1.0-alpha.0 release tag that I just cut, so we can try to publish an image and see what happens.
A
Awesome work. Any questions for Jonathan about this?
G
Cool, real quick: just another reminder that we've got a 9 A.M. slot in this Zoom, and we are at the part of the draft proposal doc where we're talking about defining a new Cluster API CRD that wouldn't include a control plane as a requirement. So you can come for the managed Kubernetes and stay for the Cluster API controversy; we're going to be talking about all kinds of controversial topics. But you don't have to come to that; you can also just contribute in the doc.
A
Awesome, thanks. All right, any last-minute questions, concerns, comments, shout-outs, intros? That was your chance.
A
All right, if not, have a great Wednesday, everyone, and see you online. Bye. Thanks for sharing the screen, Jack.