A: Oh, well, welcome everyone, welcome to the Cluster API Provider AWS office hours on May 15, 2023. Just a reminder that we abide by the CNCF code of conduct, which basically means just be nice to each other.

A: If you could add your names to the list of attendees in the meeting notes document; this has been shared by Ankita in the chat. Thanks, Ankita, again. Also, if there's anything you'd like to discuss: I see we have got some agenda items, which is good, but if there is anything you would like to discuss that is not on that list, please add it to the list as well.

A: I can't see it. I can't actually see the whole list, so I don't know if there is anyone new, but based on the list in the doc I don't think there is, so we can continue on. If you'd like to speak, if you could use the raised-hands feature as well, that would be useful. I will... let me just see if I can... that's better, so I can see the list now. Yeah, if you could use the raise-hand feature, then you'll be asked to speak at the appropriate time, just so that everyone has a chance to speak.

A: So if we move on to the agenda, we have a PSA, which is that v2.1.1 was released last week.
A
Goga
did
a
lot
of
work
on
this,
as
did
Ankita
and
and
go
go
used
it
as
a
chance
to
learn
the
release,
process
and
I
think
he
did
two
releases
within
three
days
or
something
like
that.
So
yeah,
it's
great
work
by
him,
but
also
great
work
to
to
get
it
out.
It's
been
a
long
time
since
we've
done
a
a
major
release.
We've
done
a
couple
of
patch
releases,
but
yeah.
A
I,
don't
think
there
was
so
that's
good
brilliant,
so
we
can
move
on
to
the
agenda.
So
the
first
item.
Well,
the
first
two
items
is
Andreas.
The
floor
is
yours.
B: Thanks, yeah. So I just wanted to bring attention to those two issues; they're separate from each other. I've been working on those for a while. Richard, you might remember this draft, which I've filed by now. So the issue was that managed subnets are broken with version two of CAPA, which basically revolves around the ID field becoming required. So you cannot say anymore: I don't give an ID, please manage it for us, please, CAPA, create the infrastructure.
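For context, a minimal sketch of the two subnet styles being discussed, using simplified stand-in types rather than the exact CAPA API (field names here are illustrative): omitting the ID was how you asked for a managed subnet before v2 made the field required.

    package sketch

    // SubnetSpec is a simplified stand-in for a subnet entry in an
    // AWSCluster network spec; it is not the upstream type verbatim.
    type SubnetSpec struct {
        // ID of an existing subnet ("bring your own"). With this field
        // required, the managed case below can no longer be expressed.
        ID string
        // Fields used when asking the provider to create the subnet.
        AvailabilityZone string
        CidrBlock        string
        IsPublic         bool
    }

    // Pre-v2 intent: leave ID empty and let CAPA create (manage) the subnet.
    var managed = SubnetSpec{AvailabilityZone: "us-east-1a", CidrBlock: "10.0.0.0/24", IsPublic: true}

    // Bring-your-own: reference an existing subnet by ID.
    var byo = SubnetSpec{ID: "subnet-0abc1234567890def"}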
B: That's too much for this meeting, but yeah, if you want we can discuss a bit. So the options we had in an earlier meeting were basically to create a new CRD for networking specifically, either a big one which describes the whole network or smaller ones like an AWS subnet, but I don't like that approach too much, because it opens this up to becoming Terraform. So yeah, there are a few ideas in here with, yeah, advantages and disadvantages, so we're not in a hurry.
A: Cool, thanks, Andreas. Go for it, Ankita.

C: Yeah, so I just wanted to know the motivation behind this. Is it only because it's broken, or is there something else we are also thinking about?

B: It was the trigger, it was the trigger, and then, Richard, I think you were with us in the meeting and you said: hey, the CAPA networking code isn't great in general. I guess you can say more about this one. And then we thought: hey, what about such a draft document that thinks about some options, giving more flexibility, for example, or maybe also leading to a rewrite. Not sure, Richard.
A: You're right, it was the trigger. There have been a lot of issues raised that are all related to the networking code in various shapes and forms. Some of them are specific, you know, regressions like this, but then there are things like, you know, totally private networks and stuff like that, and actually the subnet code is pretty horrible, so we thought we could use this as the opportunity to completely rethink how we do the networking and make it more flexible and more extendable for the future.

A: So I guess the ask here is that, if people are interested, to review this proposal. Sure.

A: Yeah, and I guess the other thing is: if it would be helpful, we could do more of a deep dive into this. It can be, like, some, you know, working session for the people that are interested, at a different time, on a different call, maybe.
A: Either way, whatever works best. Mike has a comment.

A: Yeah, yeah, so yeah. So this is what this is all around, Mike, and I completely agree. There was, I don't know whether to call it a regression, but there was another undocumented feature that wasn't intended to be used the way it was, but it was used by people, and yeah, the decision was to...

A: At that time, and that was a mistake in hindsight, so yeah, I completely agree things changed in that area because...
B: Then what about I suggest some dates, maybe in Slack, and see who's interested?

B: An instance refresh should happen, so new nodes being rolled out, and that does not happen, and that's, yeah, annoying at best, because whenever you have a managed platform, like in our company, for example, and you make a change, then you have to manually trigger the cluster update, rolling all the affected nodes.

B: What you're showing, I think, is still not perfect, because this kind of assumes the cloud-init format, or assumes that the token is somewhere in the script in plain text. Not perfect. Ideally CAPA would tell us: hey, this is the checksum, or hey, whether something changed or not. That would be great. And yeah, Daniel, you want to comment first?
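As a rough illustration of the "tell us the checksum" idea, here is a tiny sketch with a hypothetical helper (this is not an existing CAPA API): digest the rendered user data so a consumer can see whether the bootstrap config changed without scraping tokens out of a plain-text script.

    package sketch

    import (
        "crypto/sha256"
        "fmt"
    )

    // BootstrapDataChecksum is a hypothetical helper: it returns a digest of
    // the rendered user data, which could be surfaced in status or as an
    // annotation so callers can detect changes cheaply.
    func BootstrapDataChecksum(userData []byte) string {
        return fmt.Sprintf("sha256:%x", sha256.Sum256(userData))
    }

    // ChangedSince reports whether the bootstrap data differs from a
    // previously recorded checksum.
    func ChangedSince(userData []byte, previousChecksum string) bool {
        return BootstrapDataChecksum(userData) != previousChecksum
    }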
D: That is, if I want to update, right, if I want to roll out a different machine, I need to create a new template, and what I'm reading here is that, rather than create a new template, this is an in-place change for the kubeadm config. Yes, which, exactly that... I don't know, that seems to go against the cluster API design.

D: Yes, I mean, I'm not saying that it should be the design goal. I thought that that was the right... that is, like, the status quo, that an infrastructure provider is not, you know, not going to pay attention to a KubeadmConfig.
B: Right, it seems to look different for machine pools. So for machine pools, what happens is we create an auto scaling group and launch template versions, and inside there you have this, what AWS calls user data, so basically what sets up the instance when it gets created. And in that KubeadmConfig spec you can set files, or pre-kubeadm commands and post-kubeadm commands. So all those fields are relevant for node creation and you can make actually relevant changes in there, not the Kubernetes version, but maybe other things like adding files.
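To make the fields being discussed concrete, here is a simplified sketch of the node-relevant portion of a KubeadmConfig spec; the real types live in the Cluster API kubeadm bootstrap API and carry more fields, so treat this as an approximation only.

    package sketch

    // File is a stand-in for a file written onto the node via cloud-init.
    type File struct {
        Path    string
        Content string
    }

    // KubeadmConfigSpec (simplified) shows the fields that end up in the
    // launch template user data and therefore affect how a node boots.
    type KubeadmConfigSpec struct {
        Files               []File   // extra files to write before kubeadm runs
        PreKubeadmCommands  []string // shell commands run before kubeadm join
        PostKubeadmCommands []string // shell commands run after kubeadm join
    }

    // Changing any of these changes the rendered bootstrap/user data, which
    // is why one would expect the nodes to be rolled afterwards.
    var example = KubeadmConfigSpec{
        Files:               []File{{Path: "/etc/sysctl.d/99-custom.conf", Content: "vm.max_map_count=262144\n"}},
        PreKubeadmCommands:  []string{"sysctl --system"},
        PostKubeadmCommands: []string{"echo node bootstrapped"},
    }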
D: From what I understand, right, that use case is supported by creating a new template, right, a new template that would point to a new KubeadmConfig.

A: Daniel, when you say template, are you talking here about a machine template, or...?

D: Yeah, yes, but... I think if you do change something in the KubeadmConfig spec in the KubeadmControlPlane resource, it will generate, I think, new bootstrap data. It will treat it as... it will be an equivalent sort of rollout.

D: It's an equivalent behavior to, you know, creating a new template for a machine, too.
B: Actually, there's a bug in there specifically for machine pools where, yeah, it should update the bootstrap token, which includes this user data script, and it doesn't do that. I actually found that, so that's the second thing I'm working on right now. What I envision right now, how it should work, is: I make a change to the KubeadmConfig, like adding a file or a command.

B: It updates the secret, so the bootstrap secret, immediately, and CAPA also reconciles immediately because it sees a change in the bootstrap data, and then the machine pool would also kind of immediately trigger an instance refresh, because I made a change that affects the node, so I want the nodes to be recreated. So by design... I cannot say how the design should actually look.
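A minimal sketch of the flow being described, purely illustrative and not the actual CAPA reconciler: compare a checksum of the rendered bootstrap data against the last one seen, and if it changed, publish a new launch template version and start an ASG instance refresh so the nodes get recreated. The callback parameters stand in for the real AWS calls.

    package sketch

    import (
        "crypto/sha256"
        "encoding/hex"
    )

    // reconcileMachinePoolBootstrap is a hypothetical reconcile step.
    // lastChecksum would be read from an annotation or status field;
    // bumpLaunchTemplate and startInstanceRefresh stand in for the EC2 and
    // Auto Scaling API calls.
    func reconcileMachinePoolBootstrap(
        bootstrapData []byte,
        lastChecksum string,
        bumpLaunchTemplate func(checksum string) error,
        startInstanceRefresh func() error,
    ) error {
        sum := sha256.Sum256(bootstrapData)
        current := hex.EncodeToString(sum[:])
        if current == lastChecksum {
            return nil // nothing node-relevant changed; no rollout needed
        }
        if err := bumpLaunchTemplate(current); err != nil {
            return err
        }
        return startInstanceRefresh()
    }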
A: Yeah, personally, I need to go back and check what we do on the unmanaged machine pools when we make changes. Does anyone know, off the top of their head, the behavior there?

E: So this sounds sort of similar, but I would have to take a close look to see if that's the same sort of thing that I saw on the EKS side. I can tell you that the machine pool handling is not great.

A: That's interesting. Yeah, I don't think we've ever honestly sat down and thought about the behavior that we actually want, because we've had issues in the past with AMIs, different versions of the AMIs in the same cluster and not pinning it and things like this. So I could be wrong, but I don't think we have, actually.
D: I'm sorry, I misspoke earlier. I meant to say KubeadmConfig template, as opposed to the machine template. So, at least for the machine deployment, the machine deployment refers to the KubeadmConfig template, and that template is used to generate the KubeadmConfig spec for each machine, and that... so yeah, for machine pools...

D: Yeah, I would also like to... so, right, machine pools is still an experimental feature, at least in cluster API core, and, right, there's ongoing work, for example, for machine pool machines to, I think, address some of the shortcomings of, you know, the design, the original design, and so I wonder how this plays into it. I can't recall why there is a template for machine deployments but not a template for machine pools, but yeah, now, I mean, it does make sense.
A: Cool, and it'd be good to go back on that. We can also go back and look at the issue that Cameron raised around the EKS config as well, and just see how related they are.

A: Oh, please, Andreas. So next on the agenda is my item, which is around having a defined release cycle. So the 2.1.0 release was a long time after the previous release, and there were a lot of people asking pretty consistently for a new release, but it kept dragging on and on and on, so the proposal is that we come up with a defined release cycle.
A: So we do have quite an old discussion issue on this, so it would be good, if people have thoughts on this, to add them to the discussion.

A: You know, for instance, do we keep it in line with CAPI but slightly delayed, to allow us to bump the versions and test? And also, what triggers a new version from our side? So, you know, a new version of CAPI may do it, but we also may want to trigger a new release ourselves. So yeah, the ask is, if anyone has any thoughts on that, just to put them into that discussion item.

A: Does anyone have any comments now on having a defined release cycle?
A: A minimum release cycle, so we know, you know, this could be a new release every... I'm just picking times out of the air here, but, you know, every two months or whatever, or every time there's a CAPI release. Something as a minimum is good, but yeah, I don't know whether it needs to be for every release of CAPI. Again, is it just, you know, major and minor changes, or patch changes, normal patch changes? I don't know. I think that's probably something we just have to work out ourselves.

A: Personally, I think we'd probably get into the situation we were in before, where we had, you know, four months between releases, and there were a lot of people depending on the release, and we didn't do it, we kept pushing it back and back and back. That's probably not a good position to be in, especially if you're, you know, using CAPA, building products or services or anything on it; it's quite hard to plan. Go for it, Mike.
F: From an end-user perspective, some predictability would be beneficial, especially if there's, you know, a particular bug fix that happens to be in there that you're looking for, that kind of thing. It doesn't need to be, you know... I'm not saying, like, oh hey, every three weeks, that sort of thing, but just some form of predictability.

A: Yeah, I agree. Yeah, we could capture these points in the discussion; if you have time, that would be super helpful.
G: Yeah, hello. So there was some talk internally, like at Red Hat, to see if there was enough interest to add in support for ROSA. ROSA is basically like an EKS, but it just runs OpenShift instead of, I guess, plain Kubernetes, and it is basically an offering that is managed by AWS specifically, and I thought it would actually be a good candidate as we talk about machine pools getting out of experimental.

G: I joined late; I heard some things, and I'm happy to expand on those. But yeah, upstream, in CAPI, we're trying to get machine pools out of experimental and also add machine support to managed topologies through ClusterClass and so on, and so I figured that this could bring more people from the ROSA team, basically, upstream, and, I guess, it could add more support for other Kubernetes distributions.
D: End-to-end testing of ROSA, how would that happen? And are there significant differences as far as cluster lifecycle is concerned, or is it mainly sort of developer differences, for developers using the cluster once it's up?

G: It's mostly that, like, OpenShift works a lot differently than Kubernetes, just in general, but for ROSA it's the same way that you would upgrade an EKS cluster, as in, you just make a call and say: upgrade to this other version.

G: The same for ROSA clusters, so from that perspective it kind of works the same; that's why I still thought it would be a better fit alongside EKS and the other managed services. In terms of e2e, I assume that there will be a story and we'll need to figure out either credits or whatnot, but I'm sure that can be figured out. There is definitely interest, so... but I want to say that...
G: That's, like, the last of my concerns, but the first thing is: hey, can we make this work first, and then kind of understand how these CNCF accounts can basically work with ROSA, given that, you know, you need to subscribe to something or whatever in the AWS UI before you can use it. But I'll ask, for sure.

F: Yeah, just one of the things... I agree with some of what Vince is saying, but one of the things I wonder about, as it relates to ROSA, and I don't bring this up to naysay what's being said, is whether ROSA and the related ARO, the Azure version, belong with their own provider.
F: I mean, if it's easier to put them into, you know, these ones, and I can get, you know, ROSA and, you know, ARO sooner, don't get me wrong, I'll be happy. But yeah, it's just something I do wonder about.

G: Yeah, I actually started out thinking that it should go in a different provider. But then the thing is, then you'll have, I guess, if you want to use both, like both as in AWS and Azure, and then ROSA and ARO, you'd have to install, like, four providers that import the same libraries at the end of the day, because they have to import, yes, the AWS SDK, they have to be configured to use credentials and whatnot, and that's why I went back. It's like...

A: Well, Mike, is your hand still raised from last time?
F: I just want to make sure that we're not going to end up, yeah, to your point, like... I would hate to see four different, you know, CAPA providers that do the same thing, or a bunch of CAPI providers that do the same thing. That would be really bad, almost as bad as having to shoehorn something in that doesn't quite fit.

D: I think, to Mike's concern: we do have the ability, right, to mark a feature as experimental, so, you know, I think that's the right... we have that in place specifically for these kinds of situations where we're not sure. It seems like it could be a benefit, right, to our end users, but we're not sure about the impact.
D: One question that I had: I know that EKS has its own set of Kubernetes versions that are supported at any time, and that tends to differ from, you know, let's say, what cluster API in general supports. I assume it will be similar for ROSA.
G: Like I said, to mark the feature experimental, so I think that's where we would have started anyway, given that, you know, we don't even have an answer on e2e yet, right? And then for versioning specifically and distributions, I guess, just to add on the point before: I saw it almost as, like, ROSA would be... I mean, EKS is a distribution of Kubernetes; they maintain the patches, same thing as...
G: ...GKE does, and I don't know about Azure, but these distros kind of go alongside there. But yes, the version would differ.

G: I would expect it to differ from the upstream Kubernetes version. I would expect, though, that CAPI would allow that, specifically as there's, like, a whole managed services feature group being spun up upstream, so we could use that as a rallying point, to say, okay, how do we support versions that look different from a Kubernetes version but are still a Kubernetes version somehow, and go from there.
A: Yeah, it still reminds me of a conversation we had a couple of years back, or about a year and a half ago, around EKS Distro and supporting that within CAPA. At the time there was pushback saying, well, this is, you know, AWS-specific stuff, but yeah, it seems like this would be a vehicle to support things like that as well. So it could be quite interesting.

A: No, EKS Anywhere actually uses EKS-D, so you can say EKS Anywhere is also the provisioning side, the front end, that actually uses CAPI as well under the covers, but EKS-D is the actual Kubernetes bits, and they're all the same as the EKS cloud offering.

A: Yeah, well, yeah, I don't think we'd have EKS Anywhere, but we could have EKS-D. So if someone had an EKS cluster in the cloud and then they had, I don't know, CAPV... well, that's a bad example, actually, isn't it? CAPV could run EKS-D, yeah, so this wouldn't be a CAPA thing anyway. EKS-D, really? Come on, Daniel.
A: Yeah, yeah, we have our own... we have control plane and bootstrap providers for EKS.

A: Well, they were completely separate from a clusterctl point of view, but actually we merged them into the same infrastructure provider, so the AWS infrastructure provider. But notionally they are, you know, the abstract... they are bootstrap and control plane providers; they're just not installed via --bootstrap with clusterctl, I see.

D: No, just if, you know, if OpenShift needs its own bootstrap, then that might be something that happens down the road, as a separate bootstrap provider that could be used, for example, in, you know, Azure or with other infrastructure providers, which I think, I mean, I think that's something that should work in theory.
D: I know that a number of infra providers, including CAPA, have been built, in many places, on the assumption that it's a kubeadm bootstrap provider, not, you know, not on purpose, but it just sort of turned out that way, but I think we...

A: And I guess a question for you, Vince: did you want people just to ping you if they had any concerns, or did you want them to attach it to, you know, the channel or anywhere else?

G: Channel, DM, or whatever works, that's fine. I think the first step is going to be to kind of do a PoC of sorts and then proceed from there. For the bootstrap question, I think, as far as I understand, this will just mostly be using machine pools to begin with, so I don't necessarily think we'll need a bootstrap provider or, like, a special kind of one.
A: Yep, cool. Yep, so we are at the end of the agenda. Does anyone have any last items that are not on the list? Okay, going once, going twice, three times... cool. Well, we can call the meeting to a close. Normally, if there is time, we do hang on afterwards to do some bug triage, but as we've only got 10 minutes left, we can call it a day. So thank you all for coming, and see you again in two weeks.