From YouTube: 2020-10-29 CAPZ Office Hours
A: All right, hey everybody! Welcome to Cluster API Provider Azure. We are a SIG Cluster Lifecycle project and we follow the CNCF guidelines, so let's all try to be nice to each other and be friendly. Use the raise-hand button if you'd like to be noticed. All right, awesome.
A: So, let's see, our recurring topics today. Do we have any new members, or is there anybody here who would like to say hello and introduce yourself?
B: Maybe I can say a couple of words. My name is Pierre; I just started to work with Cluster API in two projects, actually, for two companies: my previous employer and my new one. The previous one is a heavy Microsoft user, so the Cluster API Azure provider is very important for them, and the new one is a startup where we are going to support multiple different clouds. So that's also a very important use case for the project.
B: It is quite important to know whether it is ready to be used in production, for, you know, serious production deployments or not. So that's also one of the reasons.
A: Indeed, welcome. Thank you, Pierre. We are always interested in hearing about any kind of issues that you come across, and happy to help prioritize those. So welcome, thank you. Anybody else?
A: Later? All right, if nobody else wants to speak up, we'll move on. One, two... okay, all right, everybody. So, let's see: if you haven't, please add your name to the attendees list. And do we want to get into the project board?
D: On the list, plans for the release. Thanks, I actually sneaked that in right before Nader's item, sorry Nader, but I just wanted to quickly... it's kind of a PSA. So, the release: we're nearing the end of the month, which we had targeted for the next release, and so I just wanted to check in.
D: So we can take in the new Cluster API and test with that before we cut the final release, and it also gives us another week to finish some of the big items in the milestone that I know are in progress, like the start of multi-tenancy, the private cluster PR which is under review, and then there is another one I forget... oh, the GPU node one is also pretty close to being ready.
A: Yeah, it should be, because the way that we model machine pools is that each instance underneath the machine pool, unlike Azure machines, is using the token that was allocated for the Azure machine pool. So as each new instance is built, it's using that same bootstrap token; it just only gets created once and then has a TTL of 15 minutes or so.
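(For context on the bootstrap-token point above: kubeadm bootstrap tokens live as secrets in the kube-system namespace with an explicit expiration. A minimal sketch of what such a secret looks like follows; the token values and timestamp are placeholders, and the exact TTL that Cluster API sets may differ from the 15 minutes mentioned.)

```yaml
# Hypothetical example of a kubeadm bootstrap-token secret; all values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-abcdef          # name is always bootstrap-token-<token-id>
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: abcdef                      # public 6-character ID
  token-secret: 0123456789abcdef        # 16-character secret part
  expiration: "2020-10-29T18:15:00Z"    # the token stops working after this timestamp
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
```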
A: If we can try to get that in, that would be great. The likelihood is that we'd actually have to get a patch into CAPI, and then we'd really be delaying the releases. Maybe we can find a mitigation, because the real fix is going to be in CAPI.
F: For this, I would imagine. But I mean, we would have to delay the release to pick up their release anyway, so I think it makes sense for us to wait for 0.3.11.
D: Nope, that answers my question. So I think I'll just hold until after CAPI to cut the release and start the new milestone. We can probably plan the new milestone in the next office hours, so in two weeks, if that's okay. Great.
A: Super. Nader, you have the floor.
F: Yeah, so that's my weekly discussion topic about the flaky test. It's about the KCP upgrade one. Fabrizio opened a PR in Cluster API about that: the KCP is reporting the wrong number of machines when it's rolling the update, which, I don't know if that is the thing causing our test to be flaky or not, but he has a PR open for it anyway.
F: So once it's done I'll pull that in and test with it. But through all my testing locally and such, usually it just does everything in serial. So it creates a machine, creates all three of them, then creates one of the new version, then starts deleting an old one, then creates another one and deletes, and so on and so on. So it just takes a long time and then we eventually time out; it's about an hour-long test in my local environment.
D: Yeah, so I looked at the logs of some of the failures that happened recently, and now that we have more logs it's easier to look at the CI failures. What I noticed is that basically what happened is there's a fourth control plane that's left at the end.
D: One that is on the old Kubernetes version. So basically all three of the new control planes come up correctly, and they're running the new version that we upgraded to; the first two old ones get deleted properly, but the third one, the old replica, doesn't get deleted. So we end up with an old node remaining at the end, and it times out waiting for that node to be gone.
D: What I noticed is that it did not attempt to delete the machine: the machine is not in a deleted or deleting state, it doesn't have a deletion timestamp, and so the AzureMachine also isn't being deleted. So it's not a timeout in the environment trying to delete Azure resources; it's really timing out waiting for KCP to say, okay, delete that machine. So I think that's an issue that is not Azure-specific.
D: I think it's at the CAPI level, where it's trying to do the rolling update and somehow it's not finding that it should delete the third replica, the old one. So to me it looks very related to the issue that Fabrizio opened, the one you faced.
F: One thing I've noticed, because I kept looking at this, is that there's a lag of a few minutes between each step. It would create the three of the old version, then wait a few minutes, then create the first new one, then wait a few minutes, then start deleting, and so on. So I don't know if that has to do with the reconciliation loop or something, but something in KCP waits between each step for some time before anything happens.
A: I wonder if it's collecting the node references and waiting for those node references to eventually fall off the list, like they haven't reported in to the API server for a little bit, and then the API server says, yeah, they're not there. Really not sure.
F: Yep, next one. So at VMware we started using CAPZ, as in supporting Azure, and we're doing some scale testing, and I got a question from the team doing the scale testing saying that once they tried to create a cluster with 500 nodes, they hit a hard limit on available VMs: the resource group cannot have more than 250 VMs in the availability set. They said that this is a hard limit on Azure, and I had some discussions with a bunch of people, including someone who I think is here.
F: ...for that reason. And then I was wondering if you have thoughts about that. I think machine pools would be something we could look into that wouldn't have that limit. But is that limit... are we doing something wrong that's causing that limit, because it shouldn't be there, or is it expected? What are your thoughts?
D: Yeah, that's interesting behavior, because I have not seen that, and that's actually one of the things I wanted to bring up later, because we're running into issues with PVCs because we're not deploying availability sets. So I would love to see... we should look into this. But regarding your question: yes, you should use VMSS.
D: That was one of the main things in the machine pool proposal that we brought up as one of the motivations, large clusters, exactly because of that limit. With VMSS you can have up to 1,000 instances; I think that was the case last year, maybe it's better now, but 1,000 instances per scale set, and then you can have something like 800 resources per resource group, so you can have multiple scale sets.
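(As a rough illustration of the machine-pool route mentioned above, a VMSS-backed worker pool is declared as a MachinePool paired with an AzureMachinePool. This is a hedged sketch only: the apiVersion values and field names reflect the v1alpha3-era experimental API and are illustrative, so check the templates shipped with your release.)

```yaml
# Hypothetical sketch of a VMSS-backed worker pool; names and fields are illustrative.
apiVersion: exp.cluster.x-k8s.io/v1alpha3
kind: MachinePool
metadata:
  name: my-cluster-mp-0
spec:
  clusterName: my-cluster
  replicas: 500                     # a scale set can grow well past the availability-set cap
  template:
    spec:
      clusterName: my-cluster
      version: v1.19.3
      bootstrap:
        configRef:
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
          kind: KubeadmConfig
          name: my-cluster-mp-0
      infrastructureRef:
        apiVersion: exp.infrastructure.cluster.x-k8s.io/v1alpha3
        kind: AzureMachinePool
        name: my-cluster-mp-0
---
apiVersion: exp.infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureMachinePool
metadata:
  name: my-cluster-mp-0
spec:
  location: westus2
  template:
    vmSize: Standard_D4s_v3
    osDisk:
      osType: Linux
      diskSizeGB: 30
```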
A: I don't think it should create this set unless you tell it to. That strikes me as a little bit odd; I'm not saying that it isn't actually happening, but I would love to reproduce this.
E: I think it's probably more related to Cecile's last item. CAPV and CAPA are both thinking of creating CRDs for these types of resources, or placement resources, and then linking them into clusters and machines, which is not fully figured out. I think CAPV is further along the design path, so yeah, but it probably relates to exactly that last item.
D: Hey Nader, is it okay with you if we jump to that one and then come back to multi-tenancy, since it's sort of related? Okay. So what I was going to bring up is that yesterday we did a bunch of investigation on Slack with a user who brought up an issue they were running into with PVCs.
D: If you install the in-tree storage classes and then you deploy a PVC, there's a thing in the cloud provider that expects either VMs to be in availability sets or VMSS instances to be in a scale set, and there is no knowledge, I think, in the cloud provider of VMs using zones. I think that's historical, because AKS only supports VMSS with zones; it doesn't support VMs with zones, because VMSS is the newer default. And so I think that's a problem in cloud provider.
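(To make the failing scenario concrete, the reproduction is roughly an in-tree azure-disk StorageClass plus an ordinary PVC; a minimal sketch follows, with illustrative names. The error discussed here shows up later, at volume attach time, not at provisioning.)

```yaml
# Minimal repro sketch: in-tree Azure Disk StorageClass plus a PVC (names are illustrative).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-premium
provisioner: kubernetes.io/azure-disk   # the in-tree provisioner discussed above
parameters:
  storageaccounttype: Premium_LRS
  kind: Managed
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-premium
  resources:
    requests:
      storage: 5Gi
```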
D: The only two things we can do to address that from our side, that I know of, are: one, tell the cloud provider config to use the standard VM type. If you know how, in the cloud provider config you can choose your VM type, it's either standard or vmss; but if we set it to standard, then that won't work for VMSS nodes, which means we're blocking ourselves from having mixed-node clusters.
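(For reference, the vmType knob being discussed is part of the Azure cloud provider configuration, typically azure.json on the nodes. A hedged sketch of the relevant fragment, with other fields omitted; the real file is JSON, shown here in YAML form for brevity.)

```yaml
# Hypothetical azure.json excerpt (real file is JSON; only the relevant keys shown).
cloud: AzurePublicCloud
resourceGroup: my-cluster-rg
location: westus2
vmType: vmss   # "standard" assumes standalone VMs / availability sets, "vmss" assumes scale sets
```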
D: That's not good. Or the other thing is to put everything in availability sets; but we went for VMs, and the problem with that is we don't want to do that systematically, we want to use zones where we can. So that's where I'm at. I opened an issue and I'm discussing it with one of the experts there, but I think this might require some changes to the cloud provider, because CAPZ is breaking some assumptions that were made around the other Azure Kubernetes implementations.
D: No, the problem we're seeing is with a VM, not VMSS, with a machine. It says 'not a VMSS instance' and it's failing because it's trying to see if it's a VMSS instance. Basically, if you look at the code path, it's linked in the issue, it just asks: is this an availability set? If not, then it assumes VMSS.
D: So I guess the short-term mitigations would be: either we add availability sets, but I think that's more work and not necessarily something we want to do; or, for now, we say you can't have mixed-node clusters, and we set the cloud provider config to standard when using VMs and to vmss when using VMSS.
D: That is, if you want to use PVCs. I'm sure there are other use cases too, because this code is being reused in other places in cloud provider, but that's the only one that I found not working so far.
D: It is. So the volume gets created properly, but it's the attach that's failing. During the attach there's this getNodeVMSet function, where it tries to get the VM set and then does the attach, and the get fails because it's not looking for the right VM type.
A: All right, any other questions on this topic?
A: All right, thank you for the deep dive, that's cool. I wish it was not an issue, but it's great to learn more about it. And I guess back to Nader, who's going to talk about some multi-tenancy and the fun of using AAD identity.
F: Yeah, so I think I'm getting stuck on this thing where I keep getting 'identity not found', and Anish said that it might be a bug in IMDS that he's looking into, because sometimes it doesn't return the identity and gives that error. I'm doing all the other things; I validated, like, confirmed everything else that he said I should have, and I have it, so I don't know how to get around that. Okay, so...
A: Some quick questions: what kind of identity is it? Oh, a user-assigned identity, okay. Did Anish talk about the nature of this IMDS issue?
F: It's on the Slack thread that you were on, that long one. He just said there's an IMDS bug that they're chasing internally, which caches the wrong creds and returns 'identity not found'.
A: Yes, I believe if you ask for the identity before it has been bound to the machine, before it has actually been attached to the Azure VM, IMDS will cache the result, and the result will stay in the cache for a period of time, and you will continue to see 'identity not found'.
A: I think that is the short-term mitigation. Longer term, it should not cache that; it should, you know, ask again to see if it has the identity available to it. Again, though, I wonder if there's something else going on as well, and it might just be that this one's difficult to diagnose.
A: Would you be interested in pairing up? I'd love to take a look at it.
F: Yeah, yeah, I'm totally getting stuck, yeah.
A: All right. Yeah, shout-out to Nader, thank you for the lovely starting point for the multi-tenancy documents.
A: All right, let's see, anything else? So, do you want to talk about the CAPV failure domain proposal?
E: Okay, if you are interested: if you are thinking along the lines of creating those types of resources, then it probably makes sense to align it a bit, I guess in the same way that we've done for multi-tenancy. So yeah, basically, particularly for on-premise...
E: ...a failure domain is kind of, however the data center has been set up, so you kind of have to give some of that information. So we're going to enable the ability to drive that setup in vCenter via the API, and then the cluster enables it and the machines can consume them. We'd do it for AWS as well, because right now we only support availability zones, but now there are all kinds of zones and placement groups that they've added support for, which makes things really complicated. So we have to go along the same kind of route.
A: Yeah, right now we're just modeling them on the resources themselves, like an integer for the zone, or a set of strings, basically a set of integers. I don't know how others feel about it, but it's complicated and it's not just transparent to the naive observer. You know, any other thoughts? Anybody have any thoughts out there?
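(To make the current modeling concrete: today the failure domain is just a plain zone value carried on the resources, discovered by the infrastructure cluster and copied onto each Machine. A hedged sketch of roughly how that looks with v1alpha3-era manifests; names are illustrative.)

```yaml
# Hypothetical sketch: zones modeled as plain values on the resources (illustrative names).
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureCluster
metadata:
  name: my-cluster
status:
  failureDomains:              # CAPI contract: a map of discovered failure domains
    "1": {controlPlane: true}
    "2": {controlPlane: true}
    "3": {controlPlane: true}
---
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Machine
metadata:
  name: my-cluster-control-plane-abc12
spec:
  clusterName: my-cluster
  failureDomain: "2"           # just a string naming the zone, not a richer placement resource
```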
E: Not really figured that out yet. I don't know if we need to do some changes for CAPV in v1alpha4, and kind of let the end users sort of decide what they want the behavior to be, and then I think we do need to figure out what it should look like in CAPV, if we do need to make changes, and how failure domains work in CAPV. I don't know right now.
A: Okay, all right. Anybody have anything else they would like to cover during the discussion phase?
A: Would you like to tell us a little bit about your use case and...?
B: Sure. Sorry, I lost you for a second. So yeah, basically I'm representing potential users, right, and I have known the project since about its beginnings.
B: Actually, I am not yet sure if it is, you know, ready to be used for real production use cases, and I'm also not sure how open you are to contributions to your project. Because if we decide to go with Cluster API, seeing how early a stage the project is at today, I think that we will need to assign at least one or two developers to the project from our side, right?
B: So that all the urgent and customer-specific needs, and, you know, bugs we find, are quickly solved on our side. So these are the major topics I'm interested in. Of course, we'll also be looking into the other providers, Amazon and Google, probably in the meantime, but yeah, Azure is one of the major ones for us. So we need to look deeper into Cluster API itself and the Azure provider.
B: I see that there is Giant Swarm involved in the project, and VMware, which is something completely new to me, so it would be nice if you could maybe, you know, elaborate a little bit more on how you feel today about the maturity of the project and whether it's ready to use or not. I'm really, extremely surprised by the quick support I got on Slack from you guys, that was really amazing, but yeah, for us it's like a big bet.
B: Today, right, we would have to hire at least two Golang developers to make the platform stable enough to deploy production workloads using Cluster API and the Azure provider. So yeah, anything related to that I would be happy to hear about.
B: So we want to have a kind of management cluster. At Zalando they call it the lifecycle manager, the cluster lifecycle manager, I believe, but this is their own, you know, custom implementation, which looks much different from Cluster API, so we don't want to use it. So this is our use case, and my job for now, for the next few weeks, is to assess, you know, whether Cluster API will be able to do the same job as what we do with Terraform today.
B: Yes, I spent the last five years in consulting, and actually what I see today is that no one is interested in managing their own clusters anymore, right? Everyone... at least this is my own experience with most of the companies I work with: they want to use managed clusters only. We've spent too much time, you know, with kops, with Kubespray, with a number of tools for provisioning...
B: ...you know, custom or vanilla Kubernetes clusters, but it's just a waste of time in the long term to maintain your own deployment. So most of the customers I know, most of the companies I know, are exclusively interested in managed clusters and nothing else.
B: And if you dive deeply into Terraform templates, it's just a nightmare. So we would like to have a kind of standardized way of deploying clusters. But also, what is more important to me is the lifecycle part, right? Because Terraform is kind of naive in that respect; I mean, it just knows that the cluster is up or down, and that is actually all that Terraform is aware of. In the case of Cluster API, I can imagine that you can, you know, manage upgrades, rollbacks, many, many more cluster states than with Terraform.
C: So I want to dig a little bit into what you mean by managed clusters in this case, because we've talked about the Cluster API management cluster. When you're talking about managed clusters, do you mean using the managed service on each of the cloud providers, so GKE on Google, EKS on Amazon, and AKS on Azure? Or do you mean using, like, CAPZ to provision a cluster on Azure that is then managed by you and the management cluster?
C: The difference is, say, in the AKS case, you would not have access to the control plane nodes or be responsible for managing the control plane and etcd on AKS. But within the CAPZ scenario, you have access to all of those resources, and, you know, you would use CAPZ to manage the control plane as well as the workload worker nodes.
B: Yeah, so I think the main idea is to avoid managing the control plane, right? So we don't want to be responsible for etcd and anything related to that. I remember quite a lot of painful migrations from etcd 2.0 to 3.0, which is, you know, just quite a lot of effort that doesn't bring any business value to the customers or to the business. So we want to avoid managing that kind of thing. So by managed clusters I mean EKS, AKS and GKE.
D: So, to answer your questions from the beginning: in terms of contributions, yes, we are very open to any contributions that you'd like to make to the project. We do everything in the open, we have this office hours call every two weeks, and everyone's very active on Slack, as you saw.
D: If there are any bigger-scope changes, we usually do a proposal process: we have a template, you submit a proposal in Google Doc form, and then people review it and we discuss. But yes, the short answer is yes: if you'd like to contribute, that's not a problem at all. In terms of maturity, I think the unmanaged story for Cluster API is more mature than the managed cluster story, and Nader and others can probably tell you more about that, but there are definitely production users already.
D: It is an alpha project, which means we are not completely protected from having breaking changes to the API between the big API versions, but so far, I think, the conversion has been handled in a way that you can always upgrade from older API versions. We're already at v1alpha3, which means we're getting closer to beta; I think we're targeting beta at the end of next year, possibly. So it is quite stable.
D: In that sense. The managed cluster story, though, is still in the experimental part of the project. So I don't know if you've taken a look at the code base already, but the AKS managed cluster types are all in the experimental folder.
D: We were planning to take them out of experimental as part of v1alpha4, but that's where it is today. And then, in terms of priority: I know that some people are very interested in the managed cluster story and have been active, both in Azure and AWS and others, I'm sure, but I don't think that was where the main priority of the project was first set.
D: I think the main goal was to get the unmanaged cluster story right, but yeah, that's just my experience. And then, what I'm curious about is: how do you think Cluster API would help you with the lifecycle management? Because, I mean, I understand the whole thing that you said about Terraform, but the managed clusters have their own lifecycle, right? So if you do an upgrade, AKS would take care of the upgrade for you; we don't orchestrate any of that.
B: Yes, yes, of course, I'm aware that it's partially managed by the cloud provider, right, so it doesn't need to be managed by the tool itself. But even then, I'm pretty sure that Cluster API should be aware that the upgrade was successful or wasn't successful, or, you know, that it was started, so we can get, you know, some timestamp of when the upgrade was started.
B: You could get maybe some more information, more details about the actual upgrade, you know, about how much time it took to upgrade every particular node, how much time the entire operation took. So you could call it, maybe, some metrics; you could export the metrics into Prometheus and make some Grafana dashboards to see...
B: ...you know, a wider view of our 200 clusters. Because if the scope is bigger, you know, we want to know how much time we need to upgrade all 200 clusters, and we don't want to do the upgrade, you know, at the same time for all of them, right?
B: We want to do it step by step. We want to build slightly smarter procedures, like, you know, try first with maybe five percent of our customers, maybe the biggest ones, maybe the smallest ones, for a start. So we'll need to build a kind of logic around Cluster API, a kind of workflow tool, but Cluster API would be the backend responsible for managing everything related to the clusters.
B: Also, I'm looking more into the entire lifecycle and the so-called day-two operations. In the future there may also be cases like cluster-to-cluster migrations, and I can imagine many more cases like this: scaling, clusters going down.
B: So yeah. But I'm aware that part of this work, which in the past was done by the DevOps guys, is already done by the cloud providers, the cloud vendors.
B: So we don't need to put too much attention into the upgrades themselves, but even then, when it comes to networking, like, you know, the Calico stack or some other network options, or Open Policy Agent, there's a number of components in the middleware that are quite sensitive to Kubernetes upgrades, not to mention ingress controllers. So yeah.
B: I would like to have a tool that is capable of reacting to unexpected changes, not only to upgrades, but one that is aware of something more than just the cluster. It might be a little bit, you know, too visionary for today, but yeah: if you want to have everything automated for you, we need to know the state of all the clusters, so we, and the customers, need to be aware of the middleware.
D: ...like Azure Arc enabled Kubernetes. It's in preview, but basically it allows you to manage your Kubernetes clusters on any platform, and they already support EKS, GKE, AKS, Cluster API, AKS Engine, all the things. So it makes it really easy to install things like metrics and to monitor your clusters on different platforms.
A: Besides just that one, there are other solutions out there too, right, that do relatively similar tasks. Is there a reason why you would choose Cluster API versus, say, a solution from, you know, Microsoft or Google or VMware or some other representative company? I have in mind a managed service similar to Arc, or...
B: I think that approach should also be considered, but I don't have too much experience in that area. Yeah, one of the problems we'll be facing is that we need to support multiple different clouds in a kind of cloud-native way, with infrastructure as code in place, and I'm not really sure, you know, if with Arc we can achieve this kind of goal today. But yeah, I would have a deeper look.
B: I think this is a very good point, because we focused mainly on Terraform and Cluster API and we didn't consider using any kind of, you know, tools like Arc or Anthos or similar. So that's definitely something we should also look into.
D: Yeah, I'm not saying that Cluster API can't do what you're describing, but I think it's worth also considering the managed services, because they would, you know, be more managed, so it's less do-it-yourself. I mean, there are pros and cons to both approaches; it depends on whether you want to be able to control things more or you want a more managed-service experience. But I think they're both valid ways to go with this.
A: Yeah, I think why we're asking is more to understand the differentiation, and your point of view on what benefits Cluster API brings beyond that. We work on Cluster API every day and we believe in it, yeah, we love it, and it's really great to hear a point of view from somebody outside of the project who's looking to adopt it, and the reasons why they're passionate about it. We really appreciate you explaining this to us. Thank you so...
B: ...much. So yeah, for me, you know, operators, actually not only Cluster API but the Kubernetes operator pattern in general, are the way to go, and I think this is the future of most of the applications out there. This is also how it is advertised by quite a lot of the Kubernetes creators, like Tim Hockin and so on.
B: So I think that this pattern should be a kind of baseline today for building any kind of application, so that's the first thing. And when we speak about the operator pattern, we can assume, actually we should assume, that any kind of operator, including Cluster API, provides all the basic features out of the box, like scalability, scaling, self-healing, you know. So there are a lot of features there, there should be a lot of features built into the operator itself, that we shouldn't have to care too much about. And in...
B: ...the case of Terraform, it is not the case, right? I mean, Terraform is great for a number of use cases, but yeah, it doesn't provide any kind of high availability today. If you kill the machine where the Terraform process is running, you will have to restart the process again, and if one plan-and-execute run takes longer than, I don't know, 20 minutes, it is a really very painful process.
B: So if you consider managing hundreds of clusters, maybe even thousands of clusters, with Terraform, it's just a nightmare. And not only for Kubernetes clusters, but also for other objects; S3 buckets are quite a common use case, where, you know, people are managing hundreds or thousands of S3 buckets with Terraform, and, you know, deleting a bucket or creating a new one takes minutes, or maybe even hours sometimes, with Terraform. And yeah, there are also some other issues, like the API...
B: ...the many API limits, you know, that are introduced by all the cloud providers out there, right? So in AWS, and in Azure I guess it's 1200 requests per minute. So if you run a bunch of different Terraform processes at the same time, they are not aware of each other and it is very easy to hit the limit, and so your entire CI/CD infrastructure, you know, stops. So Terraform doesn't scale, in the end. So these are actually the two main reasons for me.
B: You know, Terraform is not that good at addressing more complex use cases and big-scale deployments. But the basic one is the operator pattern: I think that all the applications should already be built according to Kubernetes operator principles, with all the, you know, good things that come for free with the operator, not to mention backups or other things like that, and scaling.
B: Yeah, so being honest, you know, I would be much more excited to work with Cluster API than with Terraform, and I believe you already have this feeling. So I will do my best to evangelize all the companies I work with to try Cluster API, but yeah, we are at a kind of PoC level today, and I think there should be a few more companies interested in developing the platform. I'm not sure, you know, if it is easy to add new developers, how much time it takes for new developers to become more productive in the project.
B: ...developers. Because from the business perspective, today it looks more like: do we need to hire two DevOps guys doing Terraform templates, or maybe two Go developers working with Cluster API, extending Cluster API, right? So that's more or less how it looks from the business perspective.
A: Makes sense. Does anybody have any other questions?
H: Yeah, I wanted to jump in and ask a quick question. So I work on ASO, and I just wanted to double-check: you want cross-platform for your managed cluster deployment. Is it the case that once you've done that, you've deployed to your AKS cluster or your GKE or whatever, and then you're okay being more locked into that provider? Basically, the reason I ask is because ASO only manages Azure resources, right, so you're not giving any...
B: Yes, and this is actually a very serious problem, because it is not that easy to match, to map, you know, cloud resources from, I don't know, cloud vendor A to cloud vendor B, to be, you know, 100% compliant with operators. So yeah, what I was thinking about with my colleagues was a kind of meta-operator that would sit on top of ASO, the Amazon operator, and the Google operator, hiding all of this; actually something similar to the Cluster API pattern.
B: ...files, like a bucket: if you wanted to have a bucket in Google or Amazon or Azure, you should just have a simple, even plain, file saying, you know, bucket and the bucket name, and that's basically all; all the complexity behind it could be, might be, hidden in the lower layer of the operators, I don't know. So yeah, but it's a very serious problem. Actually, the only solution we found was to build a kind of meta-operator.
H: Have you looked at all into Crossplane? At least my understanding is they have that sort of meta-operator layer where you can define these abstractions. I forget what they call them, but you can define these abstractions and then you have to do the mapping; they don't do it for you, but, you know, you can say, like...
H: ...'I would like to deploy a SQL instance, and my definition of a SQL instance has these, like, seven properties, and here's how those seven properties map into Azure, and those same seven properties also map into Google's cloud APIs in this way.' So I believe that they do support that, I mean. Obviously the downside with Crossplane, at least right now, is...
H
I
don't
think
any
of
the
cloud
providers
are
giving
as
much
support
to
that
as
they
are
to
sort
of
their
own
homegrown
operators.
H: I know at Microsoft we're working quite closely with the Crossplane developers and sort of leveraging some of the code-generation stuff we're doing for ASO, and the idea, at least, is that we're going to use that same tooling to help level up Crossplane's support of resources as well. But it's worth mentioning, because they do have that meta-operator you're talking about, or at least they're close to it.
A: We're running really short on time. Nader, you've had your hand raised, I'm sorry we haven't gotten to it. Please, we'd be interested in the...
E: Yeah, I was just going to say there's a really good presentation from Deutsche Telekom in April. So if you are going to go and build your own stuff, they've done quite a bit of work on building a layer around Cluster API; take a look through the link here, it should be linked in the Zoom recording. And it's fine, we can talk offline, but there are differences.
A: This has been a really awesome conversation. Thank you, everybody, for joining in. We are right at the end of our time, and I just want to say thank you to the community. Anything else we need to hit, any PSAs before the end, Cecile? All right, thank you. So thank you all, we appreciate it, and see you on Slack.