From YouTube: 2020-07-24 CAPZ office hours
A
Need the agenda as well? Let's start. Okay, my name is Carlos; I'm going to be the host for this meeting today. This is... let me try.
A
Okay, if you have any topics to discuss, please add them under open discussions in the agenda. And, I don't know, Cecil, if you want to go straight to the open discussions, or... oh, let's update that; it's not current anymore.
A
Yeah, okay. Let's see, David, didn't you have some topic here for the agenda?
C
Awesome. Okay, so CAPA recently put out a multi-tenancy proposal where each cluster has the ability to have a principal that reconciles that cluster, and the controller can assume the role of that principal in AWS and then reconcile that cluster.
C
So this is the idea that you may have multiple tenants using the same controller, whereas until now we've always expected that we would have different controller instances running for a single tenant. Each one of the controller instances would be for a specific tenant, so the environment that we load up when the controller starts, with the credentials that that environment has, is what the controller uses to access Azure and to create infrastructure on behalf of that individual identity.
C
With this proposal, it opens it up to each cluster having its own identity to reconcile infrastructure. The linked proposal is there; I'm looking for feedback on whether or not we actually want to go down this path. I think we do, but I'd love to hear from other folks. Please raise concerns and suggestions within the PR.
F
David, are we confident that this proposal is on the way towards being approved?
C
I think so, Peter. I haven't heard any major obstacles to it. If there are objections, please bring them up as early as possible.
B
I'm
confident
because
it
already
went
through
the
kappa
review
process
and
this
one
is
pretty
similar,
we're
just
adding
some
azure
specific
things
to
it,
and
so
it's
already
gone
pretty
like
a
lot
of
christianity
and
we've
already
done
a
round
of
review
on
this
one
and
no
one
has
brought
up
any
like
groundbreaking
objections.
So
I
think
we're
pretty
confident
so
far.
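For readers following along, a per-cluster identity along the lines being discussed could look roughly like this. This is an illustrative sketch only; the kind and field names below are assumptions for the sake of example, not the API from the proposal.

```yaml
# Illustrative sketch of a per-cluster identity (names are hypothetical).
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureClusterIdentity
metadata:
  name: team-a-identity
spec:
  type: ServicePrincipal
  clientID: <app-id>
  tenantID: <tenant-id>
  clientSecret:
    name: team-a-sp-secret   # Secret holding the service principal password
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureCluster
metadata:
  name: team-a-cluster
spec:
  identityRef:               # reconcile this cluster using the identity above
    kind: AzureClusterIdentity
    name: team-a-identity
```

The point of the shape above is that the shared controller resolves credentials per cluster from the referenced identity, rather than from its own startup environment.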
B
I added that. I think Ace isn't here today, but I just wanted to call this out in his name. There's a PR open from Ace to auto-generate the azure.json, which normally you'd have to input as part of your template, in the kubeadm config file, which isn't great UX, because it's tied to the AzureCluster spec. So you have to make sure that those are always kept in sync, and really, this could be auto-generated.
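For context, azure.json is the Azure cloud provider configuration placed on each node. An abridged, purely illustrative example follows; the values are made up, and the exact field set used in CAPZ templates may differ. Every one of these fields has to agree with the AzureCluster spec, which is why hand-maintaining it is error-prone:

```json
{
  "cloud": "AzurePublicCloud",
  "tenantId": "<tenant-id>",
  "subscriptionId": "<subscription-id>",
  "resourceGroup": "my-cluster",
  "location": "eastus",
  "vnetName": "my-cluster-vnet",
  "subnetName": "my-cluster-node-subnet",
  "securityGroupName": "my-cluster-node-nsg",
  "routeTableName": "my-cluster-node-routetable",
  "useInstanceMetadata": true
}
```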
D
So, anyway. Yeah, I have a quick question: I know CAPI found some bugs and is going to do a v0.3.8 release.
B
Well, they're not tied to each other, so I wouldn't say it's a strict requirement, especially if the release is fixing a bug, like a critical bug; we don't want to block it on an improvement. But the way this is going, I think it's close to being ready, and I agree with you, it'd be really cool if we could have that out there. So I'm not opposed to it, if it can happen.
C
It seems like there might be some documentation updates to be done in the CAPI book as well.
B
Are we tracking that? That's true, we're not at the moment; that's a good point. The CAPI book is tied to the main branch of CAPI, though, so we can do that after their release, which is good. But the other thing is that this is a pretty big change and it's very user-impacting. Normally it's backwards compatible, so if you're still using the old way it should still work, but I don't know if we want to ship that in a patch release that includes a bug fix so fast.
B
Yeah, at least in the tests. Yeah, but we can think about it; that's a good point.
A
Okay, and the next topic is the meeting times.
B
Yeah, this one is more logistics and boring. So if anyone else has any interesting topics about CAPZ that they want to talk about first, I'm happy to leave this one till the end.
D
My conclusion out of it was that we just need to add some more validations on the cluster name. But there is recommended naming for things; I put a lot of details in the issue that I linked. There's recommended naming that we don't follow, for example "rg-whatever" for the resource group, or "vnet-something", and the subnet, and all that, and we don't use that. So I'm not sure if we care, but it's just a question.
C
So
I
I
did
comment
slightly
just
just
a
small
little
tinge
of
a
comment
from
sourced.
From
a
slack
conversation
we
we
had
with
the
user
a
while
back.
I
I
think,
the
sooner
that
we
can
get
this
in
the
you
know
better.
It
is
for
for
consumers
of
capsi
for
users
of
capsi,
because
they're
just
not
going
to
shoot
themselves
in
the
foot,
and
until
we
really
have
like
machine
health-
and
you
know
a
reliable,
a
cluster
came
up
or
clustered
didn't.
C
Because
of
this
reason
I
think
this
is.
This
is
pretty
important
where
what
are
your
thoughts.
B
I agree. I think the conventions for naming things, like "subnet-application-name-blah-blah-blah" or the "rg-" prefix, don't really apply to us here, because it's cluster infrastructure, not application infrastructure. And I think it's not going to change user-facing things. But what is going to impact users is having validation that doesn't work by default, and having users be able to use a cluster name that then results in a failed subnet; we don't want that.
D
So that's what I've done so far: tried to get the minimal validation of the cluster name that wouldn't break anything, which is very limited, but anyway, that's the only change I've done so far. So if you can just take a look at some point.
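To make the idea concrete, a minimal cluster-name check in the spirit of what's being described might look like the following. This is a sketch only: the exact character set and length limit in the actual PR may differ; the constraints here (lowercase alphanumerics and hyphens, starting with a letter, ending alphanumeric, at most 44 characters) are assumptions chosen to stay within common Azure resource-name limits.

```python
import re

# Conservative illustration: names that are safe to embed in derived
# Azure resource names (resource group, vnet, subnet, ...).
# The real validation rules may be different.
CLUSTER_NAME_RE = re.compile(r"^[a-z][a-z0-9-]{0,42}[a-z0-9]$")

def is_valid_cluster_name(name: str) -> bool:
    """Return True if the name matches the conservative pattern above."""
    return bool(CLUSTER_NAME_RE.match(name))

print(is_valid_cluster_name("my-cluster-01"))  # True
print(is_valid_cluster_name("My_Cluster"))     # False: uppercase and underscore
```

The trade-off D mentions is visible here: a pattern this strict won't break any downstream Azure naming rules, but it also rejects names Kubernetes itself would accept.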
C
I think there's one kind of related issue that this brings up, and I think I brought it up in the issue as well: we might want to think about having an override for the cluster name, so that we're basing it off of an overridden value instead of the metadata.name. That would allow people to name things however they want to fit Kubernetes constructs, but then have an override for, here's your Azure stuff.
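The override idea could look something like the sketch below. The field name is purely hypothetical, invented for illustration; it is not part of the CAPZ API.

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureCluster
metadata:
  name: team-a.prod.cluster    # fine for Kubernetes, awkward for Azure naming
spec:
  # Hypothetical field: the name used when deriving Azure resource names,
  # taking precedence over metadata.name.
  azureNameOverride: team-a-prod
```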
B
Yeah, the meeting time. Okay, so I just wanted to bring up: I know we just had a survey to choose this time a couple of months ago, in March, I think. But I just wanted to put it out there that obviously a lot of things have changed since then. A lot of people are working from home, some people have even changed time zones, and the people who are involved in this project have also changed a lot.
B
There are a lot of new faces, so things are a bit different, and also this meeting is kind of at the start of a lot of people's weekends, which I've never really liked. So I was wondering if you all would be open to maybe reevaluating whether this is the best time, by doing a new survey and putting this time in as one of the options; if it still comes out as the winner, then we just keep it.
B
Yeah, so it's automatically filtered, and I actually added everything to the project yesterday, so everything should be in here. But if you click on "Add cards" at the top right, you should see if anything's missing.
B
Oh, so there is some stuff that got added since yesterday. So usually what we do is just add the new ones. So this one is a PR, so it's in progress; and the ones that are issues, just add them to the backlog, just drag them in there, and then, yeah.
A
We have the coverage. As we merge this, I can create, in the test-infra prow config, a new job that's going to trigger this. I was thinking, in the beginning, not to make it run on every PR; instead we'd manually trigger this job and see if it's working, and if that's fine, then maybe we can make it trigger automatically on PRs. Also, we can add it not only for PRs but for master as well.
A
Then
we
can
have
like
a
continuous
information
about
the
like
during
the
the
time
frame
for
the
the
life
of
the
master
stuff
as
well.
If
it's
increasing
or
decreasing.
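The staged rollout being described (manual triggering first, automatic later) maps to a prow presubmit definition roughly like this. The job name, image, and command below are illustrative assumptions, not the real test-infra configuration.

```yaml
presubmits:
  kubernetes-sigs/cluster-api-provider-azure:
  - name: pull-cluster-api-provider-azure-coverage   # hypothetical job name
    always_run: false        # start manual-only; trigger with a /test comment
    decorate: true
    spec:
      containers:
      - image: golang:1.14
        command: ["make", "test-cover"]              # hypothetical make target
```

Flipping `always_run` to `true` later (or adding a matching periodic/postsubmit for master) would give the automatic and continuous runs mentioned above.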
B
Yeah, this one's still in progress. I helped James rebase it on top of some of the recent refactors, and I think he still has a few TODOs in there. Yeah.
A
This is the issue for the bastion hosts. The first part, the earlier PR: I already refactored it to use the new services pattern we did, and I just need you guys to take a look again and see if anything needs to change. But this is the first part; the second part is implementing it in the reconciliation. Or, if we don't want this anymore, we can just close the PR as well.
C
Yes, yes, I am, unfortunately. I will get that in.
B
Yeah, it's in progress. Matt was out this week, so I think he'll get back to it next week.
B
I think we're waiting for a CAPI PR to merge for this one; it's blocked.
B
So when they merge, everything that merges gets automatically applied to the milestone; we have the milestone-applier plugin enabled. But let's take a look at the milestone quickly and make sure that the things that are open in the milestone and unassigned, we know about them. Yeah, so everything... oh great, everything in "To do" is what we said we would commit to for two weeks from now.
B
Yeah, I think the two big ones, well, three big ones, that are left are: CNI, which I don't know if we're going to get to, honestly; the bootstrap failure detection, which is probably not going to make this milestone; and then dedicated hosts. I think Ace was looking at that and I was also looking at it, so I think one of us is going to take it, whoever gets to it first. And then, I guess, the proposal for the cloud resource reconciliation breakout.
D
I talked to the people about the failure detection thing, and they're still kind of not sure how to proceed with it, so that's why I didn't do too much about it. They still need to figure out what their plan is. I think we could probably just have some kind of consistency between the different providers, even if it requires cluster-api itself or something; in that case it's probably better to wait a little bit, unless you all think it's urgent and we should start working on it.
B
Follows
a
similar
approach,
but
I
think
we
should
start
with
a
design
like
azure,
specific
design
proposal
and
we
had
different
alternatives
in
the
original
dock.
So
we
could
just
start
from
there
and
expand
a
bit
and
then
choose
which
alternative
we
want
to
go
for,
but
yeah.
No,
I
think
no
urgency
in
the
next
two
week
or
so.
A
Thank
everybody.
We
closed
the
meeting
today
and
have
a
nice
weekend.