From YouTube: 2020-12-10 CAPZ Office Hours
B
Yes, actually, yeah, good point. We released 0.4.10 last week, and it has a bunch of cool features like GPU clusters and private clusters, etc. We hadn't released in a while, so it's a bigger release-ish, but yeah, take a look. Let us know if you have any issues; feel free to reach out on Slack or on GitHub. And I think we'll have one last release, one last bigger feature release, for v1alpha3, and that will have multi-tenancy and such, and then we'll probably move on to v1alpha4 after that.
A
Yes, they mentioned in the Cluster API meeting yesterday that they're about to do at least a tag or something for v1alpha4, because the providers are asking for it.
A
Okay, so let's go through the open discussion items. Windows PR status: that's the first one.
B
Yeah, sorry, I just wanted to, well, if James is okay talking, because I know he was driving earlier, but I just wanted to see what's left to do in the Windows PR and where we're at.
C
Yeah, can you hear me? Okay, cool. So I've been working on getting the VHDs published. All the tests are passing locally when I run them against the image that I've built. I published 1.19.3, but our tests run on 1.19.4, so I published that last night. It should be available this morning, so I'll rerun the tests.
C
Beyond
that,
I
think
the
open
question
was
just
around
the
additional
windows
tests,
because
windows
is
using
flannel.
Instead
of
calico,
I
had
to
add
extra
extra
set
of
clusters
to
run
those
tests.
So
I
know
that's
not
a
deal,
but
it's
the
that's
where
we're
at
right.
Now,
with
the
the
windows
support,
I
did
look
into
adding
calico
support,
but
it's
going
to
be
a
little
bit
more
extra
work
and
just
didn't
want
to
delay
windows
much
longer.
So
anybody
have
any
thoughts
on
any
of
that.
B
So a while back we added an end-to-end full-suite test for GPU clusters, because we didn't want to run GPU clusters on every PR; they're pretty costly and they were slowing down everything else. So I'm wondering: I don't want to exclude Windows from PR tests, we should definitely have some Windows clusters, but maybe we should rebalance what we're doing with the Linux clusters, like maybe move some of them to the full suite so we don't run them on every PR and they're optional. Not sure what people think.
E
Sorry, it was Craig. I was just gonna say I would like to see at least some Windows tests in the PR. Yes.
F
So, the tests might be taking a little bit of a long time. Perhaps we just have two PR jobs for the time being, one for Windows, one for Linux, and just run them in parallel.
A
Even if they're running in parallel, that's probably safe to have too. I think it's good to have more Windows tests right now, just to make sure we're not breaking it, because we might not be paying attention to some of the things it needs. So, just in case, it's probably safer to have a couple of the Windows tests at the beginning, at least.
B
Yeah, okay. We weren't before; we thought we were, and now we are. But we have a limit on the number of nodes that we can use, so we're not necessarily running everything in parallel. If we have seven tests, we're not gonna run all seven in parallel; I think it's three, so it will take longer.
F
All right, cool, thanks, Cecile. So I would love to talk about VMSS and async reconciliation.
F
So, a little background. I'm pretty sure most of the folks on this call know that the reconcilers go through and make a call to Azure, usually several calls to Azure, and then we build resources if they don't exist, to get to that goal state. Part of building a resource in Azure is that when we do a create, a PUT or a PATCH, to create an Azure resource, oftentimes that triggers what we refer to as a long-running operation. A long-running operation returns you a successful initial response at the HTTP layer, like a 202 Accepted or a 201 Created, and then that kicks off the Azure service, the Azure back end, into actually creating that resource: provisioning the infrastructure and doing what it needs to get it into a ready state.
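The lifecycle described above, an accepted initial response followed by polling the provisioning state to a terminal value, can be sketched roughly like this. Everything here (`fakeAzure`, the poll counts) is a made-up stand-in for illustration, not the real Azure SDK:

```go
package main

import "fmt"

// fakeAzure simulates the Azure back end: the resource only reaches the
// "Succeeded" provisioning state after a few polls.
type fakeAzure struct {
	pollsNeeded int
	polls       int
}

// beginCreate models the initial PUT: it returns immediately with an
// accepted status while provisioning continues in the background.
func (a *fakeAzure) beginCreate() int { return 202 } // 202 Accepted

// getProvisioningState models the follow-up GETs issued while polling.
func (a *fakeAzure) getProvisioningState() string {
	a.polls++
	if a.polls >= a.pollsNeeded {
		return "Succeeded"
	}
	return "Creating"
}

// waitForTerminalState is what the SDK's blocking wait boils down to:
// poll until the resource reports a terminal state.
func waitForTerminalState(a *fakeAzure) string {
	state := "Creating"
	for state != "Succeeded" && state != "Failed" {
		state = a.getProvisioningState()
	}
	return state
}

func main() {
	az := &fakeAzure{pollsNeeded: 3}
	fmt.Println(az.beginCreate())         // 202
	fmt.Println(waitForTerminalState(az)) // Succeeded
}
```

The point of the sketch is the shape of the problem: the blocking wait at the end is exactly what makes a synchronous reconcile loop so long.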
F
Part of that is that once we get that first successful response, that triggers in the Azure SDK a polling back and forth with Azure. It's going to constantly be getting that resource, asking if it's reached a terminal state. Once it reaches a terminal state, either success or failure, the SDK returns that object back to us as a consumer of the SDK.
F
It looks dangerously simple: we call create and then it waits and gets it back. Which is not actually the case; it functions that way, but for us that creates a really long loop in our reconcilers. So, for example, for a VM it's going to take, you know, 70 seconds or something like that to complete a reconcile loop for a machine.
F
That's bad UX, right? That's going to cause our users to wait to see any kind of status updates on an object for like 70 seconds. I feel like that is unacceptable and that we should do better, and one way we can do better is to kick off these operations and track them, so we reconcile quickly, pushing towards a goal state, and we do this asynchronously.
F
It takes a future from the Azure SDK and stores it on the status of the machine pool. The machine pool reconciler then says "hey, call me back in 15 seconds," and it comes back and reconciles again. It sees that there's a future on the machine pool, rehydrates that future, passes it into the Azure SDK, and the Azure SDK goes and makes a request and checks to see if it's done yet. If it responds back with "hey, I'm not done," then it says "hey, call me back again." So this is not very different from what the SDK does under the covers, but for us it actually gives us the ability to respond to users. Our concurrency levels can be a little bit lower, but we can be much more responsive to users of our controller.
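The store-the-future-and-requeue flow described above can be sketched very loosely like this. The `status` struct, `fakeAzure`, and the 15-second callback are illustrative stand-ins, not the actual CAPZ machine pool types:

```go
package main

import (
	"fmt"
	"time"
)

// status stands in for the machine pool's status subresource; the
// serialized future is persisted there between reconcile loops.
type status struct {
	Future string // serialized long-running-operation handle; "" if none
}

// fakeAzure reports whether the operation behind a future is done yet.
type fakeAzure struct{ pollsUntilDone int }

func (a *fakeAzure) isDone(future string) bool {
	a.pollsUntilDone--
	return a.pollsUntilDone <= 0
}

func (a *fakeAzure) beginCreate() string { return "future-handle-123" }

// reconcile is one pass of the async pattern: kick off the operation if
// no future is stored, otherwise rehydrate the future and check progress.
// It returns how long to wait before the next reconcile (0 means done).
func reconcile(s *status, az *fakeAzure) time.Duration {
	if s.Future == "" {
		s.Future = az.beginCreate() // start the LRO, return quickly
		return 15 * time.Second
	}
	if !az.isDone(s.Future) {
		return 15 * time.Second // not finished: ask to be called back
	}
	s.Future = "" // terminal state reached: clear the stored future
	return 0
}

func main() {
	s := &status{}
	az := &fakeAzure{pollsUntilDone: 2}
	for i := 1; ; i++ {
		if d := reconcile(s, az); d == 0 {
			fmt.Println("done after", i, "reconciles")
			break
		}
	}
}
```

Each reconcile pass is now short; the waiting happens between passes, which is what keeps the controller responsive.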
F
Where we go from there is, eventually, once the operation is complete in Azure, we can say "yes, it's created," mark the terminal state on the k8s resource, and, you know, move on with our day. All the while we've been updating status, we've been updating conditions, we've been producing events, and we've been very responsive to our users. So this is going into machine pool, but I think it also sets a pattern, and this pattern can be used for all resources.
F
This causes us to stop reconciliation right there and continue from there when we come back, so this might be a little bit odd with the errors, but I'm curious what folks think. It seems to make sense in the way that we're handling it. It's a little bit chatty in the logs right now, and I would like to make sure that these transient errors, these transient states, don't dirty up the logs, so we'll probably quiet those down. But I think it's more "hey, how should we do it?"
F
Well, we know what type the error is, so what we do is, at the top level, we say: is this a, you know, not-completed error? And if it is a not-completed error, we shouldn't log it; we're just not done yet. So, yeah.
A
Yeah, it seems reasonable to me. Cecile, did you have any comments?

B
No, I think the whole thing is really great. I already looked at the PR, and I had a few questions, like, yeah, around the errors: how does it work if another resource gets created after that, one that expects the resource before it to be created? And they answered that there. And then the other question I had was: how does it affect the number of API calls?
B
Because I know that's something that can be quite sensitive with big clusters and lots of resources. My wrong impression was that if you call the SDK to create a resource and then wait for it to be completed, that's just one API call, but I was wrong. It turns out, as David explained, that you call it once and then you have to poll it constantly to know if it's done, so it's actually more than that. So it's not like we're increasing the number of API calls by doing an async reconcile.
B
We
might
actually
decrease
it
because
we're
doing
it
more
smartly.
Now,
where
we
can
control
like
how
long
we
expect
the
resource
to
take
and
not
re-queue
until
we
think
there's
a
good
chance
that
it's
created,
for
example,
there's
no
point
checking
like
right
now,
there's
no
point
in
checking
that
a
vm
is
created
after
10
seconds,
maybe
in
the
future,
but
right
now
it's
not
there
so
yeah.
I
think
it's
really
great
work.
A
So this is the last item. I didn't add this; I just want to mention that I pushed a bunch of changes to the multi-tenancy PR, all the things we talked about. The only thing that is not done yet is the allowed namespaces, because I don't know how to do it yet. But anyway, I'm still gonna work on whatever comments and stuff come up, and I'm doing some more testing right now. Hopefully somebody takes a look at it.
F
I
am
a
midway
through
a
review,
it
looks
really
good.
So
far,
cool.
F
So
maybe
maybe
it
would
be
good
to
talk
through
that
a
little
bit
with
the
group
and
maybe
get
some
points
of
view,
because
I
would
love
to
hear
I
would
love
to
get
a
sanity
check
on
it
like
does
it
make
sense?
Should
we
continue
down
this
path?
Do
you
want
me
to
do
you
want
me
to
kind
of
phrase
it
up,
or
would
you
like
to
yeah.
F
So what this gives the user is the ability to have an identity that's separated from the environment variables we've instantiated onto the controller, so that the cluster can be built by whatever identity the user says. This really leads to two main scenarios. One scenario is that the user is not terribly concerned with security.
F
They
will
put
their
identity
in
the
same
name
space
as
their
cluster
and
we
have
an
object.
Reference
nader
has
in
the
pr
object
reference
to
an
identity
and
it's
a
core
v1
object
reference
and
that
identity
is
assumed
to
be
in
the
same
name
space
in
in
the
easy
case,
and
the
identity
has
a
reference
to
a
secret
and
we
use
that
to
build
the
aad
pod
identity,
binding
that
binds
the
controller
and
the
identity
together,
and
we
can
use
that
to
provision
the
cluster.
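A loose sketch of the object shapes being described, including the easy-case default that an identity reference without a namespace means "same namespace as the cluster." All of the type and field names here are illustrative, not the actual CAPZ API:

```go
package main

import "fmt"

// corev1ObjectReference mirrors the two fields of a core v1
// ObjectReference that matter for this discussion.
type corev1ObjectReference struct {
	Name      string
	Namespace string
}

// clusterIdentitySpec is the identity object the cluster points at; it
// in turn references the secret holding the actual credentials.
type clusterIdentitySpec struct {
	ClientID     string
	TenantID     string
	ClientSecret corev1ObjectReference // reference to a Secret
}

// azureClusterSpec is the slice of the cluster spec that carries the
// identity reference.
type azureClusterSpec struct {
	IdentityRef corev1ObjectReference
}

// resolveIdentityNamespace applies the easy-case default: if no
// namespace is given on the reference, assume the cluster's own.
func resolveIdentityNamespace(ref corev1ObjectReference, clusterNS string) string {
	if ref.Namespace == "" {
		return clusterNS
	}
	return ref.Namespace
}

func main() {
	c := azureClusterSpec{IdentityRef: corev1ObjectReference{Name: "my-identity"}}
	fmt.Println(resolveIdentityNamespace(c.IdentityRef, "alpha")) // alpha
}
```

The secret reference hanging off the identity, rather than off the cluster, is what lets the credentials live apart from the workload namespaces in the advanced scenario that follows.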
F
The
more
advanced
use
case
is
that
say
we
have
a
user
and
a
security
organization,
and
the
security
organization
would
like
to
create
identities,
but
only
house
them
in
the
security
namespace.
So
it's
like
their
high
security
namespace,
normal
users,
don't
have
access
to
it.
F
Only
the
security
organization
and
our
controller
have
access
to
this
namespace.
So
a
user
with
in
you
know,
bank,
like
foo,
the
user
is
told
there
is
a
identity
that
you
can
use
to
provision.
Your
cluster
here
is
the
name
and
the
namespace
for
this
identity.
We
have
provisioned
it
for
you,
you
can
use
it.
The
user
comes
in
and
creates
their
cluster
in
namespace
called
a
so
alpha,
the
alpha
namespace
and
they
create
in
the
alpha
namespace.
F
The
identity
in
the
security
namespace
has
allowed
namespaces
on
it,
and
the
allowed
namespaces
is
a
label
selector
that
label
selector
has
both
alpha
and
beta
on
on
the
label.
Selector
the
namespace,
where
the
by
virtue
of
using
the
alpha
namespace
our
controller,
is
then
able
to
to
reconcile
that
cluster
using
the
identity
provided
by
the
security
org,
because
the
cluster
was
created
in
the
alpha
namespace.
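The allowed-namespaces check can be sketched as a simple label-selector match. This is only an illustration of the idea, modeling selector matching as exact key/value equality; the real implementation would use Kubernetes label-selector machinery and different names:

```go
package main

import "fmt"

// allowedNamespaces sketches the constraint on an identity: a cluster
// may use the identity only if the namespace it lives in matches the
// selector.
type allowedNamespaces struct {
	// Selector lists the labels a namespace must carry to be allowed.
	Selector map[string]string
}

// matches returns true when every key/value in the selector is present
// on the namespace's labels (the core of label-selector matching).
func (a allowedNamespaces) matches(nsLabels map[string]string) bool {
	for k, v := range a.Selector {
		if nsLabels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	allowed := allowedNamespaces{Selector: map[string]string{"identity-access": "granted"}}
	alpha := map[string]string{"identity-access": "granted"} // labeled by the security org
	delta := map[string]string{}                             // never labeled
	fmt.Println(allowed.matches(alpha)) // true: a cluster in alpha may use the identity
	fmt.Println(allowed.matches(delta)) // false: a cluster in delta is rejected
}
```

A selector, rather than a fixed list, lets the security org admit new namespaces just by labeling them, without touching the identity object.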
F
Now, if someone tried to create a cluster in the delta namespace instead of alpha, and use that same reference to that identity, the identity would not be allowed to be used, because it does not have delta on the label selector.
F
So
those
are
the
two
scenarios
one
is
trying
to
the
secure
org
is
trying
to
make
it
where
identities
aren't
available
to
users
they're
available
to
a
cluster
and
they're
scoped
to
wear
the
namespace
where
the
cluster
is
built.
So
if
the
cluster
is
built
in
given
namespace
allowed
namespaces
constraints
at
so
that's
the
thought
on
it.
Anybody
have
any
ideas,
feedback
concerns.
C
Yeah, I think for me, I need to kind of visualize that a little bit. So there will still be a single controller that can create clusters in various namespaces, and the label selector enables them to use a particular identity? And so the single controller can have multiple identities, or are they now going to be creating a controller for a certain identity, and a different identity for a different part of the company?
B
Let's say you're in scenario one, where you create your identity in the same namespace as your cluster, and you don't specify allowed namespaces because you're not in that use case. That means anyone can create a cluster in another namespace using that identity.
F
This is one of the things that I find very weird about Kubernetes in general: the controller doesn't know the user that's trying to make the operation, right? So in a lot of places, you know, the RBAC rights go to a user, and we don't know the user that's actually manipulating the object, so we have to constrain it by other things, and the label selectors are how we constrain it.
A
Okay, no, I'll look at how to do that; I haven't done that part yet. But does anybody have questions or concerns about this approach? Cecile?

B
I'm not really concerned, but I think whatever we do, we should make sure that we're in talks with the other providers, so that there aren't super big discrepancies in the user experience. Even if the underlying implementation isn't the same, at least the user experience should be somewhat consistent.
B
For example, I know CAPA was talking about whether they should have the credentials be part of the infrastructure components and have multiple ones, or whether it should be a prerequisite that you create your credential secret; I think we're going for the latter. So we should make sure that we're talking to them about it and, you know, explaining why.
F
Yeah, a lot of the original design was cribbed from what CAPA had done, Nader and folks. Yeah.
A
That was the last item. Does anybody have any other questions or comments they want to bring up?
B
Oh, actually, yeah, I have one thing. PSA: there's an issue with Kubernetes 1.17.14 in the Azure cloud provider, the in-tree Azure cloud provider, so don't use it.
A
Here's the milestone coming up. Okay, so multi-tenancy: that is the one I'm working on and we're talking about. I think it's pretty close; David is reviewing, so hopefully it's pretty close. Machine pool k8s version upgrade doesn't work.
F
Yeah, part of it is getting the async stuff in place so that we can start putting in rolling updates, maxSurge, that kind of stuff.
A
Okay, so, oh, that's okay: support client certificate authentication. That's part of the same PR; that's kind of fixed by using an identity, so once we have multi-tenancy that's kind of taken care of. I added a part in the documentation about it, and both of them should be fixed by the same PR.
F
Just a quick one. So, for the k8s version upgrade, sorry, Cecile: you got the PR into CAPI for fixing token refresh. Is this associated with token refresh, and should we wait to get the new version?
B
It's associated in the sense that, like, if you're trying to do an upgrade after 15 minutes, it's not going to work. But we can still work on the mechanism of the upgrade, and the test will run within 15 minutes, so it should be fine. But yeah, the token refresh is part of the 0.3 release branch, so it should be part of the 0.3.12 release, which we're cutting on Monday.
F
Fantastic. And what I'm going to do is go and reword the title a little bit, just to scope it to the mechanics of the update.
A
Sounds good, sounds good. So, next one: experimental retry join. I think we have, I have a PR for that; we just have to keep it running. We at least need to wait for the next CAPI release, 0.3.12, and then decide after that. So I think we're okay if that's not merged; we're just testing with it, so we'll decide based on what happens.
B
I think it makes sense to have the multi-tenancy merged first, since that's going to affect how the identities work. But also, we should probably, first and foremost, update the multi-tenancy proposal to match what we're doing now. That's number one.
A
That
should
be
number
one
david
yeah,
and
then
I
can
take
that
one
once
like
I'm
done
with
like
the
actual
pr.
I
don't
want
to
do
it
until
we
were
done
so
if
you
haven't
got
into
it
at
that
point
I'll
take
it
sounds.
E
Yeah, I definitely don't have a strong opinion on how it should go, but I'll be a very good error tester, or confusion tester.
F
So
I
I
would
invite
you
to
go
through
the
quick
start
and
just
the
way
that
they
set
up
like
roles
and
set
up
the
the
initial
cloud
formation
stack
is
kind
of
it's
interesting.
They
they
use
their
own
command
line
to
help
provision
some
of
the
resources
that
you
need
to
get
started
with
init
knit
honestly.
A
Okay, so that's the status of the milestone as it is right now. It's probably enough; we don't want to add anything unless there are some critical bugs or something. Does that make sense?
B
Yeah. It's not in here, but I think Matt Boersma was working on using the new NVIDIA operator, since they released support for containerd.
A
Okay, no, I was gonna say, I mean, if he's already working on it and it's ready, we can just add it, but we don't have to wait for it if you want the release to come out. Oh.
A
Yeah, but keep in mind that if you're saying this is the last 0.4 release, then we probably won't be able to do another quick release unless there are bug fixes, because we'll be working on 0.5. So yeah, if we want this to be there, we can wait for it. I don't think this is a big one.
F
I agree with you, and also, from a timeline perspective, with the amount of work going into prepping the release and then going through all that stuff, it's probably a little tight on time to get in the PRs that we want to get in.
B
Yeah, I agree. Okay, so let's just aim for January for the next release, and then we can get that in there as well, because I think we definitely want to make sure multi-tenancy and Windows make it in, and we don't want to rush those.
F
Since we're on the topic of releases, I just want to give Carlos a big CAPG "way to go." Way to go, Carlos, nice job.
A
Okay, I guess we'll finish a few minutes early. Okay, everybody, enjoy your vacations, whoever's taking vacations, and we'll see you in the new year.