From YouTube: CAPZ Office Hours 05-15-2020
A
Alright, we're recording now. Hi everybody. This is a Friday — it is May 15, 2020 — and these are the CAPZ office hours. So let's get started. Let's start with open discussion items — and wow, Cecile, you do have a lot today — and then we'll go into the project board, and I might add a couple of things at the end too. So, Cecile, do you want to take it away?
B
So the first thing is, I just want to call out that v0.4.3 is out, and this one includes a bunch of goodies and features from different contributors. I'm really excited about that. Just a few — I'm probably going to forget some — there's Azure machine pools, failure domains, system-assigned managed identity, and... that's all I can remember right now. But you should check it out and try it out, and let us know if you find anything. Any questions?
C
Do we have templates that kind of outline all of the potential features?
B
Great question — yes. If you look under the release, there are assets that come with the release, and there are actually a few template flavors. You know how it says cluster-template — there's one for machine pools, one for system-assigned identity, and those are all generated from the flavors directories inside the templates directory using kustomize. So if you want to create a cluster with those, you would run clusterctl config cluster with the --flavor flag, and pass whatever comes after cluster-template in the asset name — so --flavor machinepool, for example.
B
So that was the good news, and now I have a little bit of bad news. We found a bug yesterday where there's actually a problem in how we were defining the webhooks: there was a missing plural in two places, which caused us to enable the webhooks for azuremachine and azurecluster rather than azuremachines and azureclusters, which doesn't work. It took a bit of time and some debugging to find why that was happening, but thank you, Ace, for helping find the solution.
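For reference, the plural matters because admission webhook rules match on a resource's plural name. A sketch of what a corrected rule in the generated webhook configuration would look like — the names and paths below are illustrative assumptions, not copied from the repo:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: capz-validating-webhook-configuration
webhooks:
  - name: validation.azuremachine.infrastructure.cluster.x-k8s.io
    rules:
      - apiGroups: ["infrastructure.cluster.x-k8s.io"]
        apiVersions: ["v1alpha3"]
        operations: ["CREATE", "UPDATE"]
        # Must be the plural ("azuremachines"); a singular
        # "azuremachine" here silently never matches.
        resources: ["azuremachines"]
```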
A
That sounds bad — thanks for catching that. Let's move on. So, Richard: "failure domain — need to change the logic?"
E
Yeah, it's more of a question — I don't know if I've got it right, but I was thinking about this. Essentially, the way the logic stands currently is: it will take the failure domain from the Machine first; if it's not there, then it will take it from the AzureMachine; and then, lastly, it will take it from the availability zone. But the way I was thinking is that the failure domain on the Machine will always be set if we're running in a region that supports availability zones.
E
So it doesn't matter — it will ignore everything that is on the AzureMachine or in the deprecated availability zone field. So I'm wondering whether we change the order: take it from, say, the AzureMachine first and then, if that's not set, fall back to the Machine, because the Machine will always be set. That would sort of make sense.
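The fallback order being discussed can be sketched in Go — the types here are simplified stand-ins for illustration, not the actual Cluster API or CAPZ structs:

```go
package main

import "fmt"

// Hypothetical, simplified views of the relevant specs.
type MachineSpec struct{ FailureDomain *string }

type AzureMachineSpec struct {
	FailureDomain    *string
	AvailabilityZone *string // deprecated field
}

// failureDomain implements the precedence described as current:
// Machine first, then AzureMachine, then the deprecated
// availability zone. Because Machine.FailureDomain is always set
// in zoned regions, the later values are effectively ignored.
func failureDomain(m MachineSpec, am AzureMachineSpec) string {
	if m.FailureDomain != nil {
		return *m.FailureDomain
	}
	if am.FailureDomain != nil {
		return *am.FailureDomain
	}
	if am.AvailabilityZone != nil {
		return *am.AvailabilityZone
	}
	return ""
}

func main() {
	one, two := "1", "2"
	// The AzureMachine value loses even though the user set it.
	fmt.Println(failureDomain(
		MachineSpec{FailureDomain: &one},
		AzureMachineSpec{FailureDomain: &two})) // prints "1"
}
```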
H
Yeah, I think this is some behavior that changed as we were working on failure domains. If you look in the guide for migrating from v1alpha2 to v1alpha3, there's a mention of migrating failure domains that were previously specified on provider-specific resources such as AzureMachine. I think Jason is still out today, but when he's back — hopefully next week — he would probably be one of the right people to ask, or Vince. But I definitely remember Jason making some changes around this.
B
Yeah — also, if anyone else has discussion items they want to add, feel free to add them before mine. But I just wanted to say that I opened an issue: I think we should start talking about what we want the API to look like when we move to v1alpha4, because there are a few things, you know.
B
For example, now we have two identity-related fields in the AzureMachine spec that are related to each other but aren't together in one place. So I'm thinking we should start thinking about this now and discussing it, so that when the time comes for v1alpha4 we're in a good place, where we have a proposal and a good idea of what we want to do. That's it.
C
[inaudible]
B
That's something to discuss in the proposal — I think that would make sense to me. I can see something like storage and networking and compute — separating those, making some logical split between them. But again, it's open for discussion at this point. Okay.
H
However, providers are free to add new API versions as they want to, and we do have support in the CRDs themselves to indicate that a given Cluster API version — v1alpha3, say — would work with CAPZ v1alpha3 and v1alpha4, as an example. So, as a provider, you could rev to a new API version before Cluster API does: you would still continue to work with the Cluster API v1alpha3 contract, but you can make breaking changes in your API if you need to.
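The mechanism being referred to is, as far as I recall, a label on the provider's CRDs whose key is the Cluster API contract version and whose value lists the provider API versions that implement it — treat the exact key/value format here as an assumption:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: azuremachines.infrastructure.cluster.x-k8s.io
  labels:
    # "Provider versions v1alpha3 and v1alpha4 both satisfy the
    # Cluster API v1alpha3 contract."
    cluster.x-k8s.io/v1alpha3: v1alpha3_v1alpha4
```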
A
Oh — are we good to close out the discussion? [laughter] Okay, cool, awesome. Cecile, you're all set? For the next one, if you want, I can go next and then we can come back to you.
A
Okay, so I wanted to talk about — specifically, where this would go. We're going to have to start thinking about some other Azure-related things that we have to add within CAPZ to make it easier for users to integrate into the Azure ecosystem, and the first thing would be container monitoring. That's usually an add-on within aks-engine, and within AKS today it's created as an add-on.
I
What we want to show is a CRD-to-Azure-resource reconciler. It's an experimental project that we've been working on to give us something for Azure similar to what exists for AWS and GCP. It's really early, but we'd love to start getting some feedback and getting it in front of folks, and all the source code is going to be open source, so everybody can party on it.
I
Let's make some flowers bloom — I'm super excited about this. Awesome. I'm not sure this does too much for the container monitoring question Rhea brought up, but this definitely starts to go down the path of: how do we do async reconciliation versus synchronous controller reconcile loops — a lot of the stuff that'll make the controller in CAPZ a little bit easier to deal with.
B
Yeah, I'd just love to hear what Andy thinks about this, because I know there's the proposal on ClusterResourceSet — applying resources to the cluster after deployment — that's going on, and there's also cluster add-ons, and I feel like we've been getting this question a lot recently: what do we do with add-ons, things we want to deploy on CAPZ clusters that aren't part of the deployment? That's going to become a recurring question.
H
Sure. So we have Cluster API PR 3050, which is the ClusterResourceSet proposal. My vision at a high level — and I don't know if this will come to fruition or be modified or just thrown out the door — is that we implement something like what's specified in 3050 to allow people to declaratively define what sort of initial things they'd like to have deployed to clusters. But, as we've stated in the proposal, we don't want to replace any sort of add-on management or operators that are out there.
H
So if you have folks who want to deploy clusters on Azure and you need container monitoring added to every cluster, or anything else, I think there are a couple of approaches you could take. One is: we know that clusters need a CNI installed, and this proposal would give you an easy way to get a CNI installed; and if you also had some sort of container monitoring bundle of YAML, you could use the same mechanism to get that installed.
H
You'd just say: I need this container monitoring thing, and it knows how to go find it. Sort of, you know, Homebrew-based — that type of thing — where in the manifests for your clusters and for your ClusterResourceSets, you have just enough to get the real stuff installed in the clusters, which then goes and manages add-ons. If that makes sense.
H
So I think it would be nice if we didn't have to have the full definition of every add-on you want to install in this manifest. To reiterate what I said a minute ago: let's pretend we have ClusterResourceSet. In my ClusterResourceSet, I would have a reference to a ConfigMap or a Secret that has the actual YAML for a Deployment or a DaemonSet.
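A rough sketch of what a ClusterResourceSet from the 3050 proposal could look like — the field names follow one reading of the still-unmerged proposal and may not match what finally ships:

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: container-monitoring
spec:
  clusterSelector:          # which workload clusters get the resources
    matchLabels:
      monitoring: enabled
  resources:
    - name: container-monitoring-manifests
      kind: ConfigMap       # the ConfigMap carries the actual YAML
```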
H
I would like Azure container monitoring; I would like Velero for backups; I want Prometheus and Postgres, or whatever else you treat as an add-on — but those would essentially just be names and versions, and maybe a little bit of configuration. I think it would be nice not to have to specify a giant pile of YAML and all the resources for each individual add-on when you're doing that. And this is pie-in-the-sky vision — I don't have anything written down; I probably should — but that's what I've been thinking about lately, yeah.
H
And that's kind of what this ClusterResourceSet is, but it's loosely defined: a ClusterResourceSet can point at one or more ConfigMaps or Secrets that contain my add-ons. But we're specifically and explicitly not considering this thing to be an add-ons manager, because there's the cluster-addons sub-project that exists — that's also part of SIG Cluster Lifecycle — and we're not trying to step on their toes or be a replacement for it.
H
It's just: I have a cluster; I need to get some stuff installed initially so that the rest — the real stuff, the real add-on managers — can do their work. So if there's some way to have that collection or catalog in the management cluster that's not a Cluster API-specific thing, and you figure out how to sync it from the management cluster to the workload clusters — sure, I think we could definitely continue to brainstorm about stuff like that.
B
Do you know timelines for both of these? Because right now there is no ready solution, right? For the external cloud provider, what we've done is just add it as a YAML file inside the repo and then have it manually applied. So what should we do for now — is that the...?
H
It's unfortunately not ready for lazy consensus at this point, because we still have some outstanding design questions that involve some security issues. I highly encourage you to take a look at the proposal. I won't tell you not to comment out the wazoo, but if you could, try to reserve your comments for things that are really big issues or perceived problems or questions, instead of nitpicking every detail like we like to do.

We've had this open for comment for a while, so we're just trying to get through the last issues and get it merged. So I'm not telling you not to comment, but I would suggest that the types of comments should have a slightly higher bar than a free-for-all, if that's possible. Where is this? This is Cluster API pull request 3050 — I can get you a link in the Zoom chat here real quick.
A
Yeah, and there's probably going to be work internally — we're going to have some discussions with the container monitoring team to see how this stuff would even get exposed, because I'm sure right now, initially, if anyone added a container monitoring add-on to their clusters, it wouldn't look great. So we're just going to have to manually try that out from that end, and from this end I guess we can document it through kubectl commands and wait for this proposal to go through, yeah.
A
Not yet, just because no one's actually really using CAPZ to its full extent, but I see it down the runway of things that we're going to need to incorporate. I did actually get asked specifically, but that was for some other stuff. So yeah — for us to become any sort of viable product, there are just a couple...
A
...definitely a couple of add-ons and a couple of things we have to layer on top of these clusters, because it's something Azure customers will naturally ask for, and we'll also just be a part of it — other Azure teams want to make sure their stuff is still supported. So, like, we definitely support Prometheus — Prometheus works, go use it — but we also want to give people the option, if they're completely in the Azure ecosystem, to be able to use Azure products really easily.
A
Yeah, cool, awesome — so I think we can hold on that one. Oh, the next one: "AKS provider PR merged" — there's no name on it, so... yeah.
B
The first one is: I opened a PR for the etcd data disk part in Cluster API. It's under lazy consensus until, I think, next Tuesday — go ahead and review it if you haven't. Basically, it's adding a generic way to add cloud-init disk setup, file system setup, and mounts.
B
So that part is static — the Cluster API part is not going to change. The part that's still a little bit under... I guess not anymore, but still being tested, is the CAPZ integration of how we reference the device, because it's complicated: basically, the device name isn't persistent across reboots. So we need to use a symlink to the device when we create the partition, and then we refer to the mount with a persistent label. Anyways.
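The generic Cluster API part being described corresponds to cloud-init disk setup passed through the kubeadm config — roughly the shape below, where the device path and label are illustrative, not taken from the PR:

```yaml
diskSetup:
  partitions:
    - device: /dev/disk/azure/scsi1/lun0  # stable symlink, not /dev/sdX
      layout: true
      overwrite: false
      tableType: gpt
  filesystems:
    - device: /dev/disk/azure/scsi1/lun0
      filesystem: ext4
      label: etcd_disk
mounts:
  - - LABEL=etcd_disk                     # mount by label, survives reboots
    - /var/lib/etcddisk
```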
B
I'll just go ahead and say the next one too: I just wanted to call out that the AKS provider PR merged. That was a big one that Ace had been working on for a while, and it just merged yesterday. So it's not in the latest release, but go ahead and try it out if you're interested — I'm super interested.
A
Alright — I'm going to attempt to share my screen while also driving this. I'm only on one screen today, and I've never done this with just one screen. Who'd have thought — it worked. Alrighty. So let's get these new cards over, and then just double-check our to-do column and make sure it makes sense. Alright: "defaulting in templates for marketplace images."
A
I'm going to put that in the backlog, assuming no one's working on it. "Managed cluster should check provisioning status before attempting changes" — I'll put that in the backlog until, I'm guessing, Ace shows up and says he's working on it already. "Add CI apidiff job" — no... these... no.
A
Open, in the backlog — and maybe in a week or two that'll change. "Provide deadlines and cancellations for all uses of context" — David, is that yours? What's that about?
I
So there's going to be plumbing all the way down. Like, even a patch object has a context call, and in our scopes we say, hey, let's instantiate a patch object, and then we call patch and we don't pass context down to it. So there's a lot of plumbing all the way down, right, if we want to really have hierarchical contexts all the way down.
I
So if, say, we decide a reconcile loop can only take 60 seconds, and we get all the way down into a patch call, and the patch call doesn't have, you know, a relationship to that higher-level context, then the patch call could run indefinitely and eventually, you know, blow past the loop's budget.
I
To close out on that: the actual design discussion — and maybe it doesn't have to be a full proposal, but more just, hey, what is our philosophy on how long the reconcile loop can run? Do we have a time at which we're going to terminate and say, hey, you're done, let's try another one at some point — you failed?
B
I wanted to ask Stephen, actually: last time, when we did the triage, we were thinking about whether we should rename our giant "done" column. We have this big "done" column, which is supposed to be Q1, but we're past Q1 now. But also, does it really make sense to track this stuff by timeline and not by milestone?
F
It depends. So right now the board has "automate as done" configured for that column, so we could just create another column and configure it as the automated one. The reason I was doing that, at the time at least, was that there was less activity in the repo, so scoping it by quarter made sense.
F
The milestones — we could do milestones. I think we also had multiple milestones; one was infra or something — not necessarily codebase milestones, but also plumbing. So yeah, I'm fine to move to milestones if that makes sense now; this was just what worked at the time. Yeah.
B
Support is in July anyway, and so now I have this ongoing milestone, and that's probably going to last a while. I feel like it'd make more sense to have smaller-scoped milestones — multiple ones, right? Because we're only tracking 0.5, and I think we'll also be putting stuff in 0.3.6 and doing dot-patch releases.
F
One thing about that: once a PR merges or closes, or an issue closes, right now the milestone applier will automatically apply some milestone, right? So the milestones are fine — we don't necessarily need to track them manually. If someone is interested in seeing what milestone something landed in, they could filter by milestone on the board, and you'd not only be able to see the milestone, you'd also be able to see what quarter it was done in, right?
C
I just thought of something — both David and Cecile — that I wanted to get your feedback on. This is related to the Azure sovereign cloud support issue.
C
I think the main reason why it was pushed to kind of go down that way is that we have a lot more direct control, and if there's any sort of change that needs to take place in that logic, we'll be able to make those changes much quicker in CAPZ. Would they be better in go-autorest in the long term? Perhaps — in fact, I agree with you; I do think they would be better served in go-autorest in the long term.
C
I was just thinking — we were talking about primarily using the environment-name lookup and then falling back to a file to provide the details, and that functionality already exists in go-autorest. It just doesn't use the metadata lookup that we talked about. So I'm wondering: is it really worth it to build the environment-name-to-environment mapping in CAPZ just to add the functionality of the metadata lookup?
J
I can field this, since I left that comment, actually. Yeah — so, no, I don't think we should be statically remapping the users' go-autorest environments. The main scenario where this is relevant is custom cloud scenarios, exactly. So we just want to make sure that we do have support for those environments, and I think one of the issues — maybe I can clarify this offline — is that the go-autorest Environment struct does not actually reflect all of the endpoints that you need in a custom environment, yeah.
B
Okay.
F
I was going to suggest Triage Party, and then — boom. I think what I'd eventually like to do is allow multiple instances of Triage Party at the Kubernetes level, like we're testing out for SIG Release.
F
And I said, yeah, it'd be nice to be able to deploy all of those in one place and then swap configs easily. So maybe we can — if you want to catch up with me later, maybe next week. Marky is working on that in SIG Release, and I think Thomas did a demo for us, both at the SIG chairs meeting as well as within the release engineering meeting.