From YouTube: 2020-07-10 CAPZ office hours
A
Cool, so welcome everyone to the July 10th, 2020 Cluster API Provider for Azure meeting. Cluster API is a Kubernetes project, which we also call CAPI, and the Cluster API Provider for Azure is also known as CAPZ; it's the CAPI implementation for the Azure cloud. This meeting is open to the public and we're recording it, so we ask you please to abide by the CNCF code of conduct while you're here. Hopefully you've read it, but if you haven't, it basically boils down to: everybody, please be welcoming and open, let's respect each other, and make sure everybody has time to speak. We have this meeting every other Friday, so the next one will be July 24th. I'm Matt Boersma, and I volunteered to moderate today, and I think James is going to do the real hard work of taking notes. If you want to pitch in in the future, you can do either of those roles by just signing up in our agenda and notes document; I'll post a link to that.
A
I'm right there with you; there's a cognitive burden switching between Zoom and Teams all the time that I'm not able to overcome. Great, so please go ahead and add your name to the attendee list, so we know everybody who was here; that's in the document I just posted. And do we have any... oh, congratulations, by the way. I mean, I thought Nader and David were already maintainers, and Carlos was as well, but I guess that's about to be official. So congratulations to the new maintainers and reviewers.
B
We actually haven't done this in a while, so it's good you did this. We should probably get in the rhythm of checking those, but most of them are PR jobs, like presubmits, so it's not very accurate; it doesn't really give you a good view of the state of things. But you can look at the periodic ones, which are the periodic conformance and the periodic e2e, those two.
A
That was just kind of a great discussion. So unless anybody else has other announcements or anything that fits up here, we'll go over into the discussion section. We already congratulated ourselves for the new approvers and maintainers, so then next: I'm putting words in your mouth here, David, but it seemed like when we had a stand-up the other day, you had a question you wanted to bring up here about how we should structure PR tests, or not. You know, it's Friday.
B
I'll start, and Nader, let me know if you want to add anything. Right, so we basically broke it down into several categories: there's unit testing, conformance testing, end-to-end testing, and periodic nightly builds. We decided to focus on end-to-end for now, because that was where there's, I guess, the biggest gap for CAPZ. And for unit tests, we thought right now, as a short-term action, we should start measuring the test coverage, to basically
B
Add a more formal review step when we merge new features, to make sure that people are encouraged to add unit tests with new code. And so for end-to-end testing, we need to figure out which subset of end-to-end tests we want to run as presubmits, because these things can take a long time. We don't want to have too many in the presubmit, but at the same time we don't want to have so few that we don't catch regressions.
B
So it's kind of a hard balance to find, and we were thinking we could have several jobs, like several prow jobs, each running a different cluster configuration, so that they run in parallel. Which cluster configurations we want to test is still TBD, but that's the idea, and then the periodic job should be the superset of everything.
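The split being described, a fast presubmit subset with a periodic superset, can be sketched as a small selection function. This is a toy model only; the job names, counts, and timings below are illustrative and are not the actual CAPZ prow job definitions.

```go
package main

import "fmt"

// jobConfig models one e2e job; the names and the presubmit/periodic
// split below are made up for illustration.
type jobConfig struct {
	name      string
	presubmit bool // fast enough to gate every PR
}

var configs = []jobConfig{
	{"e2e-1-control-plane-1-worker", true},
	{"e2e-3-control-plane-2-worker", true},
	{"conformance", false}, // ~2h, so periodic or manually triggered only
}

// jobsFor returns the jobs to run: presubmits are the fast subset,
// while the periodic run is the superset of everything.
func jobsFor(periodic bool) []string {
	var out []string
	for _, c := range configs {
		if periodic || c.presubmit {
			out = append(out, c.name)
		}
	}
	return out
}

func main() {
	fmt.Println(jobsFor(false))
	fmt.Println(jobsFor(true))
}
```

The point of the shape is that the periodic set is computed as "everything", so a job can never be in presubmit without also being covered nightly.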
B
So either have, for now, manual triggers, or, if we can be smart about it, have certain paths in the code base trigger certain jobs. So, for example, for the PRs that are bigger, we can run conformance on them to make sure. You know, conformance takes like two hours, but it's worth it if the PR is, you know, considered risky because you're changing more things. What do you guys think about that?
H
I think at the point we were talking about this and writing this document, we were more concerned with having the tests, I mean all the tests, that we can at least trigger manually when we feel that we need to. And I don't think we have everything yet, so maybe we need to just make sure we have all the things, and then figure out how to optionally run things, or whatever, based on that.
B
The other thing I would like to see is, as much as possible, we should try to test things in unit tests. Because if we're using end-to-end tests to test everything, as an excuse that we don't have good unit tests, that's not great. I think there are many things that we can test in unit tests. We should still have the end-to-end integration tests, but we shouldn't lean on them for everything.
A
Yeah, just one practical thing. Since we're trying this: this is the approach that AKS Engine essentially takes, right, throwing in the kitchen sink to get the most bang for the buck you possibly can, given that it's expensive to start up one cluster. And I see Jack made some comments about that being problematic; maybe he can explain more. But the only thing I was going to say is:
A
It seems like right now on each PR we spin up a reasonable cluster with three masters and a node, and then we spin up this sort of degenerate cluster with just a single master. Maybe we should stop doing that configuration, or add some nodes to it, like Cecile was saying, because that doesn't really get us anything. I think it was just so we could spin up the fastest possible cluster for a smoke test, but we should probably move beyond that.
C
I would argue that all these discussion points can be formalized, and so maybe we should work toward, you know, integrating these things into the actual governance of the project as sort of first-class requirements. I'd love to contribute to that effort, just basically documenting all these things that we're talking about.
A
Cool, the next topic was mine.
A
We kind of brought this up at the last meeting and it didn't really get anywhere, but basically we merged some tests that I wrote in e2e that just use the client-go library to publish deployments and stuff, and then Carlos has a nice PR out there for testing network policy, or network configuration, that's basically wrapping kubectl, which is an entirely valid way to go. So the question obviously comes up: is this a problem?
A
That
worked
okay,
but
if
we
need
to
do
anything
fancier
like
if
we
want
to
make
an
actual
sort
of
cube,
see
tail
exec
type
call
and
get
into
the
pod
and
do
something
at
runtime.
Obviously,
that's
possible
in
client
go,
but
I
don't
want
to
write
that
code.
So
so,
at
that
point,
cube
CTL
would
be
arguably
better.
A
So I wrote up the pros, so let me just go over those; probably there are some other ones I didn't think of. For client-go, some of the positives are: it looks to me similar to what CAPI is doing upstream, so the code fits in a little cleaner with that, and you don't need to worry about matching versions. If we're shelling out to kubectl, we would want to make sure we're using the exact same version of kubectl as the server supports, because potentially there are incompatibilities there, and that can be a little bit tricky, especially if the test is upgrading Kubernetes itself and you want to start testing midstream with the newer version of kubectl. But those are solvable problems. And then the advantages of using kubectl are: I think the test code ends up being a little easier to read, or a lot easier to read.
A
It could help us with just some straight-up copy-paste from existing AKS Engine tests, because those wrapped kubectl. And it avoids the API incompatibilities of using client-go, because with client-go we're obviously just vendoring in one version of Kubernetes and then kind of crossing our fingers that it works with all the versions we're testing with. Generally it does, but at some point there'll be an incompatibility, and that'll be painful.
A
But as to the first question: are people, are we, okay with this? Because Carlos' PR has been out there for a while and we're probably just going to merge it, and then we'll have two sort of different styles of e2e tests. Are we okay with that, or do we consider that something we need to fix right away? Yeah.
A
There's two different kinds of breakage there, right: there's the client-go signatures changing, and then there's also the API types changing. So I think the client code breaks are less frequent generally; 1.18 was the first big one in a long time, I think. But the API types, I mean, there's a little bit more there. But if we're clever with this, maybe we can win it. Yeah.
C
I mean, if there's a way to do it without... I just wonder if it's literally like an "instantiate this version of the API call because I'm using this version of client-go" thing. That was something that, say, two or three years ago was a real struggle. I'm not sure if go mod has improved that, or if we just didn't know we were doing it the tricky way.
I will say that I've been shielded from some of this, because I've been using the controller-runtime client a lot, which already does discovery out of the box to find the right version. If you use the pre-generated clients, where you are specifying a version, that's hard-coded, right? So you need to switch which version you've specified, for example apps/v1 Deployment, because it's fully qualified in the pre-generated client. Okay, so I think the thing that you're using maybe didn't exist three years ago.
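To illustrate the difference being described: a pre-generated client hard-codes the group/version (e.g. apps/v1) at compile time, while a discovery-based client such as controller-runtime's asks the server which versions it serves and picks the preferred one at runtime. Below is a toy model of that lookup only; it is not the real controller-runtime API, and the kinds and versions in the map are illustrative.

```go
package main

import "fmt"

// serverVersions stands in for API discovery: the server advertises
// which group/versions it serves for each kind, preferred first.
var serverVersions = map[string][]string{
	"Deployment": {"apps/v1"},
	"Ingress":    {"networking.k8s.io/v1", "networking.k8s.io/v1beta1"},
}

// preferredVersion picks the server's preferred version for a kind at
// runtime, instead of hard-coding it the way a pre-generated typed
// client does. It returns "" for an unknown kind.
func preferredVersion(kind string) string {
	vs := serverVersions[kind]
	if len(vs) == 0 {
		return ""
	}
	return vs[0]
}

func main() {
	fmt.Println(preferredVersion("Ingress"))
	fmt.Println(preferredVersion("Deployment"))
}
```

With this shape, upgrading the cluster changes the map the "server" reports, and the client follows along without a code change; that is the shielding effect described above.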
B
And then also, if you could open the milestones page in a tab... see, there we go. So what we usually do in the meetings is we just go over the boards, look over the in-progress tasks and the to-dos, and then add new issues to the backlog. But I think for this one we wanted to do something a bit different and plan the next milestone, since it's been a while since last time. So yeah, if you take a look at the current milestones...
B
So what I was thinking, and let me know if anyone has thoughts on this, is we should maybe close this one and then open a new one, move all the items that are still not done into the new one, and then maybe prioritize by time instead of by number. Well, that's what we did last time too, but have a shorter timeline: this one was, I think, three months, but we could maybe do two weeks or a month. I'm not sure what people think; a month seems reasonable.
B
Great question. It can be, and it can also not be; I feel like we've mostly been doing releases independently of milestones.
B
Okay, all right. So we're about to release 0.4.6, right? So I think we can just call this 0.4.7 if we're trying to match the milestones to releases. But then, if we need to release in between, it doesn't really make sense anymore, so I can just call this 0.4.x; maybe that seems safer. Excellent, okay. And then I'll put the due date, maybe, for a Friday... yeah, that's in a month, and it looks like the day of our office hours.
H
Thanks. One question I have is about this enhancement proposal: to me, it seems more like a long-term epic thing that we're not necessarily going to get done in the next month, but it's still at the top of our minds. Is it something we want to keep in the milestone? And is there a way to maybe break it down into what we think we're going to accomplish in the next milestone?
B
So, by default, the network security groups that we create have port 22 open for SSH, and I don't think we're going to stop doing that for now, until we have private clusters. So I think we'll have to add the option, make it so that you can configure it. And actually, thanks to some work by... what's the first name again? Steven... no, not Steven... sorry, Spencer, I think. Thank you, Spencer. We now have configurable security ingress rules, so you can actually configure the rules and override the default one.
B
Okay, great. And so yeah, this one, I think we still... well, yeah, we still want that; I'll think about it. And then, as you see, I think we still want to do that. Whether we'll be able to commit to it in the next month, I'm not sure, but I think we should prioritize it and then really get started on it. I'll have to ask Steven about it, but we should definitely do that. Any objections, or anything in here that anyone thinks we should kick out? Henry?
B
So I'm going to put the ones that are in priority/important-soon in there for now. So this one is the refactor, which is in progress; by the way, thanks Carlos for all the help there, it's been really, really great. But yeah, I definitely want to get that done in the next month; that's what I'm working on right now. This is also an in-progress thing we want to get in: thinking about the resource names.
B
I don't know if we want to do that right away, but I think it's important at some point, and the sooner the better, because it might be potentially breaking, and so I think if we wait too long, we'll break more users. So it's better to do it sooner, like ripping off the band-aid. That's my view on it, but what are your thoughts on that?
B
Next, failure detection. I think that one is really important. I don't know if that's doable in the next month, though, especially since we're in talks with CAPI, but I think we're going to be looking into it and looking at the design in the next month. So maybe that's also one of the ones.
B
Do we want to break it down into an enhancement proposal first, as part of this milestone, with an implementation later? Custom image docs, we want that, and yeah, I'm going to put ephemeral disks in there for now as well, since there is a PR open for it. So that's the first round, okay. And then now everything else, I guess, is fair game. So does anyone have any proposals, like issues that they'd like to see in this milestone?
B
Yeah, I think it's okay to overcommit a little bit. This is, you know, more of a guidance, not really a hard rule: if we don't get everything done, it's not the end of the world, but we should definitely keep it scoped enough that we don't end up having way too much and then not having focus.
Also, this is more for us, to kind of know what's priority. But it's also really hard to gauge how many people are going to be, you know, working or contributing, and for how many hours, in the next month at any given time. So it's not like we can really accurately predict what the project overall is going to get done, and for sure people are going to contribute features that aren't in this list and that aren't part of the milestone. But this is more for us: if we don't know what to pick up next, then it should be one of these.
B
Also, in general, I prefer not adding any good-first-issues to the milestone, just because those are, hopefully, supposed to be things you can get done at any given time, with no timeline or deadline on them, and because for first-time contributors it tends to scare them away if there's a deadline on when it's due.
B
Actually, let's look at the ones that are already assigned. David, this one: should I put it in there?
B
Okay, oh yeah, I agree: let's leave it for now and maybe make it a stretch goal if we have extra time. How about Windows worker nodes? James, do you have any update on that? Is there a proposal coming soon? Yeah.
H
The only other thing that I know of: there was an issue for private clusters that Justin mentioned. I don't know if he wants to include it.
B
Yeah, I agree. I mean, yeah, ideally we would do that.