From YouTube: 2020-10-15 CAPZ Office Hours
A
Okay, hello everyone. Today is Thursday, October 15th, and this is the CAPZ office hours. CAPZ is a subproject of SIG Cluster Lifecycle. Please make sure that you follow the CNCF code of conduct and be nice to each other. All right, so if you can add your name to the attendee list, we'll get started. So, before we start...
A
Is there anyone here who's at the meeting for the first time and who'd like to take a small moment to introduce themselves and tell us what they're here for and what they're interested in?
B
Well, today is my first attendance of this meeting. I'm Craig Peters, I'm a PM. I actually work very closely with David, Matt, and Cecile. My interest (I will add my name to the attendees) is in understanding where v1alpha4 is headed, and starting to move towards what Cecile was mentioning earlier before the recording started, which is getting more users involved in this discussion.
A
Okay, so let's start with the discussion. We have one PSA: the CAPZ 0.4.9 release is out. I think this was almost two weeks ago, but you can check it out. Let us know if you find any issues, or file any GitHub issues that are relevant. A lot of cool stuff went in that release, so check it out. Any questions on that, or comments? Anything I forgot to mention?
C

A
Yes, try out IPv6 single stack if you're interested. Cool. Okay, the KCP upgrade job, my favorite topic.
A
Yes, please keep doing that. So, actually, the PR to add the workload cluster logs just merged, so I was hoping to rely on that to get more info on it.
C
But last time we mentioned we would do some metrics to help figure out what the problem was. I don't think I've done that yet.
A
I think David said he was going to start on that. Is that still on the table, David?
D

A
Okay, actually, I have a small related thing. I'm not sure if people have any ideas on how we could be more reactive to failing tests in the periodic ones, because right now, unless we go check them, we're not going to see the failures. I'm saying this because recently I noticed that... so, we actually have tests in two places: we have tests in the sig-cluster-lifecycle TestGrid tab, but we also have them in provider-azure.
A
You
can
see
my
screen
right
yeah,
so
if
you
go
in
pure
provider,
azure
periodic
chose
the
end.
There's
a
bunch
of
cab,
zip
jobs
that
run
the
azure
file
azure
disk
end
to
end
and
the
other
day
I
was
helping
someone
on
slack
and
this
is
not
capzi.
I
realized
that
they
were
just
broken
and
not
running,
and
this
had
been
like
that
for
a
while,
and
no
one
had
noticed
so
it
was
a
simple
fix.
A
All right, so: the Calico version update is blocked. I wrote that down. So, we're currently on Calico v3.12 (the example add-on that's in the repo is v3.12), and I was trying to bump it to v3.16, which is the latest, but I'm running into a failure coming from the CAPI test framework. I haven't really found the exact root cause yet, but it basically fails with a kustomize execution error, because the argument list is too long, so I'm guessing...
D
Yeah,
are
you
good,
so
I've
run
into
this
issue
before
and
it
was
well.
Thank
you
man.
It
was
when
we
were
piping
in
to
customize
or
we
were
taking
customized
pumping
into
event
substitute
right.
So
when
you
do
that
from
the
command
line,
you're
you,
if
your
pipe
is
too
big,
it
fails
right
so
yeah
we
need
to
write.
We
just
need
to
write
a
temporary
file
and
use
a
temporary
file
as
input
if
possible,.
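The temp-file fix David describes can be sketched roughly as below. This is a hypothetical illustration, not CAPZ's actual scripts: `render_manifest` stands in for the real `kustomize build` step, and the paths are made up.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for `kustomize build <flavor dir>`: emit a large YAML document.
# A real rendered cluster template can be big enough that piping it straight
# through the shell fails with "argument list too long".
render_manifest() {
  for i in $(seq 1 5000); do
    printf 'apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: cm-%s\n---\n' "$i"
  done
}

TMP_MANIFEST="$(mktemp /tmp/cluster-template-XXXXXX.yaml)"
trap 'rm -f "$TMP_MANIFEST"' EXIT

# Write the rendered output to disk instead of holding it in a pipe.
render_manifest > "$TMP_MANIFEST"

# Downstream tools then read the file as input rather than a pipe, e.g.:
#   envsubst < "$TMP_MANIFEST" | kubectl apply -f -
echo "rendered $(wc -l < "$TMP_MANIFEST") lines to $TMP_MANIFEST"
```

The key point is only that the large intermediate output lives in a file, so no single command-line argument or pipe buffer has to carry it.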
A
Yeah, so I saw the issue talking about that; it was the PR from James to fix the Tiltfile that was doing that. But the thing is, we're not exactly sure how that's happening yet, because the place that throws the error does not take this CNI config, if that makes sense. And then I also tried to repro with CAPD, the Docker provider, and I couldn't repro it using the same CNI file. So there's something strange going on that's still not clear.
D
Maybe
it's
one
of
our
scripts.
Maybe
it
is,
are
you
sure
it's
in
the
test
framework.
A

D

A
Yeah, so, yeah, I'll keep you all posted with updates. Okay, I think that's the end of the discussions that people had added. Is there anything else someone wants to talk about? We can talk a little bit about the v1alpha4 planning, the milestone, and the roadmap, if people are interested, unless there are any other topics.
C
I'm going to just give a quick update on the multi-tenancy thing that I've been working on for some time. I just pushed some of my code changes. I had a chat with David yesterday to ask him about your question, and then I pushed my code this morning. I'm having some weird issue that I was just going to send on Slack for David to take a look at; it seems that I'm not passing something correctly to the pod identity library.
C
No, I'm still manually trying to make it work, because it hasn't worked yet.
A
Okay, while we're at it, I'll also give a quick update (I just remembered) on my API server endpoints PR. It's ready for review. Basically, what it's doing is adding an API server load balancer spec to the AzureCluster network spec, which will allow you to specify an API server type, which can be internal or public. If you select internal, that means you get a private cluster, so private API server endpoints.
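A manifest using the spec described above might look something like this. This is a sketch based only on the discussion; the exact field names and defaults depend on the final PR.

```yaml
# Hypothetical AzureCluster fragment; field names are my guess from the
# discussion, not the merged API.
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: AzureCluster
metadata:
  name: private-cluster
spec:
  location: eastus
  networkSpec:
    apiServerLB:
      # "Internal" would give a private API server endpoint (a private
      # cluster); "Public" would keep today's behavior.
      type: Internal
```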
A
And
yeah,
so
the
code
is
there
unit
tests.
Are
there
I'm
just
working
on
an
end-to-end
test
which
I
haven't
pushed
yet
it's
in
my
local
branch,
but
it's
not
working
yet,
so
I'm
still
working
on
that.
But
in
the
meantime,
if
you
want
to
start
reviewing
the
code
itself,
that'd
be
great.
It's
a
pretty
big
pr,
but
a
lot
of
the
changes
aren't
like
real
changes.
So
the
actual
logic
shouldn't
be
too
too
big.
B
A
Yes, so for CAPI, the backlog grooming is happening tomorrow, so the milestone is not completely planned yet. But if you want to get a sense of what's happening, I suggest going to the milestones in the CAPI repo and checking out the 0.4.0 milestone, which I think is where most of the issues that we are planning on working on have been added, and then also the... sorry.
A
Sorry, I think it got merged into the main branch, but it's not updated in the book yet, so anyway, I'd recommend just going into the repo and looking at it there.
A
But yeah, so that's for CAPI. For CAPZ, there are a few things going on. I think we're holding off a little bit because we want to see what CAPI does first, because that's going to impact us: we're just going to take that in and then add our own things on top. So right now we're still doing v1alpha3 features, and we're planning on doing another release, 0.4.10, before we start adding the v1alpha4 types to the main branch.
A
So
this
is
the
current
milestone
that
we
have
in
place
and
I
think
all
of
these
are
doable
by
then
I
think
we're
targeting
end
of
october,
but
a
lot
of
them
are
already
in
progress.
There's
only
one,
that's
unassigned!
A
So
that's
what
we're
targeting
for
now
and
then
after
that,
in
terms
of
v1,
alpha
4
features.
There's
the
cluster
api
azure
book,
which
has
a
roadmap,
and
where
is
that
here
and
well.
Some
of
these
are
already
in
progress
or
done
so
we're
a
bit
ahead
of
ourselves.
But
these
are
the
big
ones
that
we
said
we
should
have
by
v1
alpha,
for
so
I
think
out
of
these
right
now
that
are
missing
is
bootstrap
failure,
detection,
which
would
add
the
vm
extension
and
then
gpu
notes.
A
Okay, great. Private clusters I'm working on right now; multi-tenancy is being worked on right now; worker nodes, that's also being worked on by James; and then IPv6 is already merged. So we should probably add new stuff for v1alpha4, is what I'm realizing right now by looking at this. But it's...
C

D
What's blocking users from consuming CAPZ?
B
Well, for the last few users I've talked to, the things that were blocking them were things like GPU use cases, or Windows use cases, and then private clusters. Those are the three things that I have talked to users about that they wanted. So once those things are out of the way, I would be willing to think about that as an MVP and, you know, push people much more aggressively to CAPZ.
A
Just because, at the time when we moved to v1alpha3, a lot of us were new to the project and we were still picking it up. So I think we did a lot of lift and shift and not much actual changing of the APIs; as far as v1alpha3 goes, we were just trying to do the minimum to support the CAPI types. So I'm hoping that for v1alpha4 we can be a bit more opinionated about how we add the new types. And, sorry, this is not the right issue.
A
And
yeah,
I
have
this
like
in
progress
issue
that
I've
had
since
may.
That
was
just
like
tracking
things.
Every
time
something
came
up
in
an
issue
or
a
pr
and
a
review
like
a
breaking
change
that
someone
suggested,
I
would
just
write
it
down.
So
we
can
like
go
through
this
list
and
just
like
see
the
ones
that
we
still
want
to
do,
and
hopefully
try
to
like
consolidate
a
little
bit
right
now.
A
For
example,
the
azure
cluster
spec
is
a
bit
all
over
the
place
like
there
are
a
lot
of
fields
like
are
exposed
individually,
and
it
could
be
like
a
bit
more
organized
in
terms
of
like
security
or
spec
storage,
space,
spec
network
spec.
Things
like
that.
So
this
is
more
like
the
ux
side
of
it
right.
B
That makes sense. So, is there any way to protect a user who, say, adopts it with v1alpha3, and minimize their cost as they move to v1alpha4? Can they just...

A
Glad you asked.
A
So
that's
why
we
have
conversions
in
place,
so
the
api
conversions
actually
like
every
time
you
make
a
change
to
the
api
that
is
like
that
would
be
breaking
from
v153
to
v1
f4,
and
that
requires
manual
conversion.
A
It's
expected
and
it's
actually
enforced
by
unit
tests
that
you
would
add
a
conversion
that
comes
with
it
and
so
right
now
we
have
conversion
from
the
one
offer
two
to
v,
one
after
three,
but
once
we
add
v,
one
f,
four:
u
one
alpha
four
will
become
the
new
hub
and
I
think
we'll
deprecate
v1
after
two
and
just
remove
it
all
together.
A
So
we'll
have
conversions
from
three
to
four
and
that's
just
the
like
thing
that
says
like
oh,
like
if
you
for
what's
a
good
example
of
this,
like
okay,
for
example,
in
v1,
alpha
2,
we
had
api
endpoints
and
then
in
v1,
alpha
3
got
changed
to
control,
plane,
endpoints,
and
so
this
is
like
converting
api
endpoints
to
control,
plane,
endpoints
or
vice
versa.
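The APIEndpoints-to-ControlPlaneEndpoint example can be sketched as a tiny standalone Go program. The real conversion code lives in the cluster-api repos and handles many more fields; the types and function names below are simplified stand-ins, assuming the first API endpoint becomes the control plane endpoint.

```go
package main

import "fmt"

// Minimal stand-in for the shared endpoint type.
type APIEndpoint struct {
	Host string
	Port int32
}

// v1alpha2 carried a list of endpoints.
type ClusterStatusV1Alpha2 struct {
	APIEndpoints []APIEndpoint
}

// v1alpha3 carries a single control plane endpoint.
type ClusterSpecV1Alpha3 struct {
	ControlPlaneEndpoint APIEndpoint
}

// convertUp sketches the manual "up" conversion: the first API endpoint
// becomes the control plane endpoint.
func convertUp(src ClusterStatusV1Alpha2, dst *ClusterSpecV1Alpha3) {
	if len(src.APIEndpoints) > 0 {
		dst.ControlPlaneEndpoint = src.APIEndpoints[0]
	}
}

// convertDown sketches the reverse direction, so the two versions can
// round-trip (which is what the conversion unit tests enforce).
func convertDown(src ClusterSpecV1Alpha3, dst *ClusterStatusV1Alpha2) {
	if src.ControlPlaneEndpoint.Host != "" {
		dst.APIEndpoints = []APIEndpoint{src.ControlPlaneEndpoint}
	}
}

func main() {
	old := ClusterStatusV1Alpha2{APIEndpoints: []APIEndpoint{{Host: "10.0.0.4", Port: 6443}}}

	var newer ClusterSpecV1Alpha3
	convertUp(old, &newer)
	fmt.Printf("%s:%d\n", newer.ControlPlaneEndpoint.Host, newer.ControlPlaneEndpoint.Port)

	var roundTrip ClusterStatusV1Alpha2
	convertDown(newer, &roundTrip)
	fmt.Println(len(roundTrip.APIEndpoints))
}
```

The point of the hub-and-spoke setup mentioned above is that each older version only needs conversions to and from the hub version, rather than to every other version.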
A

B

A
So, of course, we are still an alpha project, and so the expectation is that breaking changes are allowed at this stage. But I think we (and when I say "we" I mean the whole Cluster API community) have been, I think, more on the cautious side of things, and tend to disallow breaking changes and wait for the big minor releases, with conversions, rather than do them in the middle of a release.
B

A
I think it's more the latter. I don't think there are any expected changes that we know of; I think it's just a trying-not-to-go-too-fast approach. And, all right, I guess the other way to look at it is that they're considering that people are already using it in alpha, so it doesn't really, you know, matter.
A

C

A
Okay, any last-minute topics? Anything else anyone wanted to discuss? Thank you for sitting through all my rambling.