A
Hello, everyone. Today is Thursday, October 28th, and this is the Cluster API Provider Azure office hours. As always, please follow the CNCF code of conduct.
A
If you'd like to speak up, you can raise your hand and I'll make sure that you get a chance to speak, and if you can, please add your name to the attendee list and any discussion topics you might have to the agenda. So let's get right into it. I don't see anyone new, so I'm going to skip the welcome for new members, and let's get straight into the beta release check-in. Just to give a little update for those who haven't been following:
A
We did delay our beta release by a bit. We had several PRs that needed to get in. There's one PR that is adding a clusterctl upgrade test; thank you, Champ, for that PR. That one was blocked on some other failing tests, which were preventing us from getting test signal on it.
A
So we delayed the release so we can get proper upgrade signal from v1alpha4 to v1beta1 before releasing. We also had some issues with the provider Azure tests recently, due to some PRs that merged in CAPZ that regressed one of the scripts and some of the templates. We just wanted to get that signal back to stable, so that we could have green signal before cutting the release.
A
That being said, I think here's where we're at right now, and feel free to jump in if I'm forgetting anything. I think there are two PRs that we want to get in for the release, and I should actually mark those.
B
So I rebased it with the changes that James had made, so the upgrade tests are passing now, but there are some other tests that are failing, which I think are flakes, because I ran it twice: the first time there were four tests failing, but the second time there was a quickstart spec that was failing which didn't fail the first time. So I think those are flakes. I'll probably try it again and see.
A
Got it, thanks. And then the other one we have is the AzureMachinePoolMachine conversion. David, where do you think that is?
C
I mean, it's pretty straightforward; there's a test for it. The rabbit hole that I started digging into is the experimental test, and trying to get that to not flake. I think beyond that test failure there isn't anything that's holding us back, and in fact I don't think that test failure is actually indicative of anything that this PR is changing, as you called out in the comments. So thanks, Chan.
A
Okay. Could we maybe advocate that that gets backported, so we don't have to wait for the next minor release to get that tested?
A
Which version of CAPI is it in? It's not in 1.0, right? Because it just merged, it seems to me.
A
Cool, yeah. And then, if it's only a test change and it's unblocking, like, valuable upgrade tests, that might, you know, make sense. Cool. And then, to go back to this one, David: I think I would vote that we don't block this PR on the experimental AKS test, because, you know, it might take a while to fix it, and this we really want to get into the release, right?
C
Yeah, and I'm happy to; I put an item down to discuss that test a little bit later. I've done a fair amount of investigation at this point, and I think I've narrowed down a few issues that are there and cleaned up a couple of them so far. I have one outstanding that I'm trying to hunt down, but yeah, I agree with that.
A
Awesome. Thank you so much for looking into it. And just to clarify: the AKS provider doesn't use AzureMachinePoolMachine, correct? Or does it?
C
No, not yet. That's something that we would need to add in the future.
A
Okay, all right. So this one, and then: is there anything else that I missed that is blocking for the release?
A
Okay, great. So I'm hoping we can merge: this one's ready, and then this one, let's just wait for the CAPI test to pass. Hopefully we can get that passing on the next run, and then we should merge it. And then I'm thinking of cutting an RC, like, as soon as those two merge, today. In terms of releasing, what do folks think about targeting maybe Monday or Tuesday?
A
Cool, okay, awesome. Let's aim for that: we'll check in Monday morning and then prep the release for Tuesday morning, if that's the case. Awesome. This has been a tough one, but we want to make sure we get it right. So thanks to everyone who's been waiting for the release, for your patience, and yeah, thanks to everyone who helped work on the tests and fixing everything. All right.
A
So actually, I will move my second topic to the end, since it's more of a discussion of the future, and let's just go into the other ones, which are more concrete, right away. All right, Matt, you want to start us off?
D
Yeah, this is really just an announcement, but since the process of actually creating the reference images is all inside Azure and not visible, I feel like I might as well announce it here. Obviously there were some Kubernetes patches yesterday. I started building them; with luck, they'll be available tomorrow, but sometimes we run into hiccups.
A
Cool, all right. David?
C
Okay, so going into the e2e tests for managed clusters: it looks like there is an issue where we are diffing what Azure has versus what our spec says we should have, and this is a common error that we run into. This caused an endless update loop for the managed control plane.
C
We need to think about how we do this in a more resilient way. When we throw stuff at Azure, we do a PUT to Azure; we create a resource. Oftentimes Azure will add computed properties, computed fields. So what we do is we take the spec and we turn it into the Azure API object that we're going to do the PUT on, and we actually do a diff of the one that we get back versus the one that we're going to put, and these are actually the Azure structures from the SDK.
C
Those computed fields exist in the things that, you know, Azure sends us back, but they don't exist in what we generate from our spec. So we either have to be very, very careful that we're creating all of the computed fields as we generate from the spec, or we need to start nilling out fields from what we get from Azure, to make the diff work out perfectly.
C
This is brittle, and it is difficult to do well. I think we need to think about it from a higher level. Right now I've fixed it by nilling out the computed fields so that the diff calculates correctly, but there's very little stopping, you know, the Azure service from adding more computed properties, which would then cause the diff to go into an endless loop again.
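[Editor's note: to make the failure mode concrete, here is a minimal, self-contained Go sketch, not CAPZ's actual reconciler code; the struct and its Fqdn field are hypothetical stand-ins for the Azure SDK types. A computed field that the service populates on GET, but that the spec never sets, keeps a naive diff permanently non-empty:]

```go
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
)

// Hypothetical stand-in for an Azure SDK type. Fqdn is a "computed"
// field: Azure fills it in on GET, but the spec never sets it.
type managedClusterProperties struct {
	KubernetesVersion string
	Fqdn              *string
}

func main() {
	fqdn := "example.hcp.eastus.azmk8s.io"
	existing := managedClusterProperties{KubernetesVersion: "1.22.2", Fqdn: &fqdn} // from GET
	desired := managedClusterProperties{KubernetesVersion: "1.22.2"}               // built from our spec

	// Naive comparison: the computed field always differs, so the
	// reconciler issues a PUT on every pass, an endless update loop.
	fmt.Println("diff before:", cmp.Diff(existing, desired) != "") // true

	// The mitigation discussed here: nil out computed fields before diffing.
	existing.Fqdn = nil
	fmt.Println("diff after:", cmp.Diff(existing, desired) != "") // false, no spurious update
}
```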
C
We need to be precise in what we're checking to trigger the updates that we're sending Azure, and we probably need to be a little bit more cognizant of how we're diffing objects. That's one. But two, for some reason there's an outstanding issue where the agent pools, the AzureManagedMachinePools, are in a constant cycle where they don't find the VMSS that they're associated to.
C
So that's what I'm looking into right now. I think, after resolving that issue, we should be good on those e2e tests. But if e2e tests are an indication of the stability of Azure managed clusters, at this point they aren't very stable.
A
Plus one, yeah. We've had an open issue for the flaky AKS test for a while and haven't been able to get to it and fix it. I was hoping that creating an OWNERS file just for the AKS experimental feature would help with that, having kind of more formal support around it, but we were unable to actually create the folder, because we have to do some refactoring to get the files separated from everything else.
A
So it's kind of been slowed down by that, but yeah, the AKS contributions tend to come in waves. I think we also need to discuss as a project what the plan is: if we add it as a non-experimental feature, that means we're signing up for supporting that feature when it fails, and when there's a bug, and when a user opens new requests. So that's something we need to think about.
A
Right. Is that diff thing only happening for machine pools, or are we doing that anywhere else in the code?
C
So for managed machine pools, we use the diff compare; this is cmp.Diff.
E
Yeah, I mean, I think the diff thing is also happening in the managed control plane, not just managed machine pools. I've seen it too: after we added a few of the specs. Initially, when I was working on it, I think about a couple of months ago, there was less spec and the diff was happening properly, but after we started to add in more fields, I think even I've started seeing this issue. I could not get to the crux of it, but yeah.
A
So I assume your PR is not, like, fixing or changing that pattern, right? It's just fixing the missing fields, but that issue might happen again, right?
C
So I've seen three patterns for logging now, and I would like to get it down to one. The three patterns I've seen are: (a) using the scope logger; (b) the new logging functionality with spans.
C
So, if anybody didn't see, we now have span loggers that will add events to the distributed trace spans for each log entry; they also write those out to standard out. This is the pattern that I think we should use. The scope logger is something that we've had around for a while, where we decorate each scope with a logger and have a logger interface on the scope. And then, (c), I've also seen just raw klog being used.
C
Raw klog doesn't have the hierarchical kind of logging where we add values at the top reconciler and it continues down; you just get the log message without any of the continuation of context. That makes it really difficult to look in the logs and find, you know, what is this related to? With the standard-out logging, you have to have some sort of correlation to be able to figure out, you know, what is the hierarchy of calls that went into this.
C
The only place where we wouldn't have a scope logger or a span logger, sorry, is when we don't have a context available to us. The logger piggybacks on the context, and the context can then be used to get the logger instance, or create one if it's not there. So in a func where we don't have a context, you'd probably have to either pass in a logger, or you could possibly take advantage of the scope logger.
C
However, the scope interface is, like, super huge, right? It's probably a better pattern to use the context-based logger, similar to what Cluster API is doing.
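[Editor's note: for anyone following along, here is a minimal sketch of the context-based pattern being described, using controller-runtime's log package the way Cluster API does; the function names and key/value pairs are illustrative assumptions, not actual CAPZ code:]

```go
package main

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/log"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// Wire a real logger once at startup (zap, as controller-runtime suggests).
	log.SetLogger(zap.New())
	reconcileSubnet(context.Background(), "my-subnet")
}

// reconcileSubnet illustrates the context-based pattern: the logger
// rides along on the context, so key/value pairs added higher up the
// reconcile chain (cluster name, namespace, ...) are preserved.
func reconcileSubnet(ctx context.Context, subnetName string) {
	logger := log.FromContext(ctx).WithValues("subnet", subnetName)
	logger.Info("reconciling subnet")

	// Pass the context (not the logger) down; callees retrieve the
	// enriched logger the same way. This is exactly the hierarchy of
	// context that raw klog calls lose.
	doWork(log.IntoContext(ctx, logger))
}

func doWork(ctx context.Context) {
	log.FromContext(ctx).Info("doing work") // still carries the "subnet" key/value
}
```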
D
I was just going to say, I don't know about everybody else, but when I add code I always just follow the local convention. So, you know, if someone is using klog or whatever, I probably just robotically go ahead and put similar calls in. So it seems like it's important that we do a refactoring of the whole code base, and maybe start with the klog and the simpler stuff that doesn't involve context. But overall, yeah.
C
Yeah, I was trying to think of the same thing. It'd be really nice to not have to, you know, search for those. I don't know; I'll look into that and see if I can find something, or see if there's something that would be relatively easy to build.
A
Okay, so the next one's me. I wanted to discuss what we want to do after the release. We're going to release 1.0.0, and so there are a few things we need to talk about. First of all: what do we want our release pattern to be after 1.0? Do we want to just take the easy route and follow whatever convention CAPI is establishing, or do we want to do something slightly different?
A
So, for the first part, I guess we can discuss. My personal take is that I think we should do something similar to CAPI, where minor releases are features and bigger changes, but don't necessarily need a new API version every time, and patch releases are bug fixes, which means we backport bug fixes so that we can release them quicker to users, without having to deliver features mixed in with bug fixes when there's a critical bug fix.
A
That being said, where I think we should diverge from CAPI is maybe doing smaller, more frequent feature releases, maybe aiming for every month or every two months instead of every quarter, just because we still have a lot of feature velocity and a lot of things we want to get out, and we're a relatively smaller project.
D
That works as long as things are well automated, which it sounds like we're pretty good on, just because having high-stakes, frequent releases is really problematic.
A
Awesome. Any other thoughts? Anyone think we should not follow CAPI on this process?
A
Okay. In terms of automation, that's a really good point. Our release process is completely automated now, in terms of, like, actually cutting the release. The one thing that I think we need to do as an action item is bringing the cherry-pick bot that CAPI's using over to CAPZ.
A
So we need to look into how to do that, and if anyone wants to work on that and pick it up, that'd be great; just let me know. Otherwise I'll just open an issue, and whoever gets to it first can take care of it.
A
I was thinking, if we're going to be doing, you know, maybe more frequent minor releases and we're following semver, it might give us a bit more flexibility in terms of planning. We have been using milestones in the past for planning, but I think lately we've just not really been sticking to what's in the milestone in terms of what needs to happen. But yeah, I was thinking maybe it'd be good.
A
If we try to put together a list of things that we want to work on as a community for the next minor, and just go through that exercise. It doesn't have to be a strict list, but off the top of my head I have a few ideas, like maybe the logging consistency thing that David brought up. I think it'd be really cool.
A
If, for the next minor, we could focus on getting all the async reconciliation PRs merged, and have that be, like, a feature for the next minor. And then maybe focus on test robustness and stability: we've had some flaky tests in PRs, and it's always nice to work on stabilizing. Well, it's not always nice, it's annoying, but it's nice when you get the good result of having stable tests.
A
So those are just things off the top of my head, but I encourage everyone to take a look at issues that are open, maybe issues that you're assigned to that you kind of forgot about (I know I always have issues that have just been left on the side), and then try to maybe add them to the milestone. Then next meeting we can try to go through them and see which ones we want to keep.
A
What's
what's
the
thing
we
should
focus
on,
maybe
like
our
three
top
stories.
A
Create the milestone, so we don't have the milestone yet? Yes, but I was just looking at those before the meeting started, and they've actually been kept quite updated.
A
We haven't been using 1.0 because it's not set in the plugin, so our main branch has been going to 0.5, which we should have changed, but it's not a big deal, because we weren't following the milestone. I'll make sure that those things get put back into 1.0. And then, in terms of planning the next one, we could do a 1.1, and at the next meeting we can agree on, like, what date we're targeting, approximately.
A
So let's say we release 1.0 on the 1st of November; then probably we want to target... well, the holidays are maybe going to be a bit weird, because we're going to have reduced time, so maybe for that one we'll do, like, one release in December.
A
Maybe, like, the 15th, something like that, and then we just aim for that date. And yeah.
H
Well, it might be kind of a big feature which may not fit into the schedule, but is ClusterClass something that we can start to implement now that we have v1beta1? I'm wondering if it would help kind of clean up some of our templates in the tests.
A
Yeah, that'd be a good one too. I think the implementation is still ongoing in CAPI, but there is a working prototype, so we could start getting an idea of what it would look like in CAPZ. I don't think there's that much heavy work to be done on the provider side, though. The first thing we need to do, the PR that wasn't merged, which is the blocker, is to have an AzureClusterTemplate resource.
A
That one's needed for ClusterClass to work, and it was blocked on something (I don't remember what), so we need to revisit it. Then once we have that, I think it's just a matter of adding templates, honestly, and leveraging the newest Cluster API. There shouldn't be too much left to actually be done, but maybe 1.1 would be too early for releasing those templates as official templates; maybe 1.2 would be a good time frame.
A
What do you think? Maybe we can start testing in 1.1 and then release it in 1.2, if Cluster API is ready. Yeah, there's obviously a lot of stuff we want to do. I mean, we should be kind of selective, because if we're trying to do one-month releases and we're putting, like, ten features in there, that's not going to work. So maybe, yeah, I would say we can choose, like, our top three priorities for each release, and then we can focus on those.
A
But then, if anyone wants to work on something else that's not in the list, of course you're always welcome to, if you open a PR; it's just kind of, like, to try to give a direction, a story, to the release.
A
All right, if not, I think let's call it a day. So, yeah, for next time: let's all think about what we want to see in the milestone, and then we can talk about it again next time. And look out for the release on Tuesday. All right, see y'all later. Thanks.