A
All right, good morning, good afternoon, good evening. This is the Cluster API Provider Azure office hours; today is March 16, 2023.

A
We are a sub-project under SIG Cluster Lifecycle and the CNCF, and as such we abide by the CNCF Code of Conduct, which essentially boils down to: be kind and respectful to each other, and try to raise your hand in Zoom when you'd like to speak. At the beginning of each meeting we like to take a couple of minutes to let anybody who's new to the call, or anybody who just wants to say hi, introduce themselves and tell us a little bit about why they're here.
A
All right. And if you haven't already, please add your name to the attendees list above in case we want to chat with each other later. All right, so going into the open discussion items.
A
So this week we got the PRs to add Willie and Nawaz as CAPZ reviewers finally merged, despite CI's best attempts to prevent those PRs from merging. So congrats, Willie and Nawaz. And if anybody else would like to become a CAPZ reviewer, please let us know and just start reviewing PRs, and we will be happy to help guide you through that process.
A
All right, and then the next item was mine. This is about the ASO proposal doc. I see that feedback on it has been slowing down recently; I know Cecile gave it an LGTM, and I know Matt did too. I did make a couple of super small tweaks to it since, but I guess this is a question for Cecile and the other maintainers: what are the next steps here once we start getting this close? I mean, it looks like it's fairly close.
D
Yeah, I think you got some really good feedback from a variety of folks. So I'd say let's ping those folks that had given previous feedback first, and give them a chance to confirm that you've addressed their feedback. If we don't hear back from people within, I'd say, the next week or so, then we can start a lazy consensus. But let's give them a chance to come back to it first. Okay.
A
Yeah, sounds good. One other thing: if anybody has any questions about the proposal that I can answer here, I can do my best to do that. One question I had for folks was that I think there might be some hurdles if you have ASO already installed on a management cluster; the way that we're thinking about installing ASO per the doc might collide with existing installations.
A
So yeah, that's just one item that I had there, but I didn't have anything else for this item. Does anybody else have any other comments or questions on the ASO proposal?
D
No question, just: great job driving that, John.
C
Yeah, I was about to say the same thing. It's LGTM from my point of view, so I think if you've touched base with the last few commenters, it's probably ripe for lazy consensus. Really good work.
A
All right, Willie, it looks like you have the next item, about test coverage.
E
Yep. Just for everyone who doesn't know, I created this big epic issue to track down tests for increasing code coverage overall, because that is one of our goals. So I've been putting together a list of files that needed tests, but before, I was kind of just looking at the code coverage percentage, and I realized, thanks to John and some other people...
E
...that I should probably look at other things, such as the impact of the code, or how important it is to test that code. And so I discovered that a lot of the files reporting lower coverage numbers are actually fine; they just have a few one-line functions that are very trivial. It's still good practice to write unit tests for those, but we probably shouldn't do those first.
E
So if you click on the link there, I went over and found a lot of the webhooks...

E
The webhooks have higher code coverage numbers, but we're missing pretty important tests for certain functions, so I think we should definitely be validating at least all of the webhook functions, just to make sure no breaking API changes are happening. Besides the webhooks, there's also a good amount of files inside the CAPZ controllers that are not being tested, but I feel like...
E
...those are a little bit harder to test, because you need to use a lot of mock functions and mock it all up. So I don't know if I should prioritize these webhook functions or the controllers instead. I didn't list out any of the controllers, because all of them have below 40% code coverage. So I was just wondering what everyone's thoughts are on what parts of the code are a little higher priority for testing. Also, there's a link to the code coverage dashboard somewhere above, I think.
D
I think another dimension to look at is code churn: you want to make sure the areas that you're changing a lot are well tested, since those are more likely to get regressions introduced.
G
Yeah, and I think John put this into one of the comments, but I also think it's important to consider which parts of the code are going to be refactored with the move to either the SDK track 2 or ASO.
E
I think, yeah, he mentioned that a lot of things in the Azure package are being changed, so I left that out of the list for now. I'm not sure what else is likely to change, so maybe I'll talk to John about it as well. Cecile?
D
Yeah, I would add a bit of nuance to that. I agree that putting a bunch of new tests on functions that are going to be removed imminently is not time well spent, but testing the areas around code that's about to go under a major refactor might not be a bad idea, because tests are one of the things that let you verify you're not introducing regressions as you're taking apart all of that code. So it might actually be good to focus on that first. But yeah...
D
It depends. So, for example, the client.go files: I would not touch those, because those are going to be gone. But maybe the spec files, or things that are calling into the SDK, might be good to test. And I think "Azure" is a bit generic; there's lots of stuff under azure that won't be touched, so I think azure/services is maybe a better target for the ASO work.
E
But yeah, I just wanted to shout out this effort that we're doing. There are going to be a lot of issues created, so if anyone's looking for quick issues and such, those will be coming out soon.
A
Cool, thanks Willie. Anybody else have any other comments or questions on test coverage?
D
I have some concerns with this. I think before we've really gone and audited our tests and made sure that we have at least some healthy baseline, it's hard to really enforce this on PRs, in the sense that it essentially means code contributors cannot add a single untested line, because otherwise you're decreasing code coverage with every code line you add, which might be tricky to enforce in the short term. So I wonder if there's a way we could maybe start with...
D
...maybe adding visibility into a PR's decreasing coverage, rather than enforcing it; that would be my preference. So maybe bot comments saying "hey, your PR is decreasing coverage by this much, you should make sure you add tests," and then it's up to reviewers to either enforce that or not in the short term. Later on, if we see that being really useful, we can move to maybe enforcing it, but that's another discussion, I think.
G
I think that makes sense. I agree you do have to establish some form of baseline that you want to try and stick to. An alternative idea to yours, other than just adding visibility, would be to arbitrarily set the threshold significantly lower. So, for instance, if you know it's at 40, setting it to 25 or 30.
G
I mean, it's unlikely that anyone's going to hit that with one PR, but at least that way, in my mind, there's some bar, even if it's way lower. But yeah, you definitely want to not block people, or scare people away from contributing, just because of that, I think.
G
On the flip side, you also want people to ideally write some unit tests for the code they contribute, even if it decreases your code coverage amount; you want to write something to at least care about it.
C
Yeah, I think we're all saying the same thing, and we've definitely done this several times in the past with other projects. I'd go a little farther: I think it's okay to draw a line in the sand and say we won't accept any PRs that decrease code coverage, but then in practice you have to be flexible, because there will be areas where we don't have a test harness, or there are no unit tests written, and it's not actually reasonable to ask somebody to write the test harness for a one-line change. So there will be exceptions, but I think we could and should get to the point where we're at least rhetorically insisting that everything has to be covered with unit tests.
E
Yeah, I like the idea of having maybe just a reminder comment. Nothing blocking the PR, but just maybe: "oh, did your code coverage decrease? Here's the code coverage dashboard." Actually, that might not work, because that dashboard only gets updated on main. But I think there's still a link to the unit test...
E
...run that shows the overall code coverage. But yeah, I think just having a reminder to write the tests, because sometimes I also miss testing one of the functions I wrote. So I think it'd be good to maybe add a reminder instead of strictly enforcing it. Yeah, the Go cover data tooling.
A
I think that is what has some functionality to run all the tests on every PR and report code coverage and the delta for that PR. Some of those numbers have just not made sense to me in other projects sometimes, where you make a one-line change in a shell script and then it reports that your code coverage went down by 10%, which is insane. So I think something like that would be helpful, but we should just be mindful to make sure that the numbers look reasonable.
E
Okay, yeah. Maybe instead I'll modify the issue to be about making the coverage run on each PR, because then there'll just be an easy link, and it's much easier to click through the dashboard, because it shows you the exact lines that are missing code coverage. So I'd prefer doing that over running go cover or something that just shows the overall percentage profile. Maybe there's a way to look at individual lines, but the dashboard is just a little nicer.
A
Yeah, there is a way you can see things locally, line by line, when you just run go test locally. I can show you that sometime. Cool.
D
Sorry, yes, I just wanted to circle back. We did want to do a patch release last week, which we ended up not doing, because we had several really important fixes that were in flight, and then we had the whole CI meltdown where we were blocked on merging PRs for a couple of hours, with several issues kind of overlapping and Prow being down for a bit.
D
So now that we're back and have merged most PRs, we still have a couple of important fixes that have PRs open, but I think we do have to draw a line at some point, because otherwise we're just going to always be waiting for the next thing to merge and we're never going to release. So yeah, what do folks think: should we try to do a release today?
D
I think we should, because tomorrow is Friday. So I'd say we should set a time, and then if things don't make it by that time, we just release early next week instead. But that's my take; I'm curious what others think.
D
Yeah, the deadline was last week, so I could really do it. But yeah, I'm happy to start driving the process; I can start it up.
A
I do have a couple of meetings this morning, so I won't be able to get that started until midday, which does give a chance for PRs that are out there hoping to get in to get in by then.
D
But let's say maybe around 11:30 or noon PST, I can kick off a Zoom and a Slack thread and get that started, if that's okay with folks.
D
Well, Matt, how close do you think we are to getting that Flex fix, the Flex scale-down fix, in there?
A
Right, any other thoughts on whether or not to release a patch this week?
D
The Calico node image gets rate limited; Docker gets rate limited trying to pull the Calico node image, which is hosted on docker.io, and that's using the official Calico Helm chart that we have in CAPZ, which is a problem. So I'm just looking at this a little bit, but some preliminary thoughts come to mind, and this is without having tested any of them, so take them with a grain of salt.
D
One thing we could do: the images are configurable in the Helm chart, so we could switch to a different image registry, and I think those images are hosted on the MCR OSS registry. That could be an option, since MCR doesn't get the same rate limits; we're already using it for cloud-provider-azure, and that isn't getting limited in this case.
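As a sketch of that first option: the install could point the chart at a mirror registry instead of docker.io. The chart name, value key, and registry path below are assumptions; check the chart's actual values.yaml and what is published on the mirror before using them.

```shell
# Hypothetical example: install Calico via its Helm chart, overriding the
# image registry so node images are pulled from a mirror (e.g. the MCR OSS
# registry) rather than docker.io. The "installation.registry" key and the
# registry path are assumptions -- verify against the chart's values.yaml.
helm upgrade --install calico projectcalico/tigera-operator \
  --namespace tigera-operator --create-namespace \
  --set installation.registry=mcr.microsoft.com/oss/
```

Since it is only a values override, this change would not require any change to how the chart itself is vendored.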
D
Another option would be to bake the images into the VM image, but that's a little tricky because it depends on the CNI, and on the CNI version, and then you just explode the matrix of Kubernetes version to CNI. Or you put all the CNIs on the image, but then you have a bunch of stuff you don't need on your image.
D
So it's not great. Another option is doing more research into, so, Jack had started (Jack's not here today, he's sick) a prototype, a POC, a while back, which is essentially cloning an image from an existing node and then using that image to build up all new nodes.
D
So that's something we could maybe investigate bringing into CAPZ, so that whenever you're scaling up you don't have to re-pull all the images that have already been pulled by your existing nodes. But yeah, these are all just ideas right now; I just wanted to put this out there. If anyone has run into this and has suggestions or ideas, reach out.
G
Are these Calico references baked into core Cluster API, or is it something we could tweak just for CAPZ?
G
Okay. It's possible that we already have these Calico images on MCR.
C
I also like that as a first approach, because it's pretty straightforward, and it somewhat addresses, well, we have an issue out there about what happens if the Calico chart itself goes away again, like it did a few weeks ago. This wouldn't fix that. But I wonder if we would also want to, I guess we would just use the same Helm chart and pass it some...
C
And then I guess my other question is, and maybe this is going a little too deep, but I'm curious: if we did build the VM image so that the Calico Docker images were already on the image, does that save us from rate limiting? I guess it probably does, because each layer pull is a separate API transaction. So we'd just have the HEAD request; it would say you've already got the SHA, and then we...
D
Yeah, it's not a CAPZ issue per se; you could run into this in any environment...
D
You could run into it with CAPI also; it's just the docker.io rate limits. But I will file an issue in CAPZ, since I observed it with CAPZ. I just haven't had time to, since I literally found this at the end of the day yesterday, but I'll file an issue as the next step.
D
Yeah, and at first I was testing with 150 nodes, and that was working fine; there was no issue. It was really once I started doing 300 nodes that I ran into it, and I had to request quota for that first. And to be clear, this doesn't block the scale-up; the scale-up succeeds, it's just that it takes a little bit longer than we'd like.
D
Instead of taking two or three minutes to scale up 300 nodes, it took 12 or 13 minutes. So it does work eventually, but yeah, it's not great.
A
All right, any other last thoughts? Oh yeah, Mike, go ahead.
I
So, to get around the rate limiting, all you need is a Docker Hub account. It doesn't even have to be a paid one.
F
I don't believe that gets around most rate limiting; you get an increased rate limit with a paid account.
D
It was anonymous, because it was on the CAPZ node that I had created, and we don't pass in secrets or a Docker account. I mean, I could work around this with my own account, but I don't think it's a very good generic solution, because then it would require every single CAPZ user to have a Docker Hub account and provide it, and we can't just put a generic account secret in there for everyone to use.
D
So that would add that requirement. I mean, it is one option, and I'll add it to the list of options for sure, but it would work around it only in the testing. Yes.
I
Yeah, I was just thinking for this situation. What we do for our large scale-ups is make sure we set an image pull secret for Docker Hub on our system namespaces, regardless of cluster size. That way, if we ever do scale up to that larger amount, or do large scale-ups, we have that image pull secret set. And I think we should probably just have that as a general recommendation for larger cluster sizes.
I
For that reason. And it's just on the cluster operators to decide whether to do that or not; it's not a requirement. But if they have that set in their kube-system namespace, or whatever namespace Calico runs in, it'll just automatically get picked up if the manifest has it on there.
D
Yeah, okay, this might be getting a little bit into the weeds, but the Calico stuff runs in the calico-system namespace, and that namespace gets created by the Helm chart, so it doesn't exist until Calico does. So you'd have to either pre-create the namespace, or CAPZ would have to do that somehow. We can think about this; I'll open the issue and then maybe we can brainstorm on it.
A
Sounds good. All right, any other last thoughts on Calico images?
A
So, Cecile, what's the best way to go about doing this, do you think?
D
I would say at this point, since it's so early, it's probably not worth going through every single one of them, but maybe we can skim through, and then if anyone sees something that's in there that should not be, that's no longer relevant, or something that isn't in there that should be, they can call it out.
H
Yeah, I just wanted to provide an update: I'm back from PTO, and I'm back on the workload identity part. I got blocked last time; I was not able to test due to one limitation in CAPI with kind clusters, and there are a couple of pieces of feedback that I see on the proposal PR. So I just wanted to say I'm back to working on that, and probably I'll make a PR in CAPI so that I'm able to use a kind cluster to test workload identity in our CI.
C
I mean, we do have a couple of other services that are still not async, right? So there's precedent. But anyway, I just wanted to ask; you would know better. I'm sure we're not wasting time if we do that. Okay.
D
So I still feel like it would be beneficial to do the cleanup and then be able to start from that, rather than, yeah, it's just very different right now.
A
Looks like that's the end. Should we go through the new issues to see if there's anything we should put in the milestone?
A
And I might need help from a maintainer to actually do the milestone swizzling. Let's see, Willie, the code coverage one. Oh, the CI job: is there a good first step here we could take this milestone?
C
I'll do it, sure. Let me switch over. Okay.
D
I would say we should put that one in there. It seems like a pretty important bug, just from skimming it yesterday.
A
Let's see, CAPI 1.4. I think that should go in the milestone. Yes.
D
I think you can probably put that one in there. This is a follow-up from Jack's PR: last week we decided to split up the PR to make it easier to cherry-pick, and this was the follow-up part that still needs to be re-added.
E
These two are very quick; I just opened them as good first issues. This was before I re-did the prioritization, but people have already picked them up and they're mostly almost done, so yeah, cool.
A
Do we have any context on this one?
D
Yeah, I guess the only reason not to do it is just that it's a good first issue, and generally good first issues tend to not be associated with a timeline, just to make it easier for new contributors. But I could see it going either way, since it's already assigned.
B
Cool. Let's see, where was I...
C
The author is also the...
A
Should I go ahead and assign this issue to the author? Is that, yes.
D
Yeah, I don't want to distract the conversation too much, but it looks like this is similar to that issue we just fixed where the whole CAPI suite was timing out, remember, on the...
A
The bring-your-own one: was there a PR open for this? No, not yet.
A
Is this worth adding to the milestone, do we think?
D
If it only occurred with 1.6, we're about to get rid of 1.6. Actually, we already got rid of it, because we released 1.8. So it would be good to see; I can comment on this issue and ask if this was observed in any other branches.
A
All right, and then I think after that, yeah, we start seeing the milestone, so I think these are at least the new ones for this week. I will shamelessly plug: there was one issue that I noticed that I would like to throw in the milestone, this one.
A
Yeah, if we could put this in the milestone. Thank you, Matt. Cool, any other issues that we didn't get to that are worth adding to the milestone?
A
Oh, there was, I know Willie opened a bunch of bugs last week. Did we get through all those last week? I don't remember exactly where we left off on that.
E
Sorry, I didn't add any new bugs or create any new issues; I was just looking at the old bugs that needed...
E
...that had the help-wanted tag. I don't think any of them got picked up, so I don't know if we can put them on the milestone just yet, but there was one that I was looking at, oh...
E
It's escaping my mind right now, but there's one bug that I was going to pick up. Oh, the machine pool drain test, getting the machine pool drain test working. I was going to work on it, or maybe triage it first with Matt and Johnson, who were working on that part of the code, just to see if they'd added a test like that, but I don't think they did. So maybe we can add that to the milestone; I think it's called "enable machine pool drain test."
A
Okay, Willie, should I assign this to you? Yeah? Okay. Or maybe I'll let Cecile speak before I do that.
D
So yeah, I'm going to ask Matt about that: is that something that's feasible to look at in the next milestone?
D
Yeah, I was poking around a little bit. I think there is some inconsistency with machines, where for a Machine, as long as the VM has succeeded, we put the Machine, the AzureMachine, as succeeded; but for a machine pool we don't just look at the provisioning state, we also look at the nodes for the AzureMachinePool.
D
So what we could do is put the AzureMachinePool in succeeded as long as the VMSS has succeeded, and then maybe make it a little more visible what exactly it's stuck on, because I think right now it says "updating" or "scaling up" or something like that, when it's not really updating the VMSS; it's just not ready because the CNI is missing.
D
So I think there is some, it's not a bug per se, but there is some UX improvement that could be made, because it is pretty confusing. I myself got kind of tricked by that last week when I was trying to test scaling up with Flex.
C
Yeah, I understand. It should definitely at least behave the same way that machine deployments and such do. So yeah, I'll put it on the milestone; give it a look.
A
All right, does anybody else know of any issues that would be good candidates for the milestone?
A
All right, should we, is it worth cruising through open PRs to see if there's anything worth adding to the milestone? Or is it so early that just about everything is probably going to land in a milestone anyway? Should we go through these?
D
So we do have a milestone plugin, like a bot, which will add PRs to the current milestone as soon as they merge. So since we're adding issues to the milestone, I don't know that it's that important to also add the PRs, especially if they're linked to an issue, which they should be.
C
Yeah, pardon me, I don't think it's necessary to go through all of these, but related to what you were saying, Cecile, I was just noticing that if a PR has v-next or a different milestone on it, the bot won't change it. So we have a bunch of things that got merged that are attributed to the v-next milestone. Ideally there'd be some way to undo that and attach them to the right milestone.
G
Yeah, I noticed a while back that the v-next milestone is a bit messy, because...
D
David, I guess that's another good reason to not assign a milestone to PRs, since we can't predictably say when they're going to land, and we'd rather just have the bot put them in the right one, right?
A
All right, did I miss anything under the umbrella of milestone review, or are we just about wrapped up for the week?
A
Awesome, all right, looks like we'll end a couple of minutes early. So thanks, everybody, for joining, and we will see you next week.