A: Hello everyone, it is Thursday, January 19th, 2023. This is the weekly office hours for Cluster API Provider Azure (CAPZ). CAPZ and Cluster API are sub-projects of SIG Cluster Lifecycle, and we're part of the CNCF, so we abide by the code of conduct. Please be kind to one another, be enthusiastic, and raise your hand if you want to speak. Love to have everybody here at the beginning of each of these discussions.
B: Hey, thank you. Thanks, Jack. Yeah, I've been around the community a while; I can't remember the last time I dropped in here, but I just wanted to see how things are going. I work for D2iQ; we use the Azure provider and try to contribute here and there. Yeah, I just wanted to drop in.
C: Hi there. Yeah, I'm Dominic, working for Giant Swarm as the product owner for our CAPZ Kubernetes cluster releases, and you might have already met some of my engineering colleagues who are working together with me on the product.
A: Welcome! Yeah, I definitely hung out with some of your fun colleagues in Detroit at KubeCon.
A: All right, I don't see any more hands, so I'll go ahead and start burning down the agenda here. The first thing — I'll mention, just as a preview of coming attractions, that we'll do some milestone review at the end. That will be especially interesting this week because we cut 1.7.0 last week, so we've got a new milestone to populate and sanitize.

But the first agenda item that's been marked is Ashutosh's. Anyone else: if you want to speak up and it's not here, feel free to add an item asynchronously and I'll call on you in order — just pop it to the back of the queue. Ashutosh, go ahead.
D: Yes, so this is the issue — if you can open it, Jack. The issue on screen is one I had opened last time.

This is a problem with the outbound load balancer. I've added a proposed solution on the PR and also synced up with the CAPI folks, especially Fabrizio, on this one. There was a question asked about what the specific use case is where the AzureCluster name is not equal to the Cluster name, and — if you can see the response — I didn't have much context on that, so I thought I would ask Fabrizio for help.
D: [The idea is to keep the load balancer name] the same as the cluster name, and one challenge would be to ensure we fix it automatically for previously existing clusters that didn't do so. But I think that should be doable by putting some logic in the reconciliation hooks: where we see that this LB name is not the same as the AzureCluster name, we then try to sync that up.
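The sync-up logic described here could look roughly like the following sketch (illustrative Python, not actual CAPZ code; the object layout and field names are hypothetical):

```python
# Illustrative sketch of the reconciliation idea described above: detect a
# pre-existing load balancer whose name differs from the AzureCluster name
# and adopt the existing name, so the controller keeps managing the old LB
# instead of trying to create (or rename to) a new one. Azure LBs cannot be
# renamed in place, so adoption is the safer path. Field names are hypothetical.

def reconcile_load_balancer_name(azure_cluster: dict, existing_lb_name: str) -> dict:
    """If the LB that already exists in Azure has a different name than the
    AzureCluster, record the existing name in the spec rather than renaming."""
    lb_spec = azure_cluster["spec"]["networkSpec"]["apiServerLB"]
    if existing_lb_name != azure_cluster["name"]:
        lb_spec["name"] = existing_lb_name
    return azure_cluster
```
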
A: Cool, thank you so much. I think I understand this; I'll just restate everything you said so I understand it, and maybe other folks will better understand it too. Essentially, the cluster name in a ClusterClass definition becomes sort of a prefix, and as you horizontally scale clusters from a ClusterClass recipe, a little bit of a random suffix is added at the end in order to guarantee uniqueness.
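The naming behavior restated above can be sketched like this (illustrative Python; the five-character suffix length is an assumption about how generated names tend to look, not a guarantee):

```python
import random
import string

def generated_cluster_name(prefix: str, suffix_len: int = 5) -> str:
    """Mimic how clusters stamped out from a ClusterClass recipe get a short
    random suffix appended to the shared prefix to guarantee uniqueness.
    Illustrative only; the suffix length is an assumption."""
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choices(alphabet, k=suffix_len))
    return f"{prefix}-{suffix}"

name = generated_cluster_name("my-cluster")
# The generated name starts with the prefix but is no longer equal to it,
# which is what breaks the "LB name == cluster name" convention.
```
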
A
But
the
the
load
balancer
by
convention
assumes
that
it's
going
to
have
the
same
name
and
it's
using
that
prefix
can
I
just
to
demonstrate
this
more
concretely
for
folks,
because
this
might
be
helpful
for
I'm
gonna
re-share
screen.
Actually
I
don't
need
a
share
screen.
Let
me
go
to.
A: I know it makes sense in terms of back-compat for existing clusters. Do Azure load balancers support rename operations? Is the idea that we do a PUT against the load balancer resource with a new name?
A: Yeah, so that's cool. It sounds like we have enough of a heads-up that that may be tricky. Go ahead, Fabrizio.
E: Hi everyone. Just for my curiosity or understanding: if I got it right, currently CAPZ makes the assumption that the name of the AzureCluster is equal to the name of the Cluster, and then generates the name of the load balancer equal to those two names. Okay, so if I got this right, that means that, as of today, there are no AzureClusters with a load balancer named differently from the AzureCluster — so there really is no migration path.
A: It's a simple template again. In our templates we're using kustomize to inject environment variables when we build these templates during our end-to-ends, and as an example here, the convention is that the Cluster and the AzureCluster names are injected from a common variable — so in this template they will always be the same.
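That convention can be illustrated with a minimal sketch (a hypothetical template fragment, not an actual CAPZ reference template; Python's `string.Template` stands in here for the kustomize/envsubst substitution step):

```python
from string import Template

# Both the Cluster and the AzureCluster name come from the same injected
# ${CLUSTER_NAME} variable, so within a rendered template they always match.
flavor = Template("""\
kind: Cluster
metadata:
  name: ${CLUSTER_NAME}
---
kind: AzureCluster
metadata:
  name: ${CLUSTER_NAME}
""")

rendered = flavor.substitute(CLUSTER_NAME="my-cluster")
```
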
A
We
have
a
lot
of
reference
temples
here,
so
I
can't
look
at
everyone,
but
that
does
make
sense
that
we
have
this
convention
where
we
we
share
that
value
across
both
the
Cappy
cluster
definition
and
the
cap
C
cluster
definition.
My
question
when
you
said
earlier
Fabrizio
that
we
in
the
load
balancer
code
we
compare
both
to
the
cluster
and
the
Azure
cluster.
We
Pro
I,
assume
we're
actually
not
doing
that,
we're
picking
one
over
the
other
and
then
in
the
cluster
class
scenario.
E: So that name can be anything. If you want your load balancer to have the same name as the cluster, you have to pick the name of the Cluster, not the name of the AzureCluster. Okay — but yeah, sorry if I'm bringing this up now; I'm just curious to understand.
A: Which is fine — so still, go ahead.
F: Hey, sorry, I'm joining this conversation halfway through so I might be missing the beginning context, but I'm assuming we're talking about the load balancer issue that Ashutosh brought up. So all the Azure resources that we create in CAPZ are prefixed with — or have — the AzureCluster name in them right now.
F
So
if
we
were
to
change
the
load
balancer
to
be
the
cluster
name
instead
of
the
Azure
cluster
name,
that
means
we'd
have
this
inconsistency
now,
where,
like
everything
else,
has
the
same
prefix
in
it
except
the
load
balancer,
which
might
be
a
little
weird.
When
you
look
at
your
resource,
Group
and
your
resources
have
we
thought
about
doing
it
the
other
way
around
and
telling
cloud
provider?
This
is
my
cluster
name
and
making
that
Azure
cluster
name,
because.
D
Yeah,
that
was
one
of
the
thing
that
also
popped
in
my
head,
so
say
like
the
only
thing
is
like
I
I
don't
know
like.
Does
anybody
know?
How
does
cloud
provider
Azure
picks
up
the
name
of.
A: I think we should probably time-box this. I'm super glad you brought this up, Ashutosh, but it sounds like we have multiple ways we could go, and we want to get to all the agenda items.
D: So the failure is: I have a POC PR for workload identity that just makes a couple of changes and sets up the prerequisites — for example, generating the keys and deploying the workload identity webhook. When I create a cluster using my current PR branch and don't use workload identity...
D
It
just
goes
fine,
but
when
I
do
create
a
cluster
and
I
change,
you
know
the
type
of
Hazard
cluster
identity
to
workload,
identity,
cloud
provider,
as
it
just
crashes,
and
this
was
one
one
issue
with
entry
provider,
so
I
assist
to
so
I,
so
I
thought,
I'll
use,
external
cloud
provider
and
external
cloud
provider
is
any
way
crashing
like.
Whenever
I
deploy
onto
the
workload
cluster,
the
external
cloud
provider
is
crashing.
So
these
are
the
two
things
that
I've
captured
in
a
Google
document
with
me.
D: Yes, because I'm not able to test my PR because of that. The only thing I want to test right now — the last thing — is this: when I create a workload cluster from the kind cluster, that workload cluster is actually what gets converted to a management cluster in a lot of user setups. So I just want to transfer that identity to the workload cluster and then test that this workload cluster is able to do the API authentication with Azure.
A
That's
fair,
it
does
seem
like
that's
sort
of
like
run
crawl.
A
Run
walk
crawl
instead
of
crawl
walk
run.
It
seems
like
that's
the
more
complicated
scenario.
A: It's not that — maybe that's probably, that's certainly going to be true in production, but I'm not sure if we have the data to suggest whether most folks use self-managed CAPI clusters versus having a dedicated management cluster. But anyway, it doesn't really matter; I was just making an observation. Go ahead, Cecile.
F: Yeah, and I guess this applies to the issue we were talking about previously too, but for this one as well: since we have the in-tree cloud provider and out-of-tree cloud provider scenarios side by side right now, I think it might make sense to just focus on out-of-tree for now, because we are planning to switch all the templates — all the reference templates — and all the tests to out-of-tree by the next release.
F
There's
an
issue
for
that
and
I've
assigned
myself
planning
on
working
on
it
like
this
week
or
next
week.
So
I
think
it
might
make
sense
to
just
focus
on
out
of
tree
as
the
like,
only
workload,
identity
scenario
going
forward,
since
those
two
will
get
released
together.
D: So, Jack, I'll paste the link to the doc. Can you please share and open the document?
D: Yeah, so if you see, these are the things that I did. Step one is the normal `make tilt-up`, and this is on my branch, which has the workload identity changes, and I'm able to create a workload cluster using workload identity. Step two is, once the workload cluster is created, I just wanted to deploy the external cloud provider — and when I do that, I see that the cloud provider is still crashing here.
D: Yeah, so using workload identity means I am not using the current existing supported way of identity — we have it as a cluster identity object. In my PR I have made a change (this is not supported yet; it's work in progress) that actually accepts one more value, workload identity, in the AzureClusterIdentity object's `spec.type`.
D
So
when
you
do
that
cap
Z
automatically
switches
to
use
for
cloud
identity
workflowing
the
code,
that's
currently
WIP,
that's
not
in
main,
that's
not
in
any
release
version.
So
the
cluster
gets
created.
Fine
authentication
works,
I,
have
a
full-fledged
workload,
clustered
applied
on
Azure
infra,
but
whenever
I
do
like
a
Helm
install
of
cloud
provided
Azure,
it
just
crashes
right
and
it
just
crashes
even
for
non-workload
identity,
best
tests,
so
I
think
there's
something
wrong
with
like
cloud
provider
Azure
or
there
is
some
configuration
that
I'm
missing.
F: Yeah, I think it's really hard to look at this all together in office hours, and I don't want to hold everyone else up, but I really want to help you get unblocked. So would it help if, maybe, either — if we finish, or if we're able to finish early — we stay on in the last part of office hours to pair on this? Or maybe I can get up earlier tomorrow morning and try to sync with you during your hours.
F
Whatever
works
because
I
think
it'd
be
easier,
if
we
just
dug
dig
in
and
just
try
to
Repro
it
together,
sure.
D: Yeah, just to wrap up — this is where I'm at, and I'll just take one more minute. One bit of progress I made on workload identity, thanks to Fabrizio: he gave a good suggestion on how to use the secrets. There was a challenge in terms of distributing the key pairs to control plane nodes; that is solved and I validated it, so that's all good.
A: Okay, cool. So I guess I've got the next two agenda items; they're sort of related. The first one is just a brief announcement that the AKS graduation from experimental is now live. So that's great — we're going to be testing that in this development cycle, and we'll ship the changes with 1.8.0.
A
So
thanks
to
everybody
who
supported
that
effort,
it's
been
months
long,
years-long,
Journey,
that's
really
great
news.
I
want
to
clarify
that
the
the
live
and
man.
What
this
means
in
practice
is
does
not
actually
amount
to
any
functional
changes.
A
It's
just
a
sort
of
rearranging
of
a
couple
main
things
rearranging
of
the
the
source
code,
so
that
the
API
definitions
and
various
controller
runtimes
are
not
in
the
experimental
part
of
the
way
we
organize
our
source
and
then
the
second
important
thing
is
that
the
feature
flag
that
is
normally
required
now
in
1.7
inversions
below
that
the
feature
flag
to
turn
this
on
is
no
longer
required.
It's
locked
to
True,
which
means
following
the
sort
of
Upstream
kubernetes
conventions.
A
You
actually
can't
disable
the
feature
flag.
It's
essentially
there
as
a
sort
of
vestigial
thing
that
will
wait.
Four
releases
to
formally
deprecate
any
any
questions
on
any
of
that.
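The "locked to true" behavior follows the upstream feature-gate pattern, which can be sketched like this (illustrative Python; the gate names and structure are hypothetical, not CAPZ's actual implementation):

```python
# Sketch of the "locked to default" feature-gate convention described above:
# a graduated gate still exists for a few releases, but attempts to set it
# to anything other than its default are rejected. Names are hypothetical.

FEATURE_GATES = {
    # name: (default, locked_to_default)
    "SomeExperimentalFeature": (False, False),  # still toggleable
    "AKS": (True, True),                        # graduated: locked to true
}

def set_gate(name: str, value: bool) -> bool:
    default, locked = FEATURE_GATES[name]
    if locked and value != default:
        raise ValueError(f"feature gate {name} is locked to {default}")
    return value
```
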
A
Okay,
cool
and
then
I
wanted
to
mention
and
give
some
room
for
folks
to
share
any
concerns
with
PR
and
test
flakes.
If
you
want
to
vent
now
as
a
time
when
you
have
lots
of
sympathetic
voices,
it's
been
I
think
fairly
challenging
the
last
few
days.
For
me
at
least
concretely
I
can
say
one
of
the
reasons
is
because
the
AKs
and
then
test
is
now
required,
and
there
are
still
some
flakes
there.
A
So
I
will
be
excuse
me
I'll,
be
prioritizing
with
any
other
folks
who
want
to
work
on
that
addressing
those,
so
apologies
in
advance
if
your
PR's
take
a
few
more
retests
to
get
to
Maine
after
approval
and
in
addition,
there
is
another
very
particular
annoying
flake
that
I
see,
at
least
in
the
end,
to
end
the
standard
end-to-end
test
for
multiple
control,
plane
nodes
where,
where
the
the
test
Suite
times
out
waiting
for
cluster
deletion,
so
I'm
actively
debugging
that,
in
my
laptop
right
now,
in
the
background,
go
ahead.
Cecile.
F
Yeah
I
just
want
to
give
a
quick,
also
shout
out
to
Jonathan
for
bringing
back
some
cluster
API
template
goodness
into
cab
Z
as
well
for
flaky
tests,
reports
and
I
think
we
should
really
try
to
use
that
going
forward
once
it
emerges
whenever
we
see
like
a
failing
test.
I
think
that
will
help
us
have
a
better
clearer
idea
of.
What's
going
on
with
flakes.
G
Yeah
Jonathan
good.
To
add
on
to
that,
we
were
at
least
within
our
team.
We
were
also
looking
into
finding
some
way
to
try
to
automate
tracking
our
test
grid
failures
and
cap
Z.
So
if
we
come
up
with
a
solution
for
that,
we
could
also
share
that
with
other
providers
as
well.
A
Yeah,
so
I
would
definitely
encourage
anybody
suffering
through
this
to
reach
out
in
slack
file
an
issue
bug
us
it's
something
that
we
we
care
about.
We
don't
enjoy
inflicting
pain
on
our
community,
go
ahead
for
Brazil.
E
Yeah,
just
as
many
of
you
are
aware,
the
main
copy
that
is
a
release
team
and
in
the
release
team.
There
is
also
asean
senior
person
and
they
are
looking
at
the
same
family
of
problem.
How
do
we
track
failures?
How
do
you
figure
it
out,
failure
from
Fleck,
etc,
etc?
If
you
want
to
please
reach
out
to
them,
maybe
they
already
have
somebody
on
the
table.
A: Okay, so we are at the milestone review part of the office hours. I'll set expectations: this might take a while, because this is the first milestone review for 1.8. So if you feel like this is too low-level and you have other things to work on, you will be forgiven for dropping early — but for folks who are actively looking to do work on this milestone...
C: ...So that's really a good thing that we want, but on the other side, we've run into a lot of problems with machine pools. So I at least wanted to check with the community: is there any commitment to machine pools, or is it more like, we have it, but whoever wants to can work on it?
G
Yeah
so
as
far
as
machine
pools
go,
there's
been
a
proposal
a
while
back
that
emerged
for
adding
machine
for
machines.
So
the
point
of
that
is,
it
would
be
a
Cappy
PR
that
allows
machine
pools
to
provision
machines
so
that
you
can
interact
with
the
individual
instances
for
our
cloud
provider
that
supports
it
and.
G: ...it would also be able to work with cluster autoscaler. So that proposal has merged, and I'm currently working on the implementation right now. I have a PR open that's WIP, and once that is close to merging, I'll circle back and implement it in Azure as well. So I guess, to answer your question: there's a commitment, and it is currently in progress.
C: Awesome, yeah. We also already created two pull requests for issues that we found and could directly work on. We have to take one step back now and go again with machine deployments, just to onboard customers a bit faster, because those issues are currently blocking us — but we want to help get machine pools on the right track. So if you need any help there, we'll surely, in a few weeks, definitely come back to that issue.
F
Yeah,
what
Jonathan
said
is
exactly
right
and
then
also
one
other
data
point
is
the
experience.
Sorry,
the
manage
cluster
AKs
graduation.
That
just
happened
that
has
a
dependency
on
machine
pools.
So
it's
in
our
best
interest
to
graduate
that
also
and
keep
that
stable
long
term
so
just
to
like,
if
that
helps
you
know,
machine
pools
are
also
being
used
by
managed
clusters.
F
So
that's
that's
a
big
part
there,
but
that
being
said,
I
think
machine
deployments
do
solve
a
lot
of
the
problems
that
cloud
providers
are
trying
to
solve
with
machine
pools
like
vmss
and
Azure.
So
it
is
worth
you
know
if
you're
using
self-managed
clusters,
you
know
considering
Mission
deployments
as
an
alternative
and
seeing
what
in
vmss
or
machine
pools
is
it
that
you
want
and
whether
machine
deployment
solves
that
or
not
because
they're,
essentially
very
similar.
It's
just
that
machine
deployments.
F
The
orchestration
of
of
the
instances
is
done
by
cluster
API,
and
so
it's
shared
amongst
other
Cappy
infer
providers,
whereas
the
machine
pools
the
orchestration
of
the
instances
is
delegated
to
the
cloud
provider.
So,
for
example,
in
the
case
of
azure
Azure
vmss
is
the
one
that
orchestrates
the
instances
for
things
like
rolling
upgrades
and
like
high
availability,
Etc.
C
Yeah
exactly
that,
you
know
that
we
can
delegate
that
responsibility,
and
with
that
you
know,
if
the
management
cluster
breaks,
the
workload
clusters
can
still
scale.
It's
definitely
a
very
good
selling
point
and
that
AKs
is
using
it
I
think
is,
is
the
maximal
maximum
commitment
we
can
get
there
so
yeah.
Thanks
for
that
information.
A
Great
I
see
some
so
I'll
just
go
ahead.
Click
through
I
mean
maybe
I'll.
Does
anybody
have
any?
If
you
have
any
knowledge
of
where
these
issues
are
feel
free
to
speak
up.
A: All right, I could speak to this one a little bit. This probably has a lot to do with the configurable flexibility of VMSS in Azure. Updating one part of the VMSS configuration — you have lots of options in terms of how you're going to roll that out, similar to Cluster API as well.
A
So
we
could
probably
work
toward
sort
of
outlining
the
various
configuration
options
for
vmss
and
and
seeing
if
we
have
all
those
exposed
to
capsa
users,
because
it's
possible
that
the
current
configuration
is
a
limited
set
of
what
bmss
offers,
for
example,
update
the
recipe.
What
we
call
the
model
in
bmss,
but
not
actually
apply
that
across
existing
VMS
in
that
vmss.
A
That's
a
slightly
more
complicated
thing
to
do.
In
coordination
with
the
kubernetes
node
abstraction
layer
go
ahead.
Cecile.
A: All right — excuse me — so we have already closed 24 items, which is sort of surprising, I guess. I'm not going to drill through; we've got limited time, but we've got 22 here. So let's maybe do PRs first, then issues. I think this is the lowest-hanging fruit: just making sure there aren't any old PRs that we need to consider bumping. By default, this is a brand-new milestone.
A
We've
got
almost
two
months
in
our
development
cycle,
so
you
would
imagine
that
all
these
PRS
are
going
to
be
a
part
of
the
Milestone,
but
it
would
make
sense
from
a
planning
perspective,
to
add
these
ahead
of
time,
I
think
and
also
as
an
exercise
of
seeing
if
there's
anything
we
want
to
drop
so
the
oldest
PR
we
have
here,
looks
like
this
one
and
it
is
Skips.
A
simple
validation
requested
pause.
F: I think we should absolutely tag this. In fact, I think we already put the — sorry, the issue — in the milestone. I don't know whether this PR as-is is going to merge, because it's been a while and it's kind of stale at this point, but I think we want to solve the issue that it's targeting, just because that's a known bug in managed clusters, and we're graduating managed clusters. So that was what was discussed in the issue.
A
Okay,
cool
well
I
added
it
to
1.8
I
think
that
it'll
at
least
get
more
of
our
attention.
If
it's
in
that
Milestone,
because
I'm
not
sure
folks
are
actually
looking
at
the
next
milestone,
all
right,
here's
another
one!
That's
in
next,
so
by
default,
I'll,
add
it
to
1.8.
Unless
we
have
an
objection,
is
there?
Is
there
a
reason
why
we
shouldn't
merge?
We
shouldn't
tag
draft
PR's
in
a
concrete
milestone.
F: Okay, either way, let's —
A: I think I don't have consensus on this, so I might just close this. Does anybody think that they're —
A: — milestone. Yep, let me quickly see if any of these are controversial.
A
All
right
great
so
do
we
want
to
go
into
the
issue,
queue
and
burn
those
down?
Actually,
let
me
really
quickly
go
to
Milestone
V
Next.
Why
can't
I
so
go
ahead?.
A: That's a good point. So maybe the lowest-hanging fruit at this point is to just quickly spot-check and identify whether any of this is out of scope for 1.8. I'll quickly go down the list and — actually, I think I'll just rely on folks to unmute and add some clarifying instructions, so we'll make an exception to the raise-hand rule here.
A: Yep. And to be clear, the cloud-provider-azure project itself, which is natively "out of tree" — if you'll pardon the weird language there — uses CAPZ. So we do have, I think, functional tests for out-of-tree, and with CAPZ that is rock solid; it's more across the feature matrix in CAPZ that we don't have it fully across the finish line yet. Okay, so I don't see Matt here, but I am pretty sure that there's only a single PR needed to address this, and it's already almost across the finish line. Is that a controversial statement?
F
It's
not
exactly
as
simple
as
that
that
first
PR
is
part
of
it,
but
then
there's
more
to
do.
That
is
a
little
more
complex
regarding
like
Futures
and
pullers
and
stuff
like
that.
So
he's
working
on
that
do.
F
I
think
it's
I
think
that's
I
would
say
that
and
out
of
Cheers
our
top
two
things
that
we
should
focus
on
because
of
the
timeline
of
it,
since
otorrest
will
be
unsupported
starting
April,
and
this
is
due
March.
So
it
doesn't
give
us
much
margin
afterwards.
D: Just a question: when is the release date for 1.8?
A
Okay,
I
feel
like,
hopefully
you
can
speak
to
this,
but
I
think
it
sounds
like
we're
we're
on
the
same
page
that
this
is
the
target
is
for
this
to
land
before
1.8.
A: Okay, cool. I know that there's a PR for this, so I think that's safe to keep in. Yeah. So —
D
Go
ahead:
well,
it's
just
that.
You
know
there's
one
last
part
of
the
thing
that
I
just
want
to
test
on
my
PR
and
then
update
the
design
document
and
then
I'll
ask
for
a
final
review.
So
you
know
that's
the
plan,
so
I
think
we
have
like
removed
the
bloggers
other
bloggers.
So
this
is
the
last
one
I
think
so.
You're.
A: Cool, great, thank you so much. So, moving down the list, there's a couple of AKS end-to-end tests — these, I think, are issues. Yeah, these are issues; this is the sort of epic issue. I feel comfortable keeping these in the milestone — you know, just one opinion, feel free to disagree if you disagree. I think this one is in progress, so we'll keep that in the milestone again.
D: Yeah, I don't have any objections, so if it makes it, that's fine. The only thing is, there was some issue with testing — let me see — on CAPZ. I have not recently looked into it; I had reviewed that PR some time back. Does anyone have any context on that?
F
Yeah
I
did
add
some
comments
on
it
yesterday
and
put
it
on
hold
because
there
are
19
commits
but
yeah
waiting
to
hear
back
from
the
author.
A: Okay. I know the Flatcar template is almost ready, so I'm going to keep that here — I'm going to vote for that. I think this one is moving forward, so is it reasonable to assume it will ship in 1.8? Cool, I'm seeing a nodding head. Yep, this hopefully will land this week. This PR is for Matt; I think this one is basically ready, from John. When I say that, that's shorthand for me including it in the — sorry, since I'm not doing any kicking out here, I'm doing a bad job.
A
This
seems
like
the
this
is
the
the
north
star
of
the
auto
rest
stuff
right.
A
Well,
since
we
included
this
in
this
p
in
this
Milestone-
and
this
is
a
part
of
that-
we
probably
need
to
include
both
or
kick
both
out.
A
All
right,
let's
be
confident
we
can
help
ashtash
move
this
forward
in
the
1-8
Milestone
about
the
load,
bouncer
cluster
class
bug.
D: Yeah, and just to add: I was able to isolate that it does not have anything to do with AAD Pod Identity — it occurs with workload identity too. I think it has something to do with the Azure API, and I don't have the tools to go much deeper into that. So either somebody who has wider access, or an idea of how to nail it down — or we just use, like you said, the eventually-consistent practices too.
A
Right
this
yeah:
this
is
the
first
thing,
I'm
I'm
hearing
that
might
be
appropriate
to
kick
to
be
next.
We
can
always
move
it
back
yeah.
A: So we've got five minutes left. We might not get through this, but we've definitely made some progress, and we'll do this next week — and asynchronously between meetings as well. Where was I? Where was I — reorder things, the —
A: Got it, cool, I'll keep that in. Cool, let's keep that in, because it sounds like it's probably just an easy fix, and it sounds like a really annoying thing. This one, I think, is definitely 1.8; I've assigned that to myself.
F: Yeah, there's a lot in here; I think it's — yeah.
H: Yeah, go ahead — I agree. We shouldn't be adding more things; or we probably need to go to the "next" milestone, and at some point — maybe not right now in our three minutes — see if there's really anything there that prompts anything, because all the stuff that I think is in 1.8 is all really important, good stuff.
A
I
mean
to
be
clear,
we're
near
the
beginning
of
the
development
cycle,
and
so
as
a
community,
we
can
decide
on
what
side
of
the
spectrum
we
want
this
process
to
sort
of
Target.
Do
we
want
to
Target
accuracy
at
the
end
of
the
Milestone,
or
do
we
want
to
Target
sort
of
maximum
opportunity
for
folks
in
the
community
to
contribute,
and
so
for
folks
who
are
landing
and
saying,
what's
what's
been
sort
of
approved
for
immediate
forward
progress?
A
Does
that
do
those
alternative?
Spectrum
ends
make
sense
to
folks.
F: To me, the issue queue is what's there for people to go and find work. The milestone I see more as a way to give a high-fidelity prediction to our users of what is going to be in the next release — what they can expect to see. I don't really see it as a way for people to pick up work, because we tend to put stuff in there once it's assigned or once there's already a PR.
H: I'm just going to echo that — I don't think it matters as much if we don't finish everything that's on here, but I do think that we should try to be somewhat realistic, within the capacity of the community at large, about what we can try to do. And I would also just say, since we're obviously here on the community call: anyone who feels otherwise — that such-and-such issue or whatever else should be in the milestone —
H
Then
you
know
please
like
let
us
know
you
know,
obviously
amongst
ourselves
here
at
Microsoft
contributors.
We
only
have
so
much
capacity
to
give
so
yeah.
A
That
means
I
can
I
can
actually
competently
planned
that
in
March
I
will
be
adopting
1.8
and
rolling
it
out
to
my
environment.
So
I
think
we
don't
have
the
sufficiently
robust
Milestone
approval
process
to
to
achieve
that.
If
that's
one
of
our
goals
so
I
think
it's
just
worth
being
honest
about
that.
I
know.
H
I
think
it's
something
to
strive
for
I
mean
I,
definitely
think
it's
something
to
strive
for,
but
I,
but
I'd
unless
I'm
reading
or
hearing
something
otherwise
I.
Don't
think
we're.
You
know
gonna
penalize
us.
If
we
don't
make
one
of
the
things
that's
there,
you
know
I
think
we
need
to
try
and
be
operate
with
honest
intentions,
but
that's
part
of
why
we
have
the
weekly
meetings
to
adjust
like
if
we
think
like
something's,
at
risk
of
not
making
it.
A
Cool
that
sounds
great.
There's
lots
of
things
out
there
that
have
lots
of
very
robust
Milestone
approval
processes.
So
I'm
not
sure
we
actually
want
to
go
down
that
path,
but
it
does
have
the
advantages
of
producing
more
High
Fidelity
accuracy
in
terms
of,
what's
that,
what
actually
ships,
but
it's
non-trivial
work
to
do
that
yeah
great!
We
are
one
minute
over.
That
means
we
had
a
lot
of
great
discussion.
Thank
you.
Everybody
for
coming
I'll
upload,
the
recording
paste
it
to
the
doc,
see
folks
online
and
next
week.