A: Okay, welcome to today's December third meeting of the Azure cloud provider sub-project. This is a meeting in the Kubernetes organization, and we're under the CNCF code of conduct. As such, this meeting is being recorded and will be posted to YouTube, so let's follow the code of conduct and be good to each other. I just want to welcome folks I don't remember attending before; I think, Heba, this might be your first time joining this meeting with us.
B: So, it's a little bit crowded here; I'm at the airport. Actually, I just found a lot of good company, and found out that, hey, it's not that hard to be a contributor. So let's try. I believe I have, not a lot, but some knowledge about it, and there's AKS Engine, so I thought maybe this is a good place to begin.
A: We can talk about who's working on them and what blockers there are for whether or not they can make it into 1.18, and identify things that don't belong in cloud provider. There may be things in this list that belong in other projects or other places, or that, you know, there aren't any owners for, or that need help, or that don't even have issues yet in GitHub. And so what I'd like to do is just sort of crowdsource all that information and get it into the document.
A: So, next up: the first big item in 1.18 is to continue the work to pull, or extract, the cloud provider from the kubernetes/kubernetes repo. Linked in the issue, sorry, linked in the document are links to the issue, and the three big items that are identified and tracked in that issue are out-of-tree support for the credential provider and updates to the CSI drivers for Azure File and Azure Disk.
D: So, for all the changes that are already in-tree: I think there's one work item which is open, which basically details the six steps that we're going to take in order to avoid throttling. Out of that, I think we have three of them covered; basically, moving to LIST instead of GET, and then being able to read from the dirty cache for calls that are not required to go out. And I think there are three of them remaining.
D: One is basically looking at the Retry-After header and honoring that, and then there's also one change which was recommended where, if we hit throttling, then we still read from whatever cache data we have. So I think those two or three items are still left, and that's something that we'll go through and try to get in for 1.18. So these are the changes for in-tree.
D: So, for the out-of-tree: I think the reason those things were added was the cloud node manager that we run instead, which isn't making the ARM API calls. Instead, it would just use the instance metadata and initialize the node, and I think that's what was covered in the deep dive. So that will ensure that, by having something running on each node, we are not going to be adding more load to what is already there, and the improvements that we're doing now will make sure that whatever throttling we're having right now will not occur later.
A: Since yesterday morning, our time, obviously too late for you guys, it was basically just Rita and I, so we decided to move it, because we wanted to go through 1.18 planning. So thanks for jumping on. We're reviewing right now the CSI driver updates; maybe you could speak quickly to the AZ support, the migration, and the Windows support status, and whether there are any blockers or help that you need.
H: Yeah, well, we're still working on that. There's a few things. First, there's the Windows support, which is already being worked on, and another thing is the migration from the in-tree driver to the CSI driver. So the work is ongoing, and we are targeting to make it work, I mean, make it beta, or at least alpha, in 1.18.
A: So, what I want to get into is something that's later down in the agenda, which is: I think we need to define, as a working group, as a sub-project, what our release policies are, what our compatibility plan is, and what our testing plan is for the forward and backward compatibility between a cloud provider release and the Kubernetes release.
H: Yeah, actually, the feature uses, you know, a common interface, and it has already been documented against the existing Kubernetes versions. For example, for the zone support, some higher versions, like 1.15 and 1.16, already support the zone feature. So it is already fully documented, and we only need to implement those interfaces and publish the release. For the compatibility support, I think it's already documented in the common CSI documentation on the official website.
A: I think we're not quite communicating, because what I'm talking about is: when we make a release of the cloud provider, and say I'm a user and I need to use a cloud provider, I need to understand which versions of Kubernetes I can expect that particular release of the cloud provider to work with correctly.
A
E
I
A
E
H
A
A
E
H
A
H
H
A
H
A
H
E
A: So I've marked it with Ernest and Rita as looking at that more. Okay, next: the next one was fixing issues of flaky tests, like availability zone, and slow tests on VMSS. So I guess those are tests that time out. Is there an issue for tracking that yet, or do we need to create an issue for that?
A: And so the idea is to extract, from the cloud provider and these others, the common set of code to do those credential operations. So this is kind of a top-level list of things to do. Are there people on this call who are interested in helping with sort of the analysis, and understanding where and how to proceed on this issue, or on this effort?
A: Fantastic. So we identified a couple of items here, just in the notes. I think we could start this. Well, I think, in the context of working in AKS Engine and the cluster autoscaler, this will probably need some additional collaboration and some design, you know, kind of a design document to describe what it is, why, and how to go about building the first iteration of it.
A: The initial concept that I think came out of the Cluster API Azure provider discussion was that that would be a natural place to incubate kind of the POC of this as a library, but that's just an initial idea. If you guys could put your heads together and think about that, and see if that makes sense to you, or if it makes sense to do it some other way, we would love to see some thinking around that.
A: So, the next issue. Let's see, I'm just doing a time check: we're already three minutes past the half hour. I knew that this was going to take more than one half-hour session; it took us a little while to get going. Are you guys free to keep going for a little bit longer, or does anybody need to drop off?
A: Okay, so we'll create issues that describe what needs to be done, and hopefully folks can pick those up, and then we can triage that through AKS Engine. But I'd love to see somebody in this community pick up those changes in AKS Engine, if we possibly can.
A: Is the extraction meeting that happens, the cloud provider extraction meeting, are you in that meeting? I think so? Yeah? Okay; if not, I can forward you the invite. That's a great place to talk about it and collaborate with the other cloud providers, to make sure that we're doing something consistent with what they do, but we also need to think it's the right thing.
A: Okay, and then, to close things out: there are some legacy things that were in the notes and agenda document from well before I started operating this meeting. There were a bunch of things that were marked as pins for, what version, 1.14, and I wanted to just kind of, before I deleted them, see if any of them needed additional work or tracking, rather than just deleting them. If that makes sense.
G: So, I was looking at that earlier today. We've noticed that the deployment phase is failing in AKS Engine for a lot of the Windows jobs, and it's happened like five times out of the last seven days for just one of the runs. I have a suspicion that AKS Engine is not doing something right with ARM templates when extra extensions are added to a VM, because we only see it in the profiles where we added a lot of extensions.
G: What I'm mainly seeing, and I actually started a thread in the upstream team about this with Ernest, is cases where the ARM deployments just fail with something saying that there's a conflict, and I've been digging into that. So if we're seeing that same behavior, that's what I'm looking into; if there are other failures, I think we should track them separately.