From YouTube: SIG Cloud Provider 2020-01-22
A
Awesome, sounds good. GCP, yeah, ditto: we're aiming for the 1.18 release. Right now we have a build for out-of-tree, and I'm working on... no, no, alpha. Don't jump the gun on me, Andrew. So, alpha for 1.18, and we're hoping to get something that we can actually fully bring up and have all the tests pass.
C
What is the recommended cloud provider right now for vSphere? So, if you're running 6.7U3 or newer right now, we're recommending out-of-tree, so we're fully on the GA, stable path for that. So, the next question... Awesome, great news. Thank you. I love the aggressive timeline for the removal as well.
F
Yeah, it's just an FYI. Yesterday I opened the PR that adds a bit more transition detail. The API is what you initially proposed in your POC, Walter; this just takes that API and then kind of talks about how we're going to inject the config into the components, and it does a walkthrough of what moves where. I think we're just blocked on someone from the sig reviewing it.
F
Yeah, like, if we're talking zero downtime, then use this. But for most cases, and for most clusters that can tolerate, you know, a few minutes of downtime during an upgrade, we're recommending just flipping your cluster over to disable the in-tree cloud providers and then deploying the CCM on top, which is probably the simplest way to do it. I agree.
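As a rough sketch, "flipping the cluster over" comes down to setting the cloud-provider flags on the components and then deploying the CCM. The flags below are the standard out-of-tree settings, but treat this as an outline of the idea, not a tested runbook:

```
# kube-apiserver: stop using the in-tree cloud provider (unset the flag)
# kube-controller-manager: --cloud-provider=external
#   (its in-tree cloud control loops stop running)
# each kubelet: --cloud-provider=external
#   (nodes register tainted, waiting for the CCM to initialize them)
# then deploy the provider's cloud-controller-manager on top of the cluster
```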
A
Yeah, I mean, definitely: if you can get away with just taking your control plane down, upgrading it with the CCM completely configured, and then bringing it up again, that is a much simpler solution. Yeah, the zero-downtime path is definitely for people with HA clusters that don't want any downtime.
L
So I published another revision of it that incorporated the feedback I got from SIG Auth and SIG Node. I think we need more eyes from cloud provider for sure, but I think it's getting there. There weren't any, I guess, critical disagreements or anything like that; I think people seem to think that it's going in the right direction. Awesome.
A
Awesome. I mean, nothing changed there, but on the network proxy SSH-removal KEP, the beta requirements have already been set, actually for 1.17, although we didn't get all the changes made. I've got a couple of engineers right now working on trying to get that all done for 1.18, so I think we're in pretty good shape on that front.
C
Not a strong objection, the opposite: you know, I'm in full agreement with the aggressive timeline. My question is around storage providers. Is there a concerted effort in creating CSI providers for all the missing out-of-tree cloud providers? Is this coordinated in some way?
A
Yeah, we're coordinating with SIG Storage, so Saad and crew. I mean, again, we're trying to make the transition; we're not saying it's going to happen, but it's certainly our goal right now. We have learned from previous experience that if we don't set a goal like this, things don't happen. Absolutely.
C
Like, I'm in agreement, you know, that we'll get to 1.21 with, you know, at least the CSI plugins in line, so that there's actually a transition path. Sometimes I'm worried about, like, blinders, right? You know, people working on just one thing and not considering the out-of-tree story. But I'm fairly confident.
F
I... like, the expectation is that by now everyone has a CSI plugin. Maybe it's not, like, GA, but it's being developed. The only thing that's kind of a question mark is whether you have CSI migration developed; by, like, 1.18, SIG Storage is saying the CSI migration mechanism itself is beta, which means that the only thing we're waiting for is the providers to implement the migration mechanism. And if they don't implement that, it just means that if you want to use, like, version 1.22 or 1.21, you can't upgrade from an existing cluster.
A
The other comment I will make to that is, I mean, Andrew and I worked closely with the storage team on the migration plan, on coordinating the timing, but we've been mostly letting them take care of pushing the actual CSI drivers. We are, however, sort of championing it every chance we get, since we have good reach into all of the in-tree providers.
A
We have an audience here right now, so: hey, please check that the CSI drivers that your cloud provider needs are being taken care of, you know. And so, thank you for bringing it up. Yeah, please: Google, Amazon, Azure, you know, VMware vSphere, whichever, please make sure that you guys have a plan.
A
The other thing I will touch on is, if you're wondering why we picked 1.21 and why we say that's aggressive: for us to remove in 1.21, we have to hit GA in 1.20, which means we need to be beta in 1.19, which means we need to have everything successfully alpha in 1.18. So, I mean, we're basically looking at having everything successfully hit its appropriate status in each of the next four releases.
F
Okay, so the folks from Azure, Pengfei maybe, proposed a KEP a little while back, which was to do instance-level node initialization out of tree. The main motivation is that if you follow the CCM model for node registration, you have to make an API endpoint call multiple times for every node that's registered, and for certain cloud providers that are very quota sensitive, that uses a lot of quota.
F
Pretty much. And so there was a proposal to pretty much extract that part of the CCM into a daemon set, so that the daemon set pod does the registration on the same node it's registering, and it queries the metadata service on the node to do that, which pretty much bypasses the cloud API calls. But then, at the cost of running a daemon set, you're running, you know, one more pod on every node. So we agreed, or at least I argued, that we shouldn't just put this into the core CCM and assume that everyone has the same API quota problem. And so what I said was: let's accept the KEP as an Azure-specific KEP, and let's get a sense of how many other providers have a similar problem. If we find that everyone would benefit from this, and the trade-off of running the daemon set is worth it, then we can kind of adopt it into, like, the core cloud provider implementation.
F
So yeah, this is just... I'm just trying to find it. One second.
F
Okay, so this is the spec. It shows here a new daemon set called cloud-node-manager. When you deploy out-of-tree Azure, you pretty much deploy the cloud controller manager, which is just, you know, one component, like the kube-controller-manager, but it runs on the control plane; you can run it HA. And then you have your CSI driver for the storage support. But with this change, you would also deploy the cloud-node-manager daemon set, and like I said, it's really... it's a really simple pod that checks whether its current node is an initialized node in the cluster, and if it's not, it queries the metadata service on the node and then registers it. So it pretty much initializes the node in two steps: the kubelet does kind of a, you know, provider-agnostic registration, and then the cloud-node-manager re-initializes the node with all the cloud provider details.
F
Okay, cool. Yeah, I'm cool with that; yeah, that works for me. Okay, cool, so I will write it up. Let me note that we're going to use Azure as kind of the example provider for this, if another provider wants it, but we're not going to prescribe it as a default, I think.
K
Also, one of the things that could make it easier is maybe representing on the CPI, the cloud provider interface, the control loops that a provider should run, and this could be implemented by the provider. So, let's say Azure wants to run, for example, in the CCM the service controller but not the node controller: it would return these, and this would simplify a bit the machinery to run the daemon set.
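A sketch of what that interface addition could look like. The method and loop names are hypothetical; today's real `cloudprovider.Interface` does not have this:

```go
package main

import "fmt"

// ControllerLoopsProvider is a hypothetical extension to the cloud
// provider interface: a provider declares which CCM control loops it
// wants run, instead of the CCM hard-coding all of them.
type ControllerLoopsProvider interface {
	// ControllerLoops returns loop names, e.g. "service", "route",
	// "node" (the names are illustrative).
	ControllerLoops() []string
}

// azureLike is a stand-in provider that, per the example above,
// wants the service controller but not the node controller.
type azureLike struct{}

func (azureLike) ControllerLoops() []string { return []string{"service", "route"} }

// shouldRun is the bit of CCM machinery this would simplify: decide
// whether to start a given loop for a given provider.
func shouldRun(p ControllerLoopsProvider, loop string) bool {
	for _, l := range p.ControllerLoops() {
		if l == loop {
			return true
		}
	}
	return false
}

func main() {
	p := azureLike{}
	fmt.Println(shouldRun(p, "service"), shouldRun(p, "node"))
}
```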
A
When a particular change breaks a cloud provider, or a set of cloud providers, or all the cloud providers: how do we find it, under what conditions do we roll it back, and if we don't roll it back, who is responsible for fixing it? I don't necessarily want to have that conversation right now, before people have had a chance to think about it, but I would like to have people think about it, and then let's discuss it next time.
A
I think the problem is, we're talking about, you know... k/k at this point is the kernel, and GCE or Azure or AWS is, like, a distro. So yeah, I mean, it's a rough analogy, but the e2e test is a distro test, and you're trying to say: if I make this kernel change, is this kernel change going to cause problems with the distro, right?
F
I guess what I'm saying is, like, we have tests that depend on a provider, but not necessarily because it's a cloud provider feature; it's just the fact that, like, this cloud provider has a certain trait that this test depends on. So, like, there's a set of tests where that dependency is useless?
A
I kind of agree, but at the same time, as an example: one of the tests I was looking at was aggregator. The aggregator is a beta API machinery feature that should in no way depend on GCE; Google engineers wrote the test. There were a couple of things that needed to be done to get it to work. It turns out there is one spot where it tests a slightly different configuration on Google, but in fact that test should work on all cloud providers.
A
But I'm sorry, I'm not explaining myself well. So first I need a k/k image, and then that k/k image has to be built into the... and maybe we should just wait till everyone has had some time to think about it. But I think we need to understand how we build these images, because my gotcha is that I don't believe we can build the GCE image until we've built the k/k image, and we can't build the k/k image until that commit occurs, which precludes this from running as a pre-commit.
A
So, going back to my analogy, and I think maybe we should take this offline: if you think of k/k as the Linux kernel, I don't understand how Ubuntu or Fedora or Debian can make a latest build that is supposed to consume the latest kernel without consuming the latest kernel.
A
I completely agree with that. My point is that today there are tests... I mean, there are two scenarios that I think we need to think about. One is the one you're talking about, which probably applies to my sample case: there are tests that run today as cloud-provider-specific because of who wrote the test, but that probably shouldn't be cloud-provider-specific. Unless a bunch of extra work gets done when we remove the cloud-provider-specific code, that testing is going to disappear.
A
The second... the aggregator is actually a beautiful case. In both cases, we should probably have an aggregator test in-tree that is not cloud-provider-specific; that's missing. And then it turns out that the way routing gets done in the aggregator for GCE is slightly different from how it works in any other cloud provider, which means that it is actually possible to make an aggregator change that breaks just GCE. That particular use case is definitely one where, you know, we're not going to know about that breakage today.
A
And I think we also need to fully think through the various scenarios, because, A, I agree that exactly what you just said needs to happen, but, B, we still need processes around what happens when k/k breaks a cloud provider and how we deal with that: what if it's one, what if it's several, what if it's all of them? Got it, cool, all right. Sorry about that. Does anyone else have anything else they'd like to discuss?
F
Custom load balancer names: I think P3, milestone next, is reasonable. This is a good thing, a nice thing to have, but it's a pretty complex problem because of how ELBs are named, and that sounds dangerous. I think if someone wants to do the work, that's cool, but I don't think anyone's going to prioritize this, so it's up for grabs if anyone wants it.
F
Investigate usage and requirements for cluster ID: I was missing the context on this one. Right, so the cluster ID feature is only used by AWS, so we asked Nick to investigate whether we can just get rid of it. So I think P2, milestone next, is fine; it's not a super high priority for the cloud provider. Yeah, we already talked about this.
F
1808, API polling in the node controller: this is, I guess, related to Pengfei's issue about API-quota-sensitive controllers. I think the Alibaba folks had the same problem, so maybe we can suggest the instance metadata approach to the other folks. But I think P3, next, is fine, unless other folks have run into this, and then we can maybe bump the priority on it and work on a solution.
F
Decoupling cloud providers from the e2e testing framework: related to what we just talked about, but this is specifically pertaining to, like, the actual providers package in the framework, which makes cloud-provider-specific calls. So yeah, there's some work going on on this, but it's again, like, high priority but not immediate for 1.18. Maybe we should consider putting this in a milestone. Walter, do you want to put this in 1.18, or do you want to wait until 1.19 or 1.20 to prioritize it higher?
A
But it needs to be a P0 or P1. We want to stop making changes to the CCM. If we're basically saying everyone needs to be out of tree... I don't want to do this again in 1.19, when everyone's trying to get to beta. I mean, we already have one alpha, or one GA, provider out of tree; making changes to the CCM is just going to cause headaches for everyone if we keep doing it.
F
So I think, like, at least from the vSphere side of things: if we moved cmd/cloud-controller-manager into, like, cloud-provider/cloud-controller-manager, it is a change that we would have to fix, but I think that would make our go module footprint much smaller, and I think we would all be happy to do that. I agree with it if it means you don't have to import k/k, totally.
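The go-module point can be sketched as a before/after contrast of an out-of-tree provider's go.mod. This is a hypothetical fragment with illustrative version numbers; the gist is that importing the CCM scaffolding from k/k today drags in the whole monorepo plus its many replace directives for the staging repos:

```
// before: CCM scaffolding lives in k/k
require k8s.io/kubernetes v1.17.0
// ...plus a replace directive for each k8s.io staging repo

// after: scaffolding moved alongside the cloud-provider staging repo
require (
    k8s.io/client-go       v0.17.0
    k8s.io/cloud-provider  v0.17.0
)
```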
A
I agree with that, but I think part of this, and I don't know, I'm going off the title more than the bug text here, but I think there are other things that need to be done as part of this. I think we've got to make the config easier, and we should be making it easier to plug in new controllers, and I think by the time you've done that set of things, you're starting to look at a broader set of changes.
A
The one tricky part that I think is going to come into this, and I'm not sure this is the best place to talk about it, is that I think we're going to end up in a situation where the controllers that get brought in are going to be dependent on the cloud providers, and then the controllers are going to depend on config.
F
Update: talk about your sample repo. I don't see them on the call, but I think this would be nice to have, also for testing. I'm realizing that if we can create a sample cloud provider that can render the cloud provider data off a file, then you can test it with kind and that stuff; someone just needs to do it. I'm going to unassign them, because I'm not sure they're working on it, but I'll leave it open, because this repo is actually, like, blank.
F
This is a SIG Network and SIG Cloud Provider thing, but last time I checked, the service controller doesn't actually do a backend update of the load balancer for nodes; when there's a node update, it just polls every hundred seconds, and as you can imagine, that can cause problems. So I think there was... actually, I think I opened a PR.
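The polling-versus-event-driven distinction can be sketched in plain Go with a toy in-memory "informer"; the real service controller would use client-go shared informers, and these types are stand-ins:

```go
package main

import "fmt"

// backend counts how many times the load balancer backends were
// re-synced; in the real controller each sync is a cloud API call.
type backend struct{ syncs int }

func (b *backend) syncNodes(nodes []string) { b.syncs++ }

// nodeStore is a toy stand-in for a node informer: instead of the
// controller polling on a fixed hundred-second interval, observers
// are notified as soon as the node set actually changes.
type nodeStore struct {
	nodes     []string
	observers []func([]string)
}

func (s *nodeStore) onChange(f func([]string)) {
	s.observers = append(s.observers, f)
}

func (s *nodeStore) set(nodes []string) {
	s.nodes = nodes
	for _, f := range s.observers {
		f(nodes) // event-driven: fires once per change, no polling
	}
}

func main() {
	lb := &backend{}
	store := &nodeStore{}
	store.onChange(lb.syncNodes)

	store.set([]string{"node-0"})
	store.set([]string{"node-0", "node-1"})
	fmt.Println(lb.syncs) // one sync per node-set change
}
```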