From YouTube: SIG Cloud Provider 2020-03-04
E

OpenStack: since the last meeting we've been continually running tests to make sure the in-tree to out-of-tree migration works, and the most recent efforts have been towards supporting multi-architecture releases, and a change in the Keystone auth webhook to support role mapping. I put in the links for those pull requests.

Good, awesome.
F

I can do it. I saw Andrew put something in the agenda, which is the main new thing that we have: we merged initial support for Service type LoadBalancer using NSX-T. It's very alpha at the moment and it's behind a feature gate, but it will probably be part of the next release. It's a reason for joy, since we didn't have Service type LoadBalancer support before, so it's a good thing. Yeah.
A

And to add to that, we're also exploring supporting YAML for cloud config. So we're thinking of adding, like, a parser that can do that, or [inaudible] can speak more on this, but we're considering a parser that can convert our INI files to YAML, or the other way around. So I'd be curious if other providers have considered YAML, JSON, or some other cloud config format instead of INI, and if there's anyone up for collaboration there, I think.
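As a rough illustration of the conversion being discussed, the sketch below parses an INI-style cloud config into a plain dict and emits it as JSON (one of the formats mentioned). The section and key names are hypothetical, not the actual schema of any provider's cloud.conf, and real converters would also need to handle typed values and round-tripping back to INI.

```python
import configparser
import json

def ini_to_dict(ini_text):
    """Parse an INI-style cloud config into a dict of sections.

    Section/key names here are made up for illustration; each provider's
    real config (e.g. the OpenStack cloud.conf) defines its own schema.
    """
    parser = configparser.ConfigParser()
    parser.read_string(ini_text)
    return {section: dict(parser.items(section)) for section in parser.sections()}

sample = """
[Global]
auth-url = https://keystone.example.com:5000/v3
region = RegionOne

[LoadBalancer]
use-octavia = true
"""

config = ini_to_dict(sample)
# Emit the same configuration as JSON (a YAML emitter would be analogous).
print(json.dumps(config, indent=2))
```

The reverse direction (structured config back to INI) is similar: walk the dict and write `[section]` headers and `key = value` lines.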
C

This PR is, like, the kubelet machinery that will send the request to the plugin, get the response, and read the config to figure out what credential providers exist. And then the other PR, which I'm currently working with him to get out later today, is the AWS implementation of it. So it'll keep existing provider behavior as-is, but use the same code base to also have a compile target that creates a binary that can be used with this.
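To make the request/response flow concrete, here is a minimal sketch of what such an exec plugin does. The API group/version and field names follow the credential provider KEP draft and may differ from what ultimately merged, and the credentials are placeholders; a real plugin (like the AWS one mentioned) would exchange cloud credentials for a registry token.

```python
import json

# API group/version per the kubelet credential provider KEP draft;
# treat the exact name as an assumption.
API_VERSION = "credentialprovider.kubelet.k8s.io/v1alpha1"

def handle_request(request):
    """Build a CredentialProviderResponse for one CredentialProviderRequest.

    The static username/password and the per-image auth key are placeholders
    for illustration only.
    """
    image = request.get("image", "")
    return {
        "kind": "CredentialProviderResponse",
        "apiVersion": API_VERSION,
        # A hint for how long credentials could be cached; per the discussion
        # later in the meeting, kubelet itself currently does no caching.
        "cacheDuration": "10m",
        "auth": {
            image: {"username": "placeholder-user", "password": "placeholder-token"},
        },
    }

if __name__ == "__main__":
    # In the real flow, kubelet writes the request to the plugin's stdin and
    # reads the response from stdout; here we just run a canned request.
    demo = {"kind": "CredentialProviderRequest", "apiVersion": API_VERSION,
            "image": "registry.example.com/team/app:v1"}
    print(json.dumps(handle_request(demo), indent=2))
```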
C

Some other things that we were talking about last meeting that I figured I'd quickly mention. We mentioned moving where it currently is: the API types were in client-go, and Andrew said that's probably not going to fly, so I moved them into the package credentialprovider/apis. And then we also talked about actually just moving that whole thing to staging, which I haven't done yet; it should be an easy change.
C

So right now: I had initially had, in the plugin response, an expiration field that could be passed along with the credentials, in case we wanted to do any caching on the kubelet side. Currently there's none, and the plugins themselves, like AWS, have their own caching there.
C

So I had thought that maybe it was a good idea to pass an expiration, and maybe kubelet would do some kind of caching of these credentials. But as I kind of got into it, it just seemed to make more sense to keep the behavior as-is, and nobody's asking for that change. So unless somebody starts asking for it, I figured, why change it.
B

We are currently investigating a problem that was found in the HTTP CONNECT client. It only seems to be in the HTTP CONNECT client, so the gRPC client is fine. However, API Machinery has asked that we not declare beta on the apiserver-network-proxy until the HTTP CONNECT problem has been resolved.
A

...unless we can decide on a common place for generic controller manager types, where the migration configuration can live. And I don't think we're going to get it in for 1.18, so feel free to not review it until after 1.18. But yeah, we do need to have a discussion around where to store those types, because it's in the API server's API group, and it seems kind of weird that it lives there.
A

So one idea I had was creating a controllermanager directory or package in component-base and having the config APIs there, so we can also store the generic controller manager config and the kube/cloud shared config, some of that, in there. And that'll also unblock a bunch of the work we're trying to do to stage the cloud-controller-manager package as well. So.
B

I will say that I talked to the TL of API Machinery for 1.18. They would be fine with component-base. They did think that we should probably just have a top-level controllermanager directory that contains this, and in fact, that's probably where we eventually move all the common CCM code anyway, which I think makes sense.
B

All right. So I, unfortunately, did not get as much written up on this topic as I wanted, but I did promise to at least get a skeleton version out. So we have a goal, as part of the cloud provider extraction, of trying to get the cloud providers out in the 1.21 release, and in fact with me today I have [inaudible] and Ben, who have opinions on this subject. But the gist of this problem: there's a couple of things that go on here.
B

One: cloud provider builds, as I'm sure you guys know better than most, are going to depend on things that are currently in k/k, so they have a dependency on k/k. So if you're making a change to k/k, they have to build afterwards. But if I'm trying to merge code in, there's then this downstream ripple effect on the builds, where I build k/k, and then, having built k/k with that fix...
B
I
meant
in
a
position
to
try
to
consume
those
new
KK
binaries
and
libraries
from
the
various
downstream
cloud
provider
builds
and
then
I'm
in
a
position
with
those
bills
I'm
in
a
position
to
actually
kick
off
the
e2e
tests
and
there's
more
detail
here.
So
that's
one
portion
of
the
problem
here
getting
then
feeding
that
signal
back
as
to
whether
the
original
KK
change
was
good
is
another
piece
of
this.
B

What our processes are around handling bad commits, and how many cloud providers need to be affected, is another thing that I think is fairly important in this system. You know, we need to consolidate the results; we need to have things in place.
B

So it would be nice if we had a standard mechanism by which, when we create the 1.19 or the 1.22 branch of k/k, each of the cloud providers then also branches at that point, and has the right things in terms of their dependencies picked up, and their tests and all the related pieces there. And then, you know, what do we do about determining where tests run, and, in fact, making the e2e system itself work? So in k/k there's a very heavy dependency on cloud provider, which is sort of hidden.
H

I just shared some docs in #sig-cloud-provider for those not in the Zoom; I would recommend taking a look over those. We have, in the past, set some fairly specific standards both for "I want to have jobs that block the release and give signal back to the release team" and for how cloud providers can contribute results to TestGrid. Ideally, for things blocking the release, we'd like to get them in the upstream CI, but you may be able to convince SIG Release otherwise by just following the "how to contribute results" guidance, and that's gotten a bit better.
H

More of that infra is actually hosted under the K8s Infra working group, which I'm also part of. In terms of how we test this, I think almost all the things mentioned are going to need resolving, and we need to decouple the test framework somehow, or move those tests out of k/k, maybe to a kubernetes-sigs cloud provider test repo, just because they currently actually hard-depend on cloud provider code. Separately from solving that, the kinds of tests that do that are disruptive things that we absolutely need to test.
H

But we tend to do that in, like, a release-blocking dashboard. So I think there is one option here that SIG Testing has been pretty generally interested in, which is that we say: okay, follow the release-blocking test guidelines, and this is how you get your signal into k/k for presubmit. We can probably move...
H
We
can
probably
in
the
very
near
future,
cover
all
the
rest
of
the
things
will
kind
and
say:
okay,
this
is
no
vendor
here
and
we're
gonna
take
all
the
vendors
stuff,
and
if
you
meet
the
release
team's
guidelines
in
the
sig
release
repo,
then
we're
going
to
block
you
Bernays
releases
on
this
and
there's
already
some
guidelines
around
like
how
reliable
does
your
tests
have
to
be?
How
often
is
I
have
to
run?
H

Things like turning services on and off are pretty disruptive, and we tend to already do those in postsubmit. We have blanket bans on pretty much anything that's tagged as a Feature, Disruptive, or Slow, and we're also already looking at, like, kind of reducing our dependency on slow, flaky tests, I think also in postsubmit.
H
It
will
be
a
lot
more
viable
to
have
a
very
wide
variety
of
how
we're
testing
these
things
and
what
we
test,
because
you
are
now
not
doing
it
on
every
push
and
I,
would
like
to
highly
encourage
that
all
cloud
providers
participate
in
this.
And
if
you
need
to
help
sorting
that
out,
feel
free
to
reach
out
to
yeah.
G

Yeah, I think, plus one from the SIG Testing point of view. Just as a rule of thumb: we don't want everything to be in presubmits, but we still want the ability to revert if we want. Like, if all cloud providers end up breaking, I still want to find it, and it's really about latency, I think. Awesome.
H

We're also pretty motivated to get more providers there. We managed to get to a point where we had, like, GCE and OpenStack, but we've had some difficulty getting cloud providers to provide reliable, ongoing infra for anything; and not just cloud providers, pretty much broadly for the community. Going forward, I'm hoping that pushing things out of tree is going to make people reconsider this. There is a lot of value in getting these tests upstream: if you can get them in, you're much more likely to catch failures up front, as opposed to when you go to do your own releases.
H

I think, sorry, one other recommendation that I've been pushing on, myself, without having gotten it formally written down anywhere: when you're evaluating these tests, as much as possible, actually reconsider whether you can just use Kubernetes features to accomplish this. There's a lot of things like, "oh, I need to run a command on a node."
H

They can be accomplished in most cases by using, like, a privileged host-exec pod instead, and doing everything through the Kubernetes API, instead of saying, "oh, I'm on GCE, I'm going to SSH." I have not found many things where you're not going to be able to do this. It's still a matter of, like, someone has to do the work, but if you're looking to change things going forward, I'm pretty certain that Kubernetes is flexible enough that most of these things can be done.

How well does that work on distroless nodes?
H

It works well enough: you run a privileged pod, and then you use nsenter to get out onto those nodes. SIG Storage has moved almost all their things to this; they no longer use things like SSH in basically all their tests, and it's gone pretty well so far. We did uncover a runc bug, a race condition, but now that some of that stuff is sorted out, it's been pretty stable, and, out of the box, it works on pretty much all providers.
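A minimal sketch of the pattern being described, as a pod manifest built in Python. The pod name, node name, and image are illustrative; the point is the shape of the spec: privileged, in the host PID namespace, and pinned to the node under test, so that node-level commands go through the Kubernetes API instead of SSH.

```python
def host_exec_pod(name, node_name):
    """Build a manifest for a privileged "host exec" pod pinned to one node.

    Illustrative only: names and the image are placeholders, and a real test
    framework would create this via the API and then `kubectl exec` into it.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "nodeName": node_name,   # pin to the node under test
            "hostPID": True,         # see host processes, so nsenter can target PID 1
            "hostNetwork": True,
            "containers": [{
                "name": "host-exec",
                "image": "busybox",  # any image with a shell
                "command": ["sleep", "infinity"],
                "securityContext": {"privileged": True},
            }],
        },
    }

pod = host_exec_pod("host-exec-node-a", "node-a")
```

Once the pod is running, something like `kubectl exec host-exec-node-a -- nsenter -t 1 -m -u -i -n -- <command>` runs the command in the node's own namespaces, which is the SSH replacement discussed above.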
A

I think, like, one end goal there could be: maybe we take all those tests that currently run on the mostly GCE infrastructure and put them into a common repository. And then, like, right now the test framework has a provider interface, which is kind of... like, it's there, but we haven't put a lot of thought into how that interface should be designed. So maybe revisit that interface and get to a place where any provider can just import the e2e package from our cloud provider repo and implement that interface.
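The shape of such a pluggable provider interface can be sketched as follows. The real Kubernetes e2e framework is Go and its actual interface differs; the method names here are made up purely to illustrate the idea of generic tests calling provider-supplied hooks.

```python
import abc

class E2EProvider(abc.ABC):
    """Illustrative shape of a pluggable e2e provider interface.

    Method names are hypothetical; the real test framework's Go interface
    would be the thing to revisit, as discussed above.
    """

    @abc.abstractmethod
    def framework_setup(self):
        """Provision provider-specific prerequisites before the suite runs."""

    @abc.abstractmethod
    def create_load_balancer(self, service):
        """Hook a generic Service-type-LoadBalancer test would call."""

    @abc.abstractmethod
    def cleanup(self):
        """Tear down anything the provider created."""

class FakeProvider(E2EProvider):
    """What a cloud provider repo might ship after importing the shared package."""
    def __init__(self):
        self.events = []
    def framework_setup(self):
        self.events.append("setup")
    def create_load_balancer(self, service):
        self.events.append("lb:" + service)
        return "198.51.100.1"  # pretend VIP handed back to the generic test
    def cleanup(self):
        self.events.append("cleanup")

# A generic test only ever talks to the interface, never to a specific cloud.
p = FakeProvider()
p.framework_setup()
vip = p.create_load_balancer("my-service")
p.cleanup()
```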
H

Yeah, I think that's probably the most likely thing that needs to happen, in particular because, also, like, I don't think anybody wants to delete these tests, and I don't think anybody is staffed to go and reinvent all of them. But they do actually depend on the in-tree cloud providers, and we want to remove those from the tree, yeah. So if we just take those and ship them to another repo, I think we'll be fine.
B

You know, I think there already have been cases where we've moved individual tests; like, there are certain tests that were GKE-only that are now only in the cloud-provider-gcp repo. So I think it's going to be a case-by-case basis, but yeah, where we can unify and provide value to all collaborators, I agree with you, Andrew, I think that is the best way to go.
H

The last thing I want to talk about, for your sanity: I think one thing that we haven't touched on a ton here is how cloud providers can test, in the cloud provider repo, their concrete implementation and all. I don't think we have to be prescriptive there; I think we should strongly encourage considering the same patterns for that. One of them that I think has played out pretty well so far comes from kops.
H

You'll probably want to pin your provider at a... even in the postsubmit events, we want to pull a known-good version of your provider instead of what's at head. If you're going to be very on top of things, you can do head. Speaking from some experience running kind this way in presubmit, it's fairly expensive, so I think it's definitely worth looking into having some kind of standard plan for ratcheting which version you're running in each job for your general testing. Yeah.
A

Okay, yeah, we just added those; I'll add something real quick. So, to me, it sounds like there's three concrete problems as part of this. The first is that we have tests that are specific to a cloud provider, so, like, a Google load balancer test or whatever, right. And then we have tests that are generic to core, or required for core...
A

...but that depend on a provider-specific implementation to do something, yeah. And then the third problem is that we need to move the tests so that the providers themselves run them, and we don't run them as part of core. Does that sound like those are the three kinds of categories of problems that we have to address here, or is there something I missed?
A

How do we feel about taking those three and prioritizing them, like picking the highest-priority one of the three, prioritizing it for 1.19, and then we can kind of tackle it one at a time? Because I feel like there's a pretty big problem to solve, and trying to fix all of it at once, I think, is a little unrealistic.
B

Awesome. So, having just said that, I'll repeat it: if everyone could please take a look at this; this is the proposal, and I'd love to get everyone's feedback. I mean, I call it a proposal; it's really a statement of what I believe is a problem we need to tackle, with some brainstorming added, rather than an actual proposal for fixing it. But I'd love to make sure that we're covering all the problems.