From YouTube: SIG Cloud Provider 2020-04-01
B
The only thing we're working on is we added load balancer support for our network platform, NSX-T, so that is alpha and actively being developed. And then we're doing some work to explore YAML configuration instead of INI. I know most of the providers are still on INI, but I think, given everyone's familiar with YAML, that should serve well.
B
Remove dependencies in the cloud controllers and the controller manager binary; that way we can stage it all into the single cloud provider repo, and then if you need to build the CCM you just import that repo. I mean, it's still going to pull in client-go, apimachinery, all the core stuff in Kubernetes, but it shouldn't create the nasty dependency loops that you get when you import kubernetes/kubernetes.
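A rough sketch of what the dependency graph looks like for an out-of-tree provider under this model; the module name is hypothetical and the versions are only illustrative of the 1.18-era staging repos:

```
module example.com/cloud-provider-foo // hypothetical provider module

go 1.13

require (
    k8s.io/api v0.18.0
    k8s.io/apimachinery v0.18.0
    k8s.io/client-go v0.18.0
    k8s.io/cloud-provider v0.18.0 // staged CCM framework; no k8s.io/kubernetes
)
```

The point being made is that nothing in the require block is `k8s.io/kubernetes` itself, which is the import that drags in the dependency loops.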
B
Unless you're importing stuff outside of just what the cloud provider does and you have to import k/k for other reasons. But yeah, I'm really hoping that we'll move everything into the staging cloud provider repo, including the cloud controller manager binary and all the controllers and config structs and everything, and you wouldn't have to depend directly on kubernetes/kubernetes. Yes, something like that.
C
On the cloud provider side, yeah, I was just going to say that that's awesome work, Andrew. So, based on this, the changes that we were gonna make to the KEP on your suggestion, Andrew, I'm working on a PR for that and I am getting close. There's not that much there. The one thing that I wanted to add was the move to staging into the KEP itself, because I actually started working on that in the PR, so I should have that in the next couple of days.
C
Definitely by the end of the week for review, the KEP PR at least. And then, yeah, in terms of code, I took a break from it, but this week I'll start getting back to it, starting with the move to staging, and then I still have some comments to address on the actual code PR as well. So I'll probably get a little deeper into that next week.
B
Okay, so let's just go straight into the agenda: supporting running multiple CCMs concurrently and conflict-free. You added this one, right?
A
We do this, for instance, for our CSI stuff, and ideally we'd like to do that for the CCM as well. But the challenge we have today, or see today, is that by design, as far as I understood, if you have a cluster where a CCM instance already runs, like, for instance, if we spin up a production-ready managed Kubernetes cluster, that comes with a pre-built CCM already.
A
Of course, you can't have another CCM run there as well, because I think the design assumes, or the architecture assumes, that whatever CCM is running in there kind of takes responsibility for all objects that are implemented by the interface, like nodes and load balancers and whatnot, and that makes it tricky. What we'd ideally like to have is a capability very similar to what exists for CSI drivers or different ingress controllers: there's this notion of specifying your class, and you run in parallel and kind of partition the object space.
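The class-based partitioning the speaker compares to CSI drivers and ingress controllers can be sketched roughly like this; the types and field names are illustrative stand-ins, not a real Kubernetes API:

```go
// Sketch of "class" partitioning: each controller instance is configured
// with a class and reconciles only the objects whose class matches its own,
// so several instances can run in parallel without conflicting.
package main

import "fmt"

// Service is a stand-in for a Kubernetes Service object; Class would in
// practice come from an annotation or spec field.
type Service struct {
	Name  string
	Class string
}

// claimed reports whether a controller instance configured with myClass
// should reconcile the given object.
func claimed(svc Service, myClass string) bool {
	return svc.Class == myClass
}

func main() {
	services := []Service{
		{Name: "web", Class: "managed"},
		{Name: "db", Class: "self-hosted"},
	}
	// A controller configured with class "managed" skips "db" entirely,
	// leaving it for whichever instance owns "self-hosted".
	for _, s := range services {
		if claimed(s, "managed") {
			fmt.Printf("reconciling %s\n", s.Name)
		}
	}
}
```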
A
I think kubelet was the specific use case there, so there was a lot of talk about nodes. Specifically, nodes is definitely the thing that I have in mind as well, but in particular load balancers is also something that we want to test, because in the OCCM right now we support nodes and a load balancer. So these are the two resource types that we'd like to be able to test, you know, on managed clusters. Yeah, so that's kind of the context; I guess I should have said that up front.
A
Maybe the way people, especially out-of-tree providers, test their CCM implementations, maybe there's a good way and I just wasn't aware of it; it felt very natural for us to use our own managed products. But if there's none, then basically the discussion I want to kick off is about this PR that was closed at the time. I think it was just stale.
B
And then it goes off and does its thing. Even at VMware, internally, we have some people asking for this feature too, and we never really went through with it, because it didn't really seem... I thought it was just something that we wanted and not a lot of others wanted. But if there's enough, if there's other cloud providers who actually think this is gonna be useful, I wouldn't be opposed to starting with the whole label-ownership idea and seeing where that goes.
A
Yeah, that sounds good to me as well. I did like the label-selector idea; I think that could work, or would work, for the OCCM. Oh cool, all right, so yeah, if I understood you correctly, then the next step would be to start a KEP and, I guess, gather some feedback and opinions on that, yeah.
B
And what's nice is that the interface already supports like a cluster ID; I think the system has a flag called cluster ID on the cloud controller manager, and that's used for... I think it's mainly used for tagging on AWS, so you can map a resource to a cluster and you don't accidentally attach a resource to another cluster. But we could use that same flag, the cluster ID flag, and use that as the owner ID.
B
So that might be... actually, that could be breaking for AWS, but anyways. We can start, like, we can explore ideas on that front. But yeah, feel free to open a KEP, or if you want, open a PR with a bare-minimum implementation, and then we can go back and forth and kind of guide the discussion there, and then we can move on to a KEP. That works too, yeah.
A
Yeah, I think that makes sense. In my mind at least, and I hope I'm not underestimating the effort, a PR should be doable within a day or so and shouldn't be too big. The cluster ID idea is interesting as well. I know we talked about that cluster ID some time ago, a few months ago, that it was in a kind of odd state, because I think it's marked as deprecated.
A
If you don't add the flag, you get a warning on startup saying that you should set it. I don't know, there's also this issue where we talked about to what extent this is still needed, and I think somebody from AWS chimed in back then as well, and it seemed like it wasn't really used anymore. But yeah, if this is an opportunity to maybe bring it back to life, why not? We can look into that, but I guess that's for the future.
B
Sorry, yeah, I think cluster name is what I meant, but cluster ID is also a weird thing. Like, I think we deprecated it because it's only used on Amazon, but I haven't seen it going anywhere, because you can't remove it without breaking existing users. So it's in this weird state; maybe we should un-deprecate it, because clearly it's not gonna go anywhere. But yeah, I think starting with cluster name is a good start, because I know cluster name is already propagated to all the controllers.
A
It would be something like, I don't know, yours would be "production" and mine would be "test", I don't know. Oh yeah, you're right, that's a good point. In that regard we can... yeah, that's kind of the idea, yeah.
B
We would give it two separate cluster names. But essentially this is a flag that you can kind of use to trick the CCM into saying it's a different cluster and have it only watch resources that match the cluster name, or a label that specifies the cluster name. But I agree, maybe we need a new flag. But if you're gonna do a POC or something, you can use this as kind of a way to demo how we do this, because this flag is already kind of being propagated everywhere.
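The label-ownership trick being discussed might look roughly like this; the label key is a hypothetical name, not an established Kubernetes convention:

```go
// Sketch: a CCM started with a --cluster-name value acts only on resources
// whose ownership label names that cluster, so two CCMs partition the
// object space instead of fighting over it.
package main

import "fmt"

const ownerLabel = "example.com/cluster-name" // hypothetical label key

// owns reports whether a CCM configured with clusterName should act on a
// resource carrying the given labels. Unlabeled resources are left alone,
// so neither instance claims them by default.
func owns(clusterName string, labels map[string]string) bool {
	owner, ok := labels[ownerLabel]
	return ok && owner == clusterName
}

func main() {
	resources := []map[string]string{
		{"app": "web", ownerLabel: "prod"},
		{"app": "canary", ownerLabel: "test"},
		{"app": "legacy"}, // no owner label: skipped by every instance
	}
	for _, labels := range resources {
		fmt.Printf("app=%s owned=%v\n", labels["app"], owns("prod", labels))
	}
}
```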
A
Well, that one was filed by a colleague of mine, and this is something that I'm less sure about, whether the proposal I made makes sense or not. And again, some context up front. The context is, let me make sure I get this right as well, there's this parameter in the CCM, or two parameters, actually.
A
It's min-retry-delay and max-retry-delay, which basically define the interval in which delays happen, or retries happen, and that's an exponential delay, I believe, or at least it increases until it hits the maximum. And we have one use case where... so, usually we are completely fine with that, and with the upper limit as well, because that's what you want if, say, you hit an error repeatedly, of course.
A
If you can't do things like delete the LB or change the configuration of something, which is why we decided to back off. But in the case where, with an LB, we're just waiting for the LB to get active, this incurs some extra delay that's not really necessary. Like, if we hit this kind of spot where the LB would be ready, but the exponential back-off has already driven us into the upper limit, if you will, a number of extra seconds or even minutes can pass before the LB
A
actually becomes active, because we have to wait for the controller to make that next call out into our interface implementation. That's the problem that we have, and the question we have around this is: is there a way to maybe make a differentiation between delays that are just progress delays, that's what I call them in the ticket, and, on the other hand, failure delays, where we're really hitting a problem and an exponential back-off is actually justified?
A
That being said, I'm wondering if we're doing the right thing. Maybe the decision we took to let the controller do the back-off and retry for us, maybe that's wrong, maybe there's a better way. But given this kind of setup that we had, filing this ticket seemed to make sense, at least to get some input on that question or request.
B
Have you confirmed, have you audited the code to make sure, are you a hundred percent sure that the operation is blocking? Like, if you get two load balancer operations and the first one blocks for minutes, for sure the second one doesn't come up? Because I thought there was, like, we have a work queue and its consumer has a concurrency of five or something. Oh, okay. But I could be wrong; I'm just assuming that, I'd need to check the code.
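The work-queue behaviour the speaker is recalling can be sketched with a plain worker pool; the concurrency of five and the function names here are illustrative, not the actual service controller code:

```go
// Sketch: service keys drained from a queue by a fixed number of concurrent
// workers, so one slow load-balancer operation does not block the rest.
package main

import (
	"fmt"
	"sync"
)

// process consumes every key in queue with `workers` concurrent goroutines,
// calling handle for each, and returns how many items were handled.
func process(queue []string, workers int, handle func(string)) int {
	ch := make(chan string)
	var wg sync.WaitGroup
	for i := 0; i < workers; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for key := range ch {
				handle(key)
			}
		}()
	}
	for _, key := range queue {
		ch <- key
	}
	close(ch)
	wg.Wait()
	return len(queue)
}

func main() {
	var mu sync.Mutex
	var done []string
	n := process([]string{"svc-a", "svc-b", "svc-c"}, 5, func(key string) {
		mu.Lock()
		done = append(done, key)
		mu.Unlock()
	})
	fmt.Println("handled", n, "items")
}
```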
B
I think, I mean, if you can test it and prove that it blocks, then I think we should definitely... I'd be open to adding different error types for the service. So, like, you know, a retryable error or whatever, or an in-progress error, whatever you want to call it, and then we can update the service controller. Yeah, like, whatever the provider returns, based on the error type we can react.
B
We have a backlog here, so I'll probably just start going through this stuff now, or we can push it to the next meeting. But if folks want to assign themselves any issues here, or they want to bump the priority on anything, please comment on the issues; that's what we're going to use to drive some of the work we're doing for 1.19.