From YouTube: Kubernetes SIG Cloud Provider 2019-04-17
A: All right, hello everyone. Today is April 17th, 2019, and this is the bi-weekly SIG Cloud Provider meeting. Let's start first with announcements. I just got confirmation for our SIG face-to-face at KubeCon Barcelona. It's going to be on Monday, May 20th, from 2:00 p.m. to 5:00 p.m. If you can make the SIG face-to-face, and you can, let me know ahead of time; that'd be great. I know it's probably going to conflict with other face-to-faces.
A: First thing on the agenda: I wanted to give a sub-project update for the cloud provider extraction and migration sub-project. This is a sub-project under SIG Cloud Provider that is focused on getting all the in-tree cloud providers removed; we want all the cloud providers using their out-of-tree cloud controller managers instead of the existing in-tree implementations. We meet weekly on Thursdays at 4:00 p.m. Eastern, 1:00 p.m. Pacific. We recently introduced a new staging repo called kubernetes/legacy-cloud-providers, and this is where we're going to start publishing the in-tree cloud providers. The plan is to migrate all the in-tree cloud providers to the staging repos so that we have a way to publish them externally, so that they can be vendored in from the existing out-of-tree external repos.
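To make that vendoring story concrete, here is a minimal sketch of what an out-of-tree cloud-controller-manager could do once the code is published, assuming the k8s.io/legacy-cloud-providers import path described above and the existing cloudprovider registry; the exact layout may differ from what actually ships:

```go
package main

import (
	"fmt"
	"os"

	cloudprovider "k8s.io/cloud-provider"

	// Blank import: the legacy provider registers itself with the
	// cloudprovider registry in its init() function.
	_ "k8s.io/legacy-cloud-providers/gce"
)

func main() {
	// Look up the registered provider by name and initialize it from an
	// optional cloud config file, the same way the in-tree code path does.
	cloud, err := cloudprovider.InitCloudProvider("gce", os.Getenv("CLOUD_CONFIG"))
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to init cloud provider: %v\n", err)
		os.Exit(1)
	}
	fmt.Println("initialized provider:", cloud.ProviderName())
}
```

The point of the staging repo is exactly this: the external repo depends on a published module instead of reaching into kubernetes/kubernetes.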
A: And so the plan, for 1.15 I believe, is that we're going to move all of the in-tree providers that plan to stay in Kubernetes and keep supporting it going forward into this legacy staging repo, and then providers are going to have the option to vendor those legacy providers into their own implementations if they want. So this is not required; it's just optional. I know a lot of providers prefer to build their implementations again from scratch and not have to depend on legacy behaviors, but it's there for the providers that do have a dependency on whatever existing behaviors they added in-tree.
A: So we do have the first PR open for vSphere. When that lands, I'm hoping that we can get PRs open for Azure, AWS, and GCE, and maybe OpenStack; I know Tim said that we may keep OpenStack where it is, because we're planning on just removing it instead. Do folks have any questions on this so far?
B: I just wanted to jump in and say this is a super wonky approach to this problem, but there's precedent for it in the Kubernetes community. We've done this in other places, especially in SIG API Machinery, where types need to be vendored into two different repositories elsewhere, and it does work, even though it's wonky. It requires a little extra thought about what gets built at what time and where; there's some cognitive toil in where you need to change code and what you need to check in.

So if you have questions, or this doesn't make sense to you, that is perfectly natural and expected in some sense. Walter is a good person to ask, or anyone in SIG API Machinery; Chao and others have kind of established this pattern over the years and can offer some guidance and tips if you find the path too strange.
A: Don't expect us to come up with timelines now, or at this meeting, but it's something to think about for folks. As we get closer to the end of 1.15, I think that's around the time we want to come up with a firm deadline to delete that repository. Okay, so the sub-project also has a few KEPs in flight; we have two KEPs in progress. The first one is supporting an out-of-tree credential provider. This is a mechanism so that a cloud provider can provide the kubelet with image-pulling credentials without having to compile them in.
C: No, I think we've kind of got an idea of what the scope of this is. We're trying to introduce a little bit more fine-grained locking to the kubelet, or rather the controller manager, the cloud controller manager, so that high-availability clusters can upgrade better. Sorry, I was reading ahead; the credential provider, yeah. So for the credential provider work: there are cloud-specific credential providers in-tree right now, and the question is how we move those out without breaking people who are using kind of cross-cloud functionality today. The KEP is kind of in dotted lines just because there's a handful of problems that we need to solve, but hopefully we're going to get to a more concrete proposal soon that we can talk about more here.
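As a rough illustration of the mechanism being discussed, here is a hypothetical sketch of the shape such a plugin contract could take: given an image reference, return registry credentials the kubelet can use. None of these names come from the KEP; they are purely illustrative:

```go
package main

import "fmt"

// AuthConfig is the username/password pair a registry expects.
type AuthConfig struct {
	Username string
	Password string
}

// CredentialProvider is the hypothetical plugin contract: the kubelet asks
// an external provider for credentials instead of compiling cloud-specific
// token fetching (ECR, GCR, ACR) into its own tree.
type CredentialProvider interface {
	// Enabled reports whether this provider applies in the current environment.
	Enabled() bool
	// Provide returns credentials keyed by registry URL pattern for an image.
	Provide(image string) (map[string]AuthConfig, error)
}

// staticProvider is a toy implementation for illustration; a real provider
// would exchange cloud IAM credentials for a short-lived registry token.
type staticProvider struct{ registry string }

func (p *staticProvider) Enabled() bool { return true }

func (p *staticProvider) Provide(image string) (map[string]AuthConfig, error) {
	return map[string]AuthConfig{
		p.registry: {Username: "_token", Password: "example-short-lived-token"},
	}, nil
}

func main() {
	var cp CredentialProvider = &staticProvider{registry: "gcr.io"}
	creds, _ := cp.Provide("gcr.io/my-project/my-image:v1")
	fmt.Println(creds["gcr.io"].Username)
}
```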
A: Yeah, and continuing what Mike said, the other KEP is shared leader election across the kube-controller-manager and the cloud-controller-manager for all the cloud-specific controllers. This is to make sure that as users migrate between in-tree and out-of-tree cloud providers, we're not going to have a scenario where, as you deploy the two different components, you have the cloud controllers running in both of them during version skew. Having a shared leader election implemented means that if you migrate an HA cluster, only one of the components is going to run the cloud controllers at any specific time. So expect the KEPs for both of these in the next few weeks.
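Here is a minimal sketch of the shared-lock idea, assuming both components point at one extra lock (named "cloud-controllers" below) that gates only the cloud-specific loops. The lock name and callbacks are assumptions, not from the KEP, but the client-go leader election machinery shown is the real API:

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runCloudControllers would start only the cloud-specific loops
// (service, route, node lifecycle, and so on).
func runCloudControllers(ctx context.Context) { <-ctx.Done() }

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/controller-manager.conf")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Both components construct the *same* lock, so whichever wins the
	// lease runs the cloud controllers while the other stays idle, even
	// while the two binaries coexist during version skew.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		metav1.NamespaceSystem, "cloud-controllers",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "kube-controller-manager"},
	)
	if err != nil {
		panic(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: runCloudControllers,
			OnStoppedLeading: func() { /* step down: stop the cloud loops */ },
		},
	})
}
```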
A
And
if
anyone
is
interested
in
helping
implementations
with
these,
let
me
know,
and
we
can
we
can
figure
something
out
and
then
I,
don't
think
Walters
here,
but
Walters
also
working
on
API
server
network
proxy,
which
is
supposed
to
replace
the
SSH
specific,
the
provider,
specific
SSH
autonomy
organism
from
from
the
entry
providers.
So
there's
a
repo
where
the
reference
implementation
is
going
to
live.
So
if
you
want
to
check
it
out,
you
go
there,
it's
empty
right
now,
but
it'll.
Something
will
be
there
soon.
A: I think, so, this KEP is interesting because it falls across three different SIGs: SIG Network, API Machinery, and Cloud Provider. The only interest from our end is that providers that rely on the SSH tunnels need the API server network proxy in order to get rid of them. So this is more just saying: hey, if you're a provider and you've been using this SSH tunneling, then you need to implement an out-of-tree network proxy so that you can get rid of that. The work lives under SIG Network.

D: Yes, good.
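A hypothetical sketch of what that replacement amounts to: instead of the apiserver dialing nodes over provider-managed SSH tunnels, connections go through a pluggable dialer that can route via an external proxy server. The Dialer interface and proxy address below are illustrative, not the actual apiserver-network-proxy API:

```go
package main

import (
	"context"
	"fmt"
	"net"
)

// Dialer abstracts how the apiserver reaches kubelets and nodes.
type Dialer interface {
	DialContext(ctx context.Context, network, addr string) (net.Conn, error)
}

// directDialer is the plain case: a direct TCP connection.
type directDialer struct{}

func (directDialer) DialContext(ctx context.Context, network, addr string) (net.Conn, error) {
	var d net.Dialer
	return d.DialContext(ctx, network, addr)
}

// proxyDialer stands in for the network proxy path: a real implementation
// would speak the proxy's tunneling protocol and ask the proxy server to
// open the connection to addr on the apiserver's behalf, which is what
// lets providers delete their SSH tunnel code.
type proxyDialer struct{ proxyAddr string }

func (p proxyDialer) DialContext(ctx context.Context, network, addr string) (net.Conn, error) {
	fmt.Printf("tunneling %s via proxy %s\n", addr, p.proxyAddr)
	var d net.Dialer
	return d.DialContext(ctx, network, p.proxyAddr)
}

func main() {
	var dial Dialer = proxyDialer{proxyAddr: "proxy.kube-system:8091"}
	_, _ = dial.DialContext(context.Background(), "tcp", "10.0.0.12:10250")
}
```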
G: I had a question on the shared leader election. Last time I talked with Walter, there was also another option: do a multiple-stage rollout, where you disable the controller-manager controllers, for, like, the load balancer or service controller, and then bring them up on the cloud-controller-manager. I don't know if that's an alternate approach that was considered and dropped, and now we're going with shared leader election.
A: I think, and I could be wrong here, that mechanism is available today, because we have a --controllers flag where you can specify which controllers you don't want running. And I think there's even, like, an "enable cloud loops" flag or something, where the kube-controller-manager will look at all the controllers that are under the cloud controllers and disable them. So that's possible today.
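A simplified sketch of the gating mechanism being described (not the actual kube-controller-manager code, just the pattern): controller loops are registered by name, and a --controllers-style selector list with "-name" entries, plus a wholesale cloud-loops switch, decides which loops run:

```go
package main

import (
	"fmt"
	"strings"
)

// cloudLoops marks which controller loops are cloud-specific.
var cloudLoops = map[string]bool{
	"deployment": false,
	"service":    true,
	"route":      true,
	"node-ipam":  true,
}

// enabled mimics a "--controllers=*,-route" style selector list, plus a
// wholesale switch that turns off every cloud-specific loop at once.
func enabled(name string, selectors []string, runCloudLoops bool) bool {
	if cloudLoops[name] && !runCloudLoops {
		return false // everything cloud-specific disabled in one shot
	}
	allowed := false
	for _, s := range selectors {
		if s == "-"+name {
			return false // an explicit "-name" always wins
		}
		if s == "*" || s == name {
			allowed = true
		}
	}
	return allowed
}

func main() {
	selectors := strings.Split("*,-route", ",")
	for name := range cloudLoops {
		fmt.Printf("%-10s enabled=%v\n", name, enabled(name, selectors, true))
	}
}
```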
H: What we're doing is we're basically just asking the community to create new repos under their Kubernetes SIGs for basically everything that is not a cloud provider, or a cluster API provider, for a specific cloud provider, meaning a cloud platform, really. So I was looking at the wording in the charter of SIG Cloud Provider, and I couldn't find anything that was covering everything else, everything that is not a cloud provider, basically. And I was looking through the various things that are being planned to be folded inside SIG Cloud Provider. You know, taking SIG AWS as an example: besides cloud-provider-aws, there are things like the ALB ingress controller, the IAM authenticator, the encryption provider, and three different CSI drivers. So I was wondering if we could add something, you know, either...
B: I think formalizing it is a great idea, and I think there are probably some rough guidelines we could come up with for when something is directly tied to a specific provider; the behavior of load balancers is a good example. There are also components, like CSI drivers, that are decoupled from the cloud provider itself, so we may need some directory structure that supports cross-cloud, or something cross-provider. I don't know the exact terminology we want to use, but I think we can't set up a situation that ignores that.

We'd likely also have to go and see how the CSI folks feel about their current organizational structure, and maybe we just refer to a completely different community's repos at that level. So that's how I've thought about it before: there's a distinction between things that are directly tied to a provider and things that are potentially portable across those providers. We should at least make that distinction and build it in from the beginning. What do you think about that?
H: Because, again, just like SIG VMware has cloud-provider-vsphere: it really has, you know, one repo and two projects inside it, so we're trying to split that up into two discrete repos. And I was looking at OpenStack; cloud-provider-openstack has the same thing, like four or five different projects all inside the same repo, right? So...
B: I think part of it gets easier if we flip it upside down and consider it as: what is a reasonable structure when building a distribution of Kubernetes? Having some consistent structure for that would be useful. Maybe we try to make it more useful than a mandate, so that tools like kubeadm, or anything that's trying to package and build, would be able to find all the required and optional bits and put them together into something that works for an end user.
I: Yeah. I mean, I think this is one of the goals and mandates that we had. And I think that, you know, when OpenStack built that repository out, it served our needs, but now we're at the point where it's not serving our needs very well, just because we have, you know, essentially siloed groups of people working on different projects inside it, but we only have a small set of reviewers who are kind of handling that workload.

From my point of view, what would be really nice is, as a cloud provider, helping set some institutional guidelines: maybe actually coming up with whether it's going to be in a mono-repo or in smaller repos, and how we coordinate with other SIGs to build consistent standard practices, so that if you want to go get the VMware cloud provider, you know exactly where to get it, and it's in a similar location to the AWS one. Similarly for the CSI providers, and similarly for the cluster API work that's happening, because that's another repo that SIG OpenStack is interested in, but it hasn't fallen officially under our SIG, and a lot of work is happening on it just, you know, by a set of interested parties.

So I think there's an opportunity for us right now to say, institutionally: if you want to have repositories hosted somehow inside the SIG Cloud Provider structure, that includes requirements around, you know, integration testing, documenting it, and just having a consistent layout across everybody. I don't think it's probably going to be solved quickly, but I think it's a problem we can solve.
H: Like, thinking about this in the eyes of an end user, or somebody that is trying to build their own Kubernetes stack and run it on a specific cloud platform: right now you just have to go shopping around, you know, different repos and different documentation. Some stuff is in the core; some stuff is in the SIGs.
H: If SIG Cloud Provider is going to own those sub-projects... and as far as I can tell, the only other SIG that we probed, which is SIG Storage, when asked if they wanted to take ownership of all the CSI providers, got back to us with a strong no; they don't want to do that. So I was wondering if we want to take ownership or not.
A: I don't think we have a choice, because when we fold those SIGs in, right, all the existing CSI drivers that were under them are going to fall into sub-projects. So I think a reasonable next step is having our charter account for those CSI drivers, and then maybe add a note in the charter saying that longer-term we may want to do something else with this. I don't know what that is, but yeah.
H: All right, I can propose an amendment to the charter for the various places where a particular provider SIG is being folded in, so that, you know, a single owner is going to own all the repos that are kind of orphaned by the provider SIG folding. I can do that. And in terms of organization, I don't know if we want to discuss this a little longer, but I think we could be doing something along the lines of maybe creating a different org, like a kubernetes-cloud-providers or some other name, and putting everything under there.
A: Yeah, I think that would be good to discuss on the mailing list, but yeah: if you open up a PR for the charter, let's bring that up at the next meeting, and then potential new orgs we can talk about on the mailing list. I think that was the plan originally for the cloud-provider-* repos, but I don't know; it just hasn't really been a high priority. It's definitely something we should discuss, though.
J: There's that KEP, what is it called, cloud provider labels GA. And then I started thinking: what if there was a more generic way to allow individual providers to specify their own topology labels, so that applications can make more intelligent decisions about how to spread themselves out across a given provider? Since even among the cloud providers, there aren't exact one-to-one mappings of all the different topology layers that are available.
J: So part of this generic interface would also be allowing a mechanism for Kubernetes administrators, via the use of new CRDs for instance, to specify their own topology layouts for their on-premise platforms. You know, they could get as granular as: it's on this rack, it's in this row, it's in this room, it's in this data center, things like that. So I figured, instead of trying to come up with something that was specific to just our storage system...
J: ...there might be something in looking into a solution more generally for on-premise users and, more broadly, for any platform providers in general. I already talked to Andrew about this a little bit, and he said I should just come here to present it to a broader audience and see if we could get any more feedback on the idea.
J: Basically, because the project I'm working on is very concerned with providing a first-class experience to on-premise users and on-premise admins, we want to be able to let them tell us what sort of topology they want us to distribute against: so, like, whether they want us to go across data centers, or across different rooms in the same data center.
K: I wanted to jump in here with a side topic. Every time I hear about extending the labels or extending the annotations: the problem is, if you have a well-known label, then let's have it as part of the type itself instead of just a label. The reason behind that is that having it as a field or a property of the type means we don't rely on loose string conventions, and it's versioned. Then again, there is a lot of cost that goes into having it as part of the type; obviously the Node type right now is humongous, so we should approach this carefully. As for who supplies the value, I'm okay with having this as part of the labels that the cloud provider is throwing in, all right.
K: But to tell you the truth, most of the information that's part of that particular discussion can be centrally patched onto the node, meaning I can have one controller that does this on behalf of the entire cluster, instead of having every node grinding to get those same labels, which are probably going to be the same across, let's say, one third of the cluster or one tenth of the cluster. Maybe thinking about it in a way that allows us to do this, or that, might be a better option.
B: So what I'm trying to get at is: what is the motivation for this discussion here right now? Is it to bring that thinking from the topology-aware volume scheduling here, to determine if there are other interested parties? Or is it to get support to add the concepts of data centers and racks to those supported topologies?
A: I think, if we're following what volumes did, and I could be wrong here, so please correct me if I'm wrong: we have well-known labels like zones or regions, but we keep the interface for getting the labels generic. So I think the right approach is to have the provider return a map of string to string, obviously keeping backwards compatibility and still returning the labels we previously supported, but I think it'd be good to keep that interface generic for any labels in the future. Yeah.
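A hypothetical sketch of that generic-map approach: the provider keeps returning the well-known zone and region keys for backwards compatibility, but the interface itself is just a map, so an on-premise provider can add rack, row, or room keys without changes to core. All names here are illustrative:

```go
package main

import "fmt"

// TopologyProvider is the hypothetical generic interface: one map of
// labels instead of fixed zone/region accessors.
type TopologyProvider interface {
	NodeTopologyLabels(nodeName string) (map[string]string, error)
}

type onPremProvider struct{}

func (onPremProvider) NodeTopologyLabels(nodeName string) (map[string]string, error) {
	return map[string]string{
		// Well-known keys stay stable once they go GA, as discussed.
		"topology.kubernetes.io/region": "dc-east",
		"topology.kubernetes.io/zone":   "dc-east-room-2",
		// Provider-defined keys add on-premise granularity.
		"example.com/rack": "r17",
		"example.com/row":  "b",
	}, nil
}

func main() {
	labels, _ := onPremProvider{}.NodeTopologyLabels("node-1")
	fmt.Println(labels)
}
```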
J: So there are parts of CSI, parts of storage, doing something like this already, but the issue comes in with, you know, the project I work on, which is software-defined storage: we are not a separate storage thing underneath Kubernetes, we are storage in Kubernetes. And so our distribution, especially in something like AWS or GCE, depends entirely on the distribution of nodes on that provider.
J: Basically, yeah. So this would be in tandem with the existing topology labels on nodes, so that an application that just wants to use the general labels, for instance zone and region, could use those for whatever it wants; but then other applications that want finer granularity, on providers that support it, would be able to make use of this additional information straight from kubectl or straight from the kube API.
A: Yeah, another trade-off is, like, delegating the label keys to providers, which means they're more likely to make a breaking change, which could be dangerous. But I don't know; I feel like if it's documented well, and it's clear that once a label is beta or GA or whatever you can't change it, that's probably fine.
I: Yep, I'm here, yeah. So this was an email that went out to the sig-leads mailing list, and I'm just kind of rebroadcasting it for everyone. It looks like the SIG API reviewers would like to have some people who can shadow, who can learn how to do API reviews.
I: So, kind of responding to this email: the goals are to develop reviewers, to distribute expertise, and to increase review bandwidth. So take a look at that email; if you are interested in getting a little bit more expertise in reviewing APIs, then I think this is a good opportunity to pick up the skills and follow along with some people who have been doing it for a while.
B: I want to follow on that: this is an amazing opportunity to grow, for almost anyone. If you look back over a year to see what Jordan has reviewed, like how many PRs he comments on and reviews, it's mind-boggling how much surface area he has covered. There are a handful of others who do, like, the next level of review volume, and those folks are just looking both to transition some of their knowledge and to get some more bandwidth.
B
The
same
collection
of
people
both
implement
very
ambitious
features
and
are
on
the
hook
for
all
the
reviews
at
the
end
of
every
quarter.
It's
madness.
So
this
is
really
a
tremendous
opportunity,
regardless
of
how
much
experience
you
have
just
a
shadow
and
see
what
it
is
that
are
looking
for
what
patterns
they're.
Looking
for
how
they
do
the
things
they
do
to
keep
the
kubernetes
codebase
safe
and
secure
and
avoid
big
incidents
for
large
customers
that
wake
us
all
up
in
the
middle
of
the
night.
So
really,
this
is
a
unique
opportunity.
B: I've got the next one, but really it was just a hat tip to Chris, who put together the slide deck for the SIG update at the community meeting last week. Those of you who didn't join the community meeting should take a look at that; there's interesting information about what the SIG has been up to. If you've missed a meeting, this is a higher-level view of that, and if you're new and are just catching up, it's a good starting point. So great job, Chris, thank you for that; I wanted to give a shout-out.
I: Well, thank you, yeah. And thanks, everyone in the SIG; I'm going to try to make sure in that presentation, too, that I list a lot of the names of people who have been contributing and working on the SIG here. You know, it's really been awesome to see how quickly this SIG is growing and how active it is. It's really fantastic.
K: Just a question on that vendoring PR, to make sure I understand the details: you moved the cloud provider into a separate repo, and then you vendored that back into kubernetes/kubernetes? That's what this PR is all about? Or did you move it all and then vendor it back...?
A: So the plan there is to move the in-tree providers, so everything under, like, pkg/cloudprovider/providers (you know, Azure, AWS) into that repo, and we're not vendoring... well, I guess, yes, we are vendoring that back through the staging mechanism, yeah. And for the kubelet and the controller manager and all that, the only thing changing is, like, the import path, right? Yeah. And then for your external repos...
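To illustrate that answer: inside kubernetes/kubernetes, consumers see only an import path change once the provider code moves to the staging repo and is vendored back. A sketch, using the GCE provider as the example:

```go
package main

import (
	// Before the move, consumers inside kubernetes/kubernetes imported:
	//   "k8s.io/kubernetes/pkg/cloudprovider/providers/gce"
	// After the move, the same code lives in the staging repo and is
	// reached (and vendored back) through the new path:
	gce "k8s.io/legacy-cloud-providers/gce"
)

func main() {
	_ = gce.ProviderName // everything else about the package is unchanged
}
```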