From YouTube: Kubernetes SIG Cloud Provider 2019-05-01
A: All right, so hello, everyone. Today is May 1st, 2019. This is the bi-weekly Kubernetes SIG Cloud Provider meeting. A reminder again that if you are here, please add your name to the attendee list on the agenda. We have a pretty short agenda today, so if you want to add any last-minute items, please feel free to do so.
A: So the first thing I wanted to discuss is the face-to-face session at KubeCon. I'd like to get a sense for who's going to be there, so if you are planning to attend, please add your name to the list there so we can plan accordingly. I'd also like to get a feel for what topics people want to discuss during the session, so I put two things down. The first is establishing concrete timelines for when we're going to remove the in-tree cloud providers.
A: I know this has been discussed at pretty much every KubeCon, but we've been making a lot of good progress over the last few quarters, so I think we'll be in a better state to actually predict and establish some concrete timelines for this effort. I also put down driving adoption for out-of-tree cloud providers. I know a lot of providers have been building, testing, and developing their out-of-tree providers, so it'd be great to cover that too.
B: The motivation here is that we have a lot of Kubernetes deployments running on VMs, and we may do some spreading of pods among nodes, but those nodes could be running on the same VM host without the cluster admin's knowledge, and that in itself hurts reliability. So in SIG Scheduling we're working on a new feature for even spreading across failure domains, and this feature will not be specific to any topology.
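For reference, a minimal sketch, not something shown in the meeting: the even-spreading idea can be expressed with the topology spread API that later landed in k8s.io/api/core/v1, where any node label can act as the topology key. The app label, label values, and image below are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative pod spec: spread replicas of app=web evenly across a
	// topology domain identified by a label on the nodes.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "web-0",
			Labels: map[string]string{"app": "web"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "web", Image: "nginx"}},
			TopologySpreadConstraints: []corev1.TopologySpreadConstraint{{
				MaxSkew: 1,
				// Any node label can serve as the topology key; a zone label
				// is shown here, but a hypothetical physical-host label would
				// work the same way if providers exposed one.
				TopologyKey:       "topology.kubernetes.io/zone",
				WhenUnsatisfiable: corev1.DoNotSchedule,
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "web"},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.TopologySpreadConstraints)
}
```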
A: (Hearing an echo.) Okay, yeah, thanks. So I guess the first question, which has come up on the mailing list, is what criteria do we use to determine whether a topology label should be kubernetes.io-namespaced or provider-namespaced. I'd like to go with the rule that a label qualifies if the topology it describes is as common as what we have today, which is zones and regions. So if we can say that as many providers expose the physical host ID of a Kubernetes node as expose zones and regions, then it would warrant a kubernetes.io-namespaced label. That's just my personal take on this, because we have to be honest and say that adding labels, and adding more to the current interface we have for providers, definitely adds complexity to what is already sort of a mess.
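For illustration only, and not quoted from the meeting: the distinction is just which prefix owns the label key on the node. The zone label is a real standardized one; the physical-host key below is made up.

```go
package main

import "fmt"

func main() {
	// Hypothetical node labels showing the two namespacing options discussed:
	// a label owned by the Kubernetes project versus one owned by a provider.
	nodeLabels := map[string]string{
		// Kubernetes-namespaced: standardized, so workloads can rely on it
		// across any conformant cluster.
		"topology.kubernetes.io/zone": "zone-a",
		// Provider-namespaced: meaningful only on that provider; this key is
		// a made-up example, not a real label.
		"example.provider.io/physical-host": "host-42",
	}
	for k, v := range nodeLabels {
		fmt.Printf("%s=%s\n", k, v)
	}
}
```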
B: Providers of VMs aren't exposing the ID of the physical host, or some kind of unique identifier, today. But the other question is, I guess, would they if there were a standardized way to do so? So I think the threshold becomes a combination of: does this concept apply to most cloud providers, and then, weighing in as well, the probability that they would actually implement it. I think the concept is very widely applicable, but I am not able to gauge the willingness of any particular cloud provider.
C: Sorry, go ahead. You go. I was just going to say, I think it would be useful to talk more in terms of who the users of such labels are. The scheduler is one such client of those labels, and SIG Apps likely has some folks who would be very interested. And I think one goal that we need to keep in mind is the portability of workloads across implementations of Kubernetes.
A: I agree, and I think Tim was kind of advocating for that on the mailing list. And I don't think we should assume, and I'm not saying that you're saying this, but I don't think we should assume, that if we support this topology in Kubernetes, then cloud providers will eventually expose it to add value to people running on top of Kubernetes. So if we find over time that more providers are exposing physical host IDs up the stack, then maybe we can use that as a common label.
B: We're reusing the concept of topology key, which is something that was created for the affinity rules. With a topology key, you specify the label that identifies the topology that we care about for the purposes of this scheduling decision. So you can say my topology key is, you know, zone, or I guess you could even specify kubernetes.io/hostname as your topology key and say I want this to be evenly spread across my nodes. The physical-host case exists specifically because those nodes might share a physical host.
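As a rough sketch of that existing mechanism, not something shown in the meeting: a required pod anti-affinity rule keyed on kubernetes.io/hostname keeps matching pods off nodes that already run one, and swapping the topology key for a hypothetical provider-exposed physical-host label would spread across hosts instead. The app label and image are assumptions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Anti-affinity term: do not co-locate two "app=web" pods within the
	// same topology domain named by TopologyKey.
	antiAffinity := &corev1.PodAntiAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
			LabelSelector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"app": "web"},
			},
			// kubernetes.io/hostname spreads across nodes; a hypothetical
			// provider-exposed physical-host label would spread across hosts.
			TopologyKey: "kubernetes.io/hostname",
		}},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "web-0",
			Labels: map[string]string{"app": "web"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "web", Image: "nginx"}},
			Affinity:   &corev1.Affinity{PodAntiAffinity: antiAffinity},
		},
	}
	fmt.Printf("topologyKey: %s\n",
		pod.Spec.Affinity.PodAntiAffinity.RequiredDuringSchedulingIgnoredDuringExecution[0].TopologyKey)
}
```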
E: I would clearly expect people to do it. I mean, if we're going to support essentially anti-affinity, then supporting the reverse, affinity, makes sense for some use cases too, I mean just trying to schedule things close together to avoid latency penalties, because once you cross failure domains, even with modern technology, there are latency penalties for crossing those boundaries. So I'd expect people would want that. One thing I'd throw out there as kind of a means to get around, you know, hard identification of specific hardware while staying portable: there is already precedent in storage, which is storage classes, where the pods themselves put down preferences, essentially, that map to a class, and the class is a second level of abstraction that an admin then would in turn map to very specific backing store providers. And it means that the pod specs themselves, and things based on pods, ReplicaSets, etc., are indeed portable.
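A minimal sketch of the workload side of that precedent, not from the meeting, using the k8s.io/api Go types; the class name "fast" is purely illustrative.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The claim travels with the workload and only names a class; it says
	// nothing about which cloud or backing store ends up satisfying it.
	// (Storage size request omitted to keep the sketch short.)
	className := "fast" // illustrative class name chosen by the cluster admin
	pvc := corev1.PersistentVolumeClaim{
		ObjectMeta: metav1.ObjectMeta{Name: "data"},
		Spec: corev1.PersistentVolumeClaimSpec{
			AccessModes:      []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			StorageClassName: &className,
		},
	}
	fmt.Printf("claim %q -> class %q\n", pvc.Name, *pvc.Spec.StorageClassName)
}
```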
E: You know, if you know what you're doing, you define your classes with labels corresponding to desired outcomes, like I have this class of high-performance storage and this other class of cheap storage, for example. And no matter what cloud you land in, there's always likely to be a concept of high performance versus cheap, but what that really ends up being would be cloud-specific. Maybe we can come up with something similar to that for this idea, for cloud provider support for scheduling that doesn't break portability, yeah.
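A hedged sketch of the admin side of that analogy, not from the meeting: two classes named for outcomes, each mapped to something cloud-specific. The class names are illustrative; the provisioner and parameters shown are typical in-tree GCE PD examples used only for flavor.

```go
package main

import (
	"fmt"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Two classes expressing desired outcomes ("fast" vs "cheap"); the admin
	// maps each to something cloud-specific behind the portable class name.
	classes := []storagev1.StorageClass{
		{
			ObjectMeta:  metav1.ObjectMeta{Name: "fast"},
			Provisioner: "kubernetes.io/gce-pd",
			Parameters:  map[string]string{"type": "pd-ssd"},
		},
		{
			ObjectMeta:  metav1.ObjectMeta{Name: "cheap"},
			Provisioner: "kubernetes.io/gce-pd",
			Parameters:  map[string]string{"type": "pd-standard"},
		},
	}
	for _, sc := range classes {
		fmt.Printf("%s -> %s %v\n", sc.Name, sc.Provisioner, sc.Parameters)
	}
}
```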
E: Yeah, so those classes could be broken into, I don't know, a choice of failure domain A, B, and C is one thing to maybe pursue. An orthogonal one would even be to classify whether there are different classes of, let's say, compute resource, some of which have, you know, much better performance or something at higher expense versus others. Maybe you could put labels like that too, but I think the key is to get general concepts of desired outcome on these, rather than specific mappings that are going to break portability.
E: And for some of those, like GPUs, one could maybe argue that the support for that kind of stuff is already there, but maybe now that we have CRDs and things, there's a better way to do it than what got put together. I've been to plenty of talks, going back I think two years, on how to support those with the scheduler. So I don't think the case is there that this is completely unsupported today, but there might be opportunities to make it better.
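For reference, not from the meeting: the existing path for that kind of hardware is extended resources requested in the pod spec. The sketch below assumes the resource name advertised by the NVIDIA device plugin and a hypothetical image.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod requesting one GPU via an extended resource. "nvidia.com/gpu" is the
	// name advertised by the NVIDIA device plugin; other vendors use their own.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "gpu-job"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "trainer",
				Image: "cuda-app:latest", // hypothetical image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						"nvidia.com/gpu": resource.MustParse("1"),
					},
				},
			}},
		},
	}
	fmt.Println(pod.Spec.Containers[0].Resources.Limits)
}
```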
A: Yeah, so there's definitely a trade-off with complexity. So I guess, okay, to move this forward: for folks who haven't read the mailing list thread, I think you should read it. It's a really long thread, but it touches on a lot of the things we talked about here, how to define topologies and what the trade-offs are. So Stephan, and maybe Jago, do you guys want to take a read through the mailing list, and maybe reply and continue the discussion there?
C: I think also, to keep that perspective open: there's an easy out of saying, here's a junk-in-the-trunk place to put whatever you want for your own provider, and that's the easiest thing for any given provider, but it does come at the cost of portability. So I think keeping those multiple perspectives in mind is important too.
E: Sorry, just a quick informative item: the API server network proxy KEP was marked as implementable last night, so that should be in 1.15, and we have a kubernetes-sigs network proxy repo which is owned by this SIG. We have contributions from a couple of people at GCP and a couple of people at AWS. If other people are interested, please reach out. We already have a prototype of the network proxy, and hopefully soon we'll actually have a prototype of the integration with the API server itself.