From YouTube: kubernetes sig-aws 20190712
A: I am pasting a link to the agenda into the chat. We do have a couple of items on the agenda; thanks Jay, and thanks Peter. Please do add any items to the agenda if you would like to be sure that we cover them. For people watching the video, it is helpful for them too if we put your name in the agenda, so please feel free to do that if you would like. Otherwise, I propose we jump straight into the first item, which is Jay's.
A: Yeah, we do have this cluster ID tag. I thought it was only required on AWS, yeah.
A: That is okay. I think I wrote in this issue, basically linking to cloud provider issue 12, that we use it for two purposes, right: for isolation and for cleanup is how I described it. For isolation, we're able to tag the resources and therefore stop accidental crossover between two clusters; and then there's cleanup.
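
To make the cleanup half concrete, here is a minimal Go sketch, assuming the conventional kubernetes.io/cluster/<cluster-id> resource tag and a hypothetical cluster name; the resource-groups tagging API enumerates exactly one cluster's resources, which is also what gives the isolation:

// Minimal sketch (not the actual cloud-provider or cleanup code):
// enumerating every resource owned by one cluster via its shared
// kubernetes.io/cluster/<cluster-id> tag.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	tagapi "github.com/aws/aws-sdk-go/service/resourcegroupstaggingapi"
)

func main() {
	svc := tagapi.New(session.Must(session.NewSession()))

	clusterTag := "kubernetes.io/cluster/my-cluster" // hypothetical cluster name

	// A cleanup tool can delete exactly this set and nothing that
	// belongs to a neighboring cluster in the same account.
	err := svc.GetResourcesPages(&tagapi.GetResourcesInput{
		TagFilters: []*tagapi.TagFilter{{Key: aws.String(clusterTag)}},
	}, func(page *tagapi.GetResourcesOutput, last bool) bool {
		for _, m := range page.ResourceTagMappingList {
			fmt.Println(aws.StringValue(m.ResourceARN))
		}
		return true
	})
	if err != nil {
		log.Fatal(err)
	}
}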
A: There is the potential to fix it, like, to try to get isolation in other ways. It would be a fairly big change. One of the ideas that I like: there are places where we pass around node names, and if we passed around node objects instead, we would presumably be able to get rid of a bunch of these things, and it'd be more efficient in general. How would the node name...
A
The
some
that
the
cluster
au
tag
is
very
important
there.
If
you
have
multiple
subnets
in
your
V
PC,
so
that
okay,
we
were
able
to
know
because
often
the
subnets
for
your
ELB
will
not
be
the
same
as
the
subnets
in
which
your
node
runs.
If
you're
running
your
nodes
in
private
IP
configurations,
you
run
your
elby's
in
a
different
subnet
I
believe,
ok,
so
that's
that's
sort
of
where
it
comes
from
I.
Think
I,
don't
know.
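
As a sketch of that subnet discovery, assuming the conventional kubernetes.io/cluster/<name> and kubernetes.io/role/elb subnet tags and a hypothetical cluster name (the legacy cloud provider's real selection logic is more involved):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// elbSubnets finds the subnets designated for public load balancers,
// which may differ from the subnets the nodes run in.
func elbSubnets(svc *ec2.EC2, clusterID string) ([]string, error) {
	out, err := svc.DescribeSubnets(&ec2.DescribeSubnetsInput{
		Filters: []*ec2.Filter{
			// Subnets belonging to this cluster...
			{Name: aws.String("tag-key"),
				Values: []*string{aws.String("kubernetes.io/cluster/" + clusterID)}},
			// ...that are also marked for (public) load balancers.
			{Name: aws.String("tag-key"),
				Values: []*string{aws.String("kubernetes.io/role/elb")}},
		},
	})
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, s := range out.Subnets {
		ids = append(ids, aws.StringValue(s.SubnetId))
	}
	return ids, nil
}

func main() {
	svc := ec2.New(session.Must(session.NewSession()))
	ids, err := elbSubnets(svc, "my-cluster") // hypothetical cluster name
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(ids)
}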
A: I don't think we should require cluster ID on other clouds, and I know it's not clear that we have. I think if someone wants to push to reduce the number of places where we use cluster ID in AWS, that would be great. I don't know that it is trivial to get rid of it, to be honest. OK, what was sort of the SIG Cloud Provider or SIG Architecture feeling? I don't really understand what they were.
B: So this is sort of related to the other thing that I'd listed down, which came out in the SIG Architecture meeting, which was around credential providers: being able to get rid of that, or not get rid of it, but pull it out of k/k. And the only way to do that, I believe, is to determine which ecosystem projects are directly importing the credential provider stuff from k/k, the cloud provider credential provider package or whatever it is, directly, as opposed to having an interface in cloud.go in the cloud provider repo.
B
You
know
what
I
mean
in
six
doc,
kto
forward
slash
cloud
provider,
it
has
all
the
interfaces
for
the
cloud
provider,
interface
right
in
cloud
go
and
I
believe
that
we
want
to
take
the
credential
provider
stuff
into
there
all
right.
So
so
it's
interface,
it's
substracted,
as
opposed
to
the
somewhat
tightly
coupled
situation
that
it
has
now
yeah.
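
To make the seam being described concrete, here is a minimal Go sketch; the CredentialProvider name and its methods are assumptions for illustration (they echo the shape of kubelet's existing credential provider plugins), not the interface that cloud.go actually contains:

// Hypothetical sketch: what a credential provider seam in
// k8s.io/cloud-provider's cloud.go might look like, so that kubelet
// depends on an interface rather than importing provider code from k/k.
package cloudprovider

// DockerConfigEntry mirrors the shape of one registry credential; the
// real type lives in kubelet's credentialprovider package.
type DockerConfigEntry struct {
	Username string
	Password string
}

// CredentialProvider is an assumed name, not an adopted API.
type CredentialProvider interface {
	// Enabled reports whether this provider can serve credentials in
	// the current environment (for example, running on its cloud).
	Enabled() bool
	// Provide returns registry credentials usable to pull the image,
	// keyed by registry URL.
	Provide(image string) map[string]DockerConfigEntry
}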
A
Anyway,
just
before
we
do,
let
me
that's
talk
about
that
in
a
second
before
we
do.
I
just
want
to
I
realized
the
the
big
difference.
The
reason
why
we
have
the
custody
tag
is
because
a
sort
of
philosophical
decision
that
I
guess
I,
made
back
in
the
early
days,
was
to
discover
things
like
subnets
using
those
tags
right,
rather
than
require
it
to
be
passed
in
in
a
sort
of
configuration
to
the
cluster.
So
there.
A
I
mean
like
so
captured,
eks
Cotto
has
two
options
right:
it
creates
a
bunch
of
subnets
and
it
can
either
tag
them
or
we
could
say,
I've
created
a
bunch
of
subnets
and
I'm,
going
to
record
their
IDs
and
I'm
going
to
put
them
into
a
config
map
in
the
class,
sir.
Neither
one
of
those
would
be
fine.
We
went
with
Tex
or
I,
went
with
axe,
and
perhaps
that
was
the
wrong
choice,
but
that's
where
this
cluster
I
do.
A: Sure, yeah, but I think that's just where it comes from; that is the root of it. And I guess one of the things I would say is that eventually the cluster ID becomes a SIG AWS specific requirement, and that there is an existing philosophy of using tags and discovery, which we may decide is not the way to go. And there's actually a nice way to move to explicit enumeration as well: you can have explicit enumeration, and if it's not explicitly enumerated, you just go and do the tag discovery. But then we'd imagine the tooling would move to explicit enumeration, and you could basically then remove the permissions for discovery, because it'd be a fallback path that would never be hit. So we can get there, but I think that would be our path forwards.
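
A minimal sketch of that fallback shape, with assumed names (Config, clusterSubnets): explicit IDs win, and tag discovery, such as the DescribeSubnets filter sketched earlier, is only the fallback:

package main

import "fmt"

// Config is a hypothetical cluster configuration: explicit subnet IDs
// are optional, and the cluster ID drives the legacy tag discovery.
type Config struct {
	ClusterID string
	SubnetIDs []string
}

// clusterSubnets prefers explicit enumeration; once tooling records
// SubnetIDs, the discovery path is never hit and its permissions can
// be revoked.
func clusterSubnets(cfg Config, discoverByTag func(clusterID string) ([]string, error)) ([]string, error) {
	if len(cfg.SubnetIDs) > 0 {
		return cfg.SubnetIDs, nil
	}
	return discoverByTag(cfg.ClusterID)
}

func main() {
	stub := func(id string) ([]string, error) {
		return []string{"subnet-discovered-for-" + id}, nil // stand-in for tag discovery
	}
	ids, _ := clusterSubnets(Config{SubnetIDs: []string{"subnet-0abc"}}, stub)
	fmt.Println(ids) // explicit IDs win: [subnet-0abc]
}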
B
Sure,
which
is
why
you
know
it's
labeled
P
yeah,.
A: Yeah, on the other one you brought up, though, the challenge as I understand it is that the person, the process, that consumes the credential providers is kubelet, yeah.
A: That is what makes this really hard: sure, we can define an interface, but kubelet specifically needs to call into it to get credentials. Or, well, unless it could be made indirect; like, one of the proposals, I think, was to auto-populate image pull secrets as Secrets in each namespace, or alongside each pod, or whatever it would be, that sort of thing. That's more of a Kubernetes controller approach.
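
A minimal sketch of that controller approach, as an illustration rather than any proposal's actual design; fetchRegistryCreds and the Secret name are hypothetical stand-ins:

// Sketch: a controller materializes registry credentials as an image
// pull Secret in every namespace, so kubelet never calls a credential
// provider directly.
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// fetchRegistryCreds is hypothetical: it would ask the cloud registry
// (ECR, GCR, ...) for a short-lived docker config JSON.
func fetchRegistryCreds() []byte {
	return []byte(`{"auths":{}}`) // placeholder
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	dockerCfg := fetchRegistryCreds()

	nss, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, ns := range nss.Items {
		secret := &corev1.Secret{
			ObjectMeta: metav1.ObjectMeta{Name: "registry-creds", Namespace: ns.Name},
			Type:       corev1.SecretTypeDockerConfigJson,
			Data:       map[string][]byte{corev1.DockerConfigJsonKey: dockerCfg},
		}
		// Create the Secret, or refresh it if it already exists.
		_, err := client.CoreV1().Secrets(ns.Name).Create(ctx, secret, metav1.CreateOptions{})
		if errors.IsAlreadyExists(err) {
			_, err = client.CoreV1().Secrets(ns.Name).Update(ctx, secret, metav1.UpdateOptions{})
		}
		if err != nil {
			log.Fatal(err)
		}
	}
}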
A: Yeah, which we kind of already have: there's a separate library that is only consumed by kubelet. As far as I know, there was some overlap where they called into each other for convenience, but Mike Crute, I think, managed to split them so that they are totally separate.
A: I think a provider person needs to come up with a proposal and then have the other SIGs tell us why and how it would work, or something like that. But I guess, yeah, that would be a good thing. I had a conflict with this week's SIG Cloud Provider meeting, but yeah, I think I should attend that and see what's going on. But yes, I believe there's a wrinkle in it.
A: But yes, if you have your code in ECR, or if you have your code in GCR, then I don't think there's anything that stops you configuring the kubelet with your AWS or GCE credentials so that you can integrate with those things, even if you're not running on the cloud in question, which is interesting. I don't know.
E: This feature request has been out for a while and is requested often; others of you may have issues referencing it. But essentially, this is requesting: currently, the ingress controller creates one ALB per ingress, and ingresses need to live in one namespace, which means if you have many namespaces, you're required to have many ALBs. ALBs have a fixed cost for running, you know, $20 or so each month. So having many ingresses means paying for many ALBs, and the request is to provide grouping, so that they can kind of be collapsed into one ALB. All right.
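
A sketch of the grouping semantics being requested, with assumed types; the annotation name shown is the one the AWS Load Balancer Controller eventually adopted in v2, which had not shipped at the time of this meeting:

package main

import "fmt"

// Ingress is a pared-down stand-in for the real Kubernetes object.
type Ingress struct {
	Namespace, Name string
	Annotations     map[string]string
}

const groupAnnotation = "alb.ingress.kubernetes.io/group.name"

// albGroups buckets ingresses by group; each bucket reconciles to one
// ALB, so N namespaces no longer mean N load balancer fixed costs.
func albGroups(ingresses []Ingress) map[string][]Ingress {
	groups := map[string][]Ingress{}
	for _, ing := range ingresses {
		key := ing.Annotations[groupAnnotation]
		if key == "" {
			// Ungrouped ingresses keep the one-ALB-per-ingress behavior.
			key = ing.Namespace + "/" + ing.Name
		}
		groups[key] = append(groups[key], ing)
	}
	return groups
}

func main() {
	got := albGroups([]Ingress{
		{Namespace: "team-a", Name: "web", Annotations: map[string]string{groupAnnotation: "shared"}},
		{Namespace: "team-b", Name: "api", Annotations: map[string]string{groupAnnotation: "shared"}},
		{Namespace: "team-c", Name: "solo", Annotations: map[string]string{}},
	})
	fmt.Println(len(got["shared"]), "ingresses share one ALB") // 2 ingresses share one ALB
}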
A: I think, for the record, why don't we quickly talk about the Boskos stuff, Jay. So, as we were talking about before, in test-infra there's this thing called Boskos which, as we talked about, sort of lets different tests multiplex onto a smaller set of GCP projects or AWS accounts, slash logins. And yeah, the complexity was exactly as you say: AWS doesn't have the same hierarchy that GCP has.
B: There's this thing called the janitor, which is part of Boskos, that runs in test-infra, and when this janitor daemon runs, it cleans up resources that were leased to a particular Prow job for running a set of tests. Well, what was happening was, because these were all separate logins and not separate accounts, the janitor was essentially cleaning up resources that were owned by different logins, because it was doing everything for the account. I guess that's the best way to describe what was happening, Justin, yeah.
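
A toy sketch of that failure, with assumed names: the account-wide sweep models the behavior described, while the lease-scoped sweep is what per-login isolation (or separate accounts) would give:

package main

import "fmt"

// Resource carries a hypothetical owner tag recording which Boskos
// lease/login created it.
type Resource struct {
	ID    string
	Owner string
}

// sweepAccountWide models the observed janitor behavior: everything in
// the shared account goes, regardless of which login owns it.
func sweepAccountWide(rs []Resource) []string {
	var deleted []string
	for _, r := range rs {
		deleted = append(deleted, r.ID)
	}
	return deleted
}

// sweepLeased deletes only resources owned by the login whose lease
// expired, so concurrent jobs in the same account are left alone.
func sweepLeased(rs []Resource, login string) []string {
	var deleted []string
	for _, r := range rs {
		if r.Owner == login {
			deleted = append(deleted, r.ID)
		}
	}
	return deleted
}

func main() {
	rs := []Resource{{"i-1", "login-a"}, {"i-2", "login-b"}}
	fmt.Println(sweepAccountWide(rs))       // [i-1 i-2]: clobbers login-b's test
	fmt.Println(sweepLeased(rs, "login-a")) // [i-1]: only the leased login's resources
}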