From YouTube: Kubernetes sig-aws 20170908
Description
Recording of kubernetes sig-aws meeting held 2017-09-08
A
Also, if you are able or willing to, please sign in and write your name, so that we can figure out who was attending this and track back comments and things like that.
The main item on the agenda is one I put on, which is that we are in feature freeze for Kubernetes 1.8, so we are turning our attention to 1.9. I will just correct that little typo: the Kubernetes 1.9 feature wish list, a wish list of things that, you know, we'd like to see in the AWS cloud provider. I put a couple of things on there.
I'm sure there are more things that could go on there, so please do add them. And I see, Arun, you added something about federated clusters as well, but I propose we start with the list. Well, I guess we should quickly review what went into 1.8 first. I don't think there were any huge changes in AWS; no changes to the AWS cloud provider went into 1.8 that I'm aware of. I don't know if anyone saw it differently. There are some bug fixes, and I think some extra annotations made it in, but nothing major.
A
The new load balancer feels very similar to the existing ELB; it's much more similar to the existing ELB than the ALB is. It's called NLB. Something that's come up a lot in kops, and I don't know whether other people have this same problem, is around tagging of resources for networks: proposing a way to better tag shared network resources in a way that doesn't require per-network, per-cluster tagging.
kube2iam integration has been on the list, I think, ever since about 1.3. There is a pod identity working group, which meets immediately before this, and hopefully we will make enough progress in there that we can get a good integration going and have kube2iam work well, to give pod-level AWS identity. And more secure node bootstrap: the bootstrapping of nodes, so that their identity is validated by the additional information we have on AWS, which is not generally available elsewhere.
A
Otherwise, I will say that I'm generally encouraging anyone that wants to get involved to write code. I'm very happy to review it, I'm very happy to help people write code, or mentor, or whatever the appropriate thing is that I can do to help people, to empower people to get their code into Kubernetes.
So if anyone wants to write, for example, the NLB support: it feels like it will be relatively similar to ELBs and might be a good starting one.
A
Similarly, the KMS integration for etcd. My understanding is Google Cloud has a KMS-type product (I think they actually call it KMS), and the etcd encryption is integrated with Google's Cloud KMS, so it would be a natural, and hopefully a good, getting-started task to integrate that with AWS KMS, because there is a lot of precedent there. And, in general, any other things that people want to do.
A
I'm always generally happy to mentor and empower. I don't know if any other non-kops users have the network object selection issue; I can explain that a little bit more. In particular, when you create an ELB, for example, the Kubernetes AWS cloud provider has to choose some subnets to attach to the ELB, and that's typically the biggest problem. In order to choose those subnets, it looks at the subnets that are in the zones.
A
It does some heuristics as best it can, but at the end of the day, if you have multiple subnets in your cluster, it has to make a choice somehow, and the way you can guide that choice is to use tagging. We have a tagging mechanism which we use for ownership, in other words, to identify which instances are part of cluster A versus part of cluster B, and that is to apply a cluster tag. The original was KubernetesCluster=<cluster-id>, and we sort of reused that for subnet tagging. And in 1.6 we changed the tag: we added a second supported tag, so that you could have multiple clusters associated with the subnet.
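A rough sketch (not from the recording, and with placeholder resource IDs and cluster names) of the two tagging schemes being described, using the aws CLI:

    # Legacy ownership tag: one cluster per resource.
    aws ec2 create-tags --resources subnet-0123abcd \
      --tags Key=KubernetesCluster,Value=cluster-a

    # Shared-resource tag added around 1.6: value "owned" or "shared",
    # repeated once per cluster that uses the subnet.
    aws ec2 create-tags --resources subnet-0123abcd \
      --tags Key=kubernetes.io/cluster/cluster-a,Value=shared \
             Key=kubernetes.io/cluster/cluster-b,Value=shared

The per-cluster repetition in the second form is exactly the inconvenience discussed next.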
Now the problem is that, for each subnet that you create and want to share, you end up with a tag per cluster, and it is inconvenient for some people at least, and a deal-breaker for other people, to have to have a tag per cluster.
A
And I know that, Greg Taylor, you had a challenge with the format of our new tag as well, so we should probably make sure that the format works, but this would probably also address that for you. So the strawman proposal (and I'd like to know if anyone else has this problem) is to essentially create another tag that would not be an ownership tag; sharing is a distinct concept from ownership.
C
I think I would probably support that. I think that makes a ton of sense, and it kind of just follows on from the stuff that we've worked on, when I originally was the one who wanted to launch a bunch of different clusters inside one subnet and VPC. So yeah, I think it makes a lot of sense: the more ability we have to embed data in the metadata, the better.
A
I mean, we do have a sort of general design principle, I guess, in Kubernetes, in the Kubernetes AWS cloud provider, that we prefer tags to, like, direct specification. I don't know how other people feel. So the alternative way would be: you know, we have this cloud config file, which is a bit of a pain to configure, but it is there, and you could, like, directly specify the subnets that you want.
A
You could say, like, subnet ID 123 and subnet ID 456, right? I don't know whether people would prefer that or something else. My interest is more in doing the tags, but if people would like to do an explicit enumeration of subnets per use case, we can do that.
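A purely hypothetical sketch of that alternative, using the cloud config ini file the AWS cloud provider already reads; the Subnets key is invented here for illustration and is not a real option:

    # [Global] is the real section of the AWS cloud config;
    # "Subnets" below is hypothetical, sketching direct specification.
    cat > /etc/kubernetes/cloud-config <<'EOF'
    [Global]
    KubernetesClusterTag = cluster-a
    Subnets = subnet-0123abcd,subnet-0456efgh
    EOF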
C
You mean kops, or are you meaning Kubernetes?

A
Oh sorry, yeah, I mean kops, yeah. So in kops, you can, like, explicitly identify a subnet. But then, when I get to the Kubernetes level, it's only dealing in the tags, yeah.
So the approach would be, like, in the classic setup, that we'll have kops put the subnets into the cloud config, which is on each machine; that could be done, and it would read that directly rather than asking the AWS API. What I like about the tags is the discoverability.
D
The one thing that we think about, Justin, and this bites us in a bunch of places, is that you can think about this sort of first-time setup, but, like, what happens if somebody changes this? How do things adjust? Does the system reconcile with it? Right now, as you move to a tag-based approach, it's a lot easier to be sort of really loosey-goosey about changing stuff, and that can be sort of a little bit of whiplash as you try and reconcile things.
A
Definitely. And, look, I mean, I think that's partially because these are flags which we recognize as valid use cases, sort of like that they add value, but we don't necessarily encourage their use, because they are relatively new and certainly less well tested at the moment. This would obviously be a case where, if we wanted to do that, we would, you know, document this as: this is the recommended way to do it.
A
It does, I guess, require that there is a management layer, whoever provides it, that, like, in practice configures your cloud config, right? Whether that's kops with its salt config, whether that's kubeadm, whatever the solution is, it does sort of, in practice, require that. But I imagine everyone actually has one of those, even if it's homegrown today.
D
I mean, my goal would be to keep that stuff as light as possible, right? There has to be something that kind of sets this stuff up, but the more complicated and the more fiddly we make that, the easier it is to get wrong, and, you know, the more fracture we're going to have in terms of people doing this stuff.
E
Sorry, there might be some overlap, if you haven't seen it, with the node authorizer work that's in 1.8, which is about kind of taking away the node's ability to relabel itself; basically, the control plane should be the thing that asserts labels on a node. In some configurations, that might be relevant here: to actually create a component... I don't think there's an extension point yet, but to create it like a centralized component.
A
Well, I will take a look; that is a good suggestion. And yeah, I think, with all of these, note that the various options are not exclusive of each other, but we do obviously want to try to find out which works for everyone. I like the idea of doing a design doc, because, like, I think, you know, Greg Taylor pointed out a mistake, or a limitation, I made, where it didn't work well with Terraform.
D
So, well, I think the ALB stuff is more similar to Ingress, and the NLB is a much, much closer fit to exactly those type=LoadBalancer services, you know. And, you know, the NLB stuff mirrors the L3 load balancer that GCE does, and there's a whole bunch of features that, at some point, we may want to enable, to essentially, you know, reduce traffic and reduce hops, that the Google folks have been pioneering.
D
There's this assumption that a type=LoadBalancer service has one and only one external address, right? And so, if there's a DNS entry that gets pulled out, that's the thing that we list there. If it's a set of IP addresses, that's going to be something that's a little bit more difficult to model into the Kubernetes world. Do you know if you've played with this at all? Okay.
A
That's a good question; I don't know, like, what the structure of the object is. I did check, and technically you are allowed multiple IPs in the status. I think most everything assumes that you're just going to have one, right? Yeah, I'd imagine so: like, you have multiple load balancer ingress entries, each of which has a single IP.
A
I think DNS would certainly be an easier first one. So, I mean, Micah, do you know if you want to jump on the NLB? That would be awesome, if you don't mind. ("Yeah, I would love to.") Cool. Well, I am justinsb on Slack if you want to, like, chat about it some more; but, I mean, my understanding is, as Joe says, it feels like...
A
It would be an alternate thing that would be created when you create a service of type LoadBalancer, unlike ALB, which was a more natural fit for Ingress, and I guess we can start there and find out, you know, what the differences are. And yeah, I would love to see that, and let me know. The hardest thing is actually going to be upgrading the AWS SDK, so I'm going to tackle that (in other words, Godeps); I'm going to tackle that as soon as the new branch opens. Well, I think there's...
A
We could also look at doing it externally, but I feel like that's just going to be a lot more work; I see lots of shaking of heads. And then, Matt Moyer, if you want to tackle KMS, that would be awesome. Yeah, a little bit of background: like, you know, when the etcd encryption went in, there was a pluggable key provider mechanism, and I believe a PR went in pretty late to do Google KMS.
E
I'd looked at this before; I've used AWS KMS before, and I think it should work just fine. There was some concern, there was some discussion, and there's an issue open, about making that KMS provider interface pluggable out of tree. So right now the KMS backends are, like, an in-tree plugin interface.
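For context, a minimal sketch (assuming the 1.8-era alpha shape of this feature; not from the recording) of the encryption provider config that this pluggable mechanism sits behind; an AWS KMS backend would slot in as another provider entry:

    # Write the alpha encryption config and point kube-apiserver at it via
    # --experimental-encryption-provider-config.
    cat > /etc/kubernetes/encryption-config.yaml <<'EOF'
    kind: EncryptionConfig
    apiVersion: v1
    resources:
      - resources: ["secrets"]
        providers:
          - aescbc:
              keys:
                - name: key1
                  secret: <base64-encoded 32-byte key>
          - identity: {}
    EOF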
A
I did not put it on the 1.9 feature list, because it's something I'm trying to avoid getting sucked into, but there is a general move to, like, move cloud providers out of kube-controller-manager into cloud-controller-manager. My gut feel, then, is that the right thing to do for AWS is, you know, to implement it however Google has done it: if Google did it in tree, it'll move into cloud-controller-manager at some future release, rather than it all being fragmented into, like, separately versioned pieces.
A
But yeah, so I don't know how people feel about that wish list of features. Oh yes, and then, kube2iam integration: integrating with the pod identity working group, which, you know, hopefully will just deliver the things that we need to make that more Kubernetes-native. And Greg, you asked about more secure node bootstrap, so this is just something I came across. Kubernetes has functionality (I think it was alpha and is now beta) where the kubelets get a per-node certificate; it's part of, like, general hardening. I think, Joe, you've got the blast radius, right? Reducing the blast radius if, like, a node is compromised, yeah. And the way that works uses the CSR, the certificate signing request, API.
There is not a lot of additional functionality beyond the idea that, if you have a token and can write a CSR (this is my understanding), you can basically get any node certificate you want. And I feel like there's an obvious opportunity to do a little bit better; well, any particular installation can do better, and AWS is a particular installation. I'm wondering if we can, like, right in the admission control, check the IP or whatever it is, you know, just whatever small steps. So the identity document in the EC2 instance metadata is another one.
A
That's a document we could try to leverage. You know, the fact that hopefully IP addresses are more controlled, that, like, we're more confident in IP addresses, means we can cross-check IP addresses against the AWS API. Those are the sorts of things I'm thinking, just to give a little bit more confidence in a node, in a node's kubelet certificate actually identifying the node. And I think it's important because it ties into, you know, the container identity work; we'll see what happens there.
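For reference, the instance identity document mentioned above is served by the EC2 instance metadata service, along with a signature a verifier can check; a quick sketch:

    # The identity document itself (instance ID, account ID, region, private IP, ...):
    curl -s http://169.254.169.254/latest/dynamic/instance-identity/document

    # A PKCS7 signature over it, verifiable against AWS's public certificate:
    curl -s http://169.254.169.254/latest/dynamic/instance-identity/pkcs7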
D
And you've been involved with the container identity stuff, SPIFFE, and, you know, there's a ton of overlap here. So there's SPIFFE, with the reference implementation for SPIFFE, which has been named SPIRE. There's this idea of attestors: it's a pluggable model where you could have an AWS attestor that actually sort of, you know, partners with the client and the server to actually, sort of, you know, use extra information to be able to verify things. I'm not sure exactly... you know, Matt, I don't know if you have more context on this than I do. Yeah.
E
So I think the way the bootstrap certificate system, the certificate bootstrapping system, works right now, there are, I think, two good points where we could extend it for an AWS-specific configuration. One is with the bootstrap tokens themselves: we could replace that with an AWS-specific kind of token scheme, where you get a token that's from your instance identity document, or there's some other tricks you can do with STS GetCallerIdentity and presigning.
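As a rough illustration of the STS trick being referred to (an assumption based on the approach Vault popularized, not a worked protocol):

    # Ask STS "who am I?" using the instance's role credentials; the response
    # names the account and the assumed role of the caller.
    aws sts get-caller-identity

    # The Vault-style variant presigns a GetCallerIdentity request, so the
    # resulting URL can be handed to a server, which replays it against STS
    # to verify the caller's identity without ever holding the credentials.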
E
Vault sort of pioneered these kinds of tricks, but you can get a token that represents your instance identity, and you write an authorizer plugin, an authenticator plugin, on the server that recognizes the identity and sort of says, like, "oh, that's this node." Then you can give that identity access to the CSR API, as sort of a bootstrap identity, so now that node is able to do that, and I think this would all work with the kubelet.
E
If
you
plugged
that
token
into
couplets
bootstrapping
couplet,
we
go
you
a
CSR,
then
you
there's
another
extension
point,
which
is
a
CSR
approver
that
can
sit
there
and
it's
a
controller
looks
at
the
CSR.
That
CSR
contains
the
user
and
group
that
the
bootstrap
ID
them
to
be
authenticated
as
and
then
you
can
sort
of
look
up
that
at
that
information,
which
that
information
would
ideally
have
like
an
instance
ID.
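A minimal sketch (not from the recording) of the manual version of that approval loop; an AWS-aware approver controller would automate the same decision:

    # List pending certificate signing requests from bootstrapping kubelets:
    kubectl get csr

    # Approve one after checking the requesting identity, e.g. against an
    # instance ID looked up in the AWS API (the CSR name is a placeholder):
    kubectl certificate approve node-csr-abc123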
D
There's still the issue of identifying or validating the server, right? Making sure that, when the node wants to talk to the server, it's actually talking to the server it thinks it's talking to. And so, you know, we could use the public key signature stuff that we're putting in in the 1.8 cycle; but, you know, doing that with the symmetric token is an extra check.
D
Hopefully we can get to the point where that's something that you can actually look at, touch, and use, and then it becomes a much more serious discussion about, like, okay, can we actually use that? So I've been trying to be careful about not over-promising around that stuff, you know, because it's still very much a work in progress; but I'm, you know, optimistic that by the end of the year there'll actually be some real meat on the bones there, that folks can really, you know, get their hands dirty with.
E
That
would
let
you
bootstraps
gooble
it
and
then
follow-on.
Then
it's
eventually
when
you
have
pods
running
pods,
but
also
somehow
be
able
to
talk
to
that
node
agent,
which
would
have
a
plugin
that
understood
kubernetes,
pod
membership
and
could
give
them
spiffy
identities
that
were
workload.
Specific
some
of
the
pieces
exist
some
of
it's
still
kind
of
in
the
air,
but
there's
definitely
overlap.
F
Yeah, I was just wondering (I added a few notes below): I know external-dns is available behind a feature flag in kops, which has deprecated the route53-mapper in favor of it, but I don't know whether there's any momentum, if anyone's using it in production, or what the sort of timeline is. The last I heard of it, or the last thing I saw, was back around July, so...
A
What you said in the summary that's in there is absolutely correct: the long-term goal remains to use it. It seems to be making a lot of great progress; it's like the shining example of things making progress outside of core, because they add, like, a new provider every day. I think there are some feature gaps, which, I think, Eric and Seth, you guys have been looking at? So I think that's sort of...
A
Like, you know, we would like to get it integrated. You know, I'm working on other things, in other clouds, in kops, and maybe one of them will make it more convenient to use external-dns, and that will, like, motivate me to do those things. That's sort of the status as far as I know, though. Outside of kops, it is good; I don't know if anyone is using it in production, but I believe it's pretty good, and, like, the recommended Route 53 integration.
A
So
yeah
I
mean
I,
think
I
think
like
if
you're
exposing
ingress
I
think
it's
I
think
it
works.
Well,
as
my
understanding-
and
that
to
me
is
like
the
canonical,
cuffs
deliberately
keeps
its
hands
off
that
for
now,
and
so
that
you
can,
you
know
critten
ingress
and
the
external
route
53
project
or
the
the
external
DNS
project
will
then
configure
your
dns
with
that
ingress,
which
to
me,
is
like
what
I've
always
wanted.
Were
you
creating
rest
for
your
blog
and
it
pops
up?
You
know
the
WordPress
example.
That's
pretty
cool
my
opinion.
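A minimal sketch (not from the recording; the service name and hostname are placeholders) of that flow, using external-dns's hostname annotation so the Route 53 record tracks the exposed endpoint:

    # Tell external-dns (watching the cluster) to create a DNS record
    # pointing at this service's load balancer:
    kubectl annotate service blog \
      "external-dns.alpha.kubernetes.io/hostname=blog.example.org"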
A
Cool, next on the agenda: Arun, do you want to talk about federated clusters? I know, yes, I know you were going to investigate it last time; I think you had some issues. I've been, like, snowed under personally, so I'm sorry about not responding to those yet; I'm going to get there. But we had, yeah, we had massive test failures, but yes...
I
You know, as part of the Federation... you know, I said, okay, I'll delete the clusters and start all over again. And I created the clusters, and that time it took about 15 to 20 minutes for the clusters to become part of the Federation, so that was not bad. I didn't file a bug for that; anyway, it's not reproducible, so I can't really give any specific details on it, so I left it at that. But the bigger challenge for me is if I'm trying to do any deployment, you know, any ReplicaSet or any artifact, to that federated cluster: it says, you know, created, but nothing actually deploys, and it just keeps waiting. I just put a watch on kubectl, to say, okay, show me the status as the status is changing, and I waited for a few minutes; you know, at one point I left it over two hours, but nothing gets deployed, with no error returned. I'm wondering: how should I debug this further?
I
So, you know, the two reasons I brought it up to this sig are: one, because this was a cluster running on AWS, and second, it was using kops. But I understand, you know, the Federation sig might be a better place. I did not get any feedback; I posted a message on that sig as well, and I did not get any feedback, so maybe I'll poke them a little bit harder. But is the thought process that nobody should be recommending Federation? If the thought process is that we should not be recommending Federation...
B
Yeah, we've been watching it, I read it, but our sentiment right now is that our usage cases are simple enough that we feel more comfortable separately orchestrating similar clusters across AZs. We find that a lot easier to reason with right now, and, fortunately, the Kubernetes API and the clients make this a simple proposition, so yeah.
A
All right. Again, I don't speak for Federation, but I think, like, the challenge with it is that the promise is that it is an equivalent scenario to running, like, a deploy across three clusters. If you're running your deploy in continuous integration, it doesn't really hurt you that much to run your deploy across three clusters, and Federation...
D
I think the analogy here, Arun, is, like, multi-region in AWS, right? There are a few systems that work multi-region, but most of the time, coordinating things across regions in AWS is left as an exercise for the reader. I think the analogy there is that, you know, with multiple clusters, there are systems, there's promise, there's a lot of ideas; I just don't think it's a solved problem yet, yeah.
I
Yeah, and that's the feeling I'm getting as well. Okay, I think I will dig into this a little bit deeper, not a whole lot, but see, you know, if it's a known issue, because I did file a bug, you know, in the kubernetes repo, added it to sig-federation, and I've not seen any input on that yet. So I presume maybe they know the issue, or something is happening on that. Let me try to discuss this in the Federation sig as well, once again, and then park it.
I
I agree, you know, and that's sort of my thought process as well, because I've seen our customers deploying clusters across multiple AZs in a region just to get master high availability. You know, if push comes to shove, I've seen the approach that Greg is talking about, where, you know, you have an exact duplicate cluster across multiple AZs in a region, and then that gives you A/B and whatever you want to do with that, I think.
A
If you actually are running, like, an HA database on your Kubernetes cluster, you probably want that in multiple AZs, until Federation comes along and, like, lets you span. That's where Federation, I think, is the most interesting, in the most stable sense: when a StatefulSet can span clusters, to me. And yeah, I've also been trying to find time to reproduce your use case, Arun, because it certainly worked better about a year ago.
I
Well, I have a workspace, you know; I created a GitHub repo where I talk about how the federated cluster can be set up, but then that's exactly where I'm stuck, you know: how do you deploy an application over there? But I am definitely aware there are multiple approaches, of not going the Federation route, with their pros and cons. Okay.
B
I guess, you know, if anybody's interested in collaborating on IAM scoping in various situations, I'd be particularly interested in talking with anybody who has been scoping to particular subnets, or using certain tagging strategies. You know, what I'm trying to aim at is: we've got a larger VPC, a legacy VPC, and it's got a lot...
A
Greg, there are some people working on it; there are some people using kops that actually put in some of that scoping, and we hit a problem. But yeah, that's another one. So yeah, there is some of that in kops already, and you'll see that there are some limitations around it. And so there's a small one, which I would call a bug fix more than a feature.
A
It's one I want to get in, which is that, like, creating a volume today is a CreateVolume and then CreateTags process. In March of this year, AWS added the ability to create a volume with tags. Our AWS SDK is not updated to that version yet (another reason to, like, bump it), but when we do that, you'll be able to create volumes with their tags atomically, and you'll be able to write an IAM policy that restricts ec2:CreateVolume to only volumes that have the tags, which will be another step.
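A rough sketch (not from the recording; names are placeholders) of the one-shot create-and-tag call, which an IAM condition can then key on:

    # Tag at creation time via --tag-specifications (possible since the
    # March 2017 EC2 API update):
    aws ec2 create-volume --availability-zone us-east-1a --size 100 \
      --tag-specifications \
      'ResourceType=volume,Tags=[{Key=KubernetesCluster,Value=cluster-a}]'

    # An IAM policy can then require the tag on ec2:CreateVolume with a
    # condition like "aws:RequestTag/KubernetesCluster": "cluster-a".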
A
And then, you know, ModifyVolume will be able to be restricted as well; everything we have will be able to be restricted to that. And I think, Arun, if you're able to surface feature requests like that, the more of those we can get, the better. I think, like, CreateSecurityGroup is another one right now, which is needed for your first ELB; actually, I guess every ELB creates a dedicated security group right now, and that still has the two-step process.
A
So there's no way to restrict CreateSecurityGroup by tags the way you could if tags were applied on creation. Any of those: I suspect some of them happened in March; like, this is a gradual continuation, and more and more will be doable atomically. But it also solves the problem that, in theory, we leak resources if something crashes, right? Like, if we create a security group and crash, we might not be able to reclaim it; we have some tricks, but they are not foolproof.