From YouTube: Kubernetes sig-aws 20170825
Description
Recording of kubernetes sig-aws meeting held 2017-08-25
A
We are recording, excellent. Thank you, everyone, for joining. Just as on Monday, we are recording the meeting, so everything you say will be public. We also have the meeting minutes; please go ahead and fill in your name in the attendees section, along with anything you would like to talk about toward the end of the agenda. Today we're going to start with the demo of Heptio Ark. Are you guys available? Yes, yes.
B
Can you hear me all right? Absolutely. Okay, great. First, thank you for inviting us and having us here. I'm going to show a demo of a tool that we've been working on called Ark, which is for disaster recovery of your Kubernetes clusters, including your persistent data. So, just a brief overview.
B
Ultimately, we would like to have it multi-tenant with better locked-down security. Right now we are alpha, on version 0.3, and we basically take advantage of the discovery API and the API server to go inspect and see what all is out there, and then you can control what you want to back up. I'll show that in the demo right now. So let me see if I can get screen sharing working here.
B
I have two different tabs here, and I've got them in different colors so you can tell which is which. The darker-colored one is what I'm calling dev-1, and the lighter-colored one is dev-2; they're just meant to represent two different Kubernetes clusters. What I have in the right pane is basically what the Ark server is doing, which right now is just waiting for me to interact with it.
B
So I've got it deployed, and I want to go ahead and show you what it looks like to create a namespace, put something like nginx in it with a couple of persistent volumes, then create a backup, tear down the entire namespace, including all of the data on all the persistent volumes, and then restore my backup into a different cluster. So what I'm gonna do is run ark backup create, and I'm gonna call this one dev-1-nginx, and just to show you the full line here.
B
I'm saying I want to include the nginx-example namespace, and I want deployments, PVCs, PVs, and services. And I just realized I need to actually create that first, so I'm going to apply it. We have an example that ships with Ark for creating a namespace, a persistent volume claim, an nginx deployment, and a service that sits in front of it. So let me go ahead and do that first, and it's going to create all of these things. I actually have two different PVCs: one for logs and one for the actual HTML.
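The backup invocation being described might look roughly like the following. This is a sketch from memory of the early (v0.3-era) Ark CLI; the flag spellings are assumptions and may differ in other releases.

```shell
# Back up only the nginx-example namespace, limited to a few resource types.
# (Flag names assumed from early Ark releases.)
ark backup create dev-1-nginx \
  --include-namespaces nginx-example \
  --include-resources deployments,persistentvolumeclaims,persistentvolumes,services
```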
B
Here we go. So I'm copying my index.html into what is a persistent volume mount, and if we describe this service, I have a load balancer here which, depending on AWS, may or may not work; it may take a little bit of time for that to come up. So let me go ahead and just curl the service IP to show you that we do actually serve up the index file that I created. So at this point, I'm gonna go and see.
B
Do we have any backups? We don't, so I can do ark backup create, and this is going to create my backup called dev-1-nginx with the namespace that we have, and it's going to back up the deployment, the PVC, the PVs, and the service. So if we look over on the Ark log side of the house, you'll see it's processing the backup, and then it's snapshotting the two persistent volumes that I have, and then it's basically done, and I can show you.
B
So we can go ahead and take a look in... oh, there we go, there's our app up and running. So if we go look in S3, which is where we're storing our backup data, you'll see that there is now this dev-1-nginx folder, and inside there we have backup.json, which is basically an export of the backup, and I'll show you what that looks like in a second. We've also got the log file, compressed, and then a tarball of everything that we asked it to back up.
B
So we can go ahead and look at the backup here, and you'll see it says that it completed, it was created, and it has an expiration on it. We can also take a look at it in YAML, for example, and you'll see this is using a custom resource, and so we've defined what we wanted it to look like. Basically, the ark backup create command is just syntactic sugar for creating a backup resource and posting it to the API server.
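Since a backup is just a custom resource, the same thing can be done by posting YAML directly and skipping the CLI sugar. This is a hedged sketch: the API group, kind, and field names shown are assumptions based on early Ark releases and may not match other versions exactly.

```shell
# Create a Backup custom resource directly instead of using the CLI.
# (apiVersion and field names assumed from early Ark releases.)
cat <<EOF | kubectl create -f -
apiVersion: ark.heptio.com/v1
kind: Backup
metadata:
  name: dev-1-nginx
  namespace: heptio-ark
spec:
  includedNamespaces:
  - nginx-example
  includedResources:
  - deployments
  - persistentvolumeclaims
  - persistentvolumes
  - services
  ttl: 720h0m0s
EOF
```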
B
So let's go ahead and do something destructive: I'm going to delete my nginx-example namespace. I have had some issues this morning with it actually getting rid of all of the load balancer security groups related to the load balancer, so I'm gonna do a little bit of hand-holding here to get rid of them.
B
So I don't know if this is an issue with my setup or an actual bug in the code, but eventually it will figure out that I did that and get rid of the EBS volumes and the load balancers, and then I can go over to my other cluster and restore into it, and we should see the same web page that was on that persistent volume that I backed up and that we will be restoring here in a minute. Let me just make sure that this is good to go.
B
OK, so just to show you: if we look at the namespaces, the nginx-example namespace is gone from my dev-1 cluster, and I can go back to the web browser here, and if I refresh, it doesn't know about it, because that load balancer has gone away. So over here on the dev-2 cluster, I can list backups, and you'll see, even though this is a different cluster, it's able to display the same backup that I created, because I've pointed Ark on this cluster at the same S3 bucket.
B
So as long as you're in the same availability zone, from an S3 standpoint as well as with your persistent volumes and snapshots, you can restore across clusters if you just point at the same S3 bucket. So let me go ahead and create my restore here for dev-1-nginx, and you'll see in the Ark log here it's doing a restore of persistent volumes. What it's basically doing right now is looking at the backup that I created, and you'll see.
B
If I jump over here real quick, you'll see it has some information about volume backups in the status, and so it knows that for a given PVC there is a snapshot associated with it, and a disk type or storage class type to use as well. And so when we restore, we look at the information that's in the backup, basically create new volumes based on those snapshots, and then wire them up appropriately, so that, in this case, the nginx deployment is hooked up to these newly created PVs via the PVCs.
B
So if we, say, describe the service here (remember, this is in dev-2, so this didn't exist before), now I have my ingress, which is gonna take a little while to pop up, but I can go ahead and curl this, and it's exactly what it looked like before: hello, sig-aws. So this is one use case, doing basically backup and restore where the namespaces are identical, and you can either do it in the same cluster or across clusters, depending on what your disaster recovery scenario is.
B
Now I'll create a config map here in my source namespace, and then I'm going to create a secret called super-private, and then I am going to run backup create. So I'm gonna create a backup called source: I want to include the source namespace, and config maps and secrets. So this will go ahead and run my backup; it is done, so I can continue.
B
What I can do now is create a restore with the namespace-mappings flag set, and this allows you to say: for anything on the left half, rename it to what's on the right half, so source will get renamed to target. But before I do this, let me just show you that there is no target namespace; we only have source. So I can do this, and if we look at the namespaces, we now have a namespace called target.
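The restore-with-remap step can be sketched as below. Again, the command and flag spellings are assumptions based on the early Ark CLI and may differ between releases.

```shell
# Restore the "source" backup, remapping the source namespace to "target".
# (Syntax assumed from early Ark releases.)
ark restore create source \
  --namespace-mappings source:target

# Verify the remapped namespace and its contents.
kubectl get namespaces
kubectl get configmaps,secrets -n target
```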
B
It has, there are actually two, the service account token, but the one we care about is super-private, and it's got the hello world and happy Friday as well. So that is a brief demo of Heptio Ark. Let me stop sharing here, and if you guys have any questions, I'd be happy to entertain them.
A
Thank you very much for that info; I really liked it. Let me ask a couple of questions. The first part is: when you take any backup, is the metadata you save saved as a CRD? Yes, everything is done with CRDs. And then that CRD information is on one cluster; do you take that CRD and also save it to S3, so that the other cluster can see it? Yes.
B
A
It had to pull that information from S3 and create the CRDs accordingly in that second cluster? Yes. OK, OK, cool. A similar question I had: the UID of the service is used as the name for the ELB. So, you know, when you do the restore in the second cluster, did you get the same hash for the ELB?
B
At least when I was testing it this morning, I was getting different URLs. OK.
A
B
We do have that; we call that a schedule, and it just uses standard cron syntax. So instead of saying ark backup create, you just say ark schedule create and you give it a cron schedule, and all the other parameters are the same as for creating a backup, and we'll run it on whatever schedule you define. Okay.
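A scheduled backup along those lines might look like this; the exact flag name for the cron expression is an assumption based on early Ark releases.

```shell
# Run the same nginx-example backup every day at 07:00 using cron syntax.
# (--schedule flag name assumed from early Ark releases.)
ark schedule create daily-dev-1-nginx \
  --schedule "0 7 * * *" \
  --include-namespaces nginx-example
```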
A
B
Cloud providers, when they do snapshots, they are diffs. I mean, it's cloud-provider specific, but as far as I know, all three of them are creating diffs and not full copies. The Kubernetes resources that we're backing up and putting into the tarball are full copies right now, and that's mainly because it was easy to do it that way, and they go into separate folders within the S3 bucket. So, you know, you are optimized on your PV snapshots, but not as optimized with the Kubernetes backup tarballs. Okay.
C
B
So the reason I did that is we have some limitations with this version of Ark that make restoring hard if you just say back up everything. So right now, for example, we back up and restore nodes, which you probably don't want to do by default, and there are also some issues with cluster roles and cluster role bindings and trying to restore them. But the syntax is very flexible, so you can say: I want to back up and restore everything.
B
You can use a label selector to say: only back up things that have app equals foo. And we do want to make this as user-friendly and as easy to use as possible, so you don't have to specify 15 different command-line flags to get a reasonable default, and we're working toward that, starting with the example I mentioned before about nodes and probably not wanting to restore them.
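A label-selector-scoped backup could be sketched as follows; the selector flag name is an assumption based on early Ark releases.

```shell
# Back up only objects labeled app=foo, across all namespaces.
# (--selector flag assumed from early Ark releases.)
ark backup create foo-only \
  --selector app=foo
```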
B
We are working on a way to make that happen by default, so that, you know, you don't get messed up; like, if you try to back up and restore onto GKE and you back up and restore the nodes, it kind of goes wonky for about a minute. But yes, you can back up and restore your entire cluster, and you don't have to specify resources, and you don't have to specify a label selector if you don't want to. Okay.
C
B
Assuming the objects are field-compatible and version-compatible. We basically go in and grab the preferred version of each API. So, like, if you have RBAC, for example, and it has v1beta1 and v1, and v1 is the preferred version, we're gonna back that up, and then if you happen to try and restore into a cluster that doesn't have v1, for whatever reason, it's not gonna work. So there are no flags right now to do any control there. You know, in Kubernetes, from an API machinery standpoint...
B
We want to make sure that APIs are compatible across multiple Kubernetes releases as long as the API version is the same. So if you're working with v1, or you're looking at storage classes in v1 or whatever, we want to make sure that across Kubernetes releases that stuff is compatible forwards and backwards. So, assuming we hold true to that, then as long as your backup and the cluster that you're restoring into have all of the same API versions enabled, it should work.
C
B
D
I think snapshots, if I'm not mistaken, you can actually move between AZs. But yes, the EBS volumes are definitely tied to an AZ. The only way to move an EBS volume between AZs is to snapshot it and then create a new volume from the snapshot.
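That snapshot-then-recreate dance maps to two AWS CLI calls. The commands below are a sketch; the volume and snapshot IDs are placeholders.

```shell
# Snapshot an EBS volume in one AZ...
aws ec2 create-snapshot \
  --volume-id vol-0123456789abcdef0 \
  --description "move to another AZ"

# ...then create a new volume from that snapshot in a different AZ.
aws ec2 create-volume \
  --snapshot-id snap-0123456789abcdef0 \
  --availability-zone us-east-1b
```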
B
Thank you for your time. I actually, unfortunately, need to drop, because I have to eat lunch before my next meeting, so I'll drop a link to Ark in the doc, in the meeting agenda. We also have a Google Group as well, if you want to ask some more questions. We're trying to build a community here and would really appreciate any feedback you guys have. Excellent, thank you, thanks everyone.
A
All right, next on the agenda is the code freeze. I want to just make sure everybody understands that code freeze is next week, right, Emily? All right, yes, a week from today. So please make sure that you have all your pull requests ready if you want them to land, and also that you help out with any reviews that are outstanding. So there's a couple of PRs needing attention, and, uh-huh, they're both mine, so, you know, if you have your own, please put them out there.
A
I just feel bad that they're both mine, but we just need a couple of eyes, you know, reviews on those, if you have some time, and we would like to see if we can move these along before the code freeze. Let's talk next about release notes if there are no questions on PRs. Anybody else have any PRs that they're working on, that they want to land before code freeze?
A
D
A
D
A
E
Right now, and so, you know, going through that, the fake cloud stuff will allow you to pass a cloud interface, but it won't work for anything. If you actually need to stub out, say, AWS functionality, right, you want to test something inside your controller that depends on getting something specific from AWS. So I was looking around, and it looks like there is a fake AWS services implementation that is used in the AWS cloud provider.
A
E
A
E
D
All right, I think that would be great. I'm trying to remember exactly what we have in Kubernetes, but in kops, which is, you know, a separate project, we actually have a much richer fake AWS implementation. So it's worth peeking at; I'm gonna post a link to that one. I'm also gonna try to find out what the state of the...
E
You know, I kind of figured that eventually, with this separating out of functionality, you know, when we pull the cloud providers out to their own repos, it would make sense to have that consolidated: the repo would have the consolidated fakes, and then kops and whatever else needed them would pull from there.
D
A
E
This is the main Kubernetes repo, right. This is part of the work, they're saying, to be able to take one of the functionalities, persistent volumes, and move it into the cloud controller manager, which then facilitates the ability to move all the cloud provider stuff out of core kube. Okay.
A
E
A
D
I don't know whether it needs to be in its own package or not; I'm not sure it really matters a lot. But also, if you do that, then I can also try to bring in some of the kops stuff, which, when I had a look at it, is much richer, so as and when we need that, we can try to bring those together. Copy that.
A
C
I actually have a question. So I am starting to look at federated clusters, and I was looking through the issues on kops, and I see the documentation, first of all, is missing, and then there are issues; you know, some things work, some things don't work. So I wanted to understand sort of the state of the union in terms of, you know, where do we stand if I were to set up a federated cluster using kops on AWS?
C
D
I can speak to this. So the SIG Federation group has a project called kubefed, which will, you give it three or four existing Kubernetes clusters, and it will create a federation of them; or rather, you give it any number of clusters. And in theory that should work; it definitely works well on GCE. I am not entirely sure of the state of it on AWS.
D
To be honest, it should work, there have been efforts, but it definitely is more likely to drift out of function there and more likely to have bugs than the GCE federations. Before kubefed, kops started working on something similar to kubefed, where you would create those clusters and it would be a federation, which would all just work, but that code is sort of stuck in limbo and, yes, likely does not work at all, because it has not kept up with kubefed.
D
I am not entirely sure what we should do in kops: whether we should adopt kubefed, continue the sort of parallel effort, embed kubefed, or whatever it would be. So that's the state: the kops code is in limbo and likely not working currently, though it could be made to work. Kubefed should work: it will work on GCE, may have issues on AWS, and is more likely to have issues there. But if you want to get it to work today, that's how I would set up your clusters.
D
B
D
There are some old issues, you know; I don't know if we should let kops take over this, like whether this is something to get addressed in kops. I don't know if people have views about how it should work in kops, or whether we should just dump it, or what we should do, but feel free to open a new issue and comment, or speak up here.
D
C
There are issues, you know, in one of which someone says, okay, somebody should document federation.md here, and they're pointing to the kops docs, essentially. So there are issues like those, but in terms of, you know, the current state: there are three or four issues that keep pointing to each other, and they're dated, I think, May or June timeframe, so it's still pretty recent. So what I intend to do is give it a shot in the next couple of weeks at setting up a federated cluster.
D
And definitely ping me with anything you find, you know; even if it's just kubefed not working with AWS, there's still an opportunity to fix that. And, yeah, I'd say don't bother with the kops one, because it's very unlikely to work, but, you know, if you're feeling like you want to get stuck in, feel free. Okay.
A
So I feel like there's some conversations on federation where it's just about authentication and control: when I log into any of these clusters, do I have the same RBAC rules and things like that. There's another situation where they say: I want to be able to move workloads from one cluster to the other, and that's a completely different thing. So I want to just get an understanding from you, since we're here having the conversation: what does federation mean to you, what would you like out of it?
F
C
A
D
There is definitely a technical distinction in it. You know, when we did multi-AZ, a single cluster in Kubernetes can span multiple AZs, and that was sort of a compromise, because you could argue that a better architecture might be to use, like, three separate clusters and have them federated; but that is a bit of a pain, and full federation isn't fully ready yet. And so the sort of compromise was that a single cluster would run in multiple AZs, and it would be okay.
D
There are some very notable shortcomings. For example, we don't have any zonal affinity of traffic, so you could have your web server talking to a database in a different AZ and just incurring charges for no really good reason; but that's sort of the compromise. The real long-term goal is that federation will be the approach, and that, you know, you'll still be able to use multi-AZ, but federation should work just as well and be just as easy, even in a single region.
F
A
F
C
F
That's what federation would specifically help with, once it's ready, once we're ready for it and it's ready for us. It will specifically help us, theoretically, move some stuff out of Jenkins logic and into Kubernetes, because we should be able to, when we're just doing rolling deploys: we just deploy, run some tests. We may still end up doing a bunch of stuff in our CI/CD pipeline even after federation, but I think...
F
Everything's still very alpha, and no one's running it in production from what I can tell. Yeah, as soon as it's closer to production-ready, or more mature, and has enough of the objects that it'll make sense, I'm very excited about playing around with it. What I eventually want to do: right now we're all in AWS, but I'd like to go to multi-cloud federation eventually; that's going to be even farther along than just federation by itself.
A
C
F
For me, it's both latency and failover. So I'm mainly concerned with websites and services, being closer to the customers and reducing the latency that way, and also failover; that's the number one thing. You don't need federation for any of that, right?
C
D
My understanding of the state of it is that stateless, like straight web workloads, on federation is pretty good. I think there is an opportunity for anyone that cared to do that for pure AWS: the AWS ELB has some functionality which is pretty advanced, in terms of global routing and all that stuff, which isn't necessarily fully used in the stock federation code, but...
D
F
But that's, again, kind of below that level. Once you've established the ability for them to talk internally to each other, then the Kubernetes code runs on top of whatever network infrastructure you've built. I feel like that's a lower level; the networking communication is below that, and then federation is just, you know, having your API objects, your controllers, having your various clusters, across regions or cloud providers.