From YouTube: Kubernetes sig-aws 20170630
Description
Kubernetes SIG-AWS meeting from 20170630
A: Okay, so this is SIG-AWS. Today is June 30th, and we are going to start with introductions. I will go first: I'm Justin, I do a lot of work on the AWS cloud provider for Kubernetes and on kops. I don't know if anyone else wants to jump in; otherwise I will start calling you by name.
We have a fairly light agenda today. I think we have a demo from Zalando; I don't see them on here, and someone will correct me if I'm wrong. So sorry, nothing to learn from Romana; I'm sorry you guys didn't prepare a demo. But other than that, I don't know if anyone else has any items they want to throw on the agenda. I suppose the big news is that, I think, 1.7 was released pretty recently.
A: I don't know if we want to discuss... I guess we can discuss what people want to see in 1.8; let's throw that on the agenda as an item for the end. I don't know what the state of the cloud controller manager is in terms of being the official solution, but I expect that in either 1.8 or 1.9 we can expect the AWS cloud provider to move into a separate component, which should have no user-facing impact.
G: Well, I mean, it's up to you to set it up, I believe; I don't know of anybody doing it yet, right? So kubeadm by default is silent on cloud providers. There are some hooks, but it's complex enough, in terms of having to get all the stars aligned, that we don't try to make it look easy when it's not, right?
G: I think, you know, at the end of the day, if you launch a kubeadm cluster and you don't have the right IAM permissions, stuff is going to fall over, even if we make the error clear. If you don't launch it the right way, with at least a role, you're pretty much at "okay, tear it down and recreate it," right? So there's definitely... I think over time, if we really want to make it clear, we can.
G: We can build in sort of extensions for different clouds into kubeadm and have it actually check for prereqs as well: like, hey, do I have the right permissions when I'm launching this? Along with guidance and instructions on how to do that. I don't think anybody's done that work yet, but that would definitely make it easier, I mean.
G: The big issue there is around credentials. Right now the ECR credentials are temporary, and so, if you get the right stuff installed when running on EC2, those credentials get renewed whenever you need them. When you're running off of EC2, those credentials tend to expire, and it's very difficult: you have to do a lot of extra plumbing to automatically renew those credentials when you're running off of EC2, and then doing that with Kubernetes ends up being a little bit of a Rube Goldberg type of thing right now, currently.
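The off-EC2 renewal plumbing described above usually boils down to a loop that re-fetches the ECR token before its 12-hour lifetime runs out and rewrites the registry secret. A minimal sketch of the refresh decision, with the AWS calls left as comments; the helper names here are illustrative, not part of any Kubernetes component:

```python
import datetime

# ECR tokens from GetAuthorizationToken are valid for 12 hours; off EC2,
# nothing renews them for you, so a refresher has to re-fetch early.
TOKEN_LIFETIME = datetime.timedelta(hours=12)
REFRESH_MARGIN = datetime.timedelta(minutes=30)

def needs_refresh(fetched_at, now):
    """True once the token is within REFRESH_MARGIN of expiring."""
    return now >= fetched_at + TOKEN_LIFETIME - REFRESH_MARGIN

# In the real plumbing you would then run, roughly:
#   aws ecr get-login-password | kubectl create secret docker-registry ...
# on a timer, so that imagePullSecrets stay valid.
```

Checking well before expiry, rather than at it, is what avoids the window where a pull fails with a stale token.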
A: We should do a kops release, maybe today, but, you know, this weekend-ish. We typically take advantage of the fact that we're not in the main repo, so we don't do a gold release on the same day, which I think is the correct call. So we'll do an alpha; if everything is perfect we'd do an alpha around now, we'll do a beta in about a week, and we'll do a release the week after. I think with 1.6...
A: ...where there were some teething issues, I think we pushed them out by a month or so, or even longer. But what you can expect to see is an alpha. Assume that it actually will work; it seems to work fine. But we do the alpha because we want to indicate to people that there may be problems, because it is 1.7, it is a .0 release, and those tend to be bumpy, right?
A: Correct, yes, we push it out to the alpha channel as well, so you can do, like, channel alpha, and I believe that actually even kops 1.6 works fine with Kubernetes 1.7 as well. The changes... there aren't any massive breaking changes that I've found yet. But we're going to do the alpha; we'll have an alpha pretty soon anyway, and that will be where we do the recommended hardening and verification and all of that. Gotcha.
G: So, okay, for those who aren't familiar, SPIFFE is this thing about how we, in a sort of generic, reusable way, distribute certificates and use them across different workloads, for both server certificates and client certificates.
G: There is a discussion ongoing within the group of folks working on SPIFFE around how best to use this on Amazon, regardless of whether it's included with Kubernetes or not, and so I think that's interesting to this group. So just a heads up: they're looking at ways to establish trust between machines by relying on essentially the AWS control plane to bootstrap that trust, looking at things like the instance identity document and how much that really tells you. I think Vault...
G: ...does some interesting stuff around giving you a nonce, and then you use that to essentially use a role. So you can verify that you have access to a role, and then when that thing comes back, it actually says, "here's the instance that you're on." So there are some weird ways that you can essentially take a piece of information and then sign it in a way where you can trust that it actually came from a certain machine, and so we're looking at that.
A: Yeah, I think the ability to access an IAM role is certainly, as I understand it, the canonical way you do it. So you would have something that you can only access from a particular role, like, for example, a KMS key. It just feels like a massive, like I said, Rube Goldberg machine, where you're like: I'm going to create a secret, and when I access the secret, then I can do things. But it isn't terrible, I think. Well...
G: I think the problem is that right now in Amazon, roles are too chunky. It's not practical to create a role per instance, and so you can use roles and KMS to verify that an instance is part of a particular group, which is definitely good, but you can't use it to prove that an instance actually is that instance. And, like I said, I think there was some new stuff since I last looked at it; I was talking to somebody, and they were talking about it.
G: The problem with the identity document is that it's not specific to a particular request, so you can hand it off, right? If you can get the identity document, you can present it to somebody else and impersonate the instance. The ideal thing here would be a metadata service where you hand it some sort of nonce and it creates an identity document with that embedded in it for you, because if you could do that, now you could actually have a challenge-response.
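The challenge-response being wished for here can be sketched abstractly: the verifier supplies a fresh nonce, the attestation signs identity-plus-nonce, and a replayed attestation fails against a new nonce. This is purely illustrative; an HMAC with a stand-in key takes the place of the cloud provider's signature, and none of these names come from a real API:

```python
import hashlib
import hmac

# Stand-in for the provider's signing key; with the real identity document
# this would be AWS's private key, checked against its published cert.
SHARED_KEY = b"illustrative-signing-key"

def attest(instance_id: str, nonce: bytes) -> bytes:
    # What a nonce-aware metadata service would return: a signature over
    # the instance identity *and* the caller-supplied nonce.
    msg = instance_id.encode() + b"|" + nonce
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def verify(instance_id: str, nonce: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(attest(instance_id, nonce), sig)
```

Because the nonce is chosen per request, presenting a captured attestation to a verifier that issued a different nonce fails, which is exactly the property the plain identity document cannot give you.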
A: At least on AWS, my understanding is that IP addresses are not spoofable, although maybe that's not the case if you turn off source/dest checks; I'm not entirely sure. But sorry, it may be that we have to do something AWS-specific, and I don't know. I would love to find the magic bullet that works, that doesn't require IP-address trust or anything provider-specific, but it might be that we just have to accept that.
G: And some of this, I mean, I hesitate to call security theater, but at some point, if you don't trust your cloud provider, you're kind of screwed, all right? At the end of the day, they can scrape memory out of your VM and find keys that way, right? I mean, there are sort of new features coming with...
G
You
know:
new
Intel,
virtualization
modes
with
protective
enclaves
and
stuff
like
that,
where
you
are
actually
protected
from
the
cloud
provider
to
a
deeper
way
but
but
yeah,
but
I
think
there
are
cases
well.
You
know
there
are
advanced
cases
where
you
know
where
you
you
can
spoof.
The
IP
address
at
least
I
know
on
on
GCE
and
I,
assume
that
it's
that
they're
hosting
scenarios
play
out
in
ec2
also
ec2.
G: Let me dig that up real quick.
A: I don't see anyone from Romana that has joined; I have pinged them, but let's continue, unless anyone has any other business. Should we talk about what people would like to see in 1.8? What are people's biggest pain points right now? What might people want to work on? Does anyone want to volunteer anything that they are working on? I can kick us off, which is kube2iam.
A: I think the design is correct. I think we've talked about it, like we've talked about whether we should automatically create roles and the implications of that in terms of security, and it seems like the kube2iam design is basically spot-on. There's some debate about whether you want to run it on every node or whether you want to forward it, but other than that, that requires some notion of identity. So basically, the big-picture design is exactly what we would have built from scratch, knowing everything.
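For context, the kube2iam design being praised works roughly like this: an agent on each node intercepts pod traffic to the EC2 metadata endpoint (169.254.169.254) and serves credentials for the role named in the pod's `iam.amazonaws.com/role` annotation, falling back to a default. A toy sketch of just the routing decision; the data structures are illustrative, not kube2iam's actual code:

```python
# Map of pod IP -> role annotation, as the agent would learn it by
# watching the API server; the default role name is a made-up example.
DEFAULT_ROLE = "nodes.example-cluster"

def role_for(pod_ip: str, annotations: dict) -> str:
    """Pick the IAM role to assume for a metadata request from pod_ip."""
    return annotations.get(pod_ip, DEFAULT_ROLE)
```

The per-pod role is then assumed via STS and the temporary credentials handed back as if they came from the instance profile.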
A: Bumping, like, the kube-dns version, but I haven't really looked at some of the more advanced features in 1.7, like the node security policy type things, and whether we still have to implement whatever we need to do for those. That certainly is not in kops 1.6 if it requires additional flags.
G: So I did dig up the email message. It was sent to sig-auth, around the working group, and there's a doc linked off of there. If you're in kubernetes-dev, you may have access to the doc, because people with G Suite orgs have documents that they can't share publicly and all that. So it's definitely something interesting to get involved in there.
L: You know, I'm sorry, for some reason my calendar showed 10:00 a.m.; I guess maybe with daylight savings or something, it didn't hit me that it was nine o'clock. And unfortunately, the demo is going to be done by my colleague, who is in a different time zone. So we unfortunately may not get a demo right this second, but I have a couple of slides to go through that I was going to go through in advance of the demo. So anyway, kind of rolling with the situation.
L: Okay, good. Alright, well, hopefully Jurgen will actually give a live demo, but short of that, I can explain exactly what we've done and how it all works. So let me jump into that, and I'll go through this pretty quickly. I think probably everybody here is very familiar with the problem, so I'm not going to spend a lot of time on that, but just to recap: there's a limitation in the VPC route table that limits cluster sizes, and perhaps one that's not...
L
You
know
something
you
are
very
much
aware
of,
but
if
you
run
a
CNI
network
provider
that
requires
an
overlay
when
you
go
across
zones.
So
for
a
che
clusters,
you
can't
apply
network
policy,
so
these
are
two
things
that
we've
been
working
on
and
our
next
version
of
software.
An
extra
romana
addresses
both
of
these
and
that
we
set
out
to
do
this.
With
these
following
goals,
we
wanted
to
use
all
standard
api's,
we
didn't
use
annotations
or
third-party
resources.
L
We
wanted
to
use
all
of
the
available
bvp
sea
routes
that
are
there
today,
but
enable
large
clusters
without
an
overlay
and
then
for
HEA
clusters,
support
the
network
policy
API
so
just
to
level
set.
That's
what
this
will
provide.
So
it's
a
part
of
a
romana
version
2,
which
is
not
yet
available
and
the
big
features
there
are
technology
aware
I,
Pam
and
network
advertisement,
and
what
I'm
going
to
show
today
is
there
actually
a
the
idea
of
advertising
the
networks
into
the
VPC
routing
table?
L
So
that's
kind
of
what
this
new
software
does
this
VPC
router
that
we
called
and
that's
gonna,
be
ready
sometime
next
month?
Ok,
so
anyway,
this
is
a
V
PC
router
for
ec2,
it's
up
on
github
and
I'll,
go
through
how
it
all
works,
and
hopefully,
Jurgen
will
show
up
to
give
a
live
demo
all
right.
So
here
is
a
standard
ec2
VPC
with
three
availability
zones
with
three
hosts
in
each
okay.
So
we've
got
nine
nine
instances
and
you're
familiar
with
how
how
this
all
works.
L
We've
got
a
slash,
24
in
each
zone
and
then
each
host
pulls
on
IP.
Out
of
that
block
and
then
runs
on
the
V
PC
network.
Okay,
so
that's
all
very
standard,
vanilla
stuff,
but
we
solve
this
by
introducing
this
concept
called
a
prefix
group.
Now
the
user
is
not
going
to
be
exposed
to
a
prefix
group.
This
all
happens
under
the
under
the
under
the
covers,
but
essentially
what
this
allows
us
to
do
is
ensure
that
todd's
get
IP
addresses
from
within
the
prefix
group.
L
Okay,
and
it
does
that
through
its
new
IPAN
all
right.
So
this
is
the
first
new
concept
that
that
we
introduced
and
and
I'm
gonna
go
through
a
very
simple
example
and
then
we'll
go
through
a
more
specific
example
of
how
this
is
deployed.
So
you
see
here,
we
have
this
prefix
group
and
the
pods
that
get
launched
are
gonna,
live
within
that
prefix
group.
L
So
that's
all
standard
kubernetes,
pod,
networking,
okay,
so
now
on
the
master,
no
Romanus
got
its
own
services
that
are
running
on
the
master,
but
this
is
this
new
service.
That's
part
of
that
that
we
call
the
V
PC
router,
that's
gonna
run.
One
instance
is
gonna,
run
on
every
master
and
since
there's
a
master
in
each
zone,
there's
probably
gonna
be
three
of
these
okay.
L
So
that's
how
that's
gonna
be
deployed
and
the
way
this
gets
configured
is
it's
going
to
identify
one
of
the
running
instances
in
each
zone
to
be
a
router
to
forward
incoming
traffic
to
that
zone
to
the
ultimate
hosts?
That's
running
the
pod?
Okay.
So
that's
what
V
PC
router
does
so
now
now
what
happens
here?
Important.
L
V
PC
router
you
going
to
set
a
route
in
the
route
table,
not
for
every
instance
but
for
every
prefix
group.
Okay,
now
that's
how
it
a
grenade
shouts
and
that
outs,
how
it
saves
the
entries
in
the
route
table.
Okay,
now
you
can
have
lots
and
lots
of
prefix
groups.
You
can
have
as
many
prefix
groups
as
you
have
instances,
so
this
falls
back
to
the
cube
net
model,
but
again
and
fundamentally
the
issue
here.
The
way
we
conserve
routes
is
by
using
a
VPC
route
for
each
of
these
prefix
groups.
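The route-conservation idea can be made concrete with a small sketch: one route table entry per prefix group, pointing at that zone's current forwarder, regardless of how many instances sit behind it. Illustrative data, not Romana's actual code:

```python
def routes_for(prefix_groups: dict) -> list:
    """prefix_groups maps a prefix-group CIDR to the instance currently
    forwarding for it; one VPC route entry comes out per prefix group,
    no matter how many hosts live inside that CIDR."""
    return [
        {"DestinationCidrBlock": cidr, "InstanceId": forwarder}
        for cidr, forwarder in sorted(prefix_groups.items())
    ]
```

With three zones of three hosts each, that is three routes instead of nine, which is how a large cluster stays under the per-route-table limit.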
L: So at the end of the day, what happens is this: once these routes are set in the VPC route table, a pod living in zone one that needs to talk to a pod living in zone three is going to send directly to its pod-network IP address. The traffic is going to go out through the default route, but it's going to pick up the route in the VPC routing table to be forwarded to the instance in availability zone three.
L
Three
and
then
that
instance,
when
this,
in
which
case
is
one
sixty
eight
dot
three
to
the
instance,
is
then
going
to
forward
that
traffic
to
the
pod.
That's
actually
running
I'm,
sorry
to
the
hole,
that's
actually
running
the
pod
okay.
So
that's
what
it
does
and
okay.
So
now,
as
I
said,
so
this
is
a
to
overcome
the
veep
250r
out
V
PC
limit.
L
There's
only
one
route
per
prefix
group:
okay,
alright,
so
that
example
was
one
where
you
had
a
prefix
group
per
zone
okay,
but
we
also
wanted
to
support
the
simple
case
where
it
replicates
cube,
Nets
capabilities,
so
what
that
would
require
would
be
to
simply
identify
each
host
to
be
its
own
prefix
group.
Okay,
again,
this
all
happens
behind
the
scenes
the
user
isn't
exposed
to
any
of
this
prefix
group
stuff.
L
But
the
end
result
is
that
in
this
case,
the
V
PC
router
software
that
I
just
described
would
actually
put
a
route
in
the
route
table
for
every
ec2
instance
and
again,
that's
the
cube
net
model
now.
One
other
final
point
to
note
here
is
that
the
way
romana
normally
typically
works
in
you
know,
layer
two
environments
is
that
it
actually
installs
hosts
I'm,
sorry
installs
routes
on
each
host.
So
when
clusters
are
deployed
in
layer,
2
environments,
there's
no
external
routing
necessary.
L
So,
in
the
case
where
a
cluster
is
within
a
single
availability
zone,
there
are
no
V
PC
fee,
no
V,
PC
routes
required
at
all,
because
the
routes
are
installed
on
the
on
the
host
and
that's
sort
of
the
standard
romana
mechanism
to
forward
traffic.
Okay,
all
right
now.
What
time
is
it
now?
Unfortunately,
we
will
not
be
able
to
get
to
a
demo
till
10
o'clock,
but
what
we
were
gonna
show
and
if
we
have
time
we
can
get
this.
L
It
was
a
live
cluster
running
12
nodes
in
three
zones
where
VPC
router
is
running
in
one
of
them
and
what
we're
going
to
show
and
it's
kind
of
embarrassing
we're
not
gonna,
be
able
to
do
this
without
you're
gonna
was
exactly
what
I
described
there.
Basically,
we
can,
you
know,
show
a
trace
route
to
from
a
pod
from
to
a
pod
across
zones
and
then
just
show
that
route
changing
as
an
instance
gets
killed.
L: For the entire pod subnet... I'm sorry, the route for all the traffic on the subnet goes to a particular instance in that zone, not to all the nodes; there's not a route for each node. So that's what I described previously: there is a route entry for each prefix group. So that's... at this point, Jurgen was going to show the demo, but unfortunately, I've got nothing but sort of shadow puppets. So, guys, if Jurgen shows up, you can do that, but at that point, I'm afraid I've got nothing. So sorry about that, guys.
A: I think that was... I certainly got the idea. I don't know if anybody else has any immediate questions about how it works. I think it's clever; I think it's certainly very interesting to keep the kubenet model, like you basically said, and get past the VPC routing limit by using nodes as automatic router promotions. I'd love to... I'm looking forward to seeing it when it is available. It sounds like the source code is available?
L: Well, a couple of things. So the VPC router software is available now, and actually, you can, you know, play around with it at the command line to see how it wiggles the VPC route table directly. But to actually deliver what I described requires, you know, integration with Kubernetes, to pick up etcd data, and Romana IPAM works in conjunction with that.
L
So
there's
sort
of
you
know
interdependencies
ultimately
to
deliver
what
I
described
and
that's
going
to
show
up
in
Romana
2.0
in
a
couple
of
weeks,
but
the
the
route
manipulation,
software,
V,
PC,
router,
that's
a
standalone
thing.
You
can
pull
it
down
and
play
around
with
that
at
the
command
line.
To
your
heart's
delight.
But
again
it's
not
going
to
deliver
what
I
showed
unless
it
has
all
the
other
apparatus
around
it
and.
A: And I think something that's coming up on a lot of people's radar is being able to span, I guess, regions, or between clusters. Like, I don't know if Federation yet has a great story for, like, a StatefulSet of Cassandra, or MySQL replicas: how do you actually communicate between the two nodes if they're in different clusters, or in different regions, or whatever?
L: It absolutely could be, but let me sort of back up a step and share some perspective that we've gained over time here. We've got a bunch of users that are running clusters in separate regions, and sort of, that has informed our design choices here. And what we have found is that when they split across zones within a VPC, they do not want to be bothered with the hassle of running a Quagga instance and all that other networking, you know, mumbo-jumbo.
L
They
said
you
know
eff
that
I
want
to
use
native
networking
I
will
let
a
node
forward
traffic
I'll
introduce
an
extra
router
hop
on
a
subnet
in
a
zone,
and
I
can
live
with
that.
However,
when
they
go
to
multiple
V
pcs
that
are
living
in
multiple
regions,
they're
invariably
forced
out
into
the
public
Internet,
not
invariably,
but
frequently
Plus
out
into
the
public
Internet,
and
at
that
point
they
sort
of
bite
the
bullet
and
say
well.
I
need
to
do
take
care
of
all
the
other
stuff.
L
I
got
a
backhaul
and
into
my
data
center.
At
that
point,
they
sort
of
cross
the
threshold
where
they
say
you
know:
I'm
gonna
run
quagga,
because
I
need
all
this
other
stuff.
So
we
decided
that
was
territory
that
we
just
didn't
want
to
get
into
just
yet,
although
we
could
it's
the
sort
of
a
bright
line
between
those
kind
of
deployments
and
the
ones
that
we
are
focused
on
initially,
where
you
add
H
a
clusters
across
zones
in
a
single
V,
PC
and
there's
sort
of
a
step
function.
L
When
you
get
to
the
networking
complexity
on
how
you
want
to
go
across
regions-
and
that's
where
quagga
you
know
actually
does
a
pretty
good
job,
because
there's
a
lot
of
other
requirements
that
it
can,
it
can
accommodate
and
sort
of
a
little
beyond
the
scope
of
what
we
want.
Romana
to
do
right
now,
but
anyway,
perhaps
longer
answer
to
your
question
Justin,
but
that
again
is
sort
of
the
the
perspective
that
we've
been
hearing
from
from
some
of
our
users.
G: So, I mean, so the VPC router, then, updates and modifies the AWS route table, right? Yep. And then, is there a local component running on the node that actually installs and manages the routes by monitoring the other nodes? Okay, yes.
L: There's this new piece that is adjacent to it that... well, let me back up a further step here. So with Romana version 2, what we're adding is the ability to advertise routes, and that was missing in version one. And the idea of advertising routes to a top-of-rack switch is sort of like the canonical example of route advertisement, but within a VPC, the equivalent is to announce a route to your top-of-rack switch, and the VPC route table is conceptually the top-of-rack switch.
L
So
since
it's
since
VPC
can't,
you
know,
doesn't
run
OSPF
or
BGP,
we
needed
something
to
modify
those
routes.
An
advert
essentially
advertise
those
routes
as
though
it
were
a
top
of
rack
switch,
so
that
is
sort
of
like
the
functional
slot
that
VPC
router
fills.
It
allows
you
to
son
on
the
southbound
side,
take
a
route
advertisement
that
we
need
for
all
layer,
three
deployments,
but
on
the
northbound
side,
there's
no
bgp
listener.
So
we
just
talked
vp
c
route,
api's
so
and
that's
longer
answer.
Then
you
want
to.
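The "northbound side talks VPC route APIs" piece can be sketched as a replace-or-create call against the route table. The parameter names match the EC2 API (`RouteTableId`, `DestinationCidrBlock`, `InstanceId`), but the client here is a stand-in for boto3's EC2 client, and the error handling is deliberately simplified:

```python
def announce(client, route_table_id: str, cidr: str, instance_id: str):
    """Advertise one prefix-group route into the VPC route table,
    replacing an existing entry or creating a fresh one."""
    try:
        client.replace_route(RouteTableId=route_table_id,
                             DestinationCidrBlock=cidr,
                             InstanceId=instance_id)
    except Exception:
        # Real code would check for the route-not-found error code
        # specifically rather than catching everything.
        client.create_route(RouteTableId=route_table_id,
                            DestinationCidrBlock=cidr,
                            InstanceId=instance_id)
```

Replace-then-create keeps the announce idempotent: re-announcing an existing prefix just re-points it at the current forwarder.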
G: You know, really, the care and feeding of those routes, to make sure that they're okay. And then the number of routes in the route table is going to be the number of zones, times the number of, essentially, forwarders that you have per zone, right? Correct.
G: Well, the GCE stuff is a lot more flexible here. Okay, all right. And so what we really want is IPv6, with big, free prefixes, right? Okay, I'm just saying that's kind of what we're starting to approximate here, right? You're sort of shifting and doing some replacement on the IP addresses to approximate the fact that we actually have, you know, some bits of IPv6 space. Yeah.
L: Well, just as a historical footnote here: the original idea for Romana, and using IPAM to solve this problem, sort of presumed abundant address space, and that seems to be glacial in its pace, so we're shoehorning everything into v4. But, you know, we're sort of creeping down that path. But okay, v6 sort of, you know, solves a bunch of things, yeah.
G: Okay, and then the IPAM is per node, right? So it's essentially a local decision to the IPAM, because you already know the prefix that you're using. But what you assign, as an IPAM, may then have to be recorded into a global database someplace, because it really shouldn't have to be... well.
A: There's also actually something which we haven't done in Kubernetes, which I think is interesting from that point of view, which is that you can have multiple ENIs, multiple network interfaces, on a machine. They can get real IPs; they're not subject to the route limit. So something that could be interesting, but would be a hell of a lot of work, is to do an ENI-per-pod model. The limits are just high enough where it would work.
A: Yeah, but there is certainly that edge. I don't think it solves enough problems that anyone has felt it's worthwhile, but I think it's certainly something that people keep looking at and thinking, oh, that would be interesting, and, like, occasionally, people pop up with something that it would solve, but not very often. We have a couple of minutes left.
L: I'm sorry, I told you 10 o'clock; Pacific time, it's nine o'clock. Anyway, Jurgen, I've been speaking for the past ten or fifteen minutes, went through slides; Joe and Justin asked some very thoughtful and informed questions, but I think seeing a demo would really help a lot. So I know we've only got a couple of minutes, but, Jurgen, do you want to...?
M: Alright, good. So if you look at this one here, if I just very quickly show this here... so Chris talked to you about the demo and went through these slides here. This is now the actual demo setup we have in front of us. So these are the actual prefix groups and subnets and such, so we can see, again, we have a small cluster prepared: 12 nodes.
M: They are in three subnet groups, in three subnets; it's one subnet per availability zone. You know, it doesn't have to be that way: you can have multiple subnets per availability zone, and you could have multiple prefix groups over these subnets. But here, for simplicity, we have one availability zone, one subnet, one prefix group. So you can see we have a Kubernetes master.
M: So if I just shrink this here... so when we look down here, we have our console, and we can see that we have these various instances running, and you can see we have these route entries. You can see the route entries have been created for these prefixes by the VPC router. So when we now go and look at what we have in Kubernetes, we have a couple of pods running, sender and receiver pods, and you can see their addresses.
M: So, for example, you have a sender here in the 112 prefix group, and you can see it says it's in the 0.11... it's in the 68.0 subnet, and on this host here. And we have a receiver, maybe this guy here, who is on the 128.13 host, and you can see it's in a different prefix group, and there are no overlays running. And if we now go and log into one of these pods, we go into the sender pod, we can do, let's say... sorry. So we can see, we have...
M: ...this going here. So basically, it keeps refreshing a traceroute display. So we can see that the packet from this pod makes it through a few hops to get to the target pod, and we can actually see the path the packet takes. So this is not something hidden by an overlay, for example. So we can see here that the first hop is the host on which the sender pod resides.
M: Then this is the host which is currently acting as the router into the prefix group of the target; this is the host on which the target pod resides; and then here's the pod itself. And this is all quite visible. In Romana networks, you could have network monitors running, and you would be able to see the true traffic as it takes place. Now, we have these routes configured here, and the VPC router is monitoring the health of these nodes, for example, of this guy here.
M: Yes, the 112. So we will just stop this instance here, and right now, I will stop this. And this always takes a little bit with Amazon. We have seen times where it takes 10 seconds, others where it takes a minute for the host to actually be stopped. So we are here at the mercy of Amazon to see when it really stops. And so the VPC router is monitoring the situation in the background... that has stopped, and it should take just a few seconds... here we go, and now it has failed over.
M: It has selected a different host as router, and now it happens to have selected the actual target host on which the pod resides. That's coincidence; there are several hosts to choose from, so it has randomly chosen the 128.13 host as router, and this is exactly the host on which the pod is running. So we now have one less hop here, but you can see how we can now have a routed network with no overlays, and even if a host fails, the system keeps running.
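The failover just demonstrated can be sketched in a few lines: when the current forwarder for a prefix group stops responding, pick another healthy host in the zone at random and re-point the route at it. This is illustrative, not the VPC router's actual code:

```python
import random

def fail_over(route: dict, healthy_hosts: list, rng=random) -> dict:
    """Re-point a prefix-group route at a healthy host other than the
    one that just failed; rng is injectable to make tests deterministic."""
    candidates = [h for h in healthy_hosts if h != route["InstanceId"]]
    return {**route, "InstanceId": rng.choice(candidates)}
```

Because any host inside the prefix group can forward, the random choice is safe; as seen in the demo, it may even land on the host running the target pod, saving a hop.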
M: Yeah. This IP address was briefly detected as having failed, because the health test that we are doing on the hosts isn't... it's not particularly well done yet. This is a very early version of the VPC router; it just does pings, and sometimes... that's why it works that way, actually. You see it even here: it failed over, and for some reason, it detected...
M: Sorry, I need to... this window is really small. There we go. So these things will have changed accordingly. So if you look at this entry, this entry will be updated automatically. We didn't monitor this here, but yeah, you would have seen that this number has changed.
M: That said, I think it depends very much on whether we are moving across zones or within a region, or on what the load is on the networks, all these vagaries that we are used to with networking and AWS. But yeah, this is about the time: you can see that hop over to another host, and then compare it to, like, this time here, in this row, right? This is a hop over to another availability zone.
M: This was the additional hop in there, and this is then actually getting to the target pod, and you can see that these times are almost identical. So the additional hopping around we are doing has hardly any impact, really, only on the ping times, on the latencies. And so, what do we currently have in our demo setup... if I can just go back to the slides here for a moment? I just want to stress this.
M: So this is a very... this is an active sort of ping health check. I want to make this configurable, so that you can maybe get the state of nodes from somewhere; you can maybe integrate with CloudWatch events if a node goes down. This right now is just a very first version. I mean, it already is working, but it's just the first version right now, where we just have the active health checks.
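The active ping health check being described amounts to counting consecutive misses per host and declaring failure past a threshold. A minimal sketch; the threshold and names are illustrative, not the VPC router's actual configuration:

```python
def record_probe(miss_counts: dict, host: str, responded: bool):
    """Update per-host consecutive-miss counters after one ping round."""
    miss_counts[host] = 0 if responded else miss_counts.get(host, 0) + 1

def is_failed(miss_counts: dict, host: str, threshold: int = 3) -> bool:
    """A host is considered down after `threshold` consecutive misses."""
    return miss_counts.get(host, 0) >= threshold
```

Requiring several consecutive misses is what keeps a single dropped ping from triggering a spurious failover, at the cost of a few extra seconds of detection time.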
M: And so usually the detection, because we are currently doing the test about once a second, actually happens very quickly, but then you have to send the updated route over to the VPC route table, and there's always this latency dealing with the API, and, you know, by the time it takes effect, a few seconds have elapsed. So it is around five... we have seen five to seven seconds, roughly, that it usually takes, sometimes.