From YouTube: VPC Networking beyond the public cloud
A
All right, hello, it's Kelsey Hightower here, and I'm going to be moderating the CNCF on-demand webinar. Today we're going to be talking about VPC networking beyond the cloud. Now, why is VPC so important? For the last seven years, as you all may know, I've been at Google Cloud, and I had a lot of jobs before then, so I know what it's like to work inside of a data center. You rack and stack servers.
A
You create your leaf-and-spine network. You've got routers, you've got top-of-rack switches, and I've made my fair share of Cat5 cables. In that architecture, automation is key. Like, I get it: we need to provision all of those ports, maybe you're setting up VLANs, but for the most part a lot of our automation tools are kind of doing a one-to-one mapping of the standard configuration. Now, if you're like me and you have some experience in the cloud, you know things are very different.
A
You create a Kubernetes cluster, you just attach it to a VPC, and typically that VPC is going to give you an IP address and deal with any other routing concerns that you may have. And so when I think back to people coming from on-prem into the cloud — well, one of the big differences is something we actually don't talk about enough. We always talk about the differences in compute and VMs, maybe even the differences in load balancers and security, things like IAM, but hardly do we ever talk about what I think is probably one of the most important components, and that's the VPC abstraction. It lets us just focus on our apps, our workloads, and our applications. So to help me really dive into whether this is even possible, I want to introduce Alex from Netris to give us a deep dive of his company, his product, and their ambition to bring VPC anywhere — including, possibly, your data center. With that, let's welcome Alex to the stage.
B
Hi Kelsey, hi everyone, thanks for this nice introduction. So my background is many years in traditional network engineering — almost 20 years — designing and architecting large-scale data center networks, and I'd like to start, you know, just by giving a little bit of context before we get onto the environment.
B
Before we start configuring things, I want to give a little bit of context, and I'd like to start with the evolution of networking. Networking kind of started with the CLI, and then over time we've seen SDN — software-defined networking — trying to make it more programmatic for engineers to consume networking, and the next one, pretty recently, was intent-based networking. The thing is that all these technologies are amazing for traditional data center and Telco environments, but when you need to do cloud-native applications and DevOps methodologies, you need VPC — those technologies are not fundamentally made for this. VPC is what we are seeing in the cloud; it's VPCs that we need. And if we look at the compute infrastructure market growth, we can see that not only is public cloud growing, but also bare metal.
B
The bare metal cloud market is growing, Edge is growing like crazy, and even the traditional data center market is growing. Why is this happening? Because we need lots of apps: we have lots of data, and we want apps for everything and anything. Although most of the apps go to public cloud, public cloud is not a one-size-fits-all kind of solution, because in some cases regulations require us to take some apps elsewhere.
B
In some cases it's a matter of high cost, especially at scale, and in some cases applications have technical requirements like latency — machine learning applications, applications dealing with transient data. So my main point is that applications are highly distributed. We have lots of edge use cases. This is how things are today, and this is how things will always be: some applications will be on public cloud, some applications will be on bare metal cloud, some applications will be on Edge, and some applications will be in a traditional data center.
B
Now, this means that engineers have to deploy, maintain, and scale applications in all four types of environments, and in public cloud it's fairly convenient to do things programmatically because of VPC. It's declarative, it's quick and safe.
B
On-prem, there's a lot of complexity. At best there's some kind of homegrown solution, which is different from organization to organization, and we are still seeing lots of silos — DevOps engineers, NetOps engineers, network engineers — which takes a lot of time and is kind of hard. So we started Netris to address this problem: to create software that brings VPC networking everywhere — to your bare metal cloud, to Edge compute, and to the traditional data center — to make those things look like VPC and enable engineers to have a similar operational model in both environments.
B
Look at this: this is what networking looks like in Amazon AWS — and it's very similar in any public cloud provider — and this is what networking looks like in a physical data center.
B
So basically, this is what we're trying to do: we're trying to take that physical network and make it look very much like a VPC. So here's the concept. We have this thing called SoftGate, which you can think about like a VPC gateway. It's a Linux machine running FRR, WireGuard, and different Linux networking tools and software, mainly open source. A Netris SoftGate sits on top of any physical network — it's just software, just a machine.
B
So that's the machine which does packet forwarding, and then there's the Netris controller, which has a web console — very similar to public cloud, with a declarative mindset — and of course Terraform integration, Kubernetes integration, and a REST API.
B
The idea is that you deal with this controller, and the controller automatically programs the VPC gateways — the SoftGate nodes — to make the network work. We will see how this works in action soon.
A
All right, this is a perfect overview, and I love that image of the Telco closet where all the wires are hanging. I'm pretty sure people listening to this or watching this right now are probably responsible for creating that mess, and we all know the value of abstractions. I think you really dialed it in when we start to think about taking all those components — and I'm going to remind everyone, all of those components are necessary if you want a real working network, but what isn't necessary is to leak that complexity to everyone else.
A
So having that VPC abstraction layer gets us back to kind of simple primitives that we can actually use with our networks. Now, one thing I asked before we got into this demo was: let's not just show the VPC and a bunch of IP addresses — I really want to call out very common infrastructures. When we think about VPCs these days, especially for this audience, there's one common architecture. I think if you go to the next slide, you have a nice diagram of a common thing that people do.
B
Yeah, sure. So for this session we've created this environment where we took, you know, a traditional network — basically a Cisco switch in the middle. We have connected three physical machines where we will run our Rancher Harvester hypervisor nodes, and we have connected two other physical machines for SoftGate 1 and SoftGate 2 — those will be, you know, highly available VPC gateways.
B
We have one more machine where we run two controllers: the Netris controller and the Rancher controller. The controller itself is just a k3s cluster, so you can easily run the two controllers on the same machine. And we have this internet connectivity. Now, this internet connectivity can be a physical cable coming from the ISP, with a range of IP addresses, physically plugged into that Cisco switch. Or, if it's a brownfield environment, it can be a cable coming from traditional routers — from enterprise border routers — it doesn't really matter. We just need some sort of internet connectivity to peer our VPC network with the rest of the world. Now, this thing is entirely based on standard protocols, so basically we could have connected a bunch of VMware ESXi nodes, we could have connected bare metal nodes — we just kept it simple.
A
So, if I'm summarizing correctly — correct me if I'm wrong here — if I were to walk into an existing data center, I look to my left and there's some NetApp storage, some VMware HP blade chassis, and all that is working fine, right? And I look at the top of that rack and there's probably a Cisco or Juniper switch-or-router combination, and so everything is working. That's what we mean by brownfield: things are already there, you're not starting from scratch. And then I decide:
A
You know what, I want to give this whole cloud native thing a try. Maybe I'm going to go buy some new hardware — and let's walk through this diagram. You know, I get a rack right next to this other rack, and instead of going out and buying very expensive networking gear, I go get some standard, maybe network-ready commodity hardware. I rack two of them, so at the top you have the two, and I get that uplink cable from my network team that's ready to go.
A
So if I step back and look at this rack, I have this kind of clean setup of, you know, dedicated worker nodes, a controller that will be the orchestration layer for the other ones, and two commodity network devices. At this point I guess I just need to get the software installed to make all of this come alive. Is that kind of an accurate description of someone that was rolling this together with their own machines in an on-premise data center?
B
Yeah, that's very close to how people usually test this. In some cases you can have, like, a rack of new equipment, and you can always peer that with the existing network — that's one way to go. Another way, even if it's just a trial and you don't really want to make a lot of changes — if it's just an experiment to decide whether you want this solution —
B
is to start with blank servers where you will install just operating systems, and you will need to ask your networking team to allocate you a range of VLAN IDs — VLAN IDs that you know will not conflict, will not overlap with other things on the network. I can actually show that. This is the Netris controller graphical user interface, and when you install it for the first time you see this site — a site is like a region — and this is the default configuration, I didn't change anything here, as you can see.
B
It says VLAN range 700 to 900. This means that, in this case, Netris will stick to using only these VLAN IDs. But if your network team is happy with a different range, that's totally fine — just edit and type the right range there. So that's it, basically: VPC services will stick to using these VLAN IDs.
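The behavior described here — the controller handing out unused VLAN IDs from the range the network team approved — can be sketched in a few lines. This is a minimal illustration of the idea, not Netris code; the class name and semantics are assumptions, and only the 700–900 range comes from the demo's default configuration:

```python
# Sketch of VLAN ID allocation from an admin-approved range
# (illustrative only; not actual Netris controller code).

class VlanAllocator:
    def __init__(self, start=700, end=900):
        # Range agreed with the network team, e.g. 700-900 as in the demo.
        self.pool = set(range(start, end + 1))
        self.in_use = {}  # v-net name -> vlan id

    def allocate(self, vnet_name):
        """Assign the lowest free VLAN ID to a new virtual network."""
        if not self.pool:
            raise RuntimeError("VLAN range exhausted; ask for a bigger range")
        vlan_id = min(self.pool)
        self.pool.remove(vlan_id)
        self.in_use[vnet_name] = vlan_id
        return vlan_id

    def release(self, vnet_name):
        """Return a VLAN ID to the pool when a v-net is deleted."""
        self.pool.add(self.in_use.pop(vnet_name))

alloc = VlanAllocator()
print(alloc.allocate("hypervisor"))  # 700
print(alloc.allocate("k8s-prod"))    # 701
```

The point is only that allocation is automatic and confined to the approved range, so nothing the controller creates can collide with the rest of the enterprise network.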
B
Whenever a traffic exchange happens between SoftGate 1, for example, and Harvester 3, in the middle it will be encapsulated using one of these VLAN IDs, and that's how traffic will pass through the existing enterprise switch without requiring the network team to make big changes. That's the whole idea: for every service, you only deal with SoftGates, with Netris, with components that are plugged into the VPC, but the moment VPC traffic needs to travel the traditional network, it is encapsulated into one VLAN ID, goes through that network, and then is decapsulated back.
A
Look, we already jumped into the demo — I think we just pop over there and really go through this. So we just talked about kind of that base install: hardware is now mounted, we got our configuration in place. So now it's like, you know, when I go to a cloud provider — and you had a nice screenshot in the presentation — typically, when I create a VPC, I tend to be presented with a list of subnets that I can choose, either for VMs or, you know, various clusters that I want to attach to them.
B
Yeah, that's a great question. So there's this IPAM part where all the IP addresses are registered, and you can see this list of private IP addresses — /20s. Those are configured by default; they were created there, I didn't create them. And whenever I want to create a new virtual network, I can use one of these /20 subnets.
B
Now, I have created a few networks to make this whole thing work, but we will create a couple of new networks soon. The very first network is this network called hypervisor — we call these v-nets, virtual networks — and we can see it is using this subnet from that list, this /20.
B
This is where the Rancher Harvester hypervisors are living — first, second, third — and you can see that those machines have received IP addresses from that network.
B
Sorry — these three machines. But they also need to have some kind of a gateway, which physically is SoftGate 1 and SoftGate 2, and we can see that in our inventory: SoftGate 1, SoftGate 2, they're green.
B
We can see traffic, memory, and utilization stats, but the nice thing is all of this is happening behind the scenes — I didn't configure these things. What I did: I only created this v-net and attached that IP address range, and that's it. My Rancher Harvester hypervisors got this minimal connectivity, so nodes can talk to each other, can reach out to the controller, and make the cluster work.
A
So it looks pretty straightforward: I create a VPC, I get a range of subnets, then I can come over here and allocate any of those ranges like you showed earlier. Those hypervisors are kind of the low-level nodes, but you were in Harvester. So do we have to tell Harvester anything about this network topology — like, how does it know that these networks are something that it should be using? Is there any configuration that needs to be done on the Harvester side? And I'm assuming Harvester…
B
I select the site — we are on a single region in this case — we attribute this to the DevOps team, and we pick the next available range of IP addresses, the next /20 that's available. For the VLAN ID, we leave it assigned automatically, so Netris will choose one of the available VLAN IDs. We click add; now this is provisioning, it will finish soon. So everything starts from creating the v-net here. Now, when this is provisioned, we continue on the Harvester side.
B
We
need
to
create
a
network
inside
Harvester,
so
I
will
go
to
home
and
will
work
step
by
step,
so
I
go
to
virtualization
manager,
then
Harvester
cluster.
B
Advanced networks — this is, you can see, the network that I've added before, but now I'm going to add a new one. I paste the name here, and I need the VLAN ID that Netris generated, so I type the same VLAN ID here: 703. Now both endpoints know which VLAN ID to use, so traffic reaches the right place.
A
If I'm looking at this correctly, this feels like the same flow you typically do with, like, a VMware setup, right? Like, if I bring in ESXi, I have to bring in this networking information. So it sounds like Harvester is this true hypervisor layer that's going to let us create VMs or whatever, and this is where we plug in those network settings.
B
Yeah, absolutely — the same flow as if you were doing it with VMware. Okay, now at this point we have the network ready, and we can go to cluster management and try to create a new cluster.
A
Now, while that's creating — this feels very similar to an EKS or AKS or GKE workflow, right? You come here, and I'm assuming Harvester is doing the hard work of creating the necessary VMs, programming them to be on the right network — in this case this VLAN ID — tagging all the network traffic so that, as it flows through, all the intermediate components will do the right thing in terms of handling those network packets, and giving me this feel of: hey, as long as you have a network in place.
B
But it's actually creating — it's actually started the process.
A
You know what we'll do — we'll just leave that as feedback for the Harvester team, and like all good demos, it looks like we already have a cluster that's going. So I'm pretty sure I can imagine how that would be provisioned: there'll be some etcd, there'll be some worker nodes, and then ideally you'll probably be able to pull down a kubeconfig. Is there an ability to get a config from that prod-one cluster up there while we're waiting for this other one to finish?
B
Absolutely — we can just quickly check one thing. We can go to Harvester for one moment and see what's happening underneath. If we click on virtual machines — all right, we can see that these three virtual machines were created like two minutes ago.
B
Okay, so going back to the cluster: I'm assuming VMs are being installed and Kubernetes is being installed, but that takes some time, so, not to wait, like you suggested let's go and look at this other cluster, kubernetes-prod-one — and totally, we can download the kubeconfig here for the first cluster.
A
So I guess at this point you're going to configure your kubeconfig with those credentials you've just downloaded so that you can actually interact with it via kubectl. For those watching, this is the easy way to deal with multiple clusters, especially ones that you just created, when you want to make sure that kubectl is pointing to the right context.
A
Now, what does this look like on the network side? So Harvester is using that VLAN — or that VPC that we've carved out, or at least a subnet from that VPC that's kind of represented as that VLAN. So look, Harvester knows all these things, Kubernetes knows all these things, but I'm interested to see what it looks like from the Netris console. The people that are managing the network — what do they see?
B
That's a good question, and to your point: the people that are managing that traditional network — from their perspective, these are just some packets going back and forth. They don't know any of these details; we're not disturbing them, and they are not disturbing this VPC. Everyone is happy. Now, from the Netris web console perspective:
B
Well, this cluster came up and is running, so obviously the cluster has at least connectivity with the Rancher controller — that's how Rancher works, the cluster needs to access its controller. But there's also this other component, the Netris operator, which, to save everyone's time, I have previously installed already.
B
Here it is — the netris-operator is running. This operator is talking to the Netris controller; it's authenticated with credentials, and now if I deploy an application in Kubernetes which is using a service of type LoadBalancer, for example, the Netris controller will be able to understand this. Let me actually show this.
B
This is a Layer 4 load balancer — kind of like Elastic Load Balancer functionality — in Netris, and we can see that nothing is there yet. We could create one using the web console: we could pick the protocol, we could, you know, enable health checks and type in backend IP addresses, and in that case it would work just as a regular load balancer. But in this case we can actually deploy an application. I have this basic application here, podinfo, that is using a service of type LoadBalancer.
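The manifest driving this is an ordinary Kubernetes Service of type LoadBalancer — nothing vendor-specific in the spec itself. A minimal sketch (the labels and port numbers here are illustrative assumptions, not the exact manifest used in the demo):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo
spec:
  type: LoadBalancer   # the netris operator watches Services of this type
                       # and provisions an L4 load balancer on the SoftGates
  selector:
    app: podinfo
  ports:
    - name: http
      port: 80
      targetPort: 9898   # podinfo's HTTP port (illustrative)
```

Once this is applied, the controller assigns the next available IP and writes it back into the Service's status, which is what shows up as the EXTERNAL-IP in kubectl and in the Netris console.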
A
And before you do that, I think it's important to go back to that podinfo and explain something to folks really quickly. There are going to be a lot of people coming to this that are new to Kubernetes, and there are going to be people that are experienced with Kubernetes — maybe go back to that config really quick — and I can see at the top:
A
Things will be provisioning, but let's go look at the podinfo for one more second. I think what you're kind of highlighting here is that, given that that controller is there, when you create this service of type LoadBalancer, what's happening is it feels like you're doing the integration work on the network side — meaning that you're dealing with whatever's required to provide the IP address, the load balancer service, the service discovery integration, pulling that IP information up — there you go, you can see it there — pulling the IP information from Kubernetes. So I guess for a lot of people, if you've ever used any of the cloud services before, this would be the equivalent of, like, a network layer load balancer, right?
A
You can send traffic directly to these pods, but I think it also shows off the first-class integration here. You don't have to come to this console and then pop back to Kubernetes. You can keep your normal Kubernetes workflow, put everything in the manifest, use native Kubernetes ways of articulating things like type LoadBalancer, and that's enough information for the Netris controller to take over, pull in that config, and be responsible for creating the network services required to make it work — just like people would experience in other fully managed kinds of Kubernetes environments.
B
And actually, to show that further, I can edit this number of replicas from three to ten, let's say, and save.
B
We can see that more pods are now starting — container creating, those are still in the process — and on the network side we can see that Netris has now detected more IP addresses, because we have six nodes in this Kubernetes cluster and originally podinfo was running only on three nodes. That's why we had only three IP addresses. Now Netris understood: okay, it is running on more nodes, so Netris has populated more node IP addresses into the load balancer service.
A
And when you say nodes here, I'm assuming you're referring to pods — I mean, there's probably still three nodes in the cluster, but these pods are the ones that are pulling in these IP addresses, right? So these are natively assigned to the pod. So we're honoring the true Kubernetes networking model, where these pods are being assigned first-class IPs from the subnet, right?
A
So when they're communicating — or things are communicating to them — you're kind of going point-to-point, and I think that's the best way to think about doing that Kubernetes model. The other thing I want to say is: shout out to anyone that's working on this console at Netris. The fact that you're giving people that real-time feedback, where it's automatically updating — I'm seeing commands run at the bottom console and then that live preview, respecting the health checks and letting people know what's going on.
A
What we're not really showing here is that if you restart one of those pods, it can land on a different node and get a different IP, and the expectation of, you know, developers and platform teams is that those IPs will dynamically be updated, IPAM management will be fully automatic, and the load balancer will follow things like health checks and service discovery to pre-populate everything. I think this is the problem most people have with the Kubernetes networking model.
A
It will challenge whatever you have existing, and so if you don't have the ability to carve out VPCs and dynamic subnets and then do the full integration with the Kubernetes networking model, that's when people start complaining and that's when you feel like you're falling short. I mean, did we capture everything? It looks like we went through everything — if we missed something, I think now would be a good time to bring it up and show it.
B
So, to add a little bit on that, in terms of IP addresses of pods, nodes and so on, and network engineering challenges: we're very much neutral, and we understand that different people have different preferences. Someone may want to use one CNI — Cilium, or Calico — or someone may want to use one mode or another mode. For us it's not much different; we support all kinds of configurations. Someone is happy using Calico? Totally fine.
B
Our operator is able to understand if someone is switching into Calico's BGP mode — that's their kind of highly advanced mode. We understand even that, and we know how to respond on our side, how to configure things so that they automatically, seamlessly work together — as well as if it's just a simple setup with a few nodes, without a complicated CNI. That's still fine for us; we're just trying to support all these common use cases.
B
Let's see if this even works. So this service received this IP address. What happened, basically: Netris assigned the next available public IP address to the service and pushed that information to Kubernetes — the same IP address here and there in the console. But let's see if it even works... okay, it works. We can even use curl to make sure that it is being served by different pods — so load balancing is actually working.
B
Well, physical networks are kind of designed to move packets and traffic really well, which is fine — that's their specialization, if you will, and they do it really well. VPC — in my opinion, public cloud became possible not because you put your server somewhere else, but because it was relatively easy and fast to use when you want to iterate — and people in this world, they want to iterate a lot, right?
B
We're not in the age where designing something takes like ten years; we're in the software age, where we want to ship something, make changes, and ship again — we want to ship many times a day. So basically, it's iterations that are important for the business, and VPC is making this possible. I don't need to go and communicate internally with multiple teams; we don't need to hold five meetings to decide how to add or remove one VLAN or something.
B
In my opinion, that's why VPC is extremely powerful — it makes this, you know, iterative, DevOps-type mindset possible. Before starting Netris, when I was looking at public cloud, being myself a person coming from the traditional data center, I was like: oh my god, this VPC thing is insanely great in public cloud. I wish this were available everywhere — in every network, every data center, every Edge, bare metal cloud, even my little home network. Everything should have this. That's kind of how I thought about this.
B
Yeah, that's a great question. So I showed everything in the web console, and there's also the REST API — which, for example, someone who's potentially embedding Netris can use; there's Swagger built in. But what is kind of more designed for DevOps engineers is, of course, Terraform, and I have here a little example of that. So, to save everyone's time, I will just show the resources: in this Terraform file I have summarized the Netris resources.
B
So we can see what Terraform is trying to do: it's trying to create the Netris resources — the same v-net, basically what I did from the web console, but through Terraform.
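Expressed as code, the flow might look roughly like this. This is a hypothetical sketch: the provider source and the exact resource name and arguments are assumptions for illustration only — check the Netris Terraform provider documentation for the real schema:

```hcl
# Hypothetical sketch of driving the same v-net creation through Terraform.
# Provider source, resource name, and argument names are assumptions,
# not the provider's verified schema.
terraform {
  required_providers {
    netris = {
      source = "netrisai/netris"   # assumed provider source
    }
  }
}

# The same v-net that was created in the web console, declared as code.
resource "netris_vnet" "k8s_test" {
  name  = "k8s-test"
  owner = "devops"            # team the network is attributed to
  # Site = region, as in the demo; subnet is an illustrative /20
  # taken from the IPAM pool, with the VLAN ID left to auto-assignment.
}
```

The point is the operational model: the same declarative request you click through in the console can live in version control and run in a pipeline, which is what makes the VPC workflow repeatable.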
B
So the process has started, and it actually takes some time.
B
Yeah, this has been created, and if we look into Rancher — actually, okay, test-staging is creating here too; it's very responsive too. What is this thing waiting on here? It is kind of waiting until the cluster is up and running, and only then will Terraform finish the work, which takes some time — a few minutes, five, ten minutes. But yeah, of course there's Terraform — how could you go without Terraform?
A
I also want to call out that it looks like your demo healed itself — that other cluster you were making earlier, the kubernetes-test cluster, looks like it's up and running. So yeah, it may have been a UI issue; it's nice to see that get resolved. So don't tempt the demo gods — no need to connect to it.
A
Just let it shine bright, nice and green, good to go. All right, so I think we've got time for maybe one or two more questions. You know, when people are looking at this and they're asking themselves how to get started — is it best for them to start with, like, a small POC, you know, get new hardware? What's the best way for the people that are watching this — maybe they're super excited, maybe they look at this and say: yes, this is the missing piece?
B
Good question. So I would recommend a similar basic setup like this one: just two machines for SoftGates, one machine for the controller, and two or three — just a handful of machines — for compute nodes. On our documentation page there is an install guide, a VPC-anywhere getting-started guide. It describes the concept and walks you, step by step, through how you install the controller; the controller installation is basically a one-liner that you copy-paste on Ubuntu Linux, or any Linux.
B
It will stand up a Kubernetes cluster and the Netris controller. Then you install the SoftGate nodes, which is again very basic. When you install the Netris controller you'll see the two SoftGate nodes — they will already exist there, but everything will be red; the heartbeat will be red, because you will need to actually provision those SoftGate machines, which is again a one-liner command you copy here and paste on the machine. Wait a few minutes until it installs, and that's pretty much it.
B
There are examples there. We're also on Slack, so people are welcome to join our community Slack channel, where other users and our engineers are — they're happy to answer all kinds of questions.
A
Awesome. And look, I think that was super dope — we got a nice demo. I mean, the big takeaways for me were: we can bring the cloud-like abstractions — the VPC — on-prem. Also, it doesn't seem like we have to throw away everything we have, so this works well in brownfield scenarios as well, where we already have a setup.
A
We can work with our team to give us a range of VLANs, give that to Netris, and then we have a great way of partitioning that network. This should sound real familiar to people coming from that VMware world, where, you know, you did the same thing for your hypervisors — but now, instead of just getting network automation, we're actually getting back to that world where you get that VPC and you have subnets to choose from. I don't know about y'all, but I remember the days of people trying to track all this stuff in a spreadsheet, and now we have a full console, we have visibility, and we have something that actually understands the Kubernetes networking model — not just at the node level, not just at the pod level, but even things like L4 networking, load balancers and health checks, and making sure that you can scale and remove things dynamically. So this is really dope, really wonderful. I want to say a big shout out to Alex and the whole Netris team who helped put this together.
B
Yeah, absolutely. Thanks, Kelsey, and thanks everyone who's watching this. Hey, give Netris a try — it's really easy. We are looking forward to your feedback: positive feedback, negative feedback — all feedback is helpful. We have the Slack channel, so if you go to netris.ai/slack, that's how you can join our Slack community channel. I'm on Twitter — Alex Saroyan, that's my handle; it's also on the CNCF registration page for this webinar.
B
If you're visiting KubeCon, please come visit our booth. We will have a demo, and we will have multiple environments where you will be able to play around with the product, talk to the people who built the product and give them feedback, learn from them, exchange experience. Looking forward to that.
A
All right, awesome. Thank you everyone for attending. I'll be around — I'm going to keep my eye on this in general, because I do think we need new abstractions. Kubernetes was a great abstraction for compute, and it's nice to see that we're pushing past automation to new abstractions — and maybe, just maybe, this VPC-anywhere concept will take off. We'll catch y'all next time. Later.