Why should you have to care where your infrastructure is when the infrastructure itself is being abstracted away? Even cloud providers don't solve this problem. Why can't you span a Kubernetes cluster across regions? Some cloud providers don't even let you span across availability zones within the same region.
Almost everyone wants a multi-location deployment for some reason, be it disaster recovery or high availability, performance or localization, perhaps mixing on-demand, high-cost resources with fixed, low-cost infrastructure; physical point-of-use or point-of-source requirements, for instance cash registers, data-collection hardware, or physical storage; or simply to avoid vendor lock-in. Some examples from our user base: a voice systems provider with high volumes and low margins needs to get the greatest value for its core workloads, which mandates bare metal infrastructure.
A large retailer needs to manage local compute resources for point-of-sale equipment, but their core applications and database run in the cloud. They want to have management-free resources in the store, controlled entirely by Kubernetes in the cloud.
A large public transportation company has a number of mandates which require them to use specific compute resources, store rider data in specific places, and run a large number of point-of-use display systems, which all need to be tied together.
Now, this isn't to say that it can't be done. Ultimately, the requirement for most CNI plugins is that each node needs free and direct communication to each other node. This can be achieved in a number of ways: full native IPv6, though good luck with the universality of this option, particularly in cloud environments (and make sure your firewall works).
This sounds like a job for WireGuard. Like IPsec before it, WireGuard seeks to provide secure transport between two endpoints. Unlike IPsec, it does this with much more standard tooling, is vastly easier to use, and is even higher performance, while offering better security and more modern encryption algorithms.
Talos Systems is building Talos, the Kubernetes OS, as well as a great deal of tooling to automate and manage large and disparate sets of compute resources on bare metal, on premise, and in the cloud. Our core product is an extremely lightweight, read-only, image-based Linux operating system, highly optimized for running Kubernetes.
The method we have developed is also entirely impact-free. We do not manipulate iptables, we do not manipulate the main routing table, and we don't interfere with Kubernetes' view of the node IP address in any way. We simply interact with the kernel's netfilter and core routing systems to redirect traffic to nodes and pods through the WireGuard interface.
Kubernetes also needs to know which traffic goes to the WireGuard peers, and because this information may be dynamic, we need a way to constantly update it and keep it in sync across all the nodes. If we have a functional connection to Kubernetes, this is fairly easy: we can just keep that information in Kubernetes.
Otherwise, though, we have to have some way of discovering it. In our solution, we use a multi-tiered approach to gathering this information. Each tier can operate independently, but the amalgamation of the tiers produces a more robust set of connection criteria. For this discussion, we'll point out two of these tiers: an external service and a Kubernetes-based system.
On top of this, we also route pod subnets. This is often, maybe even usually, taken care of by the CNI, but there are many situations where the CNI fails to be able to do this itself, especially when this is done across networks. So we also scrape the Kubernetes Node resource to discover its podCIDRs.
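That scraping step can be illustrated with a small standard-library Go sketch. The input strings stand in for a Node resource's `spec.podCIDRs` field; error handling is simplified to skipping invalid entries:

```go
package main

import (
	"fmt"
	"net"
)

// podCIDRsFromNode parses the podCIDR strings scraped from a
// Kubernetes Node resource into routable prefixes. Invalid entries
// are skipped rather than failing the whole sync.
func podCIDRsFromNode(raw []string) []*net.IPNet {
	var nets []*net.IPNet
	for _, s := range raw {
		if _, n, err := net.ParseCIDR(s); err == nil {
			nets = append(nets, n)
		}
	}
	return nets
}

func main() {
	// Values as they might appear on a dual-stack node.
	for _, n := range podCIDRsFromNode([]string{"10.244.3.0/24", "fd00:10:244:3::/64"}) {
		fmt.Println(n)
	}
}
```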
We need a way to handle any number of addresses and ports, and also a mechanism to try each of them, because WireGuard only allows us to select one at a time. For our implementation, then, we have built a controller which continuously discovers and rotates these IP:port pairs until a connection is established.
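A minimal model of such a rotation controller in Go. The round-robin policy and the type names are illustrative assumptions; the real controller also watches WireGuard handshake state to decide when to stop rotating, which is omitted here:

```go
package main

import "fmt"

// EndpointRotator models the controller described above: WireGuard
// accepts only one endpoint per peer at a time, so we cycle through
// the discovered ip:port candidates until a handshake succeeds.
type EndpointRotator struct {
	candidates []string
	next       int
}

// Next returns the endpoint to try on this reconciliation pass,
// advancing round-robin through the candidate list.
func (r *EndpointRotator) Next() string {
	if len(r.candidates) == 0 {
		return ""
	}
	ep := r.candidates[r.next%len(r.candidates)]
	r.next++
	return ep
}

func main() {
	r := &EndpointRotator{candidates: []string{
		"203.0.113.5:51820", // public address
		"10.5.0.2:51820",    // private address
	}}
	// In the real controller this loop runs until a WireGuard
	// handshake is observed for the peer.
	for i := 0; i < 3; i++ {
		fmt.Println("trying", r.Next())
	}
}
```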
This would be very cumbersome and slow to maintain in iptables. Luckily, the kernel supplies a convenient mechanism by which to define this arbitrarily large set of IP addresses: ipsets. Talos collects all of these IPs and subnets which are considered in-cluster and maintains them in the kernel as an ipset.
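Functionally, the ipset answers a membership question: should this destination be sent through the WireGuard interface? The Go sketch below models that question in user space (in Talos the kernel ipset does the real work); the CIDRs are examples:

```go
package main

import (
	"fmt"
	"net"
)

// clusterSet mimics what the kernel ipset holds: every address and
// subnet considered "in cluster".
type clusterSet struct {
	nets []*net.IPNet
}

func (s *clusterSet) add(cidr string) {
	if _, n, err := net.ParseCIDR(cidr); err == nil {
		s.nets = append(s.nets, n)
	}
}

// contains is the membership test the packet-marking rule performs:
// in-cluster destinations get redirected through WireGuard.
func (s *clusterSet) contains(ip string) bool {
	addr := net.ParseIP(ip)
	for _, n := range s.nets {
		if n.Contains(addr) {
			return true
		}
	}
	return false
}

func main() {
	s := &clusterSet{}
	s.add("10.244.0.0/16") // pod subnets
	s.add("192.0.2.10/32") // a node address
	fmt.Println(s.contains("10.244.3.7"), s.contains("8.8.8.8"))
}
```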
Routing rules are defined as a common, ordered list of operations for the whole operating system, but they are intended to be tightly constrained and are rarely used by applications. In any case, the rules we add are very simple: if a packet is marked by our nftables system, then it is sent to an alternate routing table.
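That rule can be modeled as a tiny lookup, mirroring how `ip rule` entries map a firewall mark to a routing table. The mark and table numbers below are illustrative assumptions, not the values Talos uses:

```go
package main

import "fmt"

// Model of the policy-routing rule described above: packets carrying
// our firewall mark are looked up in an alternate routing table whose
// routes point at the WireGuard interface; everything else falls
// through to the main table.
const (
	markInCluster = 0x20 // hypothetical fwmark set by the netfilter rule
	tableMain     = 254  // the kernel's main routing table
	tableWG       = 180  // hypothetical alternate table for WireGuard
)

// tableFor is the rule evaluation: an ordered check, like `ip rule`
// entries, returning the routing table to consult for a packet.
func tableFor(fwmark uint32) int {
	if fwmark&markInCluster != 0 {
		return tableWG
	}
	return tableMain
}

func main() {
	fmt.Println(tableFor(markInCluster)) // marked in-cluster traffic
	fmt.Println(tableFor(0))             // everything else
}
```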
The nodes simply cache the new data and reconcile when the internet comes back up, receiving any new workload orders at the same time. Even better, if the network doesn't come back up quickly, they can connect their router to an LTE modem and they don't even have to change anything else. The nodes just connect up however they can and continue working.