Taylor: Hello, everyone, and welcome to Cloud Native Live, where we delve into the code behind cloud native. I'm Taylor Dolezal, Head of Ecosystem here at the CNCF, where I work closely with teams as they navigate their cloud native journeys. Every week we bring a new set of presenters to showcase how to work with cloud-native technologies. They will build things, they will break things, and they will answer your questions. In today's session we have Andrew Rinehart from Sidero Labs, and today Andrew will be presenting "Bringing the Edge into Your Data Center." This is an official live stream of the CNCF and, as such, is subject to the CNCF Code of Conduct. Please don't add anything to the chat or questions that would be in violation of that code of conduct. Basically, please be respectful to all of your fellow participants and presenters; be excellent to one another.
Andrew: Cool, thank you, Taylor. All right, so today I wanted to take us through a pattern that we're seeing emerge from our users, where, essentially, you have edge compute, but you don't have a whole lot of it. When it comes to Kubernetes, you want to have HA and all of these things, and it becomes much more expensive; and when things go wrong, things can go really wrong if your control plane is out there. So what we're going to do today is deploy a control plane in Vultr, add some nodes to that cluster that are running in DigitalOcean, and then add a Raspberry Pi to the cluster.
Andrew: So you can imagine that you have maybe a cluster that is hosting Argo CD or Flux, which is responsible for deploying an application to your edge machine, and then your edge machine is reaching out to what people, I'm finding out, are calling "cloud services", but they're not necessarily, you know, the AWS or GCP services. They are the cloud services that are running in Kubernetes: the workloads are actually in the cloud, and the edge is reaching out to them. So, let's see, am I sharing my screen yet?
Andrew: Cool. I'm going to take a little bit of a shortcut today, because we could have easily made this stream about how you actually even go about connecting these remote machines, because there are some networking implications in this design that I'm talking about that make it very, very difficult. As I mentioned, we're going to have machines in Vultr, we're going to have machines in DigitalOcean, and we're going to have a completely private machine in my closet. So how does that work?
Andrew: WireGuard is a new technology that's built into the Linux kernel, and we have support for it in the Sidero Labs product, Talos Linux. In Talos Linux it's just a simple boolean value: you just say "enable KubeSpan" and all of this automated WireGuard networking gets set up for you. I didn't want to go through that manually, so I'm sort of cheating here a little bit, and I'm also using another product of ours called Omni, which is going to allow me to manage these machines regardless of where they're at. What you're looking at here currently is about seven machines.
Andrew: We have these ones with the generated hostname, since Vultr doesn't do DHCP, it seems, but we do have three of these machines running in Los Angeles. I should point out: that's in Los Angeles, this is in San Francisco, and I'm in Santa Barbara. Those three machines will serve as the control plane, and then we have this demo sbc-01. Sounds fancy, but it's just a Raspberry Pi 4 running in my closet here.
Andrew: Let's look here... cool. So, Talos Linux is completely driven by a configuration file that you supply to it via the API. For those of you that don't know what Talos Linux is, it's a Kubernetes, or rather a Linux distribution that is built explicitly for running Kubernetes. You can only communicate with it via an API.
Andrew: We don't have Bash or SSH or anything like that, and so I'm going to use the API in this product to push this configuration to this machine when we're setting it up. All I'm doing is setting up the networking according to how Vultr tells me to, and you can see here this "kubespan enabled: true" that I was talking about earlier. This is all that is needed to do the automated WireGuard setup.
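The toggle Andrew points at is a single field in the Talos machine configuration. A minimal sketch of such a config patch (field names per the Talos machine-config schema; the surrounding document is abbreviated):

```yaml
# Talos machine config patch: turn on KubeSpan, the automated
# WireGuard mesh between all nodes in the cluster.
machine:
  network:
    kubespan:
      enabled: true
```

A patch like this can be supplied when the machine config is generated, or applied to a running machine over the Talos API.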
Andrew: Cool, so we'll do Talos 1.2.0 and Kubernetes 1.24.4. We will call this "livestream", and I will create the cluster. What's really cool about this is that we are actually already leveraging WireGuard. It isn't the KubeSpan WireGuard network, but it is what we call SideroLink, which is basically Talos streaming logs to this service for us. So I don't have Talos API access, in theory, at the moment.
Andrew: So as long as your edge machine is going to have egress capabilities and internet, you'll be able to reach the Kubernetes control plane, and it will all be secured over WireGuard. Cool. So the next thing I want to show is: let's just add some machines here. Let's just imagine we have a control plane currently running in Los Angeles, and let's say that your customer has an edge site somewhere in San Francisco.
Andrew: These SFO machines are just going to serve something; in our case today we're going to use Redis, but just imagine that they supply some kind of support to the edge machine, so that you don't need to run as much there. So we're going to set up these three as workers, and the only thing that we're going to need to do on these machines is patch in the KubeSpan capabilities.
Andrew: Cool, we'll add those nodes, and all that we're left with now is the edge machine. It is technically, physically, in Santa Barbara, because I don't have an edge machine running in San Francisco somewhere, but let's just imagine that this is the edge machine close to these support worker nodes.
Andrew: All right, cool. No, it is not ready. All right, it looks like things are going to chug along just fine. So one of the things that you need to be concerned about, that we're seeing our users at least be concerned about at the edge, is: how do you encrypt the disk? And thankfully, this is simple enough with Talos.
Andrew: We're going to patch in some encryption. We're going to use LUKS to encrypt what we call the system disk within Talos, which is the disk where, more or less, ephemeral state lives.
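A hedged sketch of the encryption patch being described, using the Talos systemDiskEncryption config (the nodeID key type derives the key from the machine's identity; treat the exact stanza as illustrative rather than the one used on stream):

```yaml
# Talos machine config patch: encrypt the system-disk partitions with LUKS2.
machine:
  systemDiskEncryption:
    state:                # the STATE partition (machine config and identity)
      provider: luks2
      keys:
        - nodeID: {}      # key derived from the node's identity
          slot: 0
    ephemeral:            # the EPHEMERAL partition, where mutable state lives
      provider: luks2
      keys:
        - nodeID: {}
          slot: 0
```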
Andrew: This is something that you probably want to be thinking about if you're doing an edge machine, because people can walk away with these machines, and that's a common thing that does happen. So you can see we have KubeSpan enabled, we're going to do system disk encryption, and I'm going to add that node. So this machine running in my closet should be joining this cluster here very, very shortly, and it looks like the ones in San Francisco are up. Let's just double-check in San Francisco.
Andrew: To be honest, I think that that is a weakness in this space, and I think these hybrid sorts of approaches are rare because it's been hard. But if you look at WireGuard and what it has enabled (just take, for example, the folks at Tailscale and what they're doing), the new types of things that we're going to see are, I think, going to push the boundaries of the current implementations of things like storage, and so my hope is in the future.
Andrew: We can have storage that is able to be sort of geographically aware, but yeah, it's not really the case today. So the recommendation, I would say, is sort of the pattern that we're doing here today: we're going to have Redis, let that be backed by local storage, some CSI that you decide to supply to the cluster, and then have your edge machine sort of store its data there, if you will. Okay, cool.
Andrew: I'll pause here for any questions at this point, because we're going to sort of shift. We've bootstrapped this cluster, and I've already explained sort of the architecture that we're seeing people use, the pattern at least, and now we're going to actually prove that this can be functional.
Taylor: Cool. If you do have any questions, please feel free to throw those into the chat and we'll get those questions raised up. And then we had another request just to make the text a little bit larger in the terminal. Is that better? Yeah, I'd say just a little bit more would be fantastic.
Taylor: So we did just get one question asking: are there any plans to release this for air-gapped environments? Yes...
Andrew: Absolutely. It is a single Go binary. It's going to have etcd baked in, so you'll be able to run this in an HA fashion, so yeah, it will be coming shortly. But at the moment it is currently just a SaaS. Cool, cool. So we have all of our machines, and at this point I'm going to do some labeling on these ones that are running in DigitalOcean so that we can do some node affinity.
Andrew: This is something you're going to have to be really aware of when you're talking about the edge in this pattern, because imagine that this edge worker, this sbc-01 here: let's imagine we have five of them, and they are different customers, potentially, or different locations. So you're going to have to really utilize labels and node affinity to make sure you have the workloads running where you need them to run.
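Node affinity of the kind Andrew describes might look like this in a pod template; the node-role.kubernetes.io/redis label is the one applied to the DigitalOcean workers after the fact in the demo, so treat the key and structure as illustrative:

```yaml
# Pod-template fragment: require scheduling onto nodes carrying the
# "redis" role label (the DigitalOcean support workers in this demo).
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/redis
                operator: Exists
```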
Andrew: So if we get our nodes now, we can see that these ones now have a role of "redis" (an arbitrary name). But something that I want to also point out, which is interesting here: oh, the internal IP, that's what's interesting.
Andrew: It's 10.1.2.4, and it's a /20, and at home it's 192.168.1.1/24. So you've got to be aware that you're not going to get networking collisions here, because of what's going to happen under the hood with, at least, Talos (and I think anything that isn't necessarily Talos but implements something equivalent): you're going to have some potential for IP collisions here. So that is something you have to be aware of in this model as well.
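One way to keep control over which address each node advertises, and to sidestep the collision risk being described, is to pin the kubelet's node IP to a known subnet in the Talos config. A sketch, with the subnets as placeholders you would adjust per site:

```yaml
# Talos machine config patch: only advertise a node IP from the
# expected private range, so overlapping site networks don't collide.
machine:
  kubelet:
    nodeIP:
      validSubnets:
        - 10.1.0.0/20      # e.g. the cloud provider's private range
```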
Andrew: Yeah, they're not really happening yet. I think what we're doing here at Sidero Labs is sort of pushing the boundaries. With something like Talos Linux we can really start to imagine new types of architectures and new ways of doing things that just aren't there yet, so we're a little bit ahead of the curve here, I think. The conversations aren't happening, but we plan on spearheading those conversations, and they need to happen in multiple places. Storage is one, if you even just look at the ecosystem within Kubernetes.
Andrew: They don't really support this notion of hybrid infrastructure, if you will; this idea, like what we're doing today: I have machines that are running in Vultr, I have machines that are running in DigitalOcean, and I have a machine that's running on premises.
Andrew: If you look at tooling, cloud controller managers for example, they don't have the notion of this. How do you actually do that? Look at Cluster API: it's not a native idea within the Cluster API world. So we're still very early on here, to be honest, and the conversations aren't necessarily happening at the level that I think we should, but we're proving it out here, that it can be done, and we plan on driving those conversations.
Taylor: Cool, cool. Yeah, it's an exciting space. I'm really curious to see, as we move to the edge, what is unlocked. I know there are still many other problems to solve too, like multi-cluster federation and all of those kinds of concerns, as we start to go multi-cloud and multi-context on that front. Exactly. I did get one more question in, and that was: does latency between workers and the control plane matter?
Andrew: Not too much. What does matter is the latency between etcd members; that is the most important thing, and so in this model you can see that I'm deploying the control plane nodes right next to each other. That's important!
Andrew: The amount of traffic that's going back and forth between the kubelet and the API server is pretty minimal. It really is just enough to check the state of the world and make sure that the configuration is pushed to the machine, and so it's not necessarily in the hot path as far as latency goes.
Andrew: So that is actually a very, very common question; I'm glad someone asked that. Now let's figure out why this isn't working. The joys of live demos.
Andrew: That's fine, we can live without it. I would have loved to have shown off talosctl a little bit, but we can figure that out later. In the meantime, let's just actually start crafting some of the things that we're going to need for deploying Redis.
Andrew: So my idea here is that we're going to run Redis as a Deployment. We're going to target the nodes that have the role "redis" (those are running in DigitalOcean), deploy Redis there, and then expose an internal service which is going to be routable from the edge machine thanks to WireGuard, or KubeSpan. Even though this machine is completely private, we'll be able to reach this service and prove that this is a model that will work for us.
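The "internal service" sketched here is just a ClusterIP Service in front of the Redis pods; with KubeSpan meshing the nodes, the edge machine can reach it like any in-cluster client. A minimal sketch, with the names and selector as illustrative assumptions:

```yaml
# ClusterIP Service fronting the Redis deployment; reachable from the
# edge node over the KubeSpan/WireGuard mesh.
apiVersion: v1
kind: Service
metadata:
  name: redis
spec:
  selector:
    app: redis          # must match the Deployment's pod labels
  ports:
    - port: 6379
      targetPort: 6379
```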
Andrew: So I think what we'll do is... how are we doing on time? We have about 40 minutes, okay, cool. The reason I chose Redis is because I know Kubernetes has a sort of how-to on running Redis, and so we're just going to work from there to show how we can build this up from scratch.
Andrew: So let's just start with the Pod. What I actually want to do is a Deployment, so we'll just change this to apps/v1, and then I believe what we need is a spec.
Andrew: We'll just do one replica for now, because you could figure out how to scale this out later. If we have time, maybe that's what we'll do, but I'm just going to use this label here to set up my template, and then let's get these things going.
Andrew: Cool, so I mentioned node affinity. Node affinity, again, is going to be an important idea here, and you're going to want to make sure that you have it on all your workloads when it comes to this model.
Andrew: What we're going to do is set up some node affinity here, and I'm keying in on this node-role label that we did earlier. By the way, this is sort of a special one: the kubelet does have the ability to set labels on nodes when they come up, but this one is actually not allowed, because you could escalate a machine to become a control plane node using just this label, so they don't allow it to be set.
Andrew: So this labeling will have to be done sort of after the fact. But anyway, here's the affinity: we're going to target those DigitalOcean machines and we're going to run Redis. You can see that we're referencing a config map here; I believe they have that somewhere around here.
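The config map being referenced follows the upstream Kubernetes "Configuring Redis using a ConfigMap" walkthrough; roughly along these lines, with the contents as an illustrative sketch rather than the exact file from the stream:

```yaml
# ConfigMap holding a redis.conf fragment; the Deployment mounts this
# and starts redis-server pointing at the mounted file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-redis-config
data:
  redis-config: |
    maxmemory 2mb
    maxmemory-policy allkeys-lru
```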
Andrew: And we have a bunch of security contexts here, because Talos is going to secure Kubernetes for you out of the box. It's going to apply STIG hardening guidelines, CIS benchmarking guidelines, and our own sort of personal opinions on things. It's going to be a hardened version of Kubernetes, which I highly recommend for the edge and this pattern as well, because there are just so many moving parts in this, and so that's where all of this extra security context stuff comes from. It's basically the most least-privileged pod.
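A "most least-privileged pod" in this sense usually carries a container security context along these lines; a generic hardening sketch, not the exact stanza shown on stream:

```yaml
# Container-level securityContext for a least-privileged pod:
# non-root, no privilege escalation, read-only root FS, no capabilities.
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  capabilities:
    drop: ["ALL"]
  seccompProfile:
    type: RuntimeDefault
```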
Andrew: So the most important one is that we enable KubeSpan. I'll just start at line 58, since I already have it here, and we'll go down to network.
Andrew: All right, cool. So we're getting some peers at this point; I wonder why.
Taylor: I did have one question from someone who just really wanted a recap on some of the edge devices that you had; just the clouds and the edges.
Andrew: Oh, sure. So what we're running here is a seven-node cluster. It is split across Vultr, where I have these machines running as my control plane; we have three of them, and these ones are in Los Angeles, by the way. We also have three droplets running in DigitalOcean; these are running in San Francisco, and they will serve as sort of the support nodes, if you will, for our edge machine. And our edge machine is this sbc-01 machine, which is running in my closet, on a private network here at home.
Andrew: Cool, so let's...
Andrew: Yeah, that's a good question. I think that they both are after the same end goal, but the advantage within Talos is that it starts very early. It's not dependent on Kubernetes' status; it's not dependent on the kubelet being up. This is just baked in, and you get it as early as possible. We also have access to host-level networking and routing, so we can set up routes and everything that we need, so that even host-level traffic can go over this WireGuard network.
Andrew: So those are some of the advantages, and it's being leveraged here already in this Omni product that I'm using, which would not otherwise be possible. Because if you think about it, these edge machines, and the fact that I can remotely manage them using this product, are entirely dependent on the fact that Kubernetes is not in the hot path. If I had to bootstrap Kubernetes on the machine and do all these things, it implies that I need to be able to talk to that machine, and I would need to get Cilium installed and do all of that.
Andrew: Okay, so I think what we're going to run into here is... well, an issue. Oh, I can schedule it, but I can't reach it.
Andrew: Okay, well, let's just do something: let's just create a droplet somewhere else. I think I only have this in a few regions.
Andrew: New York. So what I'm going to do here is spin up a machine in New York in DigitalOcean; it's now going to represent my Raspberry Pi. I confirmed with my team in the background that patches were broken between my testing and now, but that's why we do this, and that's why it's still alpha. So we'll still be able to prove this out.
Andrew: What I'm doing is booting a custom image that you can download from here. So if I just go here and I say DigitalOcean, I can download this image, and it's pre-configured to set up WireGuard so that we can reach this machine and start managing it from here. In fact, it's already joined; this is the one in New York right now, and what we'll do is join it to the cluster.
Andrew: Config patching: that's fine, I'll have a public network and I can patch it in manually. So let's add that node, and then what we'll do in the background here, while we wait, is... I now need to... what's the hostname of this machine? I believe it was demo-nyc3-001. Good one.
Andrew: Cool, so we still have the one in San Francisco. Let's just check in on demo-nyc3-001. The node isn't ready, but it should be there. Okay.
Andrew: So all I've done is spin up a new machine. This might actually be good, because you can see how things could go wrong, and how you could just spin up another machine and replace it really quickly. This is why we love Kubernetes. So we have that pod; let's exec into it.
Andrew: We have our control plane running in Los Angeles, and we do that because we want the latency between etcd members to be as minimal as possible. Thanks to WireGuard and KubeSpan, we're able to add some machines that are running in San Francisco, which act as these support machines, offering storage or something to that effect, some service that the edge machine might need. We planned on using a Raspberry Pi in my closet, but we ran into some trouble, so to replace what that was representing, I just went ahead and spun up a node that is running in New York, and using WireGuard we're able to have a completely meshed network between all of these.
Andrew: So that's where I wanted to get to with sort of cheating, but we didn't necessarily cheat; we had to do some workarounds and patch things in manually. And then, if we have enough time, which we might, I was going to maybe look at how we can use Argo CD or Flux to deploy this, or maybe even challenge this thing of, you know, storage running at the edge.
Andrew: Let's see, we have about 10 minutes. Are there any questions, anything that I can answer?
Andrew: So I want to just maybe expand on this a little bit. There is another pattern to this. As you could tell, I was doing everything sort of by hand with kubectl and applying these, but with the goal of being fully automated, we're starting to also see that things like Argo CD and Flux become really, really powerful here, and one of the ways that we can deploy that stack automatically is with Talos.
Andrew: We support the ability to supply manifests at the bootstrapping time of the cluster. So that means that deploying your clusters can be as simple as what we did today: we just supplied patches that might do specific networking that I might need, enabling KubeSpan if we need to, and then having your workloads, with these node affinities and all of these things, be managed by Flux.
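The bootstrap-time manifest support mentioned here is exposed in the Talos cluster config as extraManifests (URLs fetched at bootstrap) and inlineManifests (YAML embedded in the config). A sketch of seeding Flux that way; the URL and manifest contents are placeholders, not the demo's actual values:

```yaml
# Talos cluster config fragment: apply extra manifests once the
# control plane comes up, so GitOps tooling is present from boot.
cluster:
  extraManifests:
    - https://example.com/flux-install.yaml   # placeholder URL
  inlineManifests:
    - name: flux-namespace
      contents: |
        apiVersion: v1
        kind: Namespace
        metadata:
          name: flux-system
```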
Andrew: I'm not necessarily opposed to the idea in theory, but I just think it's untested. Cilium and Talos is a very common combination, maybe partly due to the fact that both are sort of more advanced technologically and more cutting-edge, if you will, and those types of folks tend to want to run those things together.
Andrew: I managed to get one before the big shortage.
Andrew: Cool, let's do flux check --pre. Cool, the prerequisite checks passed. Let's bootstrap.
Andrew: What did I name it? flux-demo. Wow, that's actually a really nice experience. Good job, Flux.
Andrew: So what I will do is add things to this now and see if we can push it up and roll it out. I'll delete them first, actually, too.
Taylor: There was a question: once deployed, is there a UI for performance and management? I'm not sure if that's in reference to Flux, Steven Francis, but let us know on that front. I know with Flux it's just using the CLI, and I'm not sure about the other stuff. I'd assume that that is the case, right, Andrew, as far as the SaaS and everything goes with Sidero Labs?
Andrew: A UI for performance and management? I mean, it depends on what you're talking about. I don't know what Flux does, but I know that we have some stuff here. So if we go into this cluster, we can click on a node, for example, or this is just the cluster overview. I would recommend deploying something like Prometheus and Grafana to do full-on metrics that you can alert off of and whatnot, but this is...
Andrew: This can be useful just to see what's going on during bootstrapping. If we wanted to look at a node in particular, this is using all of the Talos Linux APIs at this point, so we're able to get a list of processes and a list of services, and if I wanted to look at etcd logs, I could do that here.
Andrew: Well, yeah, I don't think I'll be able to get too far on Flux. It looks like it failed anyway, and I'll have to dig into that, but I was prepared for that. This was something I wanted to test with just a few minutes left, but yeah.
Andrew: I think that that's it. So just to summarize: we're using Talos Linux and KubeSpan, which leverages WireGuard under the hood to connect machines that are on disparate networks, so that we can have a fully meshed Kubernetes cluster where the control plane is running in one region of the world. I can have sort of a support workload running close to my edge machines, and my edge machines then can be smaller; I need fewer of them, and fewer things can go wrong.
Taylor: Infinity is a fun number, though. Awesome, awesome. Well, thank you so much for stopping by Cloud Native Live today, Andrew. It was really great to get a sense for how we can expand the edge. You know, every cloud has a silver lining, and it seems like that's the edge, the more that we get to see it. So, thank you. Thank you so much for joining today.
Taylor: Thank you to all of you viewing today, or watching this a little bit later, for joining the latest episode of Cloud Native Live. We really enjoyed the interaction and all the questions that you all had to ask. Thanks for joining us today, and we hope to see you again soon. With that, I wish you well and hope you have a wonderful week. Thank you all so much; have a good one.