Description
Blog: https://everyonecancontribute.com/post/2021-02-10-cafe-16-kubernetes-deployments-to-hetzner-cloud-part-3/
Step 1: https://everyonecancontribute.com/post/2021-01-27-cafe-14-kubernetes-deployments-to-hetzner-cloud/
Step 2: https://everyonecancontribute.com/post/2021-02-03-cafe-15-kubernetes-deployments-to-hetzner-cloud-part-2/
Demo repo: https://gitlab.com/ekeih/k3s-demo
Follow Max Rosin on Twitter: https://twitter.com/ekeih
A: And we are live on YouTube. Welcome everyone, I'm looking forward to the third session on deploying a Kubernetes cluster to Hetzner Cloud. As you can see, Max has already prepared the demo and set everything up.
B: But I think today, in the end, we will have something that we can call a running cluster, where we can deploy whatever we want. Last time we had a goal — I think it was to run kubectl get nodes and kubectl get pods and see something — and I like the idea of having a goal and achieving it in the end. So today our goal is to have a cluster with 20 nodes and 2,000 pods running.
B: Most of the time so far we spent writing an Ansible role for this.
Today we want to continue with k3s, because last time we ended in a situation where the IP addresses were not correct yet and our pods were not running yet. To speed up the beginning a little bit, I already created the virtual machines today. So if we look in the dashboard, we can see we already have those servers — they are 18 minutes old, created just before we started the live stream.
B: All right, yeah — I often get the same IP again and again from Hetzner, so let's try this. All right, and last time, in the end, we had a running k3s cluster where we could run kubectl get nodes, and we were able to see that we have two pods, but they are not running yet; they are in this pending state. Today we want to explore a little bit why those are pending, find the reason why they're not running, and solve it.
B: We were also able to see that our public IPs are marked as internal IP addresses and there is no external IP. That also looks very wrong and we want to solve it. And we want to create some persistent storage, so that if we restart a pod, the data is still there. But before that, let's start by looking into those IP address issues and the pending pods.
B: Your only point to talk to is the API, and you're using kubectl to talk to the API. To use kubectl to talk to a server, you usually have a configuration file — the kubeconfig — which includes all the details you need to talk to your API: the IP address, and your client certificate or client token to authenticate.
B: Yeah, there is k3s.yaml — that's the configuration file — and we can also look into it.
B: If you have this file, don't share it, because there's a secret in there — here the client certificate, and under client-key-data the client key: all you need to connect to the API server. So it's a bit of a secret, but I am positive that during this one hour nobody will type it out manually from the stream.
B: Yeah — you'd have to type it out and then decode it. The interesting part for us is this server entry, and currently the server is localhost, because the file is only meant to be used on that machine.
B: I could put it on my computer and edit the file, but we are trying to automate as much as possible here, and if we create and delete this cluster again and again, we don't want to do that every time. During the last two sessions we already explored Ansible quite a bit, so it comes naturally to use Ansible for this.
B: So let's start a new role. Let's call it kubeconfig, and create the tasks folder again with one file. We are not going to cover this in great detail right now, because last time we already looked a lot at how Ansible works and how to write a role. I'm going to copy a few things in here — if you have any questions, just ask or write them in the chat — so I will go over it quickly. And we need one other thing.
B: And we are going to put this in there: the local path where I want to store this file on my machine. If you're trying this yourself, you can set whatever path you like here. So we have four tasks in our role. One is to check whether this file already exists locally, using the path we just set, and we register this check in a variable; the next three tasks then use that check.
B: We are going to replace the localhost IP with the IP of the first server in our cluster — the first host in the servers group; we use its IPv4 address to make our life a little bit easier. We are also replacing another string in there, because the context is called "default" and we want to call it "k3s-demo". We will see this name in a minute — so yeah, it's kind of nice to use Ansible to do this.
B: And we want to run it only for localhost. Even though localhost is not part of our Ansible inventory — our inventory only includes those Hetzner servers — localhost will work; that's a magic part of Ansible. Localhost always exists: it's always the machine you're running Ansible on. And we want to use our role, kubeconfig.
B: There — and now I should have three kube contexts. We are not going to dive into what a kube context is; let's imagine one kube context is like one kube configuration file, one cluster we are talking to. I have two other contexts — two other clusters — and the new one is k3s-demo. And with kubectx, which is a separate tool from kubectl but super useful, you can switch between those.
B: Then you would see that this one is selected, but today we just want to work in k3s-demo. The other one, the Hetzner k3s server, is set up in a very, very similar way to what we are doing in these live sessions. Now that this context is selected, we should be able to run kubectl get nodes — and there they are, also about 14 minutes old, so this is definitely our new cluster. And if we run kubectl get pods, that's there too.
B: If you're new to Kubernetes, it may be a little bit confusing when I type all those kubectl commands. That will come over time: you get used to it, you remember the important commands, and you don't have to remember all the options there are. There are also some graphical tools to explore your cluster, so you don't have to do everything with kubectl, but it's useful to know the basics.
B: One command we have already seen is kubectl get nodes — and maybe sometimes I use "get no" instead of "nodes", which does exactly the same. For a lot of the default resources in your cluster there are shortcuts, so you don't have to write out the long names. For nodes and pods there's not much of a difference, it's only a few characters, but for a few other things — for example network policies — it gets a lot easier.
B
If
you
are
just
have
to
write,
cube,
cta
to
get
network-
and
if
you
have
this
yeah
the
back
of
your
mind,
knowing
that
this
exists,
then
it
can
help.
So
please
don't
be
surprised
if
sometimes
I
just
write
keep
ctll.
I
get
note
it's
that
of
cuba.
City
notes
yeah.
Well,
so
we
can
use
this
cube.
Ctr
get
command
to
explore.
What's
going
on
in
our
cluster
and
cubecontrol
works
like
this
most
of
the
time
you
have
cubecontrol
and
then
a
verb,
for
example,
get
or
delete
or
apply.
B: We will look into apply — or create — later. After the verb you tell kubectl what kind of object you want to modify or get. That's why we tell it to get our nodes. We could also delete our nodes — we are not going to do that now, but it's possible, and then our list of nodes would only contain two nodes.
B
The
hedsner
server
would
still
exist,
but
the
cluster
wouldn't
know
about
it
anymore,
yeah.
That's
like
the
basic
way
how
keep
control
works
and
over
the
evening
we
will
learn
a
few
more
commands.
Sub
commands
of
cubecontrol
so
there's
another
concept
in
kubernetes:
it's
not
namespaces,
so
let's
do
this
get
namespaces
and
then
we
see
we
already
have
four
of
them
and
each
time
you
run
cubecontrol.
You
are
running
it
in
the
context
of
one
of
those
namespaces.
B: A few minutes ago I used a shortcut for all namespaces, which shows me all pods from all namespaces in my cluster. Here in front you can see the namespace: kube-system is running those two pods — or actually not running them, because they are still pending, but as objects they exist there. Sometimes I use the word resource or object — so what is it, actually? All this data about what's running in your cluster is stored by the Kubernetes API server.
B: Then the scheduler will decide where to run it, and we have those nodes running in our cluster — for example our agent-0 and agent-1 — and those are talking to the API server, asking: "Hey, should I run a pod? Should I do anything?" And the API server says: "Yeah, the scheduler decided you should run this pod." And then on that node we have the kubelet running.
B: So let's switch to a different namespace, kube-system. We can use kubens — it ships with the kubectx tool, which I used earlier to switch contexts. The advantage of switching to a different namespace is that now we can use kubectl get pods and directly get those pods without passing --all-namespaces.
B: First of all we have the name, and we have the namespace this pod is running in. Then we have some labels, no annotations. We have the status Pending — the same one we saw in the table earlier. Then we are able to see which containers are running in this pod: right now it's this one container, and we see it's based on this image.
B: You see a lot more things that we will ignore for now, and at the bottom we see a list of events — and they look a lot like errors. I mean, they're called warnings, but this is definitely more like an error. Our scheduler is telling us something: zero nodes are available. I'm not sure why it shows different numbers here, but, for example: three nodes had taint node.cloudprovider.kubernetes.io/uninitialized: true, that the pod didn't tolerate.
C: That's interesting. The other option could be timing: when you then spin up the remaining nodes, the scheduler goes through its loop again each time, and that could explain part of it. But the latest event is from 24 minutes ago — the oldest one — so in the end it would be three nodes, once it had found the first node.
B: Definitely copy-paste it into Google — okay, so especially when a pod is not starting.
B: Google it — we are not going to read the full page, but if you are new to Kubernetes you may have never seen this documentation before. That's the Kubernetes documentation, and it's a very good resource to learn about all the concepts and all the thousands of layers of abstraction in Kubernetes. The whole idea of taints and tolerations is that you can taint a node, and if a node is tainted, no pod will be scheduled to it — except if the pod tolerates the taint of the node.
B: Yeah, those are okay — for example, here we also have a warning, but we can ignore it for now. Especially during startup it can happen that there are some warnings, but if everything afterwards looks healthy, then maybe it was just a timing issue: something didn't exist or wasn't initialized during startup. So it's a very good sign that the last few events are, for example, that our agent's status is now NodeReady — that should usually be the state a node is in. But we are looking for taints.
B: So let's scroll up a bit. At the top again we have the name, we have some labels and annotations — we've seen this stuff a few minutes ago for the pod as well. Labels and annotations are very common on all objects in a Kubernetes cluster. And then we have a taint here — it's exactly the taint that we saw in our warning, or error, message. A taint is a key-value pair, very similar to the annotations and labels: there's always a key and a value.
B: Our taint has this name, and to make it unique you usually use a domain for it. So if you have a custom taint for yourself or your company, you would for example call it something like foo.everyonecancontribute.com/uninitialized, to make it unique and make sure nobody else uses it.
B: So the cloud controller manager provides this, and here's an example — I can zoom in a little bit — for example, that it will set the right public IP address. That's exactly the information we currently want to fix in our cluster. It adds a few more things, for example labels for what kind of virtual machine you are running and in which region it's running, stuff like that. And then there's an example of how to deploy it.
B: At the beginning of it are a ServiceAccount and a ClusterRoleBinding. We will ignore these; this is used to manage permissions in your Kubernetes cluster. If you are going to run Kubernetes in production, at some point you will have to learn about RBAC, which is the mechanism used to define roles and permissions in your cluster. For us it's more interesting to look at the deployment right now.
B: This is also explained a little bit further down in the documentation: change the cluster CIDR in the deployment file to fit your pod range. So what's our pod range? We should look this up, because at no point did we configure this. So let's google it: k3s networking.
B: It's a design principle for how you should design an application — oh, this is German now; okay, all right, now it's English — so that it's easy to run in a container: those twelve factors, making it easily configurable and so on. One of them is: store config in the environment. And Kubernetes is an orchestrator for containers, so all we do here is run containers.
B: Actually, I did this before our first session, but I showed it a little bit in the first stream: we have this token, and it's stored in this variable, so I can use it without showing you the API token — which is a little nicer than exposing the token in the stream. All right, so yeah: it needs this token to talk to the Hetzner API, and because we are deploying it with support for private networks...
B: ...we also need to tell it which network to use. So, this concept of the twelve factors — one of them is to put configuration into the environment. How is that realized in Kubernetes? In Kubernetes you have two objects for this. One of them is config maps, and we can see here that a few already exist. For example, let's look into this one, coredns — I think it's the configuration file for the CoreDNS pod. So far, when we used kubectl...
B: ...get, we used it like this, or we used it with -o wide. For config maps there is no wide output, so that's kind of boring — but instead we can tell it to print the object as YAML, and then there's a lot of stuff. There we can see the data that is stored in this config map, and it's basically just a text file — or not exactly: it's again some key-value stuff.
B: So there's a key — client-ca-file — and data stored under this key, and this is done several times. The CoreDNS pod uses this config map to get its configuration, and in the same way we want to do it for our cloud controller manager. But we are not going to do it with a config map; we are going to do it with a secret, and there's a bunch of secrets in the kube-system namespace.
B: What's the difference? The idea is that what you store in a config map is okay for everyone with access to your cluster to see — it's not secret, it's just configuration — and what you put in a secret is more like, well, secret, for example our API token. And then there's this huge "but" coming in my sentence: it's not really secret. Why not? Because the whole "encryption" of our secret is base64 encoding.
B: So if you just use a secret in Kubernetes, don't think your data is any safer than it would otherwise be. It's the same as storing it in plain text.
B: It protects you from a few things. For example, if you print it and someone is looking at your monitor, there's this huge blob of numbers and characters they can't directly read. But if they can copy it and decode it, or they have a screenshot and can type out all the characters, they are able to figure out your secret. So yeah, that's about it.
C: The reason it's not secure is, like Max said, that it's only encoded in base64 — that really only gives you a consistency check to ensure the data is intact. But the data can be saved encrypted in the Kubernetes database — encryption at rest — which you can enable additionally. So if someone is interested in the storage side, they can read about that.
B: Now, a secret can have a specific kind, and we want to create a generic secret. We want to give it the name hcloud. Why do we want to use this name? Because the upstream YAML example uses it — so let's just go with the same one, then we don't have to change it there. And then we want to say what information we want to store in the secret.
B: All right — and it was called network, and we want to use the network k3s. All right, now when we do kubectl get secrets again, we can see that there's a new secret, hcloud; it's seven seconds old. I'm not going to print it now, because it includes my API token, but it's there now.
B: Let's see — we put the right IP address in here, the file is saved, so now we use apply. The difference between create and apply: with create you pass what you want to create, for example a pod; with apply you tell kubectl to read all this information from a file, which is usually more helpful, because you don't want to type all of this every time. Instead you have your configuration in a YAML file, which you then apply.
B: All right, let's do kubectl get pods again — now we see we have three pods, and none of them is pending anymore. The old pods, metrics-server and coredns, are now running, and we have the new pod, hcloud-cloud-controller-manager, which we just created. What's interesting here: the resource we created isn't called a pod...
B
It's
called
deployment.
So
maybe
let's
try
to
look
if
this
is
something
that
exists
in
kubernetes
and
it
is
actually
for
the
other
two.
We
also
have
a
deployment
so
yeah.
There
is
one
abstraction
layer
after
another.
So
most
of
the
time
you
don't
manage
a
pot,
I'm
not
sure
if
we
also
looked
into
what
a
pot
actually
is
so
a
pot
is
a
group
of
containers.
B
The
group
can
contain
con.
The
group
can
contain
only
one
container,
but
it
also
can
contain
several
containers.
So
in
our
examples
right
now,
each
pot
contains
one
container,
but
the
pot
itself
can
again
be
part
of
several
things.
In
our
example,
it's
part
of
deployment,
so
there's
something
else
between
this.
For
example,
if
we
describe
this
pod
hcloud
controller
manager.
B: ...so a replica set — that also exists in Kubernetes. Why do we have so many layers and layers and layers? Why do we do this? To make it easier, actually. It might not appear that way, but the idea is to have several parts, where each part in your cluster takes care of one simple thing: the pod, for example, takes care of defining what containers should run, the replica set takes care of how many of them should run, and the deployment — yeah.
C: A beginner question, mostly: I was wondering how the pod now gets its IP address, so that it isn't in the pending state anymore.
B: Yes — last week, I think you weren't here. But we can look into it, because I think last week it was more like a magic part of what we did for everyone. We wrote our Ansible role to deploy k3s, and as an option there — the default CNI is Flannel, and...
C: Ah, okay, then I understand how it works. Because basically the other node has a different IP address, and it's interesting how the packet flows from one node to another. Typically the nodes use a technology mostly based on VXLANs: your IP packet on the node is wrapped into a new IP packet, transferred to the next node, and then decapsulated again. That's the reason the pod can receive its own IP address — via this virtual network.
B: A few days ago I retweeted something — so the content is not originally from me — called "understanding Kubernetes networking in a single picture", and it's like this; it sums it up pretty well.
B
So
yeah
niklas
gave
a
short
explanation
on
how
it
works
and
then
the
more
you
dive
into
kubernetes
the
more
things
you
will
discover
how
the
networking
works
and
how
it
may
work
differently
on
each
provider
and
what
weird
bugs
can
come
up,
but
most
of
the
time,
most
of
the
time
you
don't
have
to
worry
about
it.
So
that's
a
good
part.
Yeah.
C: Yeah — what I found interesting about networking: at the company we're using a hyperscaler, something like AWS or Azure, and when you're using their virtual network, all your pods are reachable by their IP address.
C: So if you, for example, have a typical virtual machine, you can go into that machine and ping a pod, or its port, directly — you don't have the virtual overlay network in between, so you don't need to do any tunneling or anything like that, because it mostly just requires a really flat network. If you want to do NAT or firewalling in there, you will have a lot more "fun" — mostly it's hell.
B: Most of the networking in a default Kubernetes cluster is done by iptables, and to give you a little bit of an idea, let's execute iptables-save on this node — well, that's less than I anticipated, all right, okay. If we deploy a few more things, this will get a lot longer, but most of what Kubernetes is doing with the network happens here, by setting up iptables rules for each container, for each port you start.
B: Okay — I was hoping that today we would create 20 nodes and 2,000 pods. I don't think we will make it; hopefully next time. But what we can do is create one load balancer, so we are able to reach something in our cluster from the outside — just a basic example of running a web application.
C: But for the next session we could probably use the cluster autoscaler to scale up to the 20 nodes, based on the 2,000 pods.
B: Yeah — if we prepare the autoscaler for it, then we can do this.
A: We also want to add storage volumes, as far as I know.
A: I think so. What I imagined — or dreamed of — was that we're building something up and we keep using it, for example with the GitLab CI/CD deployments. Because we have a running cluster, we can actually deploy something on top. Then we want to monitor the performance and the metrics and all this stuff; maybe we can even install Grafana Tempo for tracing and test that. It would be awesome to use what we have been building over the weeks.
B: Yeah — I actually really, really enjoy those Wednesday evenings; it's a fun time to build this.
B: Yeah — to create... right now we are just going to create a pod. Five minutes ago I said you usually don't do this, you create a deployment, but now we are actually going to do exactly that, because to play around and test stuff, sometimes you do create a single pod.
B: What we also have is a service, often abbreviated svc — the same thing. The idea of a load balancer in Kubernetes is that it exposes something to the outside of your cluster, and for this you want a public IP address that does not change; Kubernetes then takes care of routing the traffic from this public IP address to your actual pods in the cluster. That's something we will look into a bit more next week, and probably each time we work on this cluster from now on.
B: And this opens the YAML definition of this object — that's the definition of our service. As you already saw earlier, an object often has labels like these, it has a name, it has a namespace, and it often also has annotations. Our cluster is running in Falkenstein, which is a town in Germany, so we are going to use a load balancer in Falkenstein as well.
C: You maybe get a better TCP handshake time that way.
B: The cloud controller manager watches all services in your cluster, and if a new service of type LoadBalancer is created, deleted, modified, or anything else, it will make sure to update the configuration on Hetzner's side via the Hetzner API. So apparently it did something with the Hetzner API.
B: ...to expose a website running in our cluster to the public. There is no DNS, there is no SSL, but it works. That wasn't the goal for the day, but I guess it's a checkpoint where we can stop. So: which way is our traffic going now? That may be an interesting question, because I said it a few times today — abstraction layers, abstraction layers, abstraction layers — so maybe you're a bit confused.
B: This service is not something that really exists. I mean, it is there, but it's not like a running pod or a piece of hardware; it's only virtual — so let's put it in quotation marks. The representation of it is in iptables on those nodes; that's where a service is "created". With each service, the iptables list from earlier gets longer, and based on those iptables rules, the traffic is sent to the correct node in our cluster. What's the correct node? The correct node is the node...
B: But yeah — that's basically the start of getting traffic into our cluster. Next time, or the week after, we will put another step in between: it's called an ingress controller, and that way we can use a single load balancer for several websites. That's very nice, because if you look at the load balancer, you will notice that it costs almost 6 euros per month, and if you create a load balancer for each website you are running, this gets very, very expensive, very fast.
C: Yeah, that's the tool I missed. But the problem with load testing at this scale is that you probably need to inform the cloud provider — it's not something you should do lightly out of the box. In their SLAs, some of the cloud providers also mention that if you want to run really, really high traffic, you probably need to make them aware of it, so that you get better rules and so on. It's nothing you should just do out of the box.
B: Ah, that's what you mean — okay, now I understand, yeah.
A: Okie dokie — then thanks a lot, Max, for the educational session. Now I need to unpack everything in my mind, and hopefully next week we can jump into maybe the storage volumes; the ingress controller also sounds interesting, plus CI/CD, monitoring, and other things. The only thing is: next week I had to move the meeting to 7 p.m. — if that works for you.
A: So I will be here at 7 p.m., and if Max is here, we can do the live stream — and maybe follow up on the YouTube stream. If we decide during the next days that we want to reschedule it to two weeks from now, that's also no problem. So just let it sink in, have a great evening, and let's say bye on YouTube. Bye, bye — see you around!