From YouTube: 24. #everyonecancontribute cafe: Automate Kubernetes setup deployment with GitLab CI/CD
Description
We are learning how to deploy and secure Kubernetes into Hetzner cloud in this series.
- This week: https://everyonecancontribute.com/post/2021-04-07-cafe-24-automate-kubernetes-deployment-ansible-gitlab-cicd/
- Last week: https://everyonecancontribute.com/post/2021-03-31-cafe-23-automate-kubernetes-setup-hetzner-firewall-feature/
A
Okay, now we are live on YouTube. Hello again to our 24th #everyonecancontribute café. For me it's the vacation edition, but for Max it's like we're gonna continue what we've learned thus far of the K8s setup and the automation, and rumors do tell that we are combining it this week with GitLab CI/CD. But I don't want to spoil anything, so I'm looking forward to it and hand it over to you, Max, unless someone else has something against it. No? So please go ahead.
B
All right, yeah, welcome. Today we want to take all the stuff we built in the first few sessions and put it into one GitLab CI pipeline, and here it's mostly: we do it live, just trying to figure out how it works. I tried the first few steps yesterday in the evening to make sure it's not going to be a disaster, but most of it we just need to try out. And I think at the core here, everyone knows at least the same amount of stuff about GitLab CI as I do.
B
One thing I wanted to add to last week, because I think we were talking about the firewalls, and we came across this topic where I told you that in my case the WireGuard connections did still work after I applied the firewalls in my private cluster. Niklas's first reaction was connection tracking, and my answer was: yes, I thought so as well, but I restarted the nodes and it still worked. And yeah, by now.
B
I figured out that the cause is still connection tracking, but not the connection tracking on our nodes, because this firewall isn't even applied on our nodes; it's applied in front of them, in the managed firewall from Hetzner, so it doesn't care what we do with our nodes. If we reboot them fast enough, the connection can be established again. So most likely, if we shut it all down, wait for a bit and start it up again, the traffic would be blocked as expected.
C
So I've also a question regarding that, or probably an idea, because I was wondering last time: if you're using Hetzner and you spin up a new machine, for example, you mostly get the same IP address every time.

That could also be a reason: when you are in the time window where you get a recycled IP address, if you are fast enough you get the same IP address for the machine. If you are waiting longer, you will get a new one. It's probably the same behavior on the firewall.
B
Yeah, that's about firewalls, so that's definitely interesting, and our first instinct about connection tracking was right, but at a different place than usual. All right. So if you didn't attend last week, just forget what he said. So far we have Terraform to set up our machines and our networking on Hetzner, and then we have Ansible to set up the k3s cluster in there, and then we have a few.
B
I think we have a few YAML files with just some Kubernetes manifests we applied with kubectl apply, and right now I'm not even sure if we also had some Helm charts already; I think we had not, not sure right now. Today at least we will put Terraform and Ansible into the GitLab CI, I guess, and if we are fast enough we may add the installation of some Kubernetes manifests as well, not sure, yeah.
B
Let's start. It will probably start a little bit slow because I'm super exhausted, but we will see. All right.
B
So you should be able to see the repository on gitlab.com right now, and I prepared a little bit: under Settings, CI/CD, Variables, I added an hcloud token and an SSH key. The token has read and write access to the Hetzner API in our everyonecancontribute project, and then there's also a private SSH key. We use Terraform to deploy the public key to our nodes, and then we use Ansible with this private key to access the nodes. Short question?
B
That's also a thing we should be aware of, right: the SSH key is a file. So what GitLab does here is put the content of the variable into a file and put the path to that file into the variable, which is nice, because usually an SSH key is something we want to use as a file when we pass it to another program, and we will pass it to Ansible. But yeah, one of those is in capital letters and one is not, which is a bit random.
B
I don't want to change it now, because I think then it would reveal the values. All right, but we have those two variables in here, and yeah. One thing I also changed is in the general pipeline settings: I disabled public pipelines, mostly because when you enable public pipelines, everyone is able to see artifacts of the pipelines, and later on we will put some things in the artifacts we don't want to have publicly accessible, only for members of the project. All right.
B
Yeah, I don't know if it's always enabled by default or if I enabled it at some point in the past, but yesterday, because later on we have some step in the pipeline which puts the kubeconfig in an artifact so we are able to download it, I was thinking: why is this artifact public? And it was, yeah.
A
Okay, but joking aside, today I figured out we have the gtld.cafe available, or it's like you can buy domains, and so I registered one. I just need to figure out how to use it.
D
Are you one of the people who get a domain and then wonder: what am I going to do with it now?
D
I feel a bit called out here.
B
All right, so we are back with a shell. Yeah, I don't know, maybe I should stop doing stuff with computers. Anyway, I will assume a little bit that people know what a CI pipeline is, and maybe even worked with GitLab before. So if you watch this video afterwards and have no idea what a pipeline is, then maybe look it up first and then come back and watch the video.
A
I will be adding the URLs to the docs in the blog post and link the blog post in the video description, and then we'll save. Okay.
B
We will create several stages in our pipeline. Let's define them here.
This might be a little bit unusual for a CI pipeline, but our main goal for now is to have a way to create a Kubernetes cluster with one click. So for now it's like: we just click in the web interface of GitLab, and then we get a cluster back. Later on, if we want to, we can change the pipeline a bit to also apply all changes with each commit, or something like this.
B
All right, and for Terraform to pick this up from the environment, we need to call it TF_VAR in capital letters and then the name of the Terraform variable, and there we put our hcloud token environment variable. Then we define a few more Terraform settings: we set the Terraform root directory, and there we can use a predefined variable which comes with GitLab CI, which is the project directory.
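The variable wiring described here might look roughly like this in `.gitlab-ci.yml` (a sketch; `HCLOUD_TOKEN` is the CI/CD variable mentioned earlier, and the `terraform/` subdirectory is an assumption about the repository layout):

```yaml
variables:
  # Terraform picks up any TF_VAR_<name> environment variable
  # (capital letters) as the value of the input variable <name>.
  TF_VAR_hcloud_token: ${HCLOUD_TOKEN}
  # Root of the Terraform configuration; CI_PROJECT_DIR is predefined
  # by GitLab CI and points at the checked-out project.
  TF_ROOT: ${CI_PROJECT_DIR}/terraform
```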
B
Okay, but then we needed to set it in capital letters, probably, right?
B
I can close the terminal for now, and we can see here at the top we have created this test stage. All right, and we extend our Terraform template, so right now it's the same as if all this stuff would be down here as well. Then we create a script to tell GitLab what we actually want to run: we want to run gitlab-terraform init and gitlab-terraform validate. gitlab-terraform is a minimal wrapper around the actual terraform binary.
B
I haven't looked into it much; I think it does some stuff like automatically passing the parameters to auto-approve the changes, so you don't have to type yes if Terraform wants to do any changes, and stuff like this. They provide it as part of their Terraform image, and there's Terraform documentation on GitLab.
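A minimal validate job along these lines could look like this sketch, assuming GitLab's Terraform image and the `test` stage from the session:

```yaml
validate:
  stage: test
  # GitLab's Terraform image ships the gitlab-terraform wrapper.
  image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest
  script:
    - gitlab-terraform init
    - gitlab-terraform validate
```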
B
C
B
B
B
And
then
we
can
that's
why
we
create
the
run
terraforms
plan
json
right,
because
we
can
now
tell
gitlab
that
we
have
a
report
in
here
and
it's
a
terraform
report
based
on
this
plan,
because
then,
if
we
create,
for
example,
merge
request
on
gitlab,
it
will
display
yeah.
This
match
request
will
do
the
following
terraform
changes:
it
will
create
five
resources,
modify
one
and
delete
none
or
something
like
this,
so
you
have
as
a
reviewer
and
gitlab.
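Wired up as a job, the plan report could look like this sketch (the paths under `TF_ROOT` are assumptions; `gitlab-terraform plan-json` writes the JSON representation the merge-request widget reads):

```yaml
plan:
  stage: build
  script:
    - gitlab-terraform plan
    - gitlab-terraform plan-json
  artifacts:
    paths:
      # The binary plan file, reused later by terraform apply.
      - ${TF_ROOT}/plan.cache
    reports:
      # GitLab renders "N to add, M to change, K to delete" in the MR.
      terraform: ${TF_ROOT}/plan.json
```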
B
I'm thinking, in our case I think it wouldn't matter. Maybe we can try to remove it and see if it still works, so we don't overcomplicate stuff for now. All right.
B
Let's look into this, for example, what we could do. Right now we don't do this, but we could change it so that the terraform apply is only run if we manually click here to run it. We could check the diff and then, if we agree with it, click to apply it. But for now we can just go and always run all the jobs.
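The manual-gate variant described here is a one-line change, sketched below:

```yaml
apply:
  stage: deploy
  script:
    - gitlab-terraform apply
  # Pause the pipeline until someone reviews the plan and clicks
  # the play button in the GitLab web interface.
  when: manual
```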
B
But it has to talk to the API during the plan, right, because it needs to check if it needs to change stuff. Yeah, that's correct. Okay, I mean, it only accesses it with read access, so if our token had read access but not write access, then it could happen. Okay, now our plan does not work anymore, because we have this stored plan in our artifact and it does not match the actual state anymore, because the first few things already got applied.
A
But
it's
good
to
see
that
it
fails.
So
you
know
when,
when
you're
in
a
like
in
a
mixed
deployment
state-
and
you
try
it
again-
you
need
to
recreate
a
plan
and
then
do
a
diff
and
apply
it.
B
Renovate, we can do a separate session about it, or maybe Morrow wants to do a session about it because he uses it as well, which is super nice. We can even use the GitLab integration of it.
B
Nice, because it's like the best tool ever. It will check if there are any updates available for dependencies of your Git repository, be it Python packages, Ansible roles, Docker images, Helm charts, everything.
A
I think Niklas mentioned that several months ago; I remember that name. It was just a thought of mine, saying: hey, I probably want to install security updates and other things automatically.
B
It will always create a merge request for you with the changes, and you can even tell it to auto-merge if it's just a minor version or something, and that's extremely helpful. For my private stuff I get around 10 merge requests each week, and I would completely lose the overview of this without Renovate, because everything would just be outdated. So now I just get automated merge requests telling me: here, there's an update.
B
There's an update for the Python image your Ansible image is based on, and all this stuff. We started to use this in my previous job, with two repositories, and then we added it to all of them. You can host it yourself, or you can use the hosted version if you have public repositories on GitHub or GitLab. It's very nice, yeah.
A
I think it's a great addition that you can self-host it, because I've seen bots being abused by crypto miners who generate merge requests or pull requests and then inject something into your CI/CD repository to mine some bitcoins, or something like that. Typically you want to auto-merge dependencies, but if it's hosted by yourself and you secured it yourself, it hopefully works better than anything which comes from the public to your repository.
C
Yeah, you can also handle non-standard dependencies. If you have really crucial dependencies that don't come in a standard way from the public, you can write regular expressions to update them, or something like that, so you don't need to stick to the conventions of the outside world. We need to use this on some things too.
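A hedged sketch of what such a `renovate.json` could look like; the `versions.yml` file and the `k3s_version` key are made-up examples of a non-standard dependency tracked with a regex manager:

```json
{
  "packageRules": [
    {
      "matchUpdateTypes": ["minor", "patch"],
      "automerge": true
    }
  ],
  "regexManagers": [
    {
      "fileMatch": ["^versions\\.yml$"],
      "matchStrings": ["k3s_version: (?<currentValue>.*)"],
      "depNameTemplate": "k3s-io/k3s",
      "datasourceTemplate": "github-releases"
    }
  ]
}
```

The first rule auto-merges minor and patch updates; the regex manager teaches Renovate to find a version pinned outside any standard package file.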
B
Yeah, that is correct; I'm a little bit overwhelmed with the new job. That's where it matches, and it's just nice, because I just created a merge request. We can show this example here as well. No, there's none. Oh well, maybe I already merged it.
A
Next week we have Opstrace, so we'll be digging a little bit into monitoring and observability, and then we can see whether we want to stay in the loop and continue with Prometheus and other things, or go in the direction of managing dependencies and updates and other things.
A
With the Docker image from the GitLab branch history.
C
Yeah, it's not... okay, it's like an extra step, I would say it's an optimization. But then, why should it call the API when nothing is in the state? Then you don't need to diff. If you have a big Terraform state with 1000 resources or more, I think it makes sense, when you don't have anything on the API side, not to do all the API calls.
A
Sometimes I'm thinking too complicated, and this is like me and Ansible. There is no "Ansible for dummies" for Michael, so you need to find a use case and you need to try it out, and when you fail 10 times, you fail another 10 times, and then it works.
D
I have that really interesting topic of how to get state information from Terraform into Ansible right now, and I haven't found a good solution for it yet, so that might be something where I'm thinking too complicated.
D
Yeah, one thing I already did was to put everything I need in Ansible as outputs in Terraform, and then call terraform from Ansible and register the result, which in Ansible then gives you the outputs of Terraform in a variable. But it's pretty complicated, because I need some obscure sub-property of something.
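The approach described, calling terraform from Ansible and registering its outputs, might look like this sketch (the `node_ips` output and the directory layout are assumptions):

```yaml
- name: Read Terraform outputs
  ansible.builtin.command:
    cmd: terraform output -json
    chdir: "{{ playbook_dir }}/../terraform"
  register: tf_output
  changed_when: false

- name: Expose the outputs as a fact
  ansible.builtin.set_fact:
    tf: "{{ tf_output.stdout | from_json }}"

# Each output is then reachable as tf.<name>.value, e.g.
#   "{{ tf.node_ips.value }}"
# which is exactly the sub-property digging mentioned above.
```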
C
The stuff you want to provision, does it have an API? Yes, probably. Then I would ask the API with Ansible to get the information about that instead.
D
"Wait for actions: server is locked." Did you possibly create some server with that name, or is there a server existing with that name?
D
I recently did something like that and I failed spectacularly. What I did was notice that my k3s cluster server had problems; my server was a bit too slow because I put a lot of load on it, and then I was like: yeah, I'll just quickly scale it up and then change the configuration in Terraform afterwards. You can imagine what happened. I scaled it up in the Hetzner front end, and a week later I ran a provider update of the hcloud Terraform provider, which in turn ran my GitLab CI pipeline to apply Terraform, which then downgraded my server, which in turn crashed the server because it didn't have enough resources.
A
Even if you manage something in your Git repository, some configs, and you change it in production and forget to update the Git repository, then you deploy it again and boom.
A
Some maintenance coming up on April 12 or something like that. Maybe that's it.
A
I've seen that the default Ubuntu image from Hetzner now also disables the message-of-the-day news which phones home to Ubuntu. I normally disable that, and I was searching the VM and it was like: oh, it's disabled already, so one step less to provision.
B
I guess if I open the console you won't see it, because I think it opens in a new window. Yeah, it's a new window, but now, okay, the server is running. I get a login prompt. Okay, interesting.
B
Yeah, let's try this. I'm still surprised, though, because if Terraform were failing and the server were just slower to come up but then running, I would say: okay, something weird going on on the Terraform side. But that it ends up not running on the Hetzner side is very surprising to me.
B
I think you can just pass any options to terraform.
C
Probably it's a problem in the data center where our VM is scheduled, which is currently having some problems. We can check the Hetzner status page, probably, also.
A
Yeah, true. I don't know, maybe it's blue sky after the clouds, but the weather changed from a sunny Easter weekend to rain, then snow and thunder yesterday, I think. So it's a bit strange.
A
To
to
change
to
overcome
the
breaks
between
watching
a
pipeline's
run,.
B
Okay, I would say we stop here, because this gets annoying, not the talk about the weather, but the failing pipeline. Right now I have no idea what to test but to wait, because I don't think it's something we do wrong; it doesn't look like an error on our side to me.
C
Max, we could do it in a better way, I think, in a more readable way: we could use a data source for the location and then use this data source in every resource, because then you need to change it only one time.
C
I will post the stuff that we need; I can show you the example. Typically, when you're using other providers, for example AWS, Azure or others, you can define the location directly in the provider. On Hetzner they have a different approach: you need to specify it on the resources.
C
Yeah, okay, so what you can do is: they have two data sources, that's more interesting. For example, we now want to get all locations, or we want to ask Hetzner Cloud about all available locations, and instead of creating resources with Terraform, we can also use Terraform to get information.
C
So we could use the hcloud_locations data source and then use our own index instead of doing it dynamically, yeah, that would be one way. Instead of getting all locations, if you are only interested in getting the ID, you can also use the data source hcloud_location, where you write the shorthand for the location; then you can reference it as data.hcloud_location.<name> and use that in the resources.
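The data-source pattern being described could look like this sketch against the hcloud provider (the location name and the server attributes are assumptions):

```hcl
# Ask the Hetzner API for one location instead of hard-coding it.
data "hcloud_location" "loc1" {
  name = "nbg1"
}

resource "hcloud_server" "node" {
  name        = "node-1"
  image       = "ubuntu-20.04"
  server_type = "cx21"
  # Referencing the data source means a location change happens in
  # exactly one place, and Terraform can type-check the reference.
  location    = data.hcloud_location.loc1.name
}
```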
C
No,
it's
directly
formatted,
okay,
yeah
and
that's
what
the
way.
What
I
perfectly
do
mostly
so
that
tear
form
controls
everything
so
in
terms
of
creating
the
resource,
and
I
don't
like
to
hard
type
everything
so
that
we
can
use
this
type
because
therefore,
can
then
easily
check
if
it
is
addressed
correctly
most
because
then
you
can
do
type
checking
on
this
stuff.
C
So what you can also see is that not only are resources created; later, when you push it, you can push it directly, and then we can also see a change in our plan.
D
I think they looked at kubectl and were like: oh, that's nice, we want to have that too, because kubectl is at least the first one I can remember to have that, and it works.
A
It already knows, but it cannot print it.
A
It's a problem we can't fix now, and probably someone else who's on call is currently trying to fix it.
A
The least we can do is wrap it up for today, or wrap it up for now, and wait for better times. Yeah.
A
It would be interesting, and I think I was thinking about this a while ago, to provision Hetzner Cloud when it works and then say: hey, we want to do it with AWS, for instance. Someone responded last week to the Twitter thread around the DevOps tool stack, I don't really remember it, which is a full-blown, ready-to-use solution for Terraform and deployments and multi-cloud and whatever, and I said yeah.
A
We can definitely look into it, but first we need to have it working with Hetzner Cloud, and I also want to learn it. Building the Lego bricks together is more compelling, or easier, for me to learn, rather than reading the documentation of a fully fledged tool where I cannot see the progress of the learning history and the trial and error and things around this.
C
There's another stack that I know, at least, and I know that someone uses it; it's called tooth. It's the same approach as what the guy posted last week, but you can also run it locally to test stuff, and it runs on every cloud.
A
For comparing how to probably do it at a later stage, when you know the basics, I think it's great to have them, but for the initial kickstart you want to say: hey, this is how Terraform provisions something, and this is how it works with Hetzner. Right now Hetzner is locked for some reason, so it's using the API and we cannot debug it; it's a black box.
A
Maybe
it's
like
how
terraform
fires,
your
requests
could
also
be
a
thing,
but
I'm
not
I'm
not
totally,
not
in
the
mood
to
use
tcp
dump
now
or
just
attempt
to
do
that
so
yeah,
let's,
let's
maybe
wait
for
better
times,
probably
in
one
hour
it
works
again.
Next.
D
You always get billed one first hour on creation, so that you don't just spawn servers: once you create one, you get billed at least one hour, so that you don't just spawn them, use them for 30 minutes and then tear them down. I mean, you could make a business out of that, and that's why it doesn't work.
C
It's mostly relying on the business model, because I know that on AWS you only pay for the first 10 minutes at minimum. So if you have workload that fits into under 10 minutes, you can spin up a whole machine for that and then pay only for 10 minutes instead of a full hour.
A
Here, for our case, I would say: probably don't over-engineer it with scheduled jobs. If we have the mood next week to say, hey, we want to verify it, maybe after the Opstrace session we can just click on it and try it again, or whoever is in the mood, yeah. That being said, next week is the 14th of April and Sebastian will join us from Opstrace.
A
As
far
as
we
have
discussed
it,
it
will
be,
like
short,
showcase.
What
is
upstream,
what
are
the
plans,
roadmaps
and
other
things,
and
then
we
do
like
a
hands-on
session,
and
I
also
told
him
that
we
probably
will
be
having
some
crazy
ideas,
and
maybe
we
find
we
find
certain
ones
and
then
do
it
live,
maybe
just
scrolling
in
monitoring
the
kubernetes
cluster,
which
has
built
with
ops
trace.
A
I don't know if this works, but let's see about it. I'm excited for next week. Other than that, thanks, Max, for preparing today and for debugging with us. I really appreciate it, and with that I would say: bye bye on YouTube, see you.