Description

You need to get a small web application up and running, without much fuss and complexity? Well, lucky you, if there is a dedicated Kubernetes cluster already running within the company.
At Elastic we run the Elastic Community Conference, a virtual event held once a year. It takes about 12 hours, runs around the clock in many different languages, and has around 70 talks. Of course, we had a platform for that in 2021, and the idea was that you register there, you can see the schedule, all of these kinds of things. We were not really happy with that platform, especially with the live streaming part.
So for 2022, someone came up to me and said: look, can we have the whole registration flow go through Elastic Cloud, our Elastic Stack as a Service platform, because we already have all of that up and running? We would just like to do the user registration via this platform as well. At the end it boils down to a SAML-based authorization workflow, kind of like "Login with GitHub", just against our own platform. And I said: yeah, but I would not like to build the whole platform myself.
Obviously, with a small team, that is nearly impossible; there is a reason these things are products. So we had this whole build-or-buy discussion: which parts can we do in the application, and which parts do we need to outsource? Turns out that for live streaming there is this tiny little platform called YouTube which you could use, so there was no need to cover any of that ourselves.
We could have used any Platform as a Service offering; the app is not running for a long time, so we wouldn't have had to spend crazy amounts of money. But we already had a Kubernetes cluster running within Elastic, which any employee can just use to run their own applications. And sometimes it's not a technical decision, sometimes it's a political one: I didn't need to get any approval, I didn't need to get any legal approval.
You click on login, you get redirected to Elastic Cloud, you type in username and password, and you get redirected back. The redirect back also submits a certain set of data, like your name and your email address, to the web application, so we can basically use that for registration, and that's about it. So from the backend side, the most important part is that you come up with an authorization library that understands SAML and, of course, covers the other functionality.
We just had the standard schedule features: you could sort the schedule, and it had rememberable URLs. Turns out that a ton of platforms just have this one schedule for two days and you can't even link to it properly. And of course you need a detail view for each session, and a detail view for each speaker. All of this data was retrieved via the Sessionize API, where our CFP was running, just because they export a big bunch of JSON.
That is what I wanted to resemble, and the architecture of this looks really simple: the application in the middle, the Java one, is actually running on the Kubernetes cluster. It delivers some HTML, which is marked as the front end here. I have an Elasticsearch cluster running which is not on the Kubernetes platform, because we have our own hosted Elastic Cloud as-a-service offering, and it connects to Sessionize. And that was it.
So the app itself was really simple: no magic, no complexity except for scaling. The complex part was again more on the people side of things. I didn't want to get any other team involved in this; the community team was the one responsible for running it. That also meant collective ownership: everyone should be able to roll out.
Everyone should be able to write code if they need to, and of course we all want well-tested apps. I'm a Java developer by trade; I used to do Java development for web applications before I joined the Elasticsearch team. So of course my first choice is always Java when I do anything, and I had the luxury of picking the latest version, running with the latest garbage collectors, anything I wanted to try out.
There is pac4j: it's not a really well-known authorization library for Java, but it supports everything from SAML to GitHub to Facebook, everything that's OAuth-based.
So if you ever need to do something like that, go ahead with it. And of course, I have the UI skills of a three-year-old, so coming up with something that doesn't look like I designed it at night meant I had to use some frameworks. The one I used is htmx, which allowed me to not learn any JavaScript. I'm really bad at keeping up to date with JavaScript frameworks, and htmx kind of allows you to create web applications that update themselves in parts.
But you can still write server-side code and render it, and it's a really nice solution. You should totally check it out.
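To make that concrete, here is a minimal sketch of the htmx pattern (my own illustration, not the conference app's code; the /schedule/fragment path and the markup are made up): the server renders plain HTML, and attributes like hx-get and hx-target make the browser fetch a fragment and swap it into the page, without any hand-written JavaScript.

```java
import java.util.List;

public class ScheduleFragments {

    // Full page: the button asks htmx to GET /schedule/fragment and swap the
    // response into the #schedule element.
    static String page() {
        return """
            <button hx-get="/schedule/fragment" hx-target="#schedule">
              Refresh schedule
            </button>
            <div id="schedule"></div>
            """;
    }

    // Fragment endpoint: plain server-side rendering, returned as-is and
    // swapped into the page by htmx.
    static String scheduleFragment(List<String> talks) {
        StringBuilder html = new StringBuilder("<ul>");
        for (String talk : talks) {
            html.append("<li>").append(talk).append("</li>");
        }
        return html.append("</ul>").toString();
    }

    public static void main(String[] args) {
        System.out.println(page());
        System.out.println(scheduleFragment(List.of("Keynote", "Kubernetes at Elastic")));
    }
}
```

The server stays the single source of truth for markup; the browser only ever swaps fragments in.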
Okay, the minimum resources part I mentioned at the beginning goes into many different angles. Of course, we always would like to run small parts, not needing a lot of memory, not needing a lot of CPU, things like that.
But minimum resources to me also means fast rollouts. I don't want a major big pipeline for a small application like that; I really just want to roll out and that's it. We don't have anyone working full-time on this, so resources also means we are constrained to working on it on an ad-hoc basis. And I really don't want to store any security-sensitive data in this, right? I know I'm able to write web applications, but writing secure web applications is a completely different beast.
That's our rollout process: you just run docker build, push the image to the registry, and restart the deployment, and that's it. I could have automated that, but I absolutely didn't, because we only had a handful of people who did rollouts, so it was just not necessary. If I had 10 people working full-time on the project, things would be vastly different. That's also the reason we could just always go with the latest image from the Docker registry.
From an implementation perspective, one of the best things about the Kubernetes cluster was the Vault integration. This is awesome: you always have those problems of how to deal with secrets, especially if you don't use something like Terraform.
Take a look at something like the DigitalOcean App Platform, which is just a Kubernetes cluster in disguise where you can run your apps: adding secrets always happens in this awkward web front end where you just dump something in. Compare that to just having your own Vault namespace for each team within the company. We were able to just add the secret to Vault, specify the path, and go with that, and then map what we had written into Vault to environment variables, start the app up, and everything is available within the container. That is really, really nice.
So this was one of the sleekest integrations. Vault is a great tool, and the Kubernetes integration is also really nice to get up and running. Of course, you need to write the secret into Vault in the first place, but again, everyone in our team who was registered there had the ability to change those values, so this was not something that depended on a single person.
The next interesting part is rollouts without downtime, because this is one of the strengths of any Platform as a Service offering: you only have to drag a slider or change a YAML file, and then everything scales. At least that's how they sell it. And it's true from a Kubernetes perspective, but of course it's not automatically true from an application perspective.
So, starting more pods is easy, but requests are usually distributed via round-robin, or you have some other routing rule in front. This means request one goes to pod one, request two to pod two, and so forth, even if those are requests from the same user who is actually logged in. And again, it makes sense: it prevents hot spots, unless you use something like session affinity. But it still means you need to think about it. And like most Java-based web frameworks, Javalin is a so-called servlet-based web framework.
Servlets are kind of the standard in the Java world for how to write web applications, so it's a standard abstraction, and it also has sessions. Sessions are basically in-memory hash maps stored on the server side, and you have probably seen the corresponding cookie.
So one of the problems with this approach: first, if you do round-robin, you will almost always hit one of the pods that doesn't have this session loaded. The second part is that it's an in-memory map, so killing the instance just kills the in-memory map, and that is a problem, because again, in Kubernetes we are not supposed to assume that the same application will always run in the same pod. And of course, some people like to use session affinity, so you could just keep your fancy in-memory sessions.
Yeah, so I was trying to be smart, and if all you have is a hammer, everything looks like a nail. I had Elasticsearch as a backing store, so I decided I'm going to serialize the session and write it into Elasticsearch after every request. If any of the pods goes down, the next one goes into Elasticsearch, retrieves the session back, and everything is good. So I wrote this; it's not a lot of code, 200 lines or something like that, which is short for Java, and I was patting myself on the back.
Awesome solution, good job, Alex. So every request now writes its session data to Elasticsearch when finished. I rolled it out, I went to lunch with a friend, I came back, and I figured out that there were a hundred thousand sessions in the Elasticsearch index. I pinged my colleague: did you announce anything, did you say the website is live? I was still in the POC phase. And he was like: no, why would you think that? And yeah, turns out?
This was not announced; I didn't announce anything. This was just one domain under the elastic.co domain that was running, and that was it. And the problem was also not the volume. I mean, 100K requests per hour is about 30 per second; it's not a lot of data that you write at the end of the day. So I could just have said: yeah, okay, whatever, I don't care.
Just keep writing all the sessions, they expire after seven days, and it's good. But it still adds a latency of 10, 15, 20 milliseconds, even to a 404 that you return to a client because a security scanner tried to retrieve some PHP file that doesn't exist, and that obviously is a bad idea. So I changed the code so that sessions only get persisted into Elasticsearch if a user is really logged in, and that's it.
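Sketched in code, the fix looks roughly like this (illustrative only: the real code writes to Elasticsearch, here a plain callback stands in for the store, and the "userId" session attribute is a made-up name):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class SessionPersistence {

    // Persist only sessions that carry an authenticated user, skipping the
    // anonymous sessions created for bots and security scanners. Returns
    // whether the session was written to the backing store.
    static boolean persistIfLoggedIn(String sessionId,
                                     Map<String, Object> session,
                                     BiConsumer<String, Map<String, Object>> store) {
        if (session.get("userId") == null) {
            return false;            // anonymous: nothing written, no latency added
        }
        store.accept(sessionId, session);
        return true;
    }

    public static void main(String[] args) {
        Map<String, Map<String, Object>> store = new HashMap<>();
        persistIfLoggedIn("anon", Map.of(), store::put);
        persistIfLoggedIn("alex", Map.of("userId", "alex"), store::put);
        System.out.println(store.keySet());   // only the logged-in session survives
    }
}
```

The decision happens before the write, so scanner traffic never touches Elasticsearch at all.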
Anything else just adds up to useless work, and that's one of the last things I would like to do as a developer, doing useless work. So yeah, I had a major reduction in Elasticsearch write operations and way faster responses, especially to all those bots that were just there scanning things. And I think that's the main point here.
So how did we get those 100K requests an hour? There was no announcement; this website wasn't known. Well, it had a public Let's Encrypt certificate, and all of that ends up in a public certificate transparency log. The moment you put something behind Let's Encrypt, you have millions of security scanners hammering you. We had those 100K requests in the first hour, but even in a normal hour you easily end up with something like 5K requests just from security scans.
It's really insane, the amount of traffic, and you should probably block this somewhere else if you can, especially if you just built an internal app that just by accident happens to have an external Let's Encrypt cert. It's something to keep in mind: never ever put non-security-ready applications in production if there's a Let's Encrypt certificate for them. It will happen really, really fast.
Basically the same minute you do the deployment, you have the first security scanners coming in. Yeah, the next part is probes. Of course, you would like to figure out: when is your system down, when is your system ready? As this is a Java application, and it tries to reduce the resources it needs, I needed a readiness probe, because spinning up the JVM takes some time if you don't assign a lot of CPU to it. And of course you also have a liveness probe.
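As a rough illustration of what such a probe target can look like (not the app's actual code; it uses only the JDK's built-in HTTP server, and the /health path and ready flag are my own choices):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.util.concurrent.atomic.AtomicBoolean;

public class HealthEndpoint {

    // Flipped to true once startup work (warmup, connections) is done.
    static final AtomicBoolean ready = new AtomicBoolean(false);

    static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            // 200 once the app has finished starting, 503 before that, so the
            // readiness probe keeps traffic away from a cold JVM.
            int status = ready.get() ? 200 : 503;
            byte[] body = (status == 200 ? "UP" : "STARTING").getBytes();
            exchange.sendResponseHeaders(status, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0);   // port 0: pick any free port
        System.out.println("health endpoint on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

Pointing both the readiness and the liveness probe at an endpoint like this is a common pattern; the readiness side is the one that matters for a slow-starting JVM.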
So I basically created one endpoint to figure things out, which allowed me to stop the web server to make sure that the restart works. Everything worked as expected, but it's just something you should not forget to actually try out if you set any of those probe values, also because mine were relatively high, and I probably should have gone way, way lower than the defaults. The other part is setting the JVM memory. Again, you do this in two directions, and I think most of you know this if you run Java apps in production.
There are so many blog posts about this: if you don't configure the memory settings or the garbage collector, the JVM tries to pick some fancy defaults, and if you don't have a lot of CPUs configured, those defaults are really weird, like using garbage collectors that are decades old and that really stop your application. They collect memory and your application does not respond at all, and that can take a lot of time.
So if you run JVM applications, always make sure that you set the JVM options. The other part is that I do not want to set the JVM options in my Dockerfile, because this means that my Dockerfile setup is different from how I run things locally. So what you should always do is either have a shell script that sets them and use that shell script everywhere, or use a plugin like the Gradle installDist plugin to automatically create those start scripts, and then go with that.
But do not have custom settings only within your Docker container, because that's usually the last thing that you actually test. And I ran this with so much headroom it's not even funny: I think I configured one gigabyte, and the maximum memory consumption I had was something like 140 megabytes max, and that was with all the security scanners and writing data to Elasticsearch. So I could easily have gone with a fifth of the memory here.
But I really wanted to make sure that during the conference everything was stable, and given the complete size of the Kubernetes cluster, the setting was not even noticeable. That was a really huge cluster.
So, as I said at the beginning, I didn't pick the fastest thing I could to run the Java application. The fastest thing you usually can do is compile a native binary using GraalVM and run that binary, and you get a really low memory footprint and a really fast startup time.
So don't blindly trust your data. You can see here that the whole processing of a request for the schedule endpoint took 25 milliseconds. Most of the time was spent writing a session or trying to retrieve a session. But what's up with the other 60 milliseconds? The whole trace at the top says 94 milliseconds, and the processing on the Java side of things says 25, so somewhere I'm missing 60 milliseconds, and I have no idea where.
Judging by this picture, it might be that there was a full thread pool waiting for the response; it might be that some native memory allocation was needed when returning data back. It just can't be seen here, and that's something to keep in mind. I mean, I had this trace only once out of all the requests, so I was happy about that, and the rest looked okay, but it's still something that you should probably keep in mind.
When you take a look at your observability data, it's always telling only a part of the truth. And this here is a full rendering operation; it takes the front end into account. If you look at this part here where my mouse cursor is (I hope you can see it), that tiny speck is the server-side part. So the server side was always fast enough; the server side was not the issue.
So that's also one of the reasons why it's not enough to just monitor your server side and pat yourself on the back for shaving off 10 milliseconds if the client side, the browser side, is just crazy slow. So when we started optimizing, we optimized different things, and that's one of the reasons why monitoring the whole application made so much sense for us here.
So let's talk about debugging, which of course no one ever needs to do, because systems are always stable and reliable. That was actually one of my pain points here. Again, using an existing Kubernetes cluster, you have to adapt to the existing logging strategy, and the existing logging strategy was writing all the logs into a dedicated Elasticsearch cluster, which is good, because you know where to go when you start a project.
So I could have done a configuration change to write those logs into my own Elasticsearch instance, which I just didn't do, because I focused on different things and I didn't really need the logs. But one of the things to keep in mind when you write a POC that you want to run on a platform is: where are my logs? And this one is actually a good case, because there are so many Platform as a Service offerings where you can't even access the logs.
They don't write them off to an S3 bucket so you can look at them later; you can only monitor them in real time. Or, if you know Heroku: you can basically retrieve the logs, but you have to wire up an extra component to do that, and that's really, really tedious. So logs are always kind of a second-class citizen on many platforms, and that was not the case here: I always had immediate access to the logs. And of course, the fancy question: what if your instrumentation instruments every request that comes in and creates APM data?
Do you really need logs for that, or do you already get every piece of information you need out of that APM data? If you have background processes that are not properly instrumented, you should probably take a look at the logs. But if everything is instrumented, maybe there's absolutely no need for the logs, because you can retrieve the same data out of APM. Still, note the clown emoji at the end: don't consider that best practice.
So what did I miss? One of the things I didn't implement on purpose was automatic rollouts. Again, that would be vastly different if I were working with a team of five people full-time on this, but with just three persons scattered across the globe, I was not really worried about us having clashing rollouts. We also synced all the time, so that was not an issue.
A
I
was
going
to
completely
lazy
past
with
stateful
Services.
Everything
was
outsourced
like
this
is,
at
the
end
of
the
day,
a
stupid,
stateless
app
the
moment
the
the
session
state
was
written
and
that
of
course
made
the
deploying
on
on
any
platform
a
lot
a
lot
easier,
and
one
thing
I
was
really
I'm
still
inferiorated
about
today.
Is
that
I
didn't
use
infrastructure
as
a
code
from
the
beginning,
and
that's
just
painful
right,
like
you
kind
of
have
to
document
how
the
different
pieces
fit
together.
It's really bad. Then I also played around with different APM tools, and when you do a POC, take a look at how your APM tool deals with pods, with many different pods. Because the way this sometimes works is that you configure a service, say this is service name A and I want to run 10 pods of it, but then you can't drill down into each single pod. When there's a memory leak in only one of them, for whatever reason, you will just see the max memory, and that is kind of applied to every pod, and that means you will not be able to debug this.
If one pod falls over, it doesn't matter, but from a monitoring perspective it's really bad if one instance dies, and you have to sort of figure out where the middle ground is and how you can combine both of those views. Anyhow, the conference day was spectacularly unspectacular. My APM service detected one exception: I did a rollout in the morning, had written a wrong HTML template that threw an exception, and fixed that one. But yeah, during the 12 hours of the conference?
I didn't have to do any downtime-free rollouts, because everything just kept working. We had about 100K valid requests.
We had 10x that in invalid requests, because the security scanners just kept hammering even during that time, of course, and we had about two million requests in total. So it was not really high traffic; I could have run this easily with a tenth of the resources, I guess. And the percentiles of the rendering times were just fine, everything under 10 milliseconds. For me, that's good enough to go out to the customer.
One of the nice advantages of this whole setup: there was this tiny security hole that you might have noticed, called Log4Shell, which allowed remote code execution in about 95% of the Java apps out there. Obviously, this one was affected as well, because it used Log4j, and someone pinged me in the morning, and I took a look and said: yeah?
This looks really bad, I should probably fix that. I kind of jumped out of bed and did the rollout, everything within 40 minutes. And this means for me, from a developer perspective: going from "oh" to "the rollout is done, this is safe now" that quickly is really, really good, and this is kind of the way I would like to keep working. You have full control over the rollout process, things like that, so I could have also just shut down the Kubernetes deployment immediately.
So there were many ways we could have mitigated that, but it showed me that, from an agility perspective, running on the Kubernetes cluster was a really good decision. Yeah, the biggest impact was that I came late to kindergarten dropping off my little one; I think that's okay at least once a month.
My summary: would I do this again? Yes, absolutely. Running an application like this on a Kubernetes cluster, starting from zero, was a really good idea. I have absolutely nothing to complain about.
Of course, there are small friction points, like the logging part, but that's all stuff that you could solve if you wanted. I still wouldn't go crazy on automation; I only do what's really necessary, a lazy-first approach to software engineering, I would say. And one thing I would fix: I would not write sessions into Elasticsearch again. I would probably just use a cookie, cryptographically sign it, send it back to the client, and then the client comes back with it.
So this way I would not even need to write any data. I would need some more CPU to cryptographically sign it, but crypto code is so fast nowadays, because it can be natively executed on the CPUs, that there's nothing really to worry about. It's just that the Elasticsearch approach probably ate more resources than was really necessary, from my perspective.
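A minimal sketch of that signed-cookie idea, assuming HMAC-SHA256 over the cookie payload (my illustration, not code from the talk; key management and payload encoding are deliberately left out):

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.MessageDigest;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class SignedCookie {

    // Append an HMAC so the server can detect any client-side tampering.
    static String sign(String value, byte[] key) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(key, "HmacSHA256"));
            String sig = Base64.getUrlEncoder().withoutPadding()
                    .encodeToString(mac.doFinal(value.getBytes(StandardCharsets.UTF_8)));
            return value + "." + sig;            // e.g. "user=alex.<signature>"
        } catch (GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    // Returns the payload if the signature checks out, null otherwise.
    static String verify(String cookie, byte[] key) {
        int dot = cookie.lastIndexOf('.');
        if (dot < 0) return null;
        String value = cookie.substring(0, dot);
        // Constant-time comparison to avoid timing side channels.
        boolean ok = MessageDigest.isEqual(
                sign(value, key).getBytes(StandardCharsets.UTF_8),
                cookie.getBytes(StandardCharsets.UTF_8));
        return ok ? value : null;
    }

    public static void main(String[] args) {
        byte[] key = "demo-secret".getBytes(StandardCharsets.UTF_8);
        String cookie = sign("user=alex", key);
        System.out.println(cookie);
        System.out.println(verify(cookie, key));
    }
}
```

The server keeps no session state at all: any pod can verify the cookie, so round-robin routing and pod restarts stop being a problem.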
And of course, whenever I start a new project now, the first thing to start with is infrastructure as code, so other people can join the party more easily.
The last thing I want to talk about is the level of abstraction, because as a developer I'm still not happy about this. This is my biggest point of criticism: the primitives you deal with, like memory and CPU, are all designed from an ops perspective. When you scale your Kubernetes cluster, you're supposed to scale it like a data center: you have so many cores, you have so much memory.
You have so much storage, and that makes perfect sense. But now, if you tackle this from a developer perspective: I'm supposed to put some floating-point number in a YAML file to decide how much CPU I'm supposed to get? I don't even know what kind of cores are running there; I don't even know what kind of hardware support they have. So it's really hard to come up with a good formula to say: this is exactly the CPU I need. And I think oftentimes,
you come back to hopelessly over-provisioning your instances, which is something we were all told is not necessary with Kubernetes, because scaling is so easy. But take a look at your YAML files and you will probably see the same mistake. I said at the beginning that I could have run with one-fifth of the resources, yet I didn't do it because I didn't want the app to go down. It's the same with memory, which might be a little easier with Java, because you have to configure how much memory you actually want to use anyway.
I didn't talk about scaling because, again, this was a small web application. But from a developer perspective, when is the right time to scale up or out? I think there are tons of really bad metrics for trying to figure this out. The number of concurrent requests is something I don't really care about: as long as I can handle it, I can have 5,000 concurrent requests on a single system. But maybe I have really long-running requests where I can only handle 10 of them in parallel.
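One hedged way to reason about this, which is my addition and not from the talk, is Little's Law: average concurrency equals arrival rate times average latency. It shows why a low request rate with long-running requests can still saturate a pod:

```java
public class LittlesLaw {

    // Little's Law: average requests in flight = arrival rate * average latency.
    static double concurrency(double requestsPerSecond, double latencySeconds) {
        return requestsPerSecond * latencySeconds;
    }

    public static void main(String[] args) {
        // 30 req/s at 25 ms each: well under one request in flight on average.
        System.out.println(concurrency(30, 0.025));
        // Just 2 req/s of 5-second requests: 10 in flight, enough to exhaust
        // a small worker pool despite the "low" request rate.
        System.out.println(concurrency(2, 5.0));
    }
}
```

So a raw concurrent-request count only becomes meaningful once you also know how long your requests run.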
How do you know when you need to add more hardware? You're going to have fun discussions, or a blank stare; I don't think there's a lot in between. But it's an important conversation that you need to have, because if you don't want to just do a lift and shift, but really make use of all of this,
A
You
need
to
understand
that,
like
applications
go
up
and
down
all
the
time
in
terms
of
resources,
all
right,
that's
it
so
I,
don't
think
we
have
a
lot
of
time
left
for
discussion.
So
are
we
somewhere
near
the
coffee
at
the
coffee
break?
Please feel free to correct me on anything I may have said wrong from a Kubernetes perspective; I'm perfectly fine with that. Also, if there's anything you would do differently in general, architecture-wise, I'm super keen to listen to that. The QR code contains the link to the presentation if you want it. And yeah, it depends if we have some time left for Q&A. Thank you.