From YouTube: Cloud to on-prem and back again by Gijs van der Voort
Description
Deciding to move to on-premise Kubernetes in a cloud-native era is not a decision made overnight. For Picnic, it was driven by the launch of Picnic's automated grocery-packing warehouse and the low-latency requirement for controlling the 10km+ of conveyor belts and 100+ packing stations. Being a cloud-native company from the start, we set out for a cloud-native experience, on-prem. I'll take you through our on-prem journey, what we liked and what was challenging, and what enabled us to transition back to the cloud in the end.
A: Well, good morning, everybody. I hope you can all hear me well this morning. I would like to tell the story of how Picnic, which has been cloud native from the start, decided to go on-premise and ultimately decided to go back again. So, let's go. First, a short introduction about myself, just so you know who I am and how I was involved.
A: My name is Gijs van der Voort. I have now worked for about three years at Picnic, where I initially started as the founder of the Python platform team, launching Python as one of the key languages used within the company. But quite early on I got involved with the infrastructure team at Picnic and took up the role of product owner, which I've now been focusing on for the last two years or so. I've been in the field a little bit longer.
A: So this was not my first organization; up until then I was primarily involved with the ad-tech industry, where there was quite a bit of infrastructure involved in running real-time bidding platforms for advertisements. Now my mother finally knows what kind of work I'm doing, which before she had no clue about. So, what does Picnic really do?
A: Well, ultimately we are a grocery delivery service. Back then we were one of the first, and now of course there are many competitors on the market. One of the things we do differently, especially compared to the flash-delivery services that you see nowadays, is that we deliver.
A: So that's a bit of what we do. In order to facilitate it, we control pretty much the entirety of the supply chain, except of course for the products that are still produced somewhere and then shipped to us. Infrastructure at Picnic was cloud native from the start, so everything runs in AWS, with EKS at its core. We try to make use of most of what AWS supplies.
A: Our Ingress controller, for example, is the AWS ALB, and we try to keep quite a consolidated stack of technologies. This allows us to specialize within those technologies and maximize the benefits we get out of them, and as we grow and new teams or new people join the company, they can easily pick up the learnings we made on all of those technologies for new products they're starting to launch. We manage this consolidated stack via Terraform and Spacelift.
A: To orchestrate all of this, we enable teams with nicely prepackaged solutions so that they can focus on their deployments, which then run in Kubernetes and which we expect them to maintain. So infrastructure at Picnic is about the tooling around the deployments, and the development teams are actual DevOps teams that own both the building of the software and the operation of the software.
A: So what do we do on a daily basis? A big part of the supply chain that makes up our offering to our customers is taking the products that are shipped to us by the producers and turning them into something that we can actually deliver to our customers.
A: I'm not sure how familiar you are with our proposition, but the small electric vehicles that we use to deliver the groceries are quite iconic, and you see them everywhere across the Netherlands now. Those are fully tailored to the specific service that we supply. Turning all of those groceries, big pallets of chunky stuff, into something that we can comfortably deliver to somebody's doorstep is a lot of work.
A: You have to imagine that these different products all require different treatment. For example, we have frozen goods, which need to be kept below zero; we have chilled products, which need to be kept below seven degrees; and of course we have our regular products, but even there you have items that can easily break, like a bag of chips.
A: You cannot put a bag of chips under a bottle of Coke in a tote that you're shipping out, and that adds a lot of complexity to this process, which we initially set out to do fully manually. Actually turning these kinds of pallets into the things that we deliver involves a lot of manual labor, and exactly that was something we wanted to stop doing. This is exactly the bottleneck that comes up in an organization like this: it's really hard to scale up.
A: You have to imagine that the process we implemented is effectively a huge, industrial-size supermarket, fully tailored to optimize picking routes for the people who walk through it. That is great; we spent a lot of time on this and we could optimize quite a bit, but it also has its limitations. This work is, as you can imagine, labor intensive. It's quite heavy, there's a risk of things hitting each other, so it's also not fully safe. It's not the experience that we want for our employees.
A: We want something better, not only for our employees, but also because we simply hit the limits of what we can do in one physical location. So if we want to deliver more groceries with the same number of people, we need something else in place, and for that, a couple of years ago, we launched Picnic's welcome-to-the-future answer to this whole problem, which was a fully automated fulfillment center.
A: So that process, where you have people walking around this big supermarket-like environment, has been replaced with a fully automated system comprised of about 14 kilometers of conveyor belt. Those conveyor belts all need to be controlled, and we need to control all of this such that we take the right products out of storage.
A: We route them to the right people on the floor, who take a product out of one storage tote and put it into another one, in such a way that ultimately we again have the groceries prepared that we can ship out. This operation is massive. It was the largest site we had launched back then. We now have some larger ones, but it was definitely the most complex thing we had tried to pull off, controlling such an operation.
A: In essence, there are three levels at which you need to operate such a warehouse. The first one: you need to be aware of what kind of orders are coming in, which orders need to be fulfilled for which moment in time, and track that those things are actually starting to happen. So we need to make sure that the products are actually in-house in order to fulfill them, et cetera. That's the high-level view.
A: That is what we call warehouse management. Then there's a level of warehouse control, which is more about ensuring that the right products are put in the right bags, which are destined for the right people, such that an order is actually fulfilled. And the last level, especially in this kind of automated warehouse, is something we call the transport system. This is the software that directly integrates with all of the physical infrastructure running on the site, such as the conveyor belts.
A: We have millions of scanners, we have actuators, junctions where we can route a tote into one region or another; a lot of physical stuff that needs to be maintained. Typically, if you go down such a track, you can get these kinds of solutions off the shelf; we have a vendor that actually builds all of these conveyor belts for us. But Picnic from the start has been pretty focused on building software itself that you would normally get off the shelf, and this has been quite a successful approach for Picnic.
A: It allowed us to optimize a lot of processes in the supply chain, and we believed we should do the same for warehouse control. Warehouse management we had already built ourselves for our manual FCs. Warehouse control is something you would typically get off the shelf, but this especially is the place where you can do a lot of optimization and improvements for which you would otherwise have to wait for the vendor to implement them.
A: So with this kind of model, the question came to us as an infrastructure team: how can we facilitate all of this software? Where should we run it? We started to think about the different ways we might want to do this, and we were effectively able to identify three distinct types of workloads. The first one is the one that I think we all love and enjoy here: the stateless containerized deployments, your typical Kubernetes deployment that you can easily scale up and down.
A: In this case that is especially the Picnic software, the things that we always build ourselves, because that's what we've done from the start, and then of course some of the open source solutions around it to facilitate all of that. The second type is the stateful, VM-based systems. I'll come back to why these are VM based, but in essence these are the data stores that we would normally get as a SaaS solution.
A: That would be Postgres, RabbitMQ, your typical data-store stack. And then there's the last type, which is definitely a pain for such an operation: this automated warehouse is destined to support about a quarter to half of the primary market that we cater for, so operational quality is essential, and having systems that we know are not highly available means that anything you do with such a deployment effectively means outages, and that's a huge risk for our entire operation.
A: In addition, a couple of operational aspects were identified that we needed to be able to achieve. The operational hours are effectively 22/7, all days except for King's Day; well, we need to have at least one celebration. Then there's the availability of four nines that we need to achieve, because it's such a critical part of our entire catering to the Netherlands. And we only have a small team to pull this off. This combines with the key aspect of this whole solution: it was a massively important milestone for Picnic to prove our long-term way of operating, so it was quite a critical project.
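A back-of-the-envelope check on what those two targets imply together. This is a sketch, not from the talk; the 364-day operational year is my reading of "all days except for King's Day":

```python
# Downtime budget implied by 22 hours a day, every day except King's Day,
# at "four nines" (99.99%) availability.
OPERATIONAL_HOURS_PER_YEAR = 22 * 364      # 8008 operational hours
AVAILABILITY = 0.9999                      # four nines

downtime_budget_hours = OPERATIONAL_HOURS_PER_YEAR * (1 - AVAILABILITY)
downtime_budget_minutes = downtime_budget_hours * 60

# Roughly 48 minutes of unplanned downtime allowed per year.
print(f"{downtime_budget_minutes:.0f} minutes of downtime per year")
```

That budget, shared across every system in the chain, is part of why non-HA data stores were flagged as a risk.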
A: In addition, one very unique peculiarity of such an operation, I think, is that it runs at a fixed clock rate. Where normally you have to deal with web traffic and your typical web-load trend lines, here we have a conveyor belt connecting everything together that runs at a fixed rate, a little bit less than a meter per second.
A: It means that everything that happens here is event based. We have scanners everywhere, as I mentioned; they detect that a tote is coming along, and maybe there's a junction after that scanner where you need to decide: is the tote going left, right, or straight? That kind of decision point is everywhere in this operation, in many, many places, and it is hit by every tote that passes such a junction.
A: It means that the event needs to go from the transport system to the warehouse control system, the warehouse control system needs to do some smart thinking to define the next step, and then send the information back again. And all of that needs to happen within a couple hundred milliseconds, or even less than 100 milliseconds, before the tote reaches the junction point where you want to route it in a different direction.
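The deadline falls out of the belt geometry. A minimal sketch using the talk's belt speed; the scanner-to-junction distance is a hypothetical figure for illustration, not a Picnic number:

```python
# The round trip (scanner event -> warehouse control decision -> back to
# the transport system) must finish before the tote physically reaches
# the junction, so the time budget is distance / belt speed.

BELT_SPEED_M_PER_S = 0.9          # "a little bit less than a meter per second"
SCANNER_TO_JUNCTION_M = 0.09      # assumed distance, purely illustrative

deadline_s = SCANNER_TO_JUNCTION_M / BELT_SPEED_M_PER_S
print(f"decision budget: {deadline_s * 1000:.0f} ms")   # 100 ms here
```

The closer the scanner sits to the junction, the tighter the budget, which is where the sub-100ms figures come from.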
A: So we set out to find a solution that we felt was able to meet all of these requirements and all of these types of workloads.
A: This specifically was decided because, well, you saw the team size, it's not so big, and it allowed us, the Picnic infrastructure team, to focus on actually catering for the workloads: in this case the Postgres clusters, the RabbitMQ clusters, and the Picnic workloads around them, to make sure that all works.
A: We opted for running two identical rooms that were fully prepared to run the entire production workload, allowing us to continue operation even if there's a reason to shut down an entire room. You can imagine that if there's a power outage in that room, or you want to do some significant maintenance or refurbishment on the internals, we can actually do this without impacting operations, because we have that kind of capacity on-site in another place.
A: We actually had an issue quite early on where the cooling units completely shut down in one of the rooms, and that allowed us to immediately shift all of the workloads to the other room and continue operation there, which is very handy in such a situation. The VMware layout actually gives you the opportunity to transparently migrate VM-based workloads from one location to the other. On one side it does it for you:
A: if actual physical hardware fails, it detects it and will start up the VM in another location where there's still capacity. But it also offers a feature they call live migration, so you can take running VMs that are part of critical operations and live-migrate them from one place to the other without impacting operation either. That especially is a very nice feature that allows us to do all kinds of maintenance on the existing hardware. And then lastly, there's a fault-tolerance feature that comes with the solution.
A: It works on the network stack and on the operations that are executed on the CPU: two instances are kept running, and only when something goes wrong in one of the two instances will traffic be deferred to the one that is still remaining. Especially for the workloads I described that are not highly available, this was a very important feature for us to build on top of. And then, of course, we still want to do Kubernetes. Picnic, and especially the teams building the software to control all of this:
A: they've essentially grown up in the company using Kubernetes, so we wanted to still have the same offering, and in this case we opted to run Tanzu TKGI. This is the VMware-offered solution for managing Kubernetes clusters, essentially the EKS of VMware. We opted for this specifically because it gives you something they call a VMware Validated Design, so it's vetted by VMware.
A: Now, that's all good, but of course while opting for this there are a lot of choices you have to make and decisions to consider, and especially for an audience like you there are a lot of options that might already have come to mind. One, of course, is that you can run Kubernetes directly on hardware, or at least find a solution that allows us to use Kubernetes directly on some physical infrastructure.
A: We opted not to do this, mainly because of the workloads we had already identified: we needed to run those VM-based workloads that are not highly available. But we also felt there was too much of a skill gap to also take responsibility for maintaining these kinds of clusters at the master-node level. We heavily rely on EKS, so we are users of Kubernetes and not operators of actual clusters, at least that's how we see it, and therefore we felt it was too much of a risk to go in that direction.
A: You can of course run virtual machine workloads within Kubernetes via KubeVirt, which is actually a really cool technology, and I think I would have loved to use it, were it not for such a critical site. At that point in time it supported live migration, which is nice, but it was still behind a feature gate, so effectively a beta feature, which is a bit on the risky side. And of course it didn't have something that turns a non-HA workload into something highly available, which for us back then especially was quite a miss.
A: We also considered running Postgres and RabbitMQ in Kubernetes, since otherwise we suddenly need to run VM-based workloads for them. For this we were looking especially at Zalando's offering, which was quite new back then, especially for Postgres. There were a lot of people who would still actively discourage you from using Kubernetes for running these kinds of systems, so we opted to play it safe and go with something that we at least know.
A: If we go into these boxes, we can actually look around; there are no abstractions around this, no Kubernetes operators or anything else making it harder to debug. We can go into these boxes, and we had people who knew a lot about Postgres, so that felt a lot safer, and the same goes for RabbitMQ.
A: Of course there is the operator supplied by the RabbitMQ team, but still, if something in that direction goes wrong, we didn't feel prepared enough to tackle those kinds of issues in such a capacity. And lastly, AWS Outposts: I think this one, especially if you would redo this now, would be a very strong contender. Back then it was just not generally available for the Netherlands.
A: We did have good lines and there were some opportunities to try it out as early adopters, and you would see this, for example, in the support for RDS back then: the multi-AZ failover functionality was not there, or not fully ready, and especially back then it was massively more expensive than the solution we opted for with vSphere. Now I think it would be a very interesting candidate to explore. So with such a solution in place, you think: okay, well, we have all of the technology.
A: We have all the tools; we feel that this is the right set of technologies, and we hoped that, with hardware virtualization and a managed platform similar to EKS, we would get that same kind of experience we were looking for. Is it as easy to operate? Well, honestly, it's just super complex. We ended up with a completely new tech stack that we had to operate and maintain. We were good at AWS; we have a lot of experience there, we had built a lot of tooling, and we effectively had to duplicate it.
A: We needed to build a custom observability stack where you otherwise get these things out of the box from AWS or from other providers, and now we needed our own observability stack for Postgres and for the proxies that come with Postgres, et cetera. Quite a lot of complexity. And then, of course, we were doing this for the first time, so you're going to run into issues that have already been solved, that others have already been doing better, and that's just a painful operation.
A: So the end result is that, yes, we were able to pull it off, but we had a high operational load, and the reliability was definitely not what we were hoping for. Now, this was not only hard on the infrastructure side: running this operation was tricky for everybody. It was the biggest project we ever pulled off, and just getting it working on the physical level, on the operational side, with the people around it:
A: it was just super complex. So the corporate development teams were already looking into an alternative way to do this, and they opted for something they call a hybrid FC: effectively less complex in terms of automation, a lot easier to build, a lot faster to build, and better to pull off. For this project we opted for another third-party vendor, which actually said: well, I'm pretty okay with you guys running everything from the cloud.
A: That was something the other vendor was absolutely against, and the reason that drove us to on-premise in the first place. And so with that we built this, we launched this, and it went super smoothly. It took a fraction of the time we needed to invest compared to the other location. So the logical question was: can we not also do this for the initial location? In order to confirm this, there's only one way to do it.
A: This is where traffic would go from the cloud to on-premise. We started to introduce artificial latency in small iterations, up until the point that the operations team would come screaming at us, complaining that things were breaking down; then give them time to recover, identify whether it actually was us, because there are many things going wrong in such a location, and then reassess what we wanted to do. We had to do this multiple times, I think five or six times; super painful for everybody involved.
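The shape of the experiment above can be sketched as a delay-wrapped decision call: inject latency into the decision path and check whether each junction decision still arrives inside its deadline. Everything here is illustrative; the handler, the 100ms budget, and the injected delays are my assumptions, not Picnic's actual values or tooling:

```python
import time

DEADLINE_S = 0.100  # assumed per-junction routing budget

def decide_route(tote_id: str) -> str:
    """Stand-in for the warehouse control decision logic."""
    return "left" if hash(tote_id) % 2 else "right"

def timed_decision(tote_id: str, injected_delay_s: float) -> tuple[str, bool]:
    """Run one decision with artificial latency; report whether it met the deadline."""
    start = time.monotonic()
    time.sleep(injected_delay_s)          # the artificially injected round-trip latency
    route = decide_route(tote_id)
    on_time = (time.monotonic() - start) <= DEADLINE_S
    return route, on_time

# Ramp the injected latency up in small iterations, as in the talk.
for delay in (0.010, 0.050, 0.150):
    route, on_time = timed_decision("tote-42", delay)
    print(f"delay={delay * 1000:.0f}ms -> {route}, on_time={on_time}")
```

In the real experiment the latency was injected on the network path between cloud and site, and "breaking down" was judged by the operations team rather than a flag, but the ramp-observe-recover loop is the same.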
A: Also, the recovery takes a long time and a lot of effort. But ultimately we did conclude: yes, we can. And that was amazing, because now we could suddenly think about going back. We can actually say: well, if we want to launch such a location again, we start doing it immediately from the cloud, and we can get to the cloud for this location as well.
A: Of course, on its own it's not an easy feat. I mean, it's not the largest scope; I think much larger lift-and-shift operations have been conducted. But we're already launching new locations, and there's a lot of stuff my team is now involved with in doing a lift-and-shift back. It's not so great, but AWS actually has a service that they offer for this.
A: So we're very happy with this now. If you look at what we learned from this, I think for us specifically, and for an audience like this, some of you probably already know: Kubernetes just doesn't abstract away the actual physical world that you rely on. If you use the ALB Ingress controller, then of course going to a place where it's not there is going to be harder.
A: So if you've opted in for all kinds of SaaS solutions that you hook into your Kubernetes cluster, then going to a different location will just not be the same. A lot of additional complexity comes with this, and even if you outsource the management of the physical and virtual infrastructure, you still have to deal with physical reality. We had a power outage, or well, we had a power vendor that was not very reliable.
A: Every so often we had the diesel generators kicking in, up until the point that the fuel was just depleted and the whole site came crashing down. You need to learn this in practice, I guess; maybe there are people that do this better, but it's just stuff you have to deal with, and, like I said, the cooling equipment failing, et cetera. And then there's working together with these different third-party providers that build such a complex operation for you.
A: I think it also prevented us from finding a solution that might have been much better equipped or fitted to what we as a company are and how we want to operate. So I think it's important that you always collaborate very closely, understand where these requirements are coming from and what is driving them, and in what capacity you want to trade,
A: let's say, responsibilities over certain decisions, instead of just assuming that everybody just needs to do what they need to do. I think that's a very powerful thing, especially in such a complex operation. And then the last one is an interesting one. There's actually an ongoing discussion now from, I think, the guy who created Rails, who is now shifting a lot of their cloud infrastructure to on-premise, saying it's going to save millions and millions.
A: In our case, it's actually going to be more cost effective to run on AWS. I already mentioned some of the deployment requirements that our third-party vendor was asking for: they wanted physical nodes only, they couldn't do virtual CPUs, and very high resource usage. Being able to scale up and down in the cloud still allows us to better fit these kinds of resources, instead of buying very expensive hardware that's hardly used.
A: In addition, the discounts that you get, the reliability that you get, and all of the functionality that you get out of AWS is something you would otherwise need to invest in yourself. You can do this, but it just requires a different strategy. So in our case it's cheaper, and it also allows us, for example, to experiment with the Graviton instances that are available for running in Kubernetes, and to adopt this now for these kinds of locations as well.
A: And that's it, that's my story of how we went on-premise, how we felt it was not such a good fit for us, and how we're now moving back to the cloud. If you are willing to help out, or interested in helping build the best milkman that is serving millions of families, then always feel free to reach out and join Picnic. I hope you learned something today.
B: [inaudible] Can you hear me? Nope?
A: What happens in the warehouse if the internet connectivity fails, right? That's the question. Yeah, so we spent a lot of time on this and opted for a double line of internet connectivity. These are not just, let's say, T-Mobile and KPN, but what I believe is called dark fiber; we have a network team of folks on this. So these are physically different lines going in different directions, connecting to different connection points that end up in different parts of the general internet infrastructure.
A: All of this to account for, let's say, any wire cut or any physical issue with those lines. In addition, we use what they call AWS Direct Connect, which also reduces the number of hops that you go over the actual public internet, relying instead on dedicated lines directly into the AWS infrastructure.
B: Thank you for the presentation. One question about the PLCs, which are in general mostly a very classical piece of IT: how does this work together with the very modern infrastructure that you're running in AWS?
A: Yeah, so in our case, of the three levels of control, the transport system that is in place is effectively the bridge for us between those worlds. They take it into account: they support all of those low-level protocols, they translate to that physical world, and we just purely rely on the events and information that they emit. So we asked our third-party provider to handle this, yeah.
B: No, thank you very much. So, thank you. Brilliant.