From YouTube: 18 Months Before the Mast by Jack Foy, Hiya
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
I'm not an operations person; I'm a developer. I came into this because I wanted to be able to ship code, and I got interested in infrastructure for that reason. I've been interested in working on these kinds of problems off and on for about a decade now, so I don't have some of the domain expertise that some people are pulling in here.

So what I'd like to talk about today is why we undertook this project, and this might end up being more of an organizational talk than a technical one. There is going to be some technical content, but it might not be as meaty as some — so be forewarned: I'm not going to be offended if you're here for a technical talk and you decide this isn't for you. And I'd like in particular to cover some of the lessons that we learned as we went through this.

How did we get there? We had made a conscious decision to move towards a DevOps model a couple of years earlier, and we defined that as developers having responsibility for their system all the way to production. Devs loved this. They really, really liked the ability to decide when to ship their own software, and the fact that they could do it more or less on a team-independent basis.

They liked the fact that we didn't have to do enormous, heavyweight, integrated test cycles across the entire stack before they could ship something to production, and they even embraced being on call as a consequence of that. But they really hated the state that our tools had arrived at in order to facilitate being able to do that.

This was kicked off again by an intern having an unhappy experience in autumn 2014, and so we started doing a bigger survey, just to capture what we knew, or what we could learn, out of the organization. The first finding was that we had gone from a monorepo with a monolithic release process to extensive use of Chef across the organization.

But that had two consequences. First, in some places it was just wrapping the legacy monolithic system within Chef resources, so it hadn't really become easier to ship — it had just become more isolated to ship. And second, we had been using Chef on the production site for some years prior, and we had some cookbook customizations that made changing the overall process hard to do. And in particular — I don't have my speaker notes here.

We also had the problem that we couldn't kill off services. We would get to the point where we couldn't ship things easily using modern technologies, because we had to go back and do the tech uplift on them; but we couldn't justify the tech uplift, because the systems were no longer receiving engineering love; but the business couldn't afford to shut them off. This is a common thing — I see people nodding in the audience here, so this is something that other organizations have encountered too — and the cost of that is an explosion of complexity.

And so I talked about being able to make changes with confidence. Some of that boils down to ownership. If you don't have a single owner for a given component — if everybody owns it, then nobody really owns it — then nobody is willing to put in the effort to comprehend what a given change is going to mean. And we had disagreement about how we should even be apportioning out these responsibilities, and whether devs should really have a platform.

But the application was not something that the operations team understood. It was something where they would say: devs need to provide us with a set of tests so that we can prove that the app is up and running. So we had devs who wanted to be able to ship without having to involve ops, and we had ops who wanted to be able to introduce system changes — which they asserted were benign, or were required — without having to involve devs. We had an impedance mismatch.

And so another consequence of this was that we had components that were shared by historical accident. The particular example is: you have a load balancer — a single instance of your separate load balancer — but it's fronting multiple services. Now each development team that has a service registered on that load balancer has some place where they might need to make changes as part of a normal release, because they're introducing a change to the load-balancing logic, or any of these other reasons.

So on top of all of this, we had actually done an earlier experiment with Docker. We had identified it as a promising technology — we had said containers look really interesting — and we were able to deliver a service to production using it. But that was enough to tell us that we didn't have all of the infrastructure there.

So I'm going to talk a little bit about how we solved this as an organizational problem. We decided that in order to make this happen, we needed to enable dev teams to run and admin their own applications. This is something that's sometimes called "app ops" or "app admin" these days. Again, I'm seeing people nodding and laughing in the audience, so I guess people have dealt with this model. And in order to provide that, we needed a better picture of the infrastructure than we currently had.

At the time we were running on a fairly manually maintained VMware vSphere configuration, and we were looking at doing a move over to some kind of cloud system. So we tasked our ops team with becoming an infrastructure development team, providing that kind of an interface for us to be able to build on, and joining the two.

How many of you have heard this motto before: test what we ship, and ship what we test? OK. We decided, again, that containerization looked like the right model for us to be able to encapsulate behaviors that we could develop and test and then move, byte for byte, into production. And we then said: great, that means that we need to be able to define our environments in terms of how we manage these immutable objects.

There's a general principle on this, which is that you want to push variability to the left — meaning that you want to encapsulate your variances in your components as early in the build process as you can, and then compose those in immutable ways as you move through your development cycle.

So that leaves the question of how we in fact configure our hosts and our clusters for use. We evaluated CoreOS and decided that this was a pretty good model for where we wanted to go. We had been an Ubuntu shop prior to that — we are still an Ubuntu shop within many of our container development environments — but as far as the actual hosts that we run, we decided that our new platform would be CoreOS.

We did briefly evaluate new configuration management systems. We looked at Ansible; we looked at Salt, in particular because of Kubernetes' use of Salt. And we didn't really adopt any of them. We instead moved towards something with more static configuration, based on user data within VMs.

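(For illustration, a minimal sketch of that kind of static user-data configuration, assuming a CoreOS-era cloud-config — the hostname, discovery URL, units, and file path here are placeholders, not our actual setup:)

```yaml
#cloud-config
# Passed to the VM as user data; CoreOS reads it on first boot.
hostname: worker-01                                    # placeholder
coreos:
  etcd2:
    discovery: https://discovery.etcd.io/<token>       # placeholder token
    advertise-client-urls: http://$private_ipv4:2379
    listen-client-urls: http://0.0.0.0:2379
  units:
    - name: etcd2.service
      command: start
    - name: fleet.service                              # fleet comes up later in the talk
      command: start
write_files:
  - path: /etc/cluster.env                             # illustrative static config
    content: |
      CLUSTER_ROLE=worker
```
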
And we knew that we needed more than Docker in order to make this succeed. We needed some kind of environment, and there wasn't really an obvious community consensus around this at the end of 2014. Kubernetes had been announced as a project, and we were trying to get our heads around the abstractions that it was describing. At the same time there was Mesos and Mesosphere, and we really liked some of the things that they were doing as an organization.

We were really impressed by what Joyent was doing with their applications — in particular, with the Linux personality that they were able to expose under their BSD jail mode; I'm sorry, under their Solaris zone mode. And fleet was available: because we were looking strongly at CoreOS, that was one of the options that we could use. But fleet doesn't really provide the abstractions that we wanted to be able to manage an application.

It's more targeted at managing infrastructure across the fleet. And we had a strong contingent within the team that thought that we could solve this problem better than anyone else had, and that we should just bite off that project. We ended up with a strongly divided team over this, and we ended up having to pull in our CTO in order to give us a command decision on it, and he just said, management-wise: this is what we're going to do.

We're going to use community standards — pick one, but use that; don't roll our own. And this is a hard line to walk as development organizations. You know, we're employed because we have to solve business problems, and our business problems really are all different, because all of our organizations are different; but we use open source because they're not all that different. So you have this line.

You have to walk between being able to use standardized components — being able to track an upstream as changes come out — versus having something that more tightly fits your use of an application. I don't always know where the line is on this. I think it's part of the art of engineering.

So then we faced some implementation decisions. We had our own data center, and that was where we were deploying OpenStack. It's much more cost effective to run things in your own data center if you already have one: you've already sunk the capital cost, you've already sunk the operational costs of running that infrastructure, and so your cost per cycle and cost per megabyte — or gigabyte — of RAM is much, much lower than using somebody else's platform.

So we knew that we needed to have something that would look common between the two, but they're really very different. They have similar abstractions, but they're not the same, and they behave very differently. And the problem was that, in our role as the platform team, it was our job to provide that common platform that our developers could use, where the only thing they would need to know is basically the different latency or cost characteristics of where they were going to be running.

So: we were already on CoreOS, and we reached out to CoreOS. They introduced us to Tectonic, and we really liked it, but we needed it to do more than AWS. On top of that, we were getting into January–February at this point, and their release cycle was not going to be complete early enough for us to meet our deadline of not having interns use the old system by the time they came around that following summer. It was four months off at that point, and we expected they weren't going to make it.

This came online pretty late in our process, and we really, really liked a lot of the capabilities — we were really attracted to the idea of not having to run our own cluster — but it would mean introducing a new cloud, and we already had other investments in AWS and other investments in the data center. And so we said: we can't really capitalize on that right now.

So then we started looking at doing our own self-management of these clusters, and naturally we looked at using kube-up, because that's where all of the documentation points you when you want to be able to start your own cluster and bring it up. But even if you do that: first, the result still isn't production-ready; and second, it's very opinionated about how you're going to be structuring your AWS environment or your OpenStack environment. In addition, the OpenStack integration was basically non-existent.

So we'd have to go in and invent a whole bunch of it anyway if we were going to make that happen. On top of that, it was very CentOS-centric, and we were not a CentOS shop at all — we were heavily Ubuntu, and we had started moving onto CoreOS. We just looked at that and said we don't really want to pull in another platform we'd have to support.

So we took all of that as input and produced our own set of fleet configurations — and because you're running a bunch of fleets, we called it Armada. It took the CoreOS images and the latest Kubernetes available at the time, which was 0.17, and deployed it, effectively, onto OpenStack first and then AWS. And this worked pretty well for us. It allowed us to do exactly what we needed.

The downside is that you're maintaining it yourself, and you don't get to capitalize on the new changes that are coming down from CoreOS and Kubernetes. And as it turns out, the upstream development cycle doesn't always distinguish which parts of new functionality get initialized via the core APIs — at least at the time this was true — and which via the cluster scripts. So new functionality gets introduced as kind of hacks into the cluster scripts.

You have to then reverse-engineer where those are coming in and make sure that your cluster maintenance stuff is keeping up with it. It's a bigger engineering challenge than it appears to be at the beginning, and again we get into this question of customization: how much are you willing to carry forward, and how much are you willing to just drop in as upstream comes in? And it was entirely manual as far as sizing clusters and so on.

We also produced our own developer dashboard, which was called Archimedes, and that worked really well. The intent was that we were going to open source it, but we never did, and that was because it was tightly coupled to our own processes — it was tightly coupled to a bunch of our own internal interface libraries and CSS objects that we couldn't open source — and we delayed long enough that, basically, the Kubernetes dashboard has largely caught up with where we would be if we open sourced it today.

So today we're largely just running the Kubernetes dashboard internally, and we have a backlog of things where we want to extend it and open source on top of that. So if you're going to do this today, I would suggest: just get behind kube dashboard and push. Don't write your own. But that didn't exist at the time; there was no such thing as kube dashboard.

You could still do something like this today with node selectors, if you are going to constrain your external versus internal traffic to particular nodes within a single cluster. I think we're almost there — I think you would need a node exclusion policy in order to keep internal traffic off of external nodes, if you're going to do that — so you could get most of this effect today; a sketch of the node-selector half follows.

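(A minimal sketch of that, with hypothetical node labels and workload names — and note that keeping other pods off the external nodes would additionally need the node-exclusion mechanism mentioned above, e.g. taints and tolerations in current Kubernetes:)

```yaml
# First, label the nodes that are allowed to carry internet-facing traffic:
#   kubectl label nodes node-3 node-4 traffic-tier=external
apiVersion: v1
kind: Pod
metadata:
  name: edge-proxy               # hypothetical external-facing workload
  labels:
    app: edge-proxy
spec:
  nodeSelector:
    traffic-tier: external       # schedule only onto the labeled nodes
  containers:
    - name: proxy
      image: nginx:1.21
      ports:
        - containerPort: 443
```
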
Those are implemented as a VXLAN overlay on top of a physical network. As it happened, we were also using a VXLAN overlay within the cluster for the pod network, so we ended up with VXLAN over VXLAN. Latency gets very high; it's a very janky kind of setup, but again, for v1 it worked. The major problems — which I'll get to in a minute — were where we had to diagnose issues on that, and that we were using kube-proxy v1, sending all traffic through user space.

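(For context, a pod network of that shape is what you get from a flannel network config like this sketch — the CIDR and options are assumed values, not our actual configuration. Stacking this on top of a VXLAN-based physical network is how you end up encapsulating twice:)

```json
{
  "Network": "10.2.0.0/16",
  "Backend": {
    "Type": "vxlan",
    "VNI": 1,
    "Port": 8472
  }
}
```
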
Container logging: we had an existing agent that we were using for log gathering from our existing applications — a Flume agent that we ran throughout — and that would send log streams to a Kafka bus, which we would then forward on to things like log indexers or archival services, that sort of thing. And that worked, again, pretty well.

But again, for the core mission it was working pretty well. We built containers via an sbt plugin — sbt is the common Scala build tool. I don't think the Dockerfile plugins were available at the time, so I believe we just had something that was templating a very simple Dockerfile, instantiating it, and then building that through Jenkins and publishing to our existing Artifactory repo, which we then licensed in order to enable the Docker image repo.

Our first customers were async Kafka workers, which were supporting some of our mobile apps, followed by some of the internet-facing helper services — we had things like name auto-completion that are fielding user-facing HTTP requests — and so we were fielding those out of Kubernetes. We hadn't yet moved the main app — the main Rails app — over to Kubernetes, but this was a great way to start getting some operational experience with running things in production. And we had a new flagship application that we were preparing to take massive traffic on.

That was also one of the major first users of this. And by and large — with the exception of the networking things that I'm going to get into in a minute — our users were very happy. So this is an example of a quote that somebody came and excitedly told me about, a week or two after we'd gone live.

He had fixed a bug in production before the alert for introducing the bug had hit his phone, and that was considered just an enormous success, compared to the close-to-an-hour release cycle that users previously had to go through in order to converge application state. And the interns came back.

Some of our monitoring stuff really wanted to be able to use the hostname in a distinguishable way, and that meant we had to bring pod information through into the container in such a way that hostname would answer with it. This eventually became the default container behavior in 1.2, but we were running a custom kubelet for a while in order to get this behavior. We expanded to include autoscaling; we never did end up using Heat autoscaling.

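(On the pod-information point: in current Kubernetes the usual way to hand pod metadata to a container is the downward API rather than a custom kubelet. A minimal sketch, with arbitrary names:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monitored-app            # hypothetical workload
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "echo running as $POD_NAME in $POD_NAMESPACE; sleep 3600"]
      env:
        # Downward API: expose pod metadata to the container environment.
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```
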
We had been adjusting the Heat implementation within our OpenStack version, and it just never really got there, but we expanded onto AWS and used autoscaling there. We eventually did solve TLS, but I'll get to that a little bit later. I've talked about networking: we had very high baseline latency, which we later discovered is probably the result of running kube-proxy in user space. When we upgraded to Kubernetes 1.1 and brought in its version of kube-proxy, we enabled iptables mode, and it worked like a charm — latencies dropped way down.

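(The change in question is a single kube-proxy flag. A sketch of how that might be wired up as a static-pod manifest excerpt — the image tag and master address are illustrative, and at the time this was typically just set on the kube-proxy command line:)

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-proxy
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-proxy
      image: gcr.io/google_containers/hyperkube:v1.1.8  # era-appropriate, illustrative tag
      command:
        - /hyperkube
        - proxy
        - --master=https://<api-server>   # placeholder
        - --proxy-mode=iptables           # vs. the default userspace mode
      securityContext:
        privileged: true
```
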
dockerd is the bane of my existence. It turns out that the log options changed a bit across some of the major versions, and the Docker daemon didn't fail to start if it didn't recognize them — it would just ignore the option. So we ended up with cases where log rotation would break and we'd end up filling up disks, and we had no sign of it until the disks filled up. Again.

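(The kind of options at issue — a sketch of log-rotation settings as a Docker daemon.json; the sizes are arbitrary, and on the versions we ran these were typically daemon flags like --log-opt rather than daemon.json. The failure mode was that an unrecognized or renamed key here was silently ignored:)

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
```
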
External load balancers: we didn't have the OpenStack HAProxy integration in place, and so we used our traditional external load balancers, which had worked pretty well. But we discovered that there's a bug in the DNS pool discovery that we were using in order to advertise new nodes to the pool, and suddenly the pool would empty and we'd lose production traffic — not where you want to be. So: that was all within Whitepages, which was kind of the parent company.

That's where all this happened. Earlier this year, we spun out as a startup into a new organization called Hiya. We are tasked with fixing spam and scam calls — if you get robocalls, we're trying to stop that — and we took the Kubernetes clusters with us when we went. The entire platform team wholesale got moved over to the startup, and we determined that that was a good interface for us to draw.

Then, if we're not going to have the infrastructure team with our organization anymore, we're going to replace them with AWS. So we moved wholesale over into the cloud, and we ported our existing stuff over to CloudFormation; we use that to configure our stuff automatically now. It works pretty well, but we still have problems. We moved to using the host gateway backend in flannel, which is marvelous: it just works, it's high-performance, and we've had no issues with it.

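(The corresponding flannel network config — same sketch format and assumed CIDR as before; host-gw programs routes on the hosts directly instead of encapsulating packets, which is where the performance comes from:)

```json
{
  "Network": "10.2.0.0/16",
  "Backend": {
    "Type": "host-gw"
  }
}
```
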
With autoscaling, we took the time to do a full TLS root key ceremony — you know, with an offline machine and everything, and a copy of the key kept off-site — and that has allowed us to take a few more risks with doing things like putting a derived key up in an automatic certificate authority. And so now we have TLS handled automatically as we autoscale; we're using CFSSL to do that.

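(A sketch of the CFSSL side — a signing config like this, served by something like `cfssl serve`, is what lets a newly scaled-up node request a certificate without manual steps. The expiry, profile name, and usages are illustrative, not our actual policy:)

```json
{
  "signing": {
    "default": { "expiry": "8760h" },
    "profiles": {
      "node": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
```
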
We're using hosted Artifactory now, and this is great for the people who are in North America; but we also have European engineers, and they really don't like this, because the latency is even worse than it was before, so we're looking at moving our image hosting directly into S3. Don't ever use CloudFormation to add an ALB to an auto scaling group — I see people laughing here; it sounds like other people have seen this too. CloudFormation will delete your auto scaling group if you do this. You lose your entire production cluster.

It did not make us happy. If we move to Kubernetes 1.4 — which we haven't yet — we believe that load balancer services will give us the automatic handling on this, which will make us not have to do ELB management directly at all anymore. And so that will have the benefit of solving this problem and moving it back into the developers' hands, which is where it should be.

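(That is, the cluster provisions and manages the cloud load balancer itself from a service definition like this sketch — names and ports are placeholders:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: edge-api               # hypothetical service
spec:
  type: LoadBalancer           # Kubernetes asks the cloud provider for an ELB
  selector:
    app: edge-api
  ports:
    - port: 443
      targetPort: 8443
```
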
Container logging is still not working well. I would love to be using journald for this, but we've done some extensive profiling, and in spite of the latency solutions that CoreOS has driven and put into place — and they've had a couple of blog posts on that — throughput is still not where it needs to be. In fact, it's surprisingly low.

You know, we're seeing things max out under a thousand a second. And this wouldn't be a problem, except that rkt is proposing to use journald as its container logging system of record. So if you have solved any problem like this — of handling high-throughput container logging — I'd love to talk to you. I really want to get Docker out of the picture. This is not a universally loved opinion, but I'd rather we stop using Docker and move directly to rkt.

Do we do any security scanning on our containers or on our registry? Nothing automatic — it's on the backlog right now. We basically just keep an eye out on the CVEs and keep our base images patched, but we don't yet have anything that can, for example, automatically pull out-of-date base images from use, and that's obviously desirable. Yeah.

How have we been upgrading Kubernetes, or any other components that would affect infrastructure downtime? We've been doing cluster replacement and load migration so far, and I would love to get away from that. I think we're approaching a state where we can do that — where we're going to be able to do HA upgrades of the core components, and then the kubelets alongside it — but we haven't proven that out yet, so right now it's a fairly heavyweight process.