A
Today I'm going to give an example of a long-standing problem that we've had within the CNG, whether it be Kubernetes or just the containers themselves. When you're doing a roll of the deployment, an upgrade or, say, a massive reconfigure, there are instances where the pods will be restarting and you'll have one version with one set of assets and another version with a different set of assets. Our containers explicitly package only the assets they actually need, as a matter of not wasting resources on pull, on deploy, and on storage.
A
So if the user's request hits pod A and that gives them HTML, that HTML says "I need these assets," but the requests for those assets then get routed to pod B. Pod B doesn't have the assets from pod A, thus can't serve them, and now you get a UI that's either missing chunks, blank, or just doesn't behave in the way that you would expect, because we don't actually have all of it. So this is an attempt to address the problem without actually making use of a CDN, because you can do that now and it's fully documented, but it's complicated.
A
What I've found is that I can build a very minimal container, deploy just that NGINX, and use NGINX to serve assets from both containers by using init containers that pull the assets from the two running container versions. This is why it has to be an additional deployment: you deploy an additional set of assets, or rather a Helm chart that pulls in the two sets of assets for you, shims itself into the existing Ingress on the hostname, and then off it goes.
A
The init containers populate a directory that we call assets, which we mount in here. For the first version that we're concerned about, the version we're upgrading from, we copy those assets in under a; then for the next version that we're concerned about, we copy that in under b. The running container is just NGINX at the latest version, running the Alpine image, so it's as small as we can get it without having to fight with it. We mount the nginx.conf.
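As a rough sketch of that deployment's shape (the image tags, asset path, and resource names here are my assumptions for illustration, not the manifest from the demo):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: gitlab-assets-shim            # hypothetical name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: gitlab-assets-shim
      template:
        metadata:
          labels:
            app: gitlab-assets-shim
        spec:
          securityContext:
            fsGroup: 1000                 # let a non-root image write the emptyDir
          initContainers:
            # Copy assets from the version we're upgrading FROM into assets/a.
            - name: assets-a
              image: registry.example.com/gitlab/webservice:v14.1.0   # assumed old image
              command: ["sh", "-c", "mkdir -p /assets/a && cp -a /srv/gitlab/public/assets/. /assets/a/"]
              volumeMounts:
                - name: assets
                  mountPath: /assets
            # Copy assets from the version we're upgrading TO into assets/b.
            - name: assets-b
              image: registry.example.com/gitlab/webservice:v14.2.0   # assumed new image
              command: ["sh", "-c", "mkdir -p /assets/b && cp -a /srv/gitlab/public/assets/. /assets/b/"]
              volumeMounts:
                - name: assets
                  mountPath: /assets
          containers:
            # Plain NGINX on Alpine: the smallest thing that can serve both trees.
            - name: nginx
              image: nginx:alpine
              ports:
                - containerPort: 8000
              volumeMounts:
                - name: assets
                  mountPath: /assets
                  readOnly: true
                - name: nginx-conf
                  mountPath: /etc/nginx/nginx.conf
                  subPath: nginx.conf
          volumes:
            # emptyDir with no medium set is disk-backed, so the assets
            # don't sit in memory.
            - name: assets
              emptyDir: {}
            - name: nginx-conf
              configMap:
                name: gitlab-assets-nginx   # assumed ConfigMap holding nginx.conf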
A
This completely overrides all of the container's behavior, because we're mounting on top of that configuration, whether it has includes or literally anything else, and we're only carrying what we care about. We have a temp volume that's actually disk-backed, because we don't want to consume all of the space the assets take in memory, which we would if we used a memory-backed emptyDir. From there, we have a Service listening on port 8000, so we don't have to worry about port 80 and things complaining, and then we're defining an Ingress that matches everything else that we have in our deployment.
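The nginx.conf mentioned above could be as small as this; the two-directory fallback is the point, while the exact directives are a sketch rather than the config shown in the demo:

    worker_processes 1;
    events {}
    http {
      include /etc/nginx/mime.types;
      server {
        listen 8000;
        # Try the requested asset under version a (upgrading from),
        # then version b (upgrading to); 404 only if neither has it.
        location ~ ^/assets/(?<asset>.+)$ {
          root /assets;
          try_files /a/$asset /b/$asset =404;
        }
      }
    }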
A
This is not a chart right now; this is literally just a manifest. We have an Ingress coupled to the same namespace that I'll install a chart into here in a moment, and we're saying this matched host is the one we're making use of, the TLS configuration is matched to the way I'm about to deploy my Helm chart, and it says to pass to that Service and only care about the assets path.
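Roughly this shape, where the hostname, TLS secret, and names are placeholders that would have to match the chart install, not values copied from the demo:

    apiVersion: v1
    kind: Service
    metadata:
      name: gitlab-assets-shim
    spec:
      selector:
        app: gitlab-assets-shim
      ports:
        - port: 8000        # unprivileged port, so nothing complains about binding 80
          targetPort: 8000
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: gitlab-assets-shim
    spec:
      tls:
        - hosts:
            - gitlab.example.com          # assumed hostname
          secretName: gitlab-tls          # must match the Helm chart's TLS setup
      rules:
        - host: gitlab.example.com
          http:
            paths:
              - path: /assets             # shim only the assets path
                pathType: Prefix
                backend:
                  service:
                    name: gitlab-assets-shim
                    port:
                      number: 8000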
A
Which resulted in that exact error, and that's the underlying race condition. I guess I could have done those, 14.1 and 14.2, and I should have hit this massive everybody's-going-to-break-on-the-database problem.
A
What we also happened to experience this time is the problem of the application's view of the database: the object-relational model caches the schema in memory, so it knows which columns exist, what table is where, and which things it should look for in the database, without actually having to query the database every time it needs to query the database. When we're doing a production upgrade, we would normally run the migrations, roll the pods, run the post-deployment migrations, and roll the pods again. In this particular case, because we're not splitting pre- and post-deployment migrations, all of the migrations happen, and that particular table rename is what we deem a post-deployment migration, because we don't want to break the running application. So because I'm not actually executing this according to the SRE pattern for running rolling upgrades, I hit that one; there happens to be a post-deployment migration in that set of versions.
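For reference, the split described here looks roughly like this; SKIP_POST_DEPLOYMENT_MIGRATIONS is GitLab's real flag for this, while the surrounding commands are a sketch of a typical zero-downtime flow rather than the exact procedure:

    # 1. Run only the regular migrations; hold back anything marked
    #    post-deployment so the old schema view keeps working.
    SKIP_POST_DEPLOYMENT_MIGRATIONS=true bundle exec rails db:migrate

    # 2. Roll the pods to the new version (helm upgrade, etc.).

    # 3. Run the held-back post-deployment migrations, e.g. the table
    #    rename, once no old-version pods are serving traffic.
    bundle exec rails db:migrate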
B
I think it looks good; it's a fairly simple solution. We've come up with ideas on how we were going to solve this for a while, but nothing's been put in place.
B
It's taken a while, right. A lot of those ideas often related to, you know, uploading the assets into object storage and then pulling them back down into each Workhorse, or something like that. They might be good ideas, but this seems like a simple way to get started. Even the approach you have right now could already pretty much be documented for people to try if they wanted it, right.
A
One thought, DJ, is that as we pull the operator logic out, the one thing that the operator logic does is flag all of the deployments and stateful sets as paused. If we were able to actually do the deployments of, say, task-runner, but then pause all the other Rails ORM items, Sidekiq, the Rails webservice, anything else bound to it, then in theory we should be able to still have them be paused, do those, and then initiate the rolling upgrade.
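A sketch of that pause-then-roll flow with plain kubectl; the deployment names are assumptions in the style of the GitLab chart, not taken from the meeting:

    # Pause the Rails ORM consumers so a chart upgrade doesn't roll them yet.
    kubectl rollout pause deployment/gitlab-webservice-default
    kubectl rollout pause deployment/gitlab-sidekiq-all-in-1-v2

    # Apply the new release; paused deployments pick up the new pod
    # template but don't start replacing pods.
    helm upgrade gitlab gitlab/gitlab -f values.yaml

    # After migrations have run (e.g. via task-runner), initiate the
    # rolling upgrade.
    kubectl rollout resume deployment/gitlab-webservice-default
    kubectl rollout resume deployment/gitlab-sidekiq-all-in-1-v2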
B
Flags, right, because we were having to deal with the fact that Helm was already rolling these out type thing, which, with the new operator, we have better control over: not having that happen to begin with type thing, exactly.