From YouTube: 20191108 Mailroom on k8s demo
A: I don't really have too much to show for it. So I think what I'm going to show... I'll share my screen, so you're not looking at a cat. Share screen, and let's make it smaller. Jarred did most of the work so far to initially implement Mailroom inside of Kubernetes. So far, the only thing we've got: we're following the same pattern that we did for the Kubernetes registry. In this case, inside each of our environment files, we're starting to add the necessary items to enable Mailroom. So here, we simply set mailroom enabled to true, and we set the specific version of the tag that we want.
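A minimal sketch of what such an environment file entry could look like; the key names here are hypothetical, not necessarily the repo's actual keys:

    # environment file sketch; key names are hypothetical
    mailroom:
      enabled: true      # turn the Mailroom deployment on for this environment
      image:
        tag: "1a2b3c4d"  # pin the specific image tag to deploy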
A: We have the basics that stay the same for all of our environments. We're using YAML for everything; we're setting the port. This is just the configuration: we're creating a secret called gitlab-mailroom, the IMAP credentials, as a Secret object inside of Kubernetes, and then we've got a key called password that will fill in the necessary stuff.
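A sketch of the kind of Secret object described here, holding a single password key; the name and namespace are assumptions:

    # Kubernetes Secret sketch; name and namespace are assumptions
    apiVersion: v1
    kind: Secret
    metadata:
      name: gitlab-mailroom          # hypothetical secret name
      namespace: gitlab              # hypothetical namespace
    type: Opaque
    stringData:
      password: "<imap-password>"    # the real IMAP credential goes here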
A: Not sure why user is blank; maybe ask Jarred about that. Maybe it's just not there; maybe there's no user for the authentication method, I don't know. This mimics the configuration across our Omnibus installation. So, theoretically, mail works in pre. I've not tested it myself, but I've seen Jarred test it multiple times and it works just fine. Same for staging: we had to go through a little snafu of disabling everyone else's email, because there was a production database import at some point in time, but that works as well.
A: I think, yeah, there's an open issue right now in GitLab that someone is assigned to, where they're trying to add structured logging, because that was a new implementation. So they're going to try to upgrade the mailroom gem that's being utilized, and then once that gets upgraded, we should be able to do the necessary work on the infrastructure side to grab that actual log data.
A: So inside of prod, because we don't have any pods running inside of Kubernetes, we don't have anything from Stackdriver for Mailroom, but we do have our unread email counts. That's something we've always had a metric for; we just never had a dashboard for it, so this is something new. And then recently we added this: there's no data in production, obviously, but at least in staging we'll have some data for our pods. So we do have visibility into things, and we have existing alerts.
A: That's for the application itself, and then we also have our standard pod notifications. So if a pod fails after a period of time, we'll get an alert for it, or if it's crash looping, or if the ReplicaSet wants a certain number of replicas and we're not meeting that count, we'll get an alert. And this is just due to the fact that we don't really have good visibility from the mailroom gem; there's an open issue that I created for that.
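Pod alerts like these are commonly written as Prometheus rules over kube-state-metrics; a sketch, with alert names and thresholds as assumptions rather than the team's actual rules:

    # Prometheus alerting rules sketch; names and thresholds are assumptions
    groups:
      - name: pod-alerts
        rules:
          - alert: PodCrashLooping
            expr: rate(kube_pod_container_status_restarts_total[15m]) > 0
            for: 15m
          - alert: ReplicasMismatch
            expr: kube_deployment_spec_replicas != kube_deployment_status_replicas_available
            for: 15m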
C: It's all one process: basically, it's the same between different components, and we don't revert back to the old behavior of waiting for a tagged release to be able to deploy to GitLab.com, because that is exactly where we don't want to end up right now. What we end up doing is: we have commits as basically being our tags, yeah, and we don't have a "latest" on that.
B: This is something you can do, but then you have to build auditing around this, I think, because we want to know who is changing something and why. I was also thinking that last year I was briefly involved with, I think it was, the hardening of Kubernetes clusters. I'm not sure, yeah; it's kind of a Google thing, but it's based on some open standards, and they had this kind of problem, because you can only run authorized images on your cluster.
B: So in that case it's even worse, because not only do you have to always use the SHA of the commit to get the digest of the image itself, but you also have to know the digests and upload them into the cluster so that the image is allowed to run. So we may end up building another system that does that end of this for us. If you don't want to use the git commit, there's state to be saved somewhere.
C: No, I fully agree with you: we need to build that type of system for sure at some point. We are going that way and I don't see a way around it; there literally is no way around it. However, we are so far away from it right now that investing in a system that we don't even know how it can look makes little sense to me. What I've been thinking is a tiny bit more iterative approach, which is kind of controversial, but...
C: There are many problems with this, but what it also allows us to do is, at any point in time, we can switch around and say that that lock file is the source of truth, and any external system that needs to vet any changes can actually change that file and then trigger the rest of the system deployments automatically, without actually having to change the whole system of deployment.
B: Question: why do we need CI variables? Can we use Kubernetes secrets directly in the CI of that repo? We transfer the information that is written in the repo into Kubernetes secrets or whatever, and then we can just mount them as a hashmap or whatever inside, when we want. We can also build it so it can be in etcd, so we can access it from the machine, or we can even build a very simple daemon-style system of our own that people can query.
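A sketch of mounting a Kubernetes Secret's keys directly as environment variables in a pod, as suggested here; the names are hypothetical:

    # pod spec sketch; secret, image and container names are hypothetical
    apiVersion: v1
    kind: Pod
    metadata:
      name: mailroom
    spec:
      containers:
        - name: mailroom
          image: registry.example.com/mailroom:1a2b3c4d
          envFrom:
            - secretRef:
                name: gitlab-mailroom   # every key in the secret becomes an env var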
C: Sure, it's an option, but what I'm trying to suggest here is that this connection gets us there much quicker; it buys time to be able to build something more robust, as you're suggesting. An alternative to the thing you suggested: why not add freaking audit logging to CI variables and have everyone benefit from it, all right? That would be even better, because then we don't have a custom piece of stuff to maintain, yeah.
C: So that means that when you're doing some changes to this, this is the second question you need to ask: how do we make sure that we implement this while being able to start from scratch? Whether that means that every time you need to update or create a new environment, you need to depend on an existing file of some sort, like staging or something like that; or again, an even dumber solution would be: if the file exists, use this; if not, use a generic template that has a skeleton and null values.
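A sketch of that generic skeleton template: the full structure with null values, used whenever an environment-specific file doesn't exist yet; the file layout and keys are hypothetical:

    # skeleton template sketch; copied when no environment file exists yet
    mailroom:
      enabled: false   # off by default for a fresh environment
      image:
        tag: null      # must be filled in per environment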
A: A great solution. I guess the only concern I have is that right now our process of changing this value in the YAML file gives us the CI pipeline. We don't currently do this, but we could inject QA testing. If we did this outside of that process, I don't know off the top of my head how we would do that.
A: Either way, we have the capability to get a pipeline when we make a change to that variable. Right now, all we do is a dry run, so we just determine whether or not we can make this change. But theoretically, off of this could hang a QA job that spun up and tested that new version prior to running a deploy, on some fake environment or something. The solution that you proposed, I'm not really sure how to do that.
B: Forget my solutions and just think about what Mara suggested, okay? So I think the easiest thing here is that we can build the YAML file with, let's say, a very simple Ruby script and an ERB template, so that we inject values from the environment variables. I'm not sure what the input was, but I think the output was Helm, so you can still ask Helm to do the dry run, yeah.
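A sketch of that idea, assuming hypothetical file and variable names: a YAML file written as an ERB template, rendered with the erb tool that ships with Ruby, and then handed to a Helm dry run.

    # values.yaml.erb sketch; render with: erb values.yaml.erb > values.yaml
    mailroom:
      enabled: true
      image:
        # hypothetical variable name; ENV.fetch fails loudly if it is unset
        tag: "<%= ENV.fetch('MAILROOM_IMAGE_TAG') %>"

The rendered file can then be vetted without deploying, for example with: helm upgrade --install gitlab ./chart -f values.yaml --dry-run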
C: Otherwise, you open the file, you go and see this weird variable, and you get to wonder where it is from, and then you look into the documentation, which we will write, obviously, and you find it. And further, even more than that, you can just easily set an environment variable in your environment when you're testing things, easier than always remembering what's set there; otherwise it's a bit more difficult.
C: Theoretically, there is no reason why not, but there are still enough edge cases that I can think of, right at this very moment, to address. But it could move us in the direction of not having to depend on images being so tightly coupled with the Helm charts. And then that brings me to the Helm chart.
C: I'm choosing my words: there are things that still need to be overcome compared to Omnibus, which has been tried and tested and is rock-solid. We are deploying from master, basically, right now, but Omnibus is seen by so many eyes that I understand it. But theoretically, what we could do is say that we are not pulling from a tagged version of a chart; instead, we can say that we are pulling from the latest stable branch of the charts. That would allow them to backport fixes as soon as they need it.
C: We would be able to not depend on them tagging a version, and we would be able to leverage Helm updates as well. The only concern, well, not the only one, I have a lot of concerns, but one of the concerns I have there is: how do you ensure that you roll out chart changes independently from image changes, so they don't step over each other? Because you want to have as small a delta as possible for any change that you do. So, say:
C: If we have a scheduled pipeline that will continuously upgrade, we want to ensure that it always only catches one type of change or the other; so pulling the changes in the Helm chart should be a blocking thing for any other changes on images, for example. But that could be a way forward for us to do this. So, as far as you can see here, I'm trying to find a way where we are going to be the observers of the process, rather than the people responsible for git-committing this version.
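A sketch of what such a scheduled pipeline job could look like in GitLab CI; the branch name, chart location, and job details are assumptions:

    # .gitlab-ci.yml job sketch; branch, URL and file names are assumptions
    update-chart:
      only:
        - schedules   # run only from a scheduled pipeline
      script:
        # pull the latest stable branch of the charts instead of a tagged release
        - git clone --branch stable --depth 1 https://gitlab.com/gitlab-org/charts/gitlab.git chart
        # dry run first, so chart changes are vetted separately from image changes
        - helm upgrade --install gitlab ./chart -f values.yaml --dry-run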
C: Like, I don't want to go back to manual work, and I know for a fact that a lot of smaller and medium companies actually do this, because that's the safest thing to do. It is the safest thing to do, but it's not the nicest thing to do, because you depend on humans; and this would also push us in the direction of depending more on tests, depending more on alerts, depending more on that.
B: Right, so first, directly on Mailroom: this is something I'm not really extremely familiar with. So my question here is, because what I can see is that Mailroom reads from IMAP and publishes to Sidekiq, okay? So it's consuming email. So how do we deploy this if we have more than one? How can we ensure that we can run the Kubernetes version plus the old VM together? Or is it just one single thing, and you can't run two of them together?
A: Mailroom has a feature they call arbitration, where it talks to Redis and creates a unique ID and a lock for who's going to process the email, so we could run the VMs in parallel with the ones in Kubernetes, no problem. We already do this with multiple VMs today, so this will be a quick and easy way to test things, to validate that the pods are running like they're supposed to, and then we can just cut down the VMs when we're ready.
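For reference, the mail_room gem exposes this as a per-mailbox arbitration setting; a sketch from memory, so the field names should be verified against the gem's README:

    # mail_room config sketch; field names from memory, mailbox is hypothetical
    :mailboxes:
      - :email: "incoming@example.com"
        :arbitration_method: "redis"     # take a Redis lock per message
        :arbitration_options:
          :redis_url: "redis://redis.example.com:6379"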
B: So what if, in Kubernetes, once we move through, we do a mapping of endpoints and assign them to the product team that owns that part of the codebase, and then each endpoint we route only to a set of pods? Then we can scale independently per feature and we kind of get free error budgeting. It isn't really complete, because, let's say, the request then goes through literally all the code, and you can do this later on, but the entry point still kind of identifies which category owns it.
B: So let's say it's a CI thing; then it goes to Verify. Everything that is, let's say, CI goes to this set of deployments: a set of pods that are supposed to only handle CI API requests. Same for the Web IDE or whatever; we can do the split as much as we want. And it's a lot of work, okay, but then we can tag the monitoring and the error rates and everything by the family of the pod, and then we know which team depleted its error budget.
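A sketch of that routing idea with a Kubernetes Ingress: one path family per owning team, each backed by its own set of pods; the paths and service names are assumptions:

    # Ingress sketch; paths and service names are assumptions
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: route-by-feature
    spec:
      rules:
        - http:
            paths:
              - path: /api/v4/jobs          # CI traffic, owned by Verify
                pathType: Prefix
                backend:
                  service:
                    name: webservice-ci     # pods that only handle CI requests
                    port:
                      number: 8080
              - path: /                     # everything else
                pathType: Prefix
                backend:
                  service:
                    name: webservice-default
                    port:
                      number: 8080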
B: We can do the same for Sidekiq, and go on and go on and go on. The idea behind this is that they all do the same thing; the code is the same. Now, you're not going to really fork the project; you could remove the code that just doesn't belong to you, but no, I don't want to go in that direction: still deploy everything to everyone, but you only route to those pods.
B: This could be a good way of, let's say, measuring the features that we are using and how many resources they consume. Maybe we end up scaling, say, more pods for CI and fewer pods for package management or whatever, and we can have a better understanding of what we are doing, and who's doing well and who is not doing well.
B: Each team has its frontend pods, has its backend pods, whatever we can build, yeah, but only the things related to that team go to that path, so that you can count the errors, the five hundreds, whatever. And at a certain point, maybe you realize from a trace that the problem is in another area of the codebase. That's okay, it's fine, but at least you have, let's say, ownership of the problem, or whatever, when a bug comes in or whatever it is.
C: I wouldn't be opposed to doing some POC there. First of all, finding who actually owns reply-by-email, you know, because the product categories page does not even list this: it doesn't list Mailroom, doesn't list incoming email, doesn't list reply-by-email. So I don't know who owns this. Service Desk?