From YouTube: SRE support for the reference arch Staging environment
A
Cool, okay, so let's get started. You wanted to discuss what you need from infrastructure for creating the reference architecture for the alternate staging environment, correct?
B
And that's, I think, that's what we want. What do you think we're going to need from the infrastructure side? I guess the first thing is a GCP project to deploy to, yes, and is that all we need to get started?
B
You know, if there isn't a specific place for it, we might as well give it its own project, and then from there, depending on what architecture we want, we can create it. No problem.
C
If it's staging 10k, you'll probably want to use some domain and not an IP address, so yeah, maybe something from the existing domains.
B
No, that's true, yeah. You can set up your own domain as part of that project, or I think you can give that project access to an existing domain, or you can get a new one. I know, like, within Geo I have my own projects, but I use the domain in another project just to point to my IP address, so I don't specifically need my own domain. It's just however you want to access the thing: we can give you an IP address, or you can have a proper, actual domain name for it.
C
I think Max suggested to call it staging 10k, yeah, maybe.
A
So, existing project names, right: we have gitlab-production, we have gitlab-staging-1, because gitlab-staging was taken by somebody else when we created the project. So, given that, it's maybe a little weird, because "gitlab-staging-10k" won't confuse anyone. No, I mean, that's fine. I think I'd like the project name to be the same as the environment name, if possible, and this follows our convention. So what I can do is create a project called gitlab-staging-10k.
A
I can put it in the same folder as our other environments, and then it's just a matter of making sure you guys have access to it. We'll have to sort that out once you have access to it. How do you envision you'll be deploying to it? Will you just be running Terraform and Ansible locally from your workstation, for now?
B
It depends how you want to do it. I suppose we can do what we've done before: we can just create a basic VM inside that project that you can use as a controller, because then anyone can log on to it, and everything is there. Normally we would have it locally, but we'd also have it stored in a GitLab project.
B
In a repository, so that any of us can just check it out. We have it encrypted and stuff, so then you have to get people added — or you could just store it in the project, on the VM, where you can pretty much control access.
B
We have different examples of that already, yeah, like all the performance stuff we have.
A
Okay, so if we do that, we need a project name for this too; like, we need to allocate a separate project that's going to have the CI config.
C
Yeah, yeah, something like this. I linked it in the doc.
C
Yeah, we use a folder named "developer environment" to keep configs, right?
C
We can create a new project there and add this new environment there, with the encryption that Nick mentioned.
C
The trigger will pull the GitLab Environment Toolkit from the repo.
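A CI job along the lines C describes might clone the toolkit and run it against the new environment. This is only a sketch under assumptions: the environment directory name is a placeholder, and the toolkit's exact layout should be checked against its own docs.

```shell
# Hypothetical CI job body: fetch the GitLab Environment Toolkit and run it.
git clone https://gitlab.com/gitlab-org/gitlab-environment-toolkit.git
cd gitlab-environment-toolkit

# Provision infrastructure with Terraform, then configure GitLab with Ansible.
# "gitlab-staging-10k" and these paths are assumptions, not confirmed layout.
(cd terraform/environments/gitlab-staging-10k && terraform init && terraform apply -auto-approve)
(cd ansible && ansible-playbook -i environments/gitlab-staging-10k/inventory playbooks/all.yml)
```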
A
I think that's enough. So what I can do, then — have you guys gone through the... So typically, when we create a GCP project, or we create a new environment, what we do is: we have Terraform that creates the project, and it creates a service account, and we associate a key for that service account that has, like, admin access, and this is the service account we use.

A
We call it terraform-ci — that's the name of it, for historical reasons — but this is what we use in CI, and this is what you set your GCP service account key to. So if I create this project, and we create the service account, and I give you this GCP service account JSON key, is that enough, and you can just go with that?
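The flow A describes (project, terraform-ci service account, JSON key) could be sketched with the gcloud CLI roughly as follows. This is a minimal sketch, not the team's actual Terraform: the project ID, folder ID, and role binding are assumptions.

```shell
# Placeholder identifiers; the real ones live in the team's Terraform.
PROJECT=gitlab-staging-10k
FOLDER=123456789012

# Create the project in the shared environments folder.
gcloud projects create "$PROJECT" --folder="$FOLDER"

# Service account named terraform-ci, matching the existing environments.
gcloud iam service-accounts create terraform-ci --project="$PROJECT"

# Broad provisioning access; the exact role is an assumption.
gcloud projects add-iam-policy-binding "$PROJECT" \
  --member="serviceAccount:terraform-ci@${PROJECT}.iam.gserviceaccount.com" \
  --role="roles/owner"

# The JSON key that gets handed over and set as the CI service account key.
gcloud iam service-accounts keys create terraform-ci.json \
  --iam-account="terraform-ci@${PROJECT}.iam.gserviceaccount.com"
```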
B
It's how we do it at the moment. You won't be able to see it, but if you look in that configs project, there's a folder called "keys", and that's...
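The transcript doesn't name the encryption tool used for that keys folder; one common option that fits the described workflow is ansible-vault, shown here purely as an assumed example.

```shell
# Encrypt the service-account key before committing it to the configs
# project's keys/ folder. ansible-vault is an assumption — the meeting
# doesn't say which tool the team actually uses.
ansible-vault encrypt terraform-ci.json --output keys/terraform-ci.json.vault

# Anyone with the vault password can recover the plaintext key:
ansible-vault decrypt keys/terraform-ci.json.vault --output terraform-ci.json
```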
A
Yeah, I think it's probably fine for this. I mean, normally we were a little bit wary of checking even encrypted keys into git, but given that there's no customer data here, it's probably fine — it's what you guys have done already, so we'll do that. So what I'll do is: I'll create the tasks, I'll create the project, we'll create the service account, and then, I guess, we'll probably want to create a DNS entry for this.
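That DNS entry, if it lands in Cloud DNS like the rest of the GCP setup, could look something like this — the zone name, domain, and address are all placeholders, not values from the meeting.

```shell
# Hypothetical Cloud DNS A record for the new environment.
gcloud dns record-sets transaction start --zone=staging-zone
gcloud dns record-sets transaction add "203.0.113.10" \
  --zone=staging-zone --name="staging-10k.example.com." --ttl=300 --type=A
gcloud dns record-sets transaction execute --zone=staging-zone
```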
C
To make everyone align. But yeah, sounds good.
B
We'll do it, and I'd say, if it's going into that existing project, then, to be fair, there's very little work that needs to really be done. It's just copying the existing config and pointing it to your stuff.
C
Simple, yeah. And I wanted to ask: could you please clarify how staging is being updated with the pipeline? I mean, when the new version is getting out.
A
So, how our existing staging environment, like staging.com, is being deployed to prior to production: the way it works is, we currently have a mixture of virtual machines running Omnibus, and Kubernetes clusters, in staging. We have multiple clusters. When there's a new auto-deploy build — we have a pipeline that builds Omnibus packages off of the auto-deploy branch — when a new build is ready, it triggers a deployer pipeline, and this is in the deployer project.
A
So what we have is a series of stages. The first thing we do is: we have an asset pipeline that puts assets up into object storage; we do migrations; then we have this stage, which does the Omnibus upgrade of Gitaly — so we have a whole bunch of Gitaly nodes, both in staging and production.
A
Then, after that, we do Praefect, and then we have the fleet stage, and as we've moved services to Kubernetes the number of jobs has gone down; right now we only have web and web-pages. Then we have the Kubernetes cluster, and this job here, this Kubernetes-cluster job, triggers another pipeline which runs Helm against our Kubernetes clusters, and we have four of them: one is a regional cluster, and then each availability zone has a separate cluster.
A
What I can imagine is, when we create this new staging environment, maybe we could just add a trigger job here that would trigger another pipeline that would initiate a deploy, with the package version set as, like, an environment variable or something. That would allow us to say: okay, we're going to deploy this Omnibus package. Or — I'm not sure what you had in mind yet, whether this is going to be a Kubernetes cluster or Omnibus.
C
I think the current assumption is to create a 10k hybrid environment, but I...
A
So I think, if we really want to integrate it with our current deploy pipeline, if we can just get a pipeline that we can trigger with a version as a CI variable or something, then that'll be good; then we can just do that. We might even do it so it doesn't block other things — we could just do a fire-and-forget, you know: we'll just trigger the pipeline and go on our merry way on staging and into production.
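That fire-and-forget trigger could come down to a single call to GitLab's pipeline trigger API from the deployer job — a sketch only; the project ID, token, and variable name are assumptions.

```shell
# Trigger the 10k environment's pipeline with the package version passed
# as a CI variable, then continue without waiting on the result.
# <project-id>, TRIGGER_TOKEN, and GITLAB_VERSION are placeholders.
curl --request POST \
  --form "token=$TRIGGER_TOKEN" \
  --form "ref=main" \
  --form "variables[GITLAB_VERSION]=${OMNIBUS_PACKAGE_VERSION}" \
  "https://gitlab.com/api/v4/projects/<project-id>/trigger/pipeline"
```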
A
Yeah, we use Ubuntu; we're still stuck on 16.04, because it's very difficult to reboot Gitaly nodes — they are a single point of failure. Where we don't use Praefect, if we reboot a Gitaly node, then customers see errors, so we're kind of stuck on this old Ubuntu version. If you wanted to keep your base VM configuration just like production, then you would also be stuck on Ubuntu 16.04, but you know, you can...
A
But I think, to answer your question: we don't have a base image; like, we don't have an OS image that we start with. We always start from just a clean Ubuntu install, and then we run Chef on top of it. And I mentioned Ansible before — we use Ansible here just for installing the GitLab package. Everything else in staging and production is done with Chef, though we're kind of moving off of that into Kubernetes, so the only Chef-managed VMs left are really just a couple.
A
All right, so I think I have my tasks, which I can just do — I will do them today — and I'll send both of you the service account key when it's ready. And that's it, yeah.