From YouTube: App Modernization with Google Cloud 20220406
Description
Hear from Atif Rashid as he shares his experience working at Google Cloud for over four years, watching the platform and organization grow, and the journey he has seen customers take to achieve their respective moonshots and 10x strategies with cloud.
A
Okay, welcome everyone. Thank you for joining us for CS Skills Exchange again. Today, Atif is going to kick things off for us and talk about all of his learnings around modernization. So go ahead, Atif.
B
Hi, good morning. My name is Atif Rashid. I'm an Alliances Solutions Architect at GitLab, and I support Google Cloud in its entirety from a GitLab perspective. Prior to this I was at Google Cloud for four and a half years in a technical role: first as a customer engineer supporting customers in Canada, and then as a partner engineer working with partners like GitLab.
B
So I was sort of on the other side. I'm happy to spend some time with you today and show you some of the experiments and testing I've done on GitLab on Google Cloud, and vice versa. I'm going to try to share my screen; let's see how this goes.
B
Yeah, thank you so much. The topic I've chosen for the CS Skills Exchange is application modernization on Google Cloud. Application modernization is a loaded term — it could mean anything — but what I'm really getting at is an overarching, better app experience for companies and how they deliver products and applications. So I'm going to go through a few key areas here. One is value selling and positioning.
B
GitLab fills a really big gap, in my opinion, in their portfolio, and I hope I can show you ways that that's true. Next I'll go through a few solutions; these solutions show some of the coolness of cloud. One is elasticity.
B
So
with
git
lab
runners
on
gke
autopilot
in
particular,
you
can
run
almost
like
thousands
of
jobs
concurrently
using
that
you
know
that
product
another
one
is
the
cloud
foundation
toolkit.
So
google
cloud
sponsors
a
lot
of
open
source
projects
and
a
few
of
those
are
terraform
modules
that
they
use
at
customers
to
deliver
infrastructure
as
code
foundations
of
cloud,
so
secure
patterns
on
cloud
for
companies.
B
Next is something called dynamic environments. At GitLab, dynamic environments typically mean Kubernetes namespaces, but for cloud, here's what I've done:
B
I've created dynamic environments with Google projects. Another one is multi-project pipelines: I've taken our concept of multi-project pipelines and made a deployment pattern that gives you the best of both worlds — how to build a containerized application and how to actually deploy it to multiple targets, or an infinite number of targets. And then some Q&A; I'll be pausing after each section for questions.
B
If anybody wants to chime in or anything like that, it's totally fine. So, value selling and positioning. Typically there are two ways to think about modernizing an app for cloud. You either migrate as-is and then iterate while on cloud, or you iterate in your data center first and get things cloud-ready — containerize your application, for example — and then migrate once it's ready. There's no right answer, but the target is essentially the same: you have an application...
B
You're running it on Google Cloud, you're achieving all of the wonderful DORA metrics you're hoping for, and you're actually using Google best practices to get there. So what I'm going to do is show a few examples of how to think about an application moving to cloud, and why that's important.
B
One thing I really want to articulate here is that at GitLab we deal with full-stack applications. DevOps toolchains and optimizing them is great, but we also deal with full-stack apps: front end, back end, every type of language. And while that's all really important to us, Google Cloud isn't necessarily thinking about that.
B
When they think about app modernization, they think about Kubernetes clusters, or about how people can hit their APIs more efficiently. So there's a bit of a disparity between how GitLab thinks about cloud and how Google Cloud thinks about GitLab. As an example, a good area where I've seen most customers use GitLab is as a migration strategy — to get a workload onto cloud — and where I've seen it...
B
...really shine is in two areas. One is app modernization: I have a legacy app sitting in the data center or in another cloud, and I want to refactor it in motion and build the target on Google Cloud. The other is data. No application can exist without a database or a data store, so how do you move those databases? How do you think about replicating data across? How do you manage that data? How do you get insights out of it?
B
Those are really big things that Google thinks about, and GitLab is actually uniquely positioned to deliver on them at companies in a pretty exciting way — but we don't talk a lot about it. The one alignment I've seen that is super interesting is, again, this application modernization area, where a lot of companies use GitLab to actually move stuff to cloud. They're not optimizing toolchains; they're...
B
...actually using a runner sitting in a data center to trigger some process, replicate whatever they need to, get that application onto cloud in some form, and then eventually cut over. That's how a lot of these bigger companies think about GitLab: it's that transformation strategy.
B
Where things fall apart in every cloud project, universally — every single customer engineer and every single partner engineer thinks this way — is that it's really great at the beginning, because all the tools are exciting and sandboxes are awesome, but then, as soon as you actually want to move something...
B
Because again, they focus on the people and process of using cloud, whereas Google handles the technology bit, and they can actually allow those organizations to really accelerate their efforts. What comes out of that is faster iterations, better software delivery, less burnout, and all of the really wonderful outcomes that we know quite well.
B
So why application modernization with cloud? There are a million ways to optimize an application, and I can use a million different things, but with Google Cloud in particular there's a real depth and breadth of products people can use — whether it's a SaaS, serverless, globally scalable thing, whether you're trying to scale an application globally, or whether you're trying to do multi-cloud with, say, Anthos clusters across all these different providers. In almost every pattern...
B
Google
can
meet
that
demand
and
what
gitlab
does
is
it
really
allows
a
team
to
be
a
little
bit
more
successful
than
they
typically
have
been
right.
So
I
think
all
of
us
here
have
gone
to
any
cloud
console
hit.
You
know
products
and
actually
gone
through
and
said
what
do
these
300
things
do
right?
How
do
I
fit
them
together?
B
What
do
I
actually
try
to
achieve,
and
so
that's,
where
gitlab
does
a
really
great
job,
to
really
team
that
complexity
and
allow
sort
of
people
to
consume
at
their
pace
a
cloud
provider
all
right,
so
I
that
was
kind
of
the
value,
selling
and
positioning
area.
I'm
happy
to
take
questions
at
this
point
before
going
on,
so
if
anybody
has
any,
I
see
something
in
the.
A
B
Oh — notes for the doc and Q&A. If anybody wants to verbalize a question, I'm happy to answer it, or I could just keep going.
B
Okay, going once, twice — all right, on to the next. So, one sort of solution — again, these next examples are specifically focused on showing the power, the heavy lifting that cloud can do to really optimize our efforts here at GitLab. One area where I've really spent a lot of time testing is runners. Runners are awesome at GitLab.
B
It's one of the reasons I think they're actually a differentiator for us. From the outside looking in, people think the runner is a very small binary that just sits there, but that code base is super mature, and you can actually do a lot with runners and customize them to do a lot of really exciting things. I haven't even scratched the surface of the design space of how runners can be optimized.
B
So one solution I devised is a runner fleet built on GKE Autopilot. GKE Autopilot is a managed service that Google provides, and underneath that managed service it's actually Google SRE — the same SRE teams that support all of their big products, like YouTube and Search. So they have this sponsored service called GKE Autopilot, which has a very secure profile, and what I've done is...
B
I've
deployed
a
runner
on
that
using
the
kubernetes
executor,
and
so
so.
I've
deployed
three
runners
and
the
runners,
I've
deployed
are
are
specific
to
a
type
of
user
at
google
cloud,
and
so
the
three
types
of
runners
I've
deployed
are
an
application
runner.
So
if
I'm
an
application
developer,
I
can
go
in
and
do
my
stuff
and
it's
great
and
infrastructure
a
pipeline
or
an
infrastructure
group
which
is
again
an
operations
group
devsecops
devops
group.
B
They
focus
on
managing
the
infrastructure
and
then
a
foundational
group
which
the
foundational
group
is
more
focused
on
governance,
and
so
how
do
I,
you
know
manage
policy?
How
do
I
think
about
you
know,
external
threats?
B
How
do
I
manage
you
know
kind
of
like
the
the
entire
big
picture
of
what
I'm
doing
on
cloud,
and
so
those
are
the
three
kind
of
main
areas
that
I've
picked
for
for
runners
and
so
the
architecture
I've
devised-
I
think
I
went
backwards
here
is:
is
this
so
using
a
tool
or
an
integration
at
google
cloud
called
workload,
identity
federation?
So
what
workload
identity
federation
on
google
cloud
is:
is
a
federated
system
to
allow
you
to
replicate
identity
to
different
providers?
B
One
of
those
providers
is
kubernetes,
so
I
can
create
a
kubernetes
cluster
anywhere.
I
can
federate
now
that
authentication
to
google
iam
specifically,
and
so
the
reason
why
that's
important
is
because
using
this
workload,
identity
federation,
I
have
deployed
a
runner
in
a
unique
name:
space
in
kubernetes
in
gke
autopilot.
B
I've
tied
that
kubernetes
service
account
using
workload
identity
to
a
google
service
account
I'm
not
managing
any
secrets
at
all.
So
all
I'm
doing
is
connecting
the
the
label
of
the
name
of
the
google
service
account
to
my
kubernetes
service
account
and
I'm
not
actually
doing
any
secrets,
management
and
and
the
reason
why
I
think
googlers
and
customers
love.
That
is,
it
is
because
they
already
they
might
already
have
a
secret
system
built
or
they
want
to
keep
the
security
perimeter
for
google
cloud
within
google
cloud
and
or
they
manage.
B
Google
service
accounts
already,
and
they
don't
want
to
start
managing
different
ones,
and
so
there's
a
lot
of
reasons.
Why
one
would
do
that
and
so
so
yeah.
So
I've
created
multiple
namespaces.
You
know-
and
I
have
a
bit
of
a
demo
here.
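The setup described above — a runner per persona, each bound to a Google service account through Workload Identity rather than a key file — can be sketched as Helm chart values like the following. This is a minimal sketch, not the speaker's actual config: the project ID, namespace, tag, and service-account names are hypothetical, and field names vary slightly between versions of the GitLab Runner chart.

```yaml
# values.yaml for the GitLab Runner Helm chart (one release per persona)
gitlabUrl: https://gitlab.com/
runnerRegistrationToken: "<redacted>"   # the one secret the setup needs

runners:
  tags: "app"                 # one runner per persona: app / infra / foundation
  config: |
    [[runners]]
      executor = "kubernetes"
      [runners.kubernetes]
        namespace = "app-runner"
        service_account = "app-runner-ksa"

serviceAccount:
  create: true
  name: app-runner-ksa
  annotations:
    # Workload Identity: bind this Kubernetes SA to a Google SA.
    # The Google side also needs roles/iam.workloadIdentityUser granted to
    # "serviceAccount:MY_PROJECT.svc.id.goog[app-runner/app-runner-ksa]".
    iam.gke.io/gcp-service-account: app-runner@my-project.iam.gserviceaccount.com
```

Each job pod then authenticates to Google APIs as `app-runner@my-project...` automatically, with no mounted key.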
B
I'm using the free tier of GKE Autopilot. There are two free-tier cluster types that Google gives away: a zonal cluster — a cluster that fits in only one zone of a region — or a regional cluster, which is what a GKE Autopilot cluster is. Because it's shared infrastructure, all the node pools in GKE Autopilot are shared, and the way it works is that I go in and schedule a pod. So right now I have three runners.
B
These
runners
use
the
g,
sorry,
the
kubernetes
executor,
and
it's
hitting
that
you
know
kubernetes
api.
Every
time
I
I
launch
a
job,
it
will
create
a
new
pod
using
that
kubernetes
executor
and
you
know,
using
service
accounts
that
I've
defined
I've
actually
given
different
permissions
to
each
of
these
runners.
So
in
the
org
case
they
have
access
to
org
resources,
so
top
level
resources
only
but
no
project
resources
or
folder
resources
in
the
project.
Runner.
...is given permission to specifically those projects — all the compute, memory, networks, and so on, in those folders. And the app runner is specific to a compute target like GKE or Compute Engine; that application developer needs nothing to do with the overarching setup. I've mimicked those personas here in a GitLab group, using these group-level runners.
B
I've created these three runners with three different tags, if you can see that. Those tags are referenced in the CI files of the projects under those groups — and there are ways to protect that further, as everyone here knows. So within this group is the app runner, within this group is the project, or infrastructure, runner...
B
Oh, sorry — within this group is the project, or infrastructure, runner, and this one here is the foundational runner, which again is the cloud foundation one. Just going back to the runner architecture: any time I launch a job, it spawns these pods. And the coolness of it — oops — is, again, that free tier: I'm not paying for any compute for the actual cluster, which is awesome. I'm only paying for the pod that gets spawned by the Kubernetes executor. Super cheap.
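On the project side, routing a job to one of the persona runners is just a matter of tags, as mentioned above. A sketch of what a project's `.gitlab-ci.yml` might look like — the tag names (`app`, `infra`) are illustrative, not the speaker's:

```yaml
# .gitlab-ci.yml — route jobs to the persona runners by tag
stages: [build, deploy]

build-app:
  stage: build
  tags: [app]            # picked up only by the app runner's namespace and SA
  script:
    - echo "building with the app persona's limited permissions"

provision-network:
  stage: deploy
  tags: [infra]          # runs under the infrastructure service account
  script:
    - echo "infrastructure changes run with project-level permissions"
```

Because each tag maps to a runner in its own namespace with its own Google service account, the job's cloud permissions follow from the tag it requests.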
B
Second,
the
cloud
reads:
the
node
pools
that
they
give.
You
are
awesome,
so
if
I
want
to
go
out
and
create
a
pod,
that's
massive
that
that
sits
there
and
does
like
hpc
level.
You
know
compute
power.
I
can
actually
allocate
that
type
of
pod.
For
my
kubernetes
executor
I
could
say
look.
This
is
the
type
of
a
node
that
I'm
looking
to
do
this
on
and
hard
code
that
yeah
again
super
extremely
high
security
posture.
B
It's
production
ready
because
again,
google
sre
manages
that
service
and
the
cloud
elasticity
is
just
out
of
this
world
so
again
for
the
cloud
quota
for
us
central
one,
which
is
where
I've
hosted
this
cluster
there
I
can.
I
can
launch
12
400
concurrent
pods,
which
could
mean
theoretically
12
000
jobs.
Now
I've
never
tried
this,
it
might
explode,
but
you
know
it's
cool
to
know
that
I
don't
have
to
think
about
it.
B
Great. So before I go on, does anyone have any questions about the runner architecture I've shown here?
C
How fast is it, or how much effort does it take to stand something like this up?
B
Yeah, so the actual deployment itself is actually pretty easy. I'm using the Helm chart for runners that was given to us, and then all I've done is copy and paste a few areas.
B
I'm using the Cloud Foundation Toolkit GKE module to deploy Autopilot, which is not that difficult, and then I'm just binding two service accounts together — the Kubernetes service account and the Google service account — and then I deploy three runners into three different namespaces. So the actual architecture, the rig itself, is super easy. I would say the hard part — and this is the stuff...
B
...I haven't figured out yet — is how you get runners to operate quickly. Here's my understanding of the process: I have a job; it hits the executor; the executor processes the job and hits the Kubernetes API; that launches a new pod; that new pod is an Alpine image, so from outer space somewhere an image comes down and gets downloaded; and a small miniature process kicks off. So there's time required for that.
B
So that has to be optimized. The second thing is that I don't have hard-coded compute on the GKE Autopilot cluster, so every time I launch a job, it tries to schedule a pod on different types of compute until it finds the one it likes. That's another process it has to go through for the pod to get scheduled, and once it's scheduled, it does that thing where it calls home — which doesn't take too long.
B
And then someone smarter than me has to maybe tell me how to optimize this: I now have a runner fleet sitting in us-central1, and my Artifact Registry, or Container Registry, is there too.
C
I don't have a question, but kind of a comment, just from the public sector and regulated side of things. It's interesting to see...
C
...how this is set up and the different controls it has in place, because one of the biggest questions we're getting right now is: okay, we understand all the cybersecurity threats at the moment — how do we lock down our pipelines better? And this strikes me as a great answer to some of that, based on the different tiers of cloud users. The structure you're showing us here really simplifies how much authentication has to be done to the cloud, within the cloud, etc.
B
Oh yeah, thanks — no problem. I actually think the missing piece on my side is that I don't fully know all of the controls; I don't think anyone knows all the controls on GitLab. So how do you piece together those capabilities across both to really get that security posture you're mentioning? It's a forever question to answer, right?
D
This is Chester. I had one question. The different users that you mentioned — the app users versus the infrastructure users — the level of the secrets that each group of users would have is slightly different, right? The folks on the infrastructure side would probably have higher privileges, to be able to do things like deploy new networks or provision servers. So how do you scope down the...?
B
Yeah, great question. In this scenario I'm actually not managing any secrets. The only secret I'm managing is for the original runner that deploys the subsequent runners, which then use Workload Identity to Google Cloud — and that's a limited account. So from a role and user perspective, on the GitLab side I'm not even managing any runners.
B
All
I've
done
is
I've
created
this
runner,
that's
tied
to
one
specific
google
service
account
and
then
that
service
account
then
has
the
correct
roles
for
that
specific
cloud,
user
or
cloud
persona,
and
so
then,
in
that
service
account
I
can
define
roles.
I
can
tie
that
service
account
to
different
projects.
Folders
all
of
the
the
you
know
correct
permissions
that
would
be
required
for
that
user
or
that
user
group.
B
That's
that's
how
I've
I've
set
it
up,
but
from
a
secrets
perspective,
it's
a
interesting
area
for
sure
I
think
you
know,
I
think
the
coolest
solution
I've
seen
is
you
know
the
the
json
web
token
ability.
So
you
know,
one
possible
solution
is
to
actually
using
open
id
authenticate
to
google
open
id
or
open
id
with
a
google
account
or
google
service
account
and
then
tie
that
back
to
some
process.
On
the
gitlab
side.
I've
seen
some
folks
here
doing
that,
but
you
know
the
challenges.
B
I've
never
seen
in
enterprise
open
up
their
service
accounts,
for
you
know
open
id
right.
So
so
you
know
it
really
depends
but
yeah
in
this
scenario,
I'm
not
actually
managing
too
many
secrets.
E
I have a quick one — hey, it's Alejandro. So I was wondering — maybe it's not...
B
Yeah, so that's what this is, right? In this scenario — here, let's do it. If I kick off a job... I'm trying to think of something that won't break. Actually, I could keep going and I'll show you, Alejandro, exactly how it works. So what happens is — yeah, no worries — these don't actually do anything. These pods get optimized down, because they've actually been optimized down to a very low CPU level, and GKE Autopilot has changed the annotations to say, look...
B
Give
me
the
cheapest
instance.
No,
the
no
pool
cheapest
no
pool,
because
what
happens
is
whenever
I
hit
a
job.
Whenever
I
launch
a
pipeline,
it
actually
goes
out
and
creates
a
new
pod,
and
I'm
going
to
show
you
that,
because
I
I
can
show
you
that
in
the
next
section,
and
so
so
all
right
so
moving
on,
you
know
cloud
foundation
toolkit,
so
the
google
has
something
called
the
cloud
foundation
toolkit.
B
These
are
all
modules
that
google
maintains,
so
they
have
a
staff
of
developers
that
they
pay
full
time
to
maintain
this
code
base
and
the
reason
for
it
is
that
from
a
deployment
iac
perspective,
you
need,
like
their
standard,
is
terraform
and
deployment
manager,
which
is
their
native
like
iac
solution,
which
is
similar
to
like
aws
cloud
foundation,
and
so
in
the
reason
why
this
is
super
great
for
us
is
because
terraform
is
a
first-class
citizen
on
gitlab
and
the
terraform
like
developer.
B
Experience
is
awesome,
and
so
one
of
the
modules
that
I
particularly
have
focused
on
is
this
organizational
policy
module,
and
so
what
I've
done
is
I've
set
this
up
in
my
lab
here.
B
Under my foundational pipeline I have an org policy set up. I go in and I see that I have some Terraform; all I'm doing is changing this to some region. The org policy I've chosen is something called gcp.resourceLocations, meaning: here are the set of regions and zones that I'm allowed to use.
B
For
my
google
cloud,
it's
a
really
important
policy,
and
so
one
you
know
one
that
I've
seen
that
is
is
super
great
is,
if
I
go
back
here
and
I've
kind
of
wrote
it
in
the
readme,
so
they
they
google
cloud
has
a
an
annotation
which
allows
me
to
only
deploy
in
carbon
neutral
locations.
So
a
few
of
the
google
data
centers
are
actually
carbon
neutral,
meaning
I
think
what
it
means
is
that
the
power
that
they
use
is
like
they.
They
somehow
make
it
neutral,
like
it's
a
better
energy
efficiency
in
certain
locations.
B
So
I
want
to
change
my
cloud
to
only
use
that,
and
so
what
I
do
is
I
go
in
and
I
find
my
module
and
then
I'm
going
to
change
it.
So
I'm
going
to
open
it
up
here
and
what
I'm
going
to
do
then,
is
change
this
to
this
right
and
I'm
just
going
to
commit
it
to
the
main
one
which
will
launch
the
pipeline.
B
So now I've changed my setting for any cloud usage to only exist in low-carbon locations. Going back, I have a pipeline — now, I've been told not to show pipelines in presentations. Let's see.
B
You have to click it, yeah. In some amount of time it's going to change that policy, and we can come back to it in a while. But there are Terraform modules that Google has that are fantastic. One of my favorite modules — by far one of the coolest I've ever seen — is the gcloud one. If anyone has ever used Google Cloud, they give you an SDK, which is something you can install on your desktop, on your MacBook, or...
B
You
know
on
any
server
or
on
any
type
of
linux,
and
all
it
is
is
a
cli.
So
it's
a
gcloud
command
which
does
everything-
and
you
can
actually
you
know
using
command
line,
actually
define
and
change
and
manage
your
cloud.
It
basically
is
a
front-end
cli
front-end
for
all
of
their
apis,
and
so
why
that's
important
is
someone
made
a
terraform
module
of
that,
so
they've
actually
created
their
own
terraform?
Well,
google
did
created
a
terraform
module
out
of
that
gcloud
command
so
using
terraform.
B
Oh, looks like it's done. So if you can see here, I had an organizational policy with that resource location; I wanted to change it to carbon-neutral data centers only, and hopefully it worked. Something happened — yeah, there it is. These are the carbon-neutral data centers for Google Cloud, so I can only deploy to these carbon-neutral locations, which is an org-level policy.
B
Going back, that's how you would segment an organizational tier. And maybe going back to the presentation: another solution I had was something called dynamic cloud environments, and what I did here builds on our merge requests.
B
The merge request is kind of cool because there are certain predefined variables I can pass into a job, like the CI slug, or a few others like the merge request ID. Using that, I was actually able to build a project factory. What it does is, based off of a merge request, deploy a Google project as a dynamic environment. So whenever I create a new merge request — let's try it —
B
I use this gcloud Terraform module I was mentioning earlier to actually deploy a Google project. Let me show you what it does.
B
There we go, right? This is the gcloud Terraform module that Google maintains, and all I'm doing is running a command with some variables; the commands are project create and project delete. It's super easy. In this case, I'm creating a project every time I create a merge request. So the merge request here launched a job — that was the pipeline I just showed you — and then...
B
So right here — oh, it's the merge request ID that I chose, which is interesting, because I thought the merge request ID was a two-digit number, but for some reason it's actually however many digits that is. And if I go in and I'm done with my branch — say, for instance, you'll see here that I've created a branch for that project, and if I go to that merge request and merge it, which I'm about to do — here we go...
B
What it'll do is actually delete this project. So this is an ephemeral Google project that I've created from a branch in a GitLab project. The reason that's cool is that the project is the typical isolation zone for Google: whenever I have a production application, it's a Google project for sandbox, a Google project for staging, and a Google project for production.
B
That's
how
I
think
about
it,
because
that's
how
google
segments
you
know,
that's
their
separation
of
resources
in
their
cloud
and
so
by
leveraging
our
dynamic
environments
and
that
terraform
module.
I
can
now
tie
that
to
a
project
that
I
create
and
any
resources
that
I
want
to
create
within
it,
and
so
it
gets
super
interesting.
If
I'm
deploying
say
you
know
a
bunch
of
clusters
in
an
application
and
whatever
else.
B
Oh, and — yeah, exactly — the diagram would have been helpful earlier; sorry I didn't show it at the beginning. But that's exactly what it does. I have a GitLab project; it has a main branch, and then it has kind of a review branch, and then, when I merge it, it merges all of the resources back to main.
B
So just to go here: did it actually delete my project? It did — the project's gone, this pipeline should be done, and the merge request is complete. Does anyone have any questions? Oh, one thing I forgot to show, Alejandro, is the actual jobs — hopefully the runner's still there.
B
Yeah — do you see that, Alejandro? See that running pod, and how it's in the project namespace? If you wait like 60 seconds, which is my timeout, it'll actually disappear. So I've only spent the compute for that specific moment — the processing of that job — and then it's gone.
B
Yeah, it's totally a cloud thing. Our runner already did it — my runner configuration was basically default; I didn't do much to configure this on our side. It was just picking the right product on the Google side for the runners. Any more questions on dynamic environments and Google projects?
F
I have a quick one. Terraform obviously needs credentials to get into GCP. Can you use development credentials for a branch and a separate set of credentials for the main pipeline?
B
Yeah. In my design I've become opinionated in saying that the runner is the service account. The design you're asking for is: I have one runner, but multiple service accounts tied to that one runner, or multiple Kubernetes service accounts tied to different service accounts on the Google side. It's possible — I think it's called Workload Identity pools on the Google side — but it's a harder go. It's possible, though.
F
Yeah, I was looking at the scenario where you're validating the check-in on the branch, right — you're validating your work, but you don't have authentication into the cloud provider to actually deploy into production, or main. So it's different levels of access.
B
Yeah, exactly. Well, I think the way to do it would be to deploy a project with a service account. If I have security admin, I'm able to provision accounts for this purpose — the ephemeral purpose — but then I'm managing a bunch of Workload Identity settings and mapping those Google service accounts to specific Kubernetes ones. Then on the runner side — I know you can tie the runner pod to different Kubernetes service accounts to do different things, but the runner's so small, would you just have one for each purpose? I would maybe create a runner under the foundational namespace called, say, service-account-admin, something like that. But yeah, it's definitely a design exercise.
B
You know, I think we're still early — there's so much we could do with the HashiCorp set of tools. I'd love to have a coffee chat with you and pick your brain, because I'm familiar with a lot of the stuff they have, but I'm trying to figure out how to fit it in.
B
Yeah, no worries. Any more questions?
B
Okay, I'm going to zip along because we're running a bit short on time. The next solution I focused on is something called the cloud deployment pipeline, and all it is is a multi-project pipeline at GitLab. The reason this is important is that Auto DevOps, and the way we think about our Kubernetes agent, is a very opinionated convention that is specifically geared for developers.
B
However, that convention isn't necessarily for everything else in the space that's out there — not everybody is building a web application. Using a pattern similar to this, you can actually get the best of both worlds. I have a main project with all of the bells and whistles — Auto DevOps, the Kubernetes agent — that lets me develop efficiently using the best of GitLab, and with it I'm building, say, a containerized application.
B
That application goes up to a registry as an artifact with some metadata, and then a subsequent project can pick up that artifact and launch it. So using our tools, like the multi-project pipeline, I'm able to build a main project and then use a specific identifier — I'll just show you. I've actually created, built on the CI predefined variables, a label; I've created two labels.
B
I've created one called gke and another called cloud-run, and using that label I'm able to deploy to different downstream projects through the multi-project pipeline. The benefit of that is, say I have canary releases, or say I want to experiment with a new compute deployment target, or play with some new serverless thing: I'm able to do that without impacting my main project. I can keep my project running.
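A minimal sketch of that pattern, assuming hypothetical project and variable names (GitLab's `trigger` keyword is real; `DEPLOY_TARGET` and the project paths are made up for illustration):

```yaml
# Hypothetical .gitlab-ci.yml fragment for the main project: build an image,
# then trigger one of two downstream deployment projects based on a label.
stages: [build, deploy]

build:
  stage: build
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy-gke:
  stage: deploy
  trigger:
    project: my-group/gke-deployment       # hypothetical downstream project
  variables:
    IMAGE_TAG: $CI_COMMIT_SHORT_SHA
  rules:
    - if: '$DEPLOY_TARGET == "gke"'        # hypothetical label variable

deploy-cloud-run:
  stage: deploy
  trigger:
    project: my-group/cloud-run-deployment # hypothetical downstream project
  variables:
    IMAGE_TAG: $CI_COMMIT_SHORT_SHA
  rules:
    - if: '$DEPLOY_TARGET == "cloud-run"'
```

The main project stays untouched while new targets are added as new trigger jobs, which is what makes the "catalog of deployment targets" idea work.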
B
So in here, under the application group, I've actually created three projects. The first project is just a regular project, but in my mind, what I think people would want to do is give it all of the power of, say, an integrated Kubernetes cluster, all of the bells and whistles of getting a team together to collaborate and build a really great outcome: an application, a microservice, whatever. And then what I'm doing is saying:
B
Well, there is a way to see that I've done this: a pipeline that should show I'm actually deploying to a different target. But it's not.
B
Then I can get to it. Yes, there it is. All that's in that project is: I downloaded the bash template and I'm running it, and so there's a build happening on that cloud-run project, which is the subsequent project to this one, in this group. And just to go through this again: if I create a merge request, it does the same thing.
B
Here, which again is that other project. And so the benefit of this is that I can actually create a catalog of deployment targets or testing projects and then, using kind of an intelligent pipeline, pass an application to them to see how it works on a different cloud provider, on multi-cloud, really any target. That gives people a catalog of places to deploy to. And that's it.
B
So that was a bunch of solutions that can help accelerate an app modernization approach for someone who wants to use Google Cloud and GitLab. I think the key elements of that are using cloud elasticity.
B
So serverless is awesome, but in our runner architecture, which is kind of the compute back end for how jobs work, creating an intelligent design, thinking about how fast things can get processed, and securing all of that is important, and cloud does a really great job of giving us the tools to do it. Another one is leveraging the security controls in cloud. The way I think about it is that cloud does a really great job of defense in depth.
B
Okay, look: as I go through different layers of the stack, I can secure different things, and it's completely locked down. Whereas GitLab does a really great job on the people and process side of security, which I call security in flight: people are constantly consuming cloud, constantly consuming data, issues, tickets, epics, whatever, and we do a really great job of securing that in almost every area of our stack.
B
So it's a really good better-together story. Another one is that DevOps is a really key element, but as I've articulated, it's one element that sits between a governance structure, which is very business-focused, around how an enterprise manages their data assets, their apps, their whole digital presence, and the developers. So what is the set of capabilities a developer needs in order to really just forget about?
B
You know, the infrastructure as a whole, so they can focus on the business logic and build the applications they need. I really see that shine around microservices and API ecosystems, where there's an infinite amount of technology out there to play with. How do we tame that complexity and bring it in so that enterprises can consume it?
A
Wonderful. If there are no... oh sorry, were there questions?
D
I had one; I'm always coming in with these late questions. Let's see: could you do a quick summary of the workload identity pattern and how that relates to runner workloads?
B
Yeah, no problem. So this is specific to Kubernetes; it can't be Cloud Run or any of the others. Workload identity federation only works for a few providers, and Kubernetes identity is one of them. So when I deploy a runner to GKE, or Kubernetes, in order for that to work, I actually need a Kubernetes service account.
B
So there's a unique identifier, kind of like an email ID, which is the name of that Kubernetes service account. And because I've deployed GKE, and because GKE has workload identity federation built into it, I can then, using a single gcloud command or a Terraform setup, bind that Kubernetes service account to a Google Cloud service account. The benefit of doing that is I'm pushing all of the security management over to Google Cloud.
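A sketch of that single-command binding, with hypothetical project, namespace, and account names (the gcloud and kubectl commands follow GKE's documented Workload Identity setup):

```shell
# Hypothetical names throughout. Bind a Kubernetes ServiceAccount (KSA) to a
# Google service account (GSA) so runner pods inherit only the GSA's IAM roles.

# 1. Allow the KSA to impersonate the GSA via Workload Identity.
gcloud iam service-accounts add-iam-policy-binding \
  runner-gsa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[runner-namespace/runner-ksa]"

# 2. Annotate the KSA so GKE knows which GSA it maps to.
kubectl annotate serviceaccount runner-ksa \
  --namespace runner-namespace \
  iam.gke.io/gcp-service-account=runner-gsa@my-project.iam.gserviceaccount.com
```

After this, whatever roles the GSA holds are the only permissions the runner's jobs can exercise, and there are no exported keys to manage on the GitLab side.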
B
I said: look, this is the Kubernetes service account I'm using, but the only reason it exists is that I want to bind it to that Google service account so it can do its job. And the reason the service account is really important on the Google side is that it's like a user account, so I can actually tie roles to it, and I can tie permissions to it.
B
Whatever permission or role I've given to that Google service account on the Google side, that's the only thing the runner can do. And so say, for instance, down the line we want to rotate keys for that runner, for that service account. Perhaps I made an API key, not that we're using one, but I want to rotate it or change it.
B
Well then, I don't have anything to change on the GitLab side, on the runner side. And that's why I've chosen that solution. There are other ways to think about it, but first, GKE Autopilot is operated by Google SRE, so it's kind of their most secure GKE configuration. The second reason
B
is that by pushing as much security as possible over to the Google side, we actually reduce the complexity of the setup on the GitLab side. It kind of helps our mission a little bit, because then we're focused on just managing everything else, and it's less complexity. So there's workload identity federation, which again is a Google product, using Kubernetes identity and combining that with Google IAM identity.
B
It allows you to do a lot of things, and that workload identity federation product at Google is deep. There are a lot of ways to map accounts, and to think about how I'm mapping accounts to different providers; I'm just using it one specific way.
D
Yeah, that makes a lot of sense, because in conversations I've had with a lot of customers, they're always asking for advice on how to do secrets management, and I try to advise them that it's best to use other solutions to manage secrets versus trying to store secrets within GitLab. So I try to point them toward things like the Vault integration types of solutions or, like you said in this example, the workload identity federation solutions. So that makes a lot of sense. Thanks for that.
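For reference, the Vault integration mentioned here can be expressed directly in GitLab CI with the `secrets` keyword; this is a minimal sketch with hypothetical Vault paths, assuming the Vault JWT auth integration is already configured for the GitLab instance:

```yaml
# Hypothetical job: GitLab fetches the secret from Vault at job start and
# exposes it as a file-backed CI variable, so the secret itself is never
# stored in GitLab.
read-secret:
  stage: test
  secrets:
    DATABASE_PASSWORD:
      vault: production/db/password@kv   # hypothetical path/field@engine
  script:
    - ./use-the-secret.sh "$DATABASE_PASSWORD"   # hypothetical script
```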
B
Yeah, no worries. And I think there's one thing about GitLab that everyone here probably agrees with: there are a million ways to deploy things and do things. So I always like to tell customers that there is really a big design space between what we want to achieve on the GitLab side and even on Google Cloud, and I always try to move customers through a process to figure out exactly which setup is right for them.
B
But having only a couple of minutes left, I just want to say that I work on the Alliances team. I think last week you met two of my colleagues, Kurt and Darwin. We all work together pretty closely.
B
You know, I focus on HashiCorp and Google Cloud, everything Google; Darwin does AWS; Kurt focuses on OpenShift, Red Hat, and many others. We're all available to help anyone, so feel free to reach out to us directly and pick our brains; we welcome that. One area where I put all of my solutions, which was in the previous customer success session, is the solution index.
B
So we actually try to put as many of these things onto the solution index as possible, along with all of our managers, like Prasad, who you've probably met in your travels. We're all open to helping, so feel free to reach out, and please don't hesitate to do so.
G
Thank you all, and thank you, Atif, for a great presentation. And as Atif mentioned, last week we presented on the solutions index, so all of the Alliances solutions are on Highspot. If you missed last week's presentation, please check it out.