From YouTube: Auto Deploy to ECS (via Auto DevOps)
Description
Team Discussion about how to implement Auto Deploy to ECS (via Auto DevOps)
A: So the motivation for Auto Deploy to ECS is expanding Auto DevOps to support more than just Kubernetes. We developed a CI template that deploys to ECS, and we wanted to connect it to Auto DevOps. Now, when we decided that we wanted to do this, we stumbled across the fact that Auto Deploy is this huge template, and once we want to start expanding it, there are several issues that we needed to address.

The first of them is that we didn't want anything in the Kubernetes flow to break, because, you know, it's working well. So what we wanted to do in the first place was to split that out into a separate template that just gets called — just like the master out-of-the-box template calls a bunch of things, Auto Build, Auto Deploy. We wanted Auto Deploy, at that point, to also be a master template that calls a bunch of targets: one of them would be Kubernetes, this one would be ECS, and in the future we can have EC2, we can have Fargate, we can have GCP. We have a bunch of things that are not on the table yet, but this is the first time that we're touching it. So besides splitting out the Kubernetes template and the ECS template and having each of them called, we needed to handle the logic behind what's going to be called and when. We have several use cases.

One of them is: I'm using Kubernetes, but I'm not using ECS. Another one is: I'm not using Kubernetes, but I am using ECS. Another one is: I'm using both. And the fourth one is: I'm using neither. In the first case, where I'm using Kubernetes and no ECS, I want the flow to be exactly as it is today — same behavior, not touching anything. With ECS, what I want is to be able to leverage everything we have in Auto DevOps today.

So if we think about the composable on/off switch we talked about in a different issue: I can still use Auto Build and Auto Test, and maybe even Auto Secure, and all the beauty I get out of the box, but the deployment target changes to ECS. If I use both — for this iteration we're going to skip ECS and just continue with Kubernetes, but at some point along the road we're going to need to deal with that use case. And the fourth one, which uses neither, I wanted to behave like today: it doesn't do a deploy, but it uses everything upstream and downstream from Auto DevOps. So that was basically the idea, and then we thought about a way to differentiate these once we wanted to expand the deployments.
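The split described here might look something like the following sketch, with the master template delegating to one deploy template per target (the ECS template path is illustrative, not a final name):

```yaml
# Auto-DevOps.gitlab-ci.yml (sketch) -- the entry point stays the same
# and pulls in one deploy template per target.
include:
  - template: Jobs/Build.gitlab-ci.yml        # Auto Build, unchanged
  - template: Jobs/Deploy.gitlab-ci.yml       # existing Kubernetes Auto Deploy
  - template: Jobs/Deploy/ECS.gitlab-ci.yml   # hypothetical new ECS target
```

With this layout, the Kubernetes flow stays untouched in its own file, and each new target becomes another include plus selection logic.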
B: Should we focus — spend our effort — on the backend solution or on the framework solution? Which I believe — I think we should. We have this backend that's only Kubernetes-focused at the moment, and I think the end goal in the long term is that we need some sort of framework that's, you know — not platform-agnostic exactly, but we shouldn't have modules that are called "clusters" or "kubernetes" or whatever, right? Something that's a bit more technology-agnostic. And I think, as an option, we should start to look at the MR that's going on. I feel like there are different suggestions about how to technically do this, and I'm grateful, because there are things like the dynamic child pipeline that I've just heard about but never really dived into.
C: Just to jump in there — I don't think DAST was designed with Kubernetes specifically in mind. It was just designed around there having to be a deployment, and we needed to know what URL to hit. And the only way that worked with Auto DevOps anyway — the only reason it worked — is if you had Kubernetes. So detecting Kubernetes is a shortcut, because that was the only variable to check, but I don't believe DAST has literally anything to do with Kubernetes specifically.
C: ...documentation. So that would be the engineering work to go down this path, which is a perfectly plausible, viable path, and I don't see any reason not to do that — but it is engineering work. I think maybe what you were exploring was: is there a path that doesn't require adding, you know, conditions to includes?
B: I think — yes, there is one path which I explored a couple of weeks ago, and it's just to leverage `kubernetes: active` with `only`/`except`: to have `except` on Kubernetes being active when we want to use the ECS deploy template, or `only` on Kubernetes being active when we want to use the Kubernetes template. That said, the Configure team has replaced `only` and `except` with `rules`, right?
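The approach described here might be sketched like this — job names and deploy commands are illustrative, and the `rules` form keys off the `CI_KUBERNETES_ACTIVE` predefined variable:

```yaml
# only/except form that was explored:
deploy_kubernetes:
  script: auto-deploy deploy
  only:
    kubernetes: active

deploy_ecs:
  script: ecs-deploy            # hypothetical ECS deploy command
  except:
    kubernetes: active

# roughly equivalent rules-based form, since only/except is deprecated:
deploy_ecs_rules:
  script: ecs-deploy
  rules:
    - if: '$CI_KUBERNETES_ACTIVE == null'
```

As noted below in the discussion, a mutually exclusive switch like this covers only three of the four use cases — it can't express "both targets in the same pipeline".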
C: Right, okay, that makes sense. Yeah — so there's actually something I just said, that this would actually work, that I just realized was wrong. One of the problems with that specific comment — the one I made, yeah, six days ago — is that it would only work for three of the four cases that were just talked about. Specifically, it breaks in the case where you need both Kubernetes and ECS in the same pipeline.
C: It could — I mean, yeah, well, I think the bigger challenge was simply — I mean, depending on how you define things: if changing the name of the job caused a problem, that would be a problem. But if it's — if I run it on Kubernetes and the pipeline just shows a deploy to Kubernetes, then so be it, yeah; and if I deploy to ECS, the pipeline shows that.
C: I mean, because Auto DevOps itself — I mean, correct me if I'm wrong, right, I'm just working through this in my head, and I've barely used Auto DevOps in years, so I don't necessarily know what all the things in there are — but I don't know why we would care what the job is named. What we care about is what stage it's in, and then what the dependencies are across stages. But if you just rename the job, I can't right now see any way that it would actually break anything. Yeah.
B: Yeah, I was just making an assumption here, because, you know, if I had all the knowledge that the Configure team does have, maybe I would be more certain, and I would be able to say, hey, yes, the job name is cosmetic — rename them. As far as I know, the job names are checked within the tests. There are some aspects where, you know, if we have this type of pipeline, we're making sure that we have the jobs whose names are "production" and not "review", and so on, right?
C: A really good point — or it may even just pattern-match, and then, like, set the convenience variables or whatever based on the job name, and so you might have to be more careful. But even so, I believe you could then solve for that by just rewriting the deploy function to now be aware of the new job name. Yeah — but it's possible to make it consistent and not break; that's the point. Okay.
C: I think so. If I can just jump ahead, because I don't think we have a lot of time: I feel like one of the biggest concerns was simply, is it a requirement that we support deploying to multiple targets in the same pipeline? If you want to relax that for 13.0 and say, look, we only support all-ECS or all-Kubernetes, that's a perfectly rational constraint, if that makes it radically easier. The challenge is if you then go down a certain path that paints you into a corner...
C: ...that then makes it really, really hard to get back out of, and you can't ever then support both. So if an extra, you know, hour of thinking about how to make it work to support both means that you go in the right direction, then definitely do it; but if it takes too long, you know, definitely relax that constraint. And so again, this example that I'm sharing right now — this would be a perfectly fine next iteration, and then later on you're like, okay...
C: ...now we've got to push it down a level, and we've got to have the job names — you know, make sure that both jobs can survive at the same time. Given that we have environment-specific variables — and that is in Core, I believe; I think it moved to Core a year ago — that means everybody has the capability, so that auto deploy platform variable could be different based on each environment.
C: I think the challenge with the way it is right now is because of how it's done: if you were to do it at include time, that can't be per environment — there's no environment in the include, so it would have to be global for the entire pipeline. But if you did it on a per-job basis, and that job then has an environment declared, then that's where it can be really easy.
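A sketch of the per-job idea: each deploy job declares its environment, so an environment-scoped `AUTO_DEPLOY_PLATFORM` variable (the variable name is the one under discussion, not a shipped one) can select a different target per environment — something an include-time switch cannot do:

```yaml
# Sketch: AUTO_DEPLOY_PLATFORM is an environment-scoped CI variable,
# e.g. set to "ECS" with scope "production" and left unset for review.
review:
  stage: review
  environment: review/$CI_COMMIT_REF_SLUG
  script:
    - auto-deploy deploy          # unset variable -> today's Kubernetes flow

production:
  stage: production
  environment: production
  script:
    - |
      if [ "$AUTO_DEPLOY_PLATFORM" = "ECS" ]; then
        ecs-deploy                # hypothetical ECS deploy command
      else
        auto-deploy deploy
      fi
```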
C: Yes and no — but think about literally our own staging and production: we've got Geo running in staging but not running in production; we have a different deploy capability because we're trying to migrate it out. So, while the idea is to test something in staging first and then make production match, sometimes there's a sequence there that you can't avoid. The other very real possibility, though, for a lot of folks, would be review apps: I might have review apps go to Heroku, because who cares how close they are?
C: I just want to see what the app looks like — but then production might be, you know, a VM somewhere, or any other number of ways to do it. Production is often very, very different from review apps. I agree that staging and production in general should be pretty close, and maybe there are lots of other ways to solve that. Which is another one where, I guess I'd say, if you want to relax that constraint for 13.0, go ahead, right, because it's not going to be a totally common one.
C: You still have to make sure that — like this declaration here of the auto deploy platform, because it's a single variable — whereas one of the other suggestions out there had, like, an "ECS deployment enabled" and a "Kubernetes enabled", in which case both can be enabled at the same time, and then the job gets confused, and whichever one is declared last wins — and that's probably a failure mode you don't really want to go into. Although, you know, maybe that's okay too, again, if you want to relax constraints. Because I had also suggested earlier, like...
C: ...wouldn't it be great if we just auto-detected it? Like, if the cluster variables are available, deploy to Kubernetes; if ECS variables are there, deploy to ECS. And then that way, maybe I would even just hide my Kubernetes variables from production, because you can do that as well. The challenge, then, though, is if you declare that up higher — like if a group has a Kubernetes cluster available, then there's no way to override...
C: ...that; I would just always end up deploying to the Kubernetes cluster, because the variable is available. So you've just got to think through a lot of these edge cases. They're really not that hard, but you do have to think them through — like, the solution isn't hard; the thinking through might be hard.
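The auto-detection idea (explicitly not the chosen design) could be sketched as a job script that infers the target from whichever variables happen to be present; the variable names here are assumptions for illustration:

```yaml
# Sketch of auto-detection, with the group-level-cluster override problem
# described above left unsolved: a cluster anywhere in the hierarchy would
# always win.
deploy:
  stage: production
  environment: production
  script:
    - |
      if [ -n "$KUBECONFIG" ]; then
        echo "Cluster detected: deploying to Kubernetes"
      elif [ -n "$AWS_ACCESS_KEY_ID" ]; then
        echo "AWS credentials detected: deploying to ECS"
      else
        echo "No deploy target detected: skipping deploy"
      fi
```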
A: I think there's also another really good advantage to this variable — the auto deploy platform, or the one that we introduced, the launch type. Our solution heavily relies on the fact that you're using the AWS types of variables as environment variables, and that's not always the practice — people use different ways to pass their secrets. So this actually gives us flexibility in the future to integrate with others, like Vault — not necessarily...
C: ...an interesting one, because for now, if Auto DevOps only supports those variables, it doesn't really matter. But if you want to make Auto DevOps support Vault and other things, it might be more convoluted, and maybe it's not just a matter of simply detecting a variable — you don't really want to ping Vault and wait for an error to then find out that, yes, you don't have Vault. So I think, again, this is a perfectly valid answer. There are probably some other ramifications which we aren't necessarily going to be able to foresee.
C: That could be a perfectly valid iteration, but that's also not a requirement for 13.0 — no, but if it turns out to be a problem, you can do that. One challenge here, though, is you've got to think about: what about everybody who already has Auto DevOps and already has Kubernetes, and they haven't set any variable? What do you do — what do you do with the default, if the variable is empty?
A: We thought about that, and we decided that we are not going to require anyone to go back and enter these variables. A blank entry would mean Kubernetes as the default, and we're going to add to the logs a line that says "you haven't set this variable", so that people become aware of it. And at some point in the future I believe it'll become a requirement — but that's not now. Okay.
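The defaulting behavior described here might be sketched like this — the variable name and log message are illustrative of the idea, not the shipped implementation:

```yaml
# Sketch: a blank AUTO_DEPLOY_PLATFORM keeps the current Kubernetes flow
# and only emits a notice in the job log, so existing users change nothing.
.resolve-platform:
  script:
    - |
      if [ -z "$AUTO_DEPLOY_PLATFORM" ]; then
        echo "NOTICE: AUTO_DEPLOY_PLATFORM is not set; defaulting to Kubernetes."
        AUTO_DEPLOY_PLATFORM="kubernetes"
      fi
```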
C: So then the only thing I would caution there is — you know, you'll just have to think it through; it's not a big deal — but the default for most people is actually that there is no deploy platform, because most Auto DevOps users don't have a cluster or anything. So it has to sort of be like: well, if you've got a cluster and you have not set a platform explicitly, then the implicit type of deploy is Kubernetes. But you've obviously got a, you know...
B: ...we'll have this available, and we are going to set the value "ECS" for that launch-type CI variable, and we use it as a variable — as a switch, rather than, you know, something that's used within a job. A switch as in: we're going to run this set of jobs or that set of jobs. And then you made the comment that later on maybe we want to have people deploying the review apps onto Heroku, maybe production onto something else, whatever, right? And to me, out of this...
B: ...I have some concerns — maybe not fully formulated in my head, but I'm starting to have some concerns about the use of CI variables for what we are trying to achieve here, and I just want to make sure that it is a pathway we will be able to extract ourselves from, you know, should the time come when we want to.
C: So I'm not quite sure how to answer the concerns, but some of the things that came up while you were talking — well, let's do one simple analysis. Right now there are two things, and so it's really easy, right? What happens when there are 50 deploy targets, and some of those deploy targets are, like, third-party-managed deploy targets? How does this extend? If I look at, you know, GitHub Actions, or CircleCI orbs, or any of these kinds of things...
C: ...they all have a marketplace — a way to have lots and lots of different deploy targets. How would we want to support that kind of thing? In that case, I don't know that a variable is a problem. You know, setting your auto deploy platform to a name and having that key be unique amongst all the deploy targets seems perfectly reasonable. I would still argue it even lets you potentially override it: if I want to load my own Heroku deploy targets — you know, so I've got a custom one — I could still say my deploy...
C: ...platform is Heroku, but I just, you know, do a different include, and it would load things. At some point the Auto DevOps script gets a little crazy, and you might want to make it more dynamic, where somehow — and we have no capability to do this today — maybe you'd dynamically generate the list of includes based on what patterns are set in the environment, or at some point maybe even push it down...
C: ...a level. Part of my original design for includes was that you could actually potentially include them at the job level instead of just at the top level, and then that way you'd be able to do something like: well, if I've only included Heroku and Kubernetes, then those are the only includes that get loaded; otherwise you don't load the rest of the 50, because that would get really, really bloated pretty soon anyway.
C: I feel like, though, going down the current path doesn't stop us from going down any of the other paths easily. So one of the tests I use for, like, okay, are we going to be painted into the corner, is to come up with some crazy ideas or futures, right, and just see: is there a possible world where we can get out of that corner?
C: As long as you can come up with a reasonable solution where you get out of the corner, then don't design for it today, but have some confidence that, yeah, we can get around it. So I feel like — I don't see any reason that having a single variable for the platform would cause a problem.
C: If you really wanted to kill the use of a variable — I don't feel like the use of a variable is particularly problematic — whereas, if you wanted to say, look, we'll auto-detect it based on the variables that you've set: you set your Kubernetes variables, then we assume Kubernetes; you set an ECS variable, then we assume ECS; you set them both — I don't know, we pick one. If you even went down that path, it would still be easy to get out of it...
C: ...if that turned out to be a bad path, because then you just introduce the variable. And actually, I would argue that introducing the variable is still not even that bad, because, as you just talked about, you have a default: if you don't set the variable, then we fall back to the default. So I feel like none of these are locking us into a corner or constraining us in any way. So just pick, you know, something that's reasonable, that sounds good...
C: ...that is a good developer experience over time. But I feel like we should talk about one thing, because you asked me and I didn't respond, about the dynamic pipelines. I feel like it's worthwhile explaining my comment there a little bit, just to expose it really quickly: I was actually pretty much against including dynamic pipelines from the beginning — not necessarily because it's a bad idea...
C: It's actually a great idea — but because of pretty much this exact scenario. Because if you have dynamic pipelines, then people will start to use that as a default, and I don't want it used as an excuse for basically making things really, really complex. I feel like dynamic pipelines are a very advanced feature which we should not include by default in Auto DevOps. We should not, frankly, be encouraging people to use them. They're an escape hatch for when we haven't designed the system well enough to do what people need.
C: I believe that everything a user wants to be able to do, we should be able to do without dynamic pipelines. But there's a timeliness to it where, if it takes us a year to implement a feature that they need and they could get it done with dynamic pipelines, then great — that gives them an escape hatch for the year until we introduce, you know, the feature that they really need. But everything that we should be doing should be at the top level, like Shinya's suggestion to have, you know...
C: ...conditional includes — that's a great example of a first-class way to solve the problem. In the meantime, you could use dynamic pipelines, and as a customer, if that's what you have to do, then go ahead and use dynamic pipelines. But for us — I would really hate to see it in Auto DevOps, because I want that to be our embodiment of best practices, and I'll just say it: dynamic pipelines are not a best practice. They are an escape hatch that just adds massive complexity — technical complexity, but also cognitive complexity.
C: If I look at that pipeline, I no longer know what it does, because I'm doing a double take — you generate a file, and it's like, why don't I just do an include? And it's like, oh well, because we have rules for it. Like, that's the only reason. It's clearly a hack, you know — we're trying to do, basically, an include with conditions, because include doesn't have conditions, and anybody reading that would be like, well, why don't you just make includes...
C: ...have conditions? Like, just do it as a first-class way, not as a hack to work around a limitation. So again, if you need to experiment for one iteration and you need to do it this way to get it out, and then the next iteration you push, you know, includes with conditions — that's totally fine as a temporary solution. But I just really don't believe it's permanent, because it's not a best practice. That makes sense.
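The first-class "includes with conditions" being asked for here might look like the `include:rules` syntax GitLab later shipped — sketched with the platform variable under discussion and illustrative template paths:

```yaml
# Conditional includes: load only the deploy template for the chosen target,
# instead of loading every target's template unconditionally.
include:
  - template: Jobs/Deploy.gitlab-ci.yml        # Kubernetes Auto Deploy
    rules:
      - if: '$AUTO_DEPLOY_PLATFORM == null || $AUTO_DEPLOY_PLATFORM == "kubernetes"'
  - template: Jobs/Deploy/ECS.gitlab-ci.yml    # hypothetical ECS target
    rules:
      - if: '$AUTO_DEPLOY_PLATFORM == "ECS"'
```

This keeps the selection logic at the top level, with no generated files for a reader to chase through.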
A: It's a go for 13.0 with our current proposal, correct?

C: Correct. I do think you need some follow-up issues, like conditional includes, which is really an interesting use case regardless — so, follow-ups.

A: And then I think we've pretty much thought about, like, the default values and the conditions, but when we expand this to be even bigger, then we'll need to rethink, you know, all the logic again. It's probably going to be in the near future, but yeah.
C: I mean, I really hope we have multiple deploy targets. I also really hope that whatever we do here becomes a pattern for people to add their own deploy targets — like, take the Auto DevOps template, put it in their .gitlab-ci.yml, and then be like, oh, I'm just going to include my own definition of how to do deploys, and then suddenly I've got a deploy to, you know, my own custom stack, but all the rest of Auto DevOps still works. That would be the ideal outcome.
C: So I'm just going to throw out a couple of things. One is, the one you said still had "AWS" in it, and so that implies — well, what if I'm deploying outside of AWS? I'm not going to set that, so then what do you do? I'm not quite sure. Not saying it doesn't work, but it sounds like you're implying stuff, whereas by saying "auto deploy platform" you're being explicit: I'm making a choice between all these other choices.
C: I would think that the auto deploy platform would be the right place, because then it's — is it Kubernetes, is it ECS, is it whatever — it's just one variable, instead of, like, a two-tiered variable: oh, first, am I deploying to AWS, and then, within AWS, which one am I deploying to? I could see potential reasons to do that, but none of them, you know, are relevant right now. The only other comment I want to make is about "auto deploy"...
C: ...you had said a couple of conflicting things just now. This is auto deploy, but Auto DevOps is the thing — auto deploy is a part of Auto DevOps — so you've got to make some decision about whether you want it to be "auto deploy" or "Auto DevOps". The other thing I'm going to suggest is: do you really need "Auto DevOps" in there at all, or any "auto" anything?
C: So if you use it, then the variables don't conflict or whatever. But I kind of like the idea of saying: well, what if I took Auto DevOps and then I modified it and made it my thing, and it's no longer Auto DevOps, because it's no longer — I've got a hard-coded .gitlab-ci.yml? What variable am I going to use then? And now you're perpetuating it — I'm going to have a variable called "Auto DevOps platform", and I figure that's weird.
C: But now — "auto deploy platform" has potential there, and you can say, well, I'm not using Auto DevOps; am I using auto deploy? But "auto deploy" barely — you know, we have a docs page on it, it does exist as a term, but it's not really — so calling it "auto deploy platform" could be a challenge in a variety of other ways. You know, it's making a bolder statement, though, and saying: look, this is how you declare anything, and if you want to write your own, this is what we suggest you do.
C: Anyway, to be clear, I'm not opposed to either — like, if you want to put "auto DevOps", if you want to put "auto deploy", it's fine. I would just encourage you to think through the ramifications and the principles around it — you know, whether you want to say, look, everything's to be behind "Auto DevOps". Like, I don't know if the rest of our variables all have the Auto DevOps prefix, but I'm pretty sure...
C: ...when we set things like how many pods to scale up, we didn't use Auto-DevOps-prefixed variables; we declared, this is the variable — like, this is how many... I don't remember, the sizes, something like that. And I think, there, to be consistent — I'm pretty sure we don't prefix everything with "auto DevOps" or "auto deploy" or "auto" anything. I could be wrong, though; it's been around a while.
C: So, just really quickly looking through: we've got things like KUBE_INGRESS — it's not AUTO_DEVOPS_KUBE_INGRESS, you know. These are other terms, so I don't think we use Auto DevOps as a prefix for anything else. But, all right — anyway, thanks. Hopefully the meeting was useful, everybody. Thanks.