From YouTube: Deep Dive on Natively supporting cloud deployments
B: [inaudible]

A: It is an epic, yeah; we could look at that. I can also just talk about the idea, and, yeah, I'll start with that, and if you want to pull up the epic we can then explore that, or however we want to do it. But I'll just start by talking about what the overall idea here was. This concept of doing something in this space for the hyperclouds came from the fact that we've got a really, really nice user experience for Kubernetes.
A: If you're using Kubernetes, you can use Auto DevOps and it will automatically detect what's in your project using Heroku buildpacks. It will run the build, it will run the tests, and it will build a container if you've got a container in there. It will even build other kinds of things; so, like, if you've got a Ruby project, there's a Heroku buildpack for Ruby.
B: [inaudible]

A: It will know how, like, if it's Rails, it will know how to build a Rails application automatically; you don't have to write a .gitlab-ci.yml for it. And then, if you've integrated a Kubernetes cluster with it, Auto DevOps also knows how to take that output and deploy it to the cluster that's associated with your project or your group, which is super, super cool. It doesn't fit every use case, but for a lot of use cases, and especially ones where you're building a container, you can get, you know, a build and a deploy to your test environment.
A: All up and running super, super easy, and you also get to take advantage of incremental rollout, canary rollouts and all these features that we've built. But if you're using ECR or ECS, the Amazon container services, where you're deploying a container to virtual hardware but not through Kubernetes, or the same thing in Azure, or the same thing in GCP, then we don't really do anything for you.
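For reference, the Kubernetes path described above really is this small: enabling Auto DevOps in CI is essentially a one-line include (template name taken from GitLab's shipped templates); nothing comparably small exists for the non-Kubernetes targets.

```yaml
# .gitlab-ci.yml
# Auto DevOps detects the project type via Heroku buildpacks, then runs
# build, test, and (with a connected Kubernetes cluster) deploy stages.
include:
  - template: Auto-DevOps.gitlab-ci.yml
```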
A: It leaves you there, and then you have to figure out how to do the deployment on your own. Now, doing the deployment on your own is not terribly complicated, and the person probably knows how to do it, but there are some pieces missing where we can make it really easy. The dream is getting to the point where you can essentially do the same thing that you do for Kubernetes, except instead of configuring your cluster and providing us the credentials to it, you'd be configuring...
A: ...like, you know, what is the AWS ECS environment that you want to deploy to, and what are the credentials for it? We have an issue around collecting credentials for Amazon in particular that we can start with; I think it's good to look for 12ax7, not a hundred percent sure on that, though. Then, using that, we could say: well, you're providing those credentials to us, and you're providing us the target in some way. There's the task definition, which is a JSON file.
A: If we have a task definition file that's checked in, and we have credentials that you've provided to us, then in theory, at that point, we can automatically do everything for you. We can just look at your project, know that it's deploying to ECS, and just automatically do it for you as part of the process.
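A rough sketch of what that could look like as a CI job. The image name is hypothetical and the cluster/service names are placeholders, but the two AWS CLI calls are the real ECS primitives:

```yaml
# Hypothetical deploy job: the task definition is checked into the repo,
# and AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_DEFAULT_REGION are
# provided as CI/CD variables (never committed).
deploy_to_ecs:
  stage: deploy
  image: gitlab-aws-base:latest   # hypothetical base container, discussed below
  script:
    # Register the checked-in task definition, then roll the service to it.
    - aws ecs register-task-definition --cli-input-json file://task-definition.json
    - aws ecs update-service --cluster my-cluster --service my-service --task-definition my-task
```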
A: Cool. There are steps that we can do in the meantime. Like, in 12.6 I think we're delivering the container. So right now, if you want to use the AWS command-line client, even Amazon doesn't provide a base container that you can use as their build/deploy environment for running commands, so we can provide a nice container.
B: [inaudible]

A: It's not root; it shouldn't be root. That should be avoided, so we should provide some guidance on, you know, what we're expecting. There's actually good guidance from Amazon on how to grant the least credentials that are needed, and I would say that's the overall principle you should apply: if you're deploying to one task definition in ECS, you should provide credentials to GitLab for that project that are only able to do that one very specific thing. There's a balance to strike there.
A: You could end up with hundreds or thousands of credentials, depending on whether you have many individual microservices, so there may be some efficiencies to gain in at least grouping them in some way, but ultimately that's up to the user. In general, the principle is: make them as specific as possible.
B: [inaudible]

A: That is a scary thing. I don't know what's driving them to have to do that, so that would be interesting to look into. As far as I know, within CI we're not requiring them to do that; maybe that's just the obvious, easy way to get it up and running, and then people stick with it and later regret it.
A: So there's that. The other thing you can do: we have a Vault integration coming out in a release or two. Well, I guess the first thing I'll say is that the credentials should be in environment variables rather than in a file that's checked in, because if you check the file in and you have a public repository, obviously people can then just look at the file and get the credentials.
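As a concrete contrast, a job along these lines reads credentials only from CI/CD variables (Settings > CI/CD > Variables, ideally masked), so nothing secret lives in the repository even if it is public. The cluster and service names are placeholders:

```yaml
deploy:
  stage: deploy
  script:
    # The AWS CLI picks up AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and
    # AWS_DEFAULT_REGION from the environment automatically.
    - aws sts get-caller-identity   # sanity-check the provided credentials
    - aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
```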
A: So that's the workflow, and you can see with these little iterations that we're getting closer and closer to the eventual idea, that dream scenario I was talking about at the beginning, where we just look at your project, look at the credentials you gave us, and know that we can deploy, to AWS to start, but eventually the same thing for Azure and for GCP as well: the non-Kubernetes, but still cloud-native, deployment targets. Those are super, super commonly used. Yeah.
B: [inaudible]

A: It's something to double-check. My understanding of EC2 is that it uses AMIs, and they're just, like, virtual machine images, something like what VMware uses: more just an image of a machine with memory, processor and disk. It's the more traditional way, and I wouldn't really call that cloud native. It's possible I'm misremembering what EC2 is, but if it's that, then I would say it's out of scope for this project.
A: More interesting is what you might be able to do with EC2: run Docker on a virtual machine. But that's sort of getting into one-step-removed territory. If you're using ECS, which is the direct container deployment system, then that's much more automatable for us, and there are no AMIs involved.
A: I do know that EKS is the parallel to GKE, yeah, and I don't know what the EC2 equivalent is in GCP, and I don't know what the ECS equivalent is in GCP, but I'm sure they offer virtual machines, and I'm sure they offer direct container deployment. Something also interesting here that we haven't touched on yet is serverless.
B: [inaudible]

A: I think we should only make them if we need to; AWS is kind of unique in not providing a standardized deploy environment, I believe. So there's an interesting side track here, which is: through this issue of providing a container, we sort of discovered the idea that, in theory, we could support something analogous to CircleCI orbs or GitHub Actions, at least the part of those that is about having a container that contains the environment...
A: ...that's needed to do a task, and the code that's needed to then do it, and then have a standardized way, from a GitLab CI job, to call out to one of these containers and provide the configuration that's needed so that it can run and do its thing. That's a perfect description of what we're doing with the AWS command line, where you could add, you know, layers of automatic scripting.
A: Deploying to ECS probably takes two or three AWS command-line invocations: like, log in, then target the thing, and then deploy to the thing. So doing this in a reusable way could let us implement other kinds of interesting containers that are self-contained in that way, that could really do anything: they could run a test suite, they could literally do anything. It would be a job that, instead of having a script section, would just have configuration to pass through to the container, and the container would do whatever it wants.
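The orb/Action-like shape being described might look something like this. Note that this is purely speculative: GitLab CI jobs today still need a script, the image name is imaginary, and the variable names are made up for illustration.

```yaml
# A job that is "just configuration": the community-built container's
# entrypoint performs the login/target/deploy sequence itself.
deploy_via_container:
  stage: deploy
  image: ecs-deployer:latest   # hypothetical reusable community container
  variables:
    ECS_CLUSTER: my-cluster
    ECS_SERVICE: my-service
  script:
    - /deploy.sh               # thin shim; the container does the rest
```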
A: It would be an interesting way to potentially have reusable, community-contributed containers that can be integrated into GitLab CI. That's sort of a tangent, and that interesting idea doesn't contribute per se to the hypercloud goal, to making cloud deployments easy, except that we're kind of doing this for AWS and we want to do it in a container. So if it's not too much extra work, we can try to do it the, quote, "right way", or at least in an interesting way, to start.
B: ...sure how we would choose where to deploy from. Because if we're talking about native cloud, this is just defining additional methods, I would say, but it doesn't reach the cloud at the end of the day. Like, it gives me a Terraform CI file, but it doesn't tell me: okay, now you're deploying to Azure, or to AWS, or, you know. So we're...
A: There is a Terraform configuration file, for sure, that describes, you know, the environment to be set up or modified, and then there is a target environment. But yeah, it's different, because there's more flexibility in terms of what the target environment is; Terraform is able to abstract some of that away.
A: What we would have to do is some research on how that's defined, what's possible, and what could be auto-detected there. I think recognizing that a project contains a Terraform deployment script should be pretty straightforward, but figuring out automatically what it deploys to, that's going to depend on how Terraform works, and I'm not familiar enough with Terraform to know how complicated or easy that will be. It should be relatively easy, I would think, but somebody's going to have to be familiar with Terraform to know.
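Detecting the Terraform case is plausibly as simple as a `rules:exists` check on `.tf` files (real GitLab CI syntax); which cloud the configuration targets would still need the research described above. The job name is illustrative:

```yaml
# Run Terraform jobs only when the project actually contains Terraform files.
terraform_plan:
  image: hashicorp/terraform:light
  script:
    - terraform init
    - terraform plan
  rules:
    - exists:
        - "**/*.tf"
```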
B: [inaudible]

A: So, yeah, this is a bunch of stuff, and it isn't really... I wouldn't say that our team, or you, should be tracking this particular issue. I understand why they linked it, because I probably would have if I were them. I'd have done it the other way, though: I wouldn't have made their alliances issue a sub-epic of something that we're delivering in a single release; I would have made the thing we're delivering in a single release a sub-item of their big-picture view. But that's sort of neither here nor there.
B: There's a lot of potential going into this. After we actually support it smoothly, I guess, we could then add monitoring to it, and even scanning to make sure that you're not over budget and things like that. So there are a lot of interesting things we can do here, but I guess we need the building blocks first, yeah.
A
We
need
two
building
blocks.
First,
that's
for
sure
and
that's
that's
running
the
deployment
and
trying
to
as
best
we
can
Auto
detect
what's
in
the
project
and
what
we're
deploying
to
on
a
per
environment
basis,
just
like
we
do
for
other
DevOps
and
and
just
make
that
super
easy
that
experience
when
you're
in
a
new
user
and
you're
like
okay,
I'm
gonna
check
out
get
lab,
see
I've
heard
good
things
about
it,
I'm
using
ECS.
What
is
the
experience?
A: Is it: whoa, it auto-detected everything in my project, and it was clear to me that I just needed to set up credentials and then it would deploy to my environment? Or is it spelunking for hours through the GitLab CI YAML specification, trying to figure out what's possible? It's more the...
A
And
it's
not
clear,
what's
possible
or
what's
easy,
if
you
take
one
of
our,
you
know
account
experts,
who's
worked
with
many
customers
to
set
up
get
CIE
a
moles
in
order
to
get
to
play
to
AWS.
They
can
probably
get
you
up
and
running
at
15
minutes
if
you're,
a
person
trying
to
learn,
get
lap,
CI,
yeah
Mille
at
the
same
time
and
learn
how
git
lab
works
and
try
and
get
everything
working.
A: I don't think so. I think that's sort of the flavor.