From YouTube: Auto Deploy Handover
Description
First Handover for Auto Deploy from the Configure team to the Release:Progressive Delivery team
A
Okay, so this is the first meeting of the handover of Auto Deploy between the Configure team and the Release (Progressive Delivery) team. The idea behind it is that deployment is a large part of what we're doing anyway in the release stage, so it makes sense to build on top of it. Even now we have a lot of things in development that use and extend Auto Deploy for non-Kubernetes usage, such as deploying to AWS ECS, and I assume additional targets will be joining soon. So it's not necessarily Kubernetes for the users that would use this, but it builds on top of it. That's the motivation for the handover. In terms of DRIs from the development side, we have one of the engineers on the line, and Shinya, who I didn't see joining yet; probably both of them will be responsible for Auto Deploy, and I'm the DRI from the Release side. That's an intro to what we're doing.
A
So that we're aware of what's going on: I talked about the Auto DevOps documentation, and when I say this, there's an idea rolling around about composable Auto DevOps, which basically means that you can mix and match whatever you want from the Auto DevOps pipeline. You can do your own build stage but then take advantage of everything else, or just select every stage and, like Legos, build whatever you want. So that's a discussion about where we would want the documentation to go, and also about the features page: there's a features YAML file that we have for gitlab.com, and we need to decide how we're going to separate that and who owns what. I put a question mark here about Auto Review Apps, because it's something that was on a Slack thread but we didn't actually make any decision. So I put that as a question mark, since we're the team that manages review apps anyway.
C
All right, so sorry, I'm still going through my first coffee of the day. So, the Auto DevOps technical overview: I think they know the gist of it already, but the things that we have been responsible for have been Auto Build, Auto Deploy, and Auto Test, and, I believe, the main template.

There's this Auto Build image. It uses Heroku buildpacks to automatically detect the project type and build the container image that will get deployed. It also gives the user an opt-out of the buildpack detection, because detection is quite fallible and the Heroku buildpacks are also quite slow, so they can use Dockerfiles instead. Then there is Auto Test, which is deprecated upstream. This is a significant piece of engineering upstream that we do not currently have an alternative for; figuring something out for it is on the roadmap. And then Auto Deploy: this is primarily the auto-deploy-image, and the template to a lesser extent.
C
So this is the main script that used to live inside one Auto-DevOps.gitlab-ci.yml file and has, over the course of the last year, been broken up into sub-templates and Docker images for testability. That's how all of these sub-projects actually got created. It's just a massive script that basically just calls Helm; well, "massive", it's not even that massive, it's less than 500 lines. And then there's this Helm chart.
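As a rough sketch of what one of those deploy sub-templates boils down to (the job name, image tag, and commands here are illustrative assumptions, not the actual template contents), the job is essentially a thin wrapper that points Helm at the auto-deploy-app chart:

```yaml
# Hypothetical Auto Deploy-style job; names, tag, and commands are illustrative.
production:
  stage: deploy
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.0"
  script:
    # The real template wraps steps like these in shell functions shipped in the image.
    - helm repo add gitlab https://charts.gitlab.io
    - >-
      helm upgrade --install production gitlab/auto-deploy-app
      --namespace "$KUBE_NAMESPACE"
      --set image.repository="$CI_APPLICATION_REPOSITORY"
      --set image.tag="$CI_APPLICATION_TAG"
  environment:
    name: production
```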
C
Well, we have the links here, actually. This is the chart that's being deployed by the auto-deploy-image, and it's hyper-customizable; users always want more customization. Currently we are wrapping Kubernetes and keeping it super simple. It really doesn't have a lot, because the type of application that we deploy is really simple. So, like you saw, there are very basic things in here; this is a very short deployment definition.
C
If you look at it, and then you look at all the ifs here, you see that it has a lot of what you would call cyclomatic complexity: there are so many different configurations you could activate that practically none of our more advanced users end up with effectively the same deployment. And this has probably been one of the most active projects for community contributors on our team, just adding to this.
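To make the point about the ifs concrete, here is a small, made-up Helm-template fragment in the same spirit; the value names are illustrative and not taken from the actual auto-deploy-app chart. Every independent conditional roughly doubles the number of distinct manifests the chart can render, which is where the support burden comes from.

```yaml
# Illustrative Helm template conditionals, not copied from auto-deploy-app.
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
{{- if .Values.podAnnotations }}
      annotations:
{{ toYaml .Values.podAnnotations | indent 8 }}
{{- end }}
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
{{- if .Values.livenessProbe.enabled }}
          livenessProbe:
            httpGet:
              path: {{ .Values.livenessProbe.path }}
              port: {{ .Values.service.internalPort }}
{{- end }}
```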
C
And
that's
that's,
actually
something
that's
kind
of
I
would
say
would
be
a
very
for
the
future.
It
would
be
nice
to
have
something
more
customizable
than
how
I'm
backing
this,
because
helm
is
a
kind
of
like
you
have
to
configure
it
by
passing
down
flags.
But
then
what
ends
up
happening
is
that
you
create
something
like
a
DSL
on
top
of
kubernetes
that
is
less
flexible
than
cuber
Nettie's
and
yet
like
it,
it's
very
hard
to
support
and
yeah.
C
We
use
like
Auto
Bild
image
and
auto
deploy
image,
and
this
is
the
last
one
that
still
remains
on
a
floating
version
and
it's
just
very
hard
to
iterate
on
projects
that
use
floating
versions,
because
you
can't
make
like
any
kind
of
breaking
changes,
even
even
if
they
wouldn't
be
breaking
for
users,
that
the
user
that
you
really
want
to
get
the
changes.
They
would
break
something
for
users
that
don't
actually
want
to
get
the
changes.
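A minimal sketch of the difference, using an illustrative image path, tags, and command; the point is only floating versus pinned references:

```yaml
# Floating reference: every pipeline silently picks up new image behavior,
# so even well-intended changes can break someone immediately.
deploy_floating:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:latest"
  script: [auto-deploy deploy]   # illustrative command

# Pinned reference: projects only move when the template (or the user) bumps the tag,
# so breaking changes can ship behind an explicit upgrade.
deploy_pinned:
  image: "registry.gitlab.com/gitlab-org/cluster-integration/auto-deploy-image:v1.0.7"
  script: [auto-deploy deploy]   # illustrative command
```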
D
I just have a quick question. We have a couple of templates in this chart project, the auto-deploy-app chart, right? And it's hard to introduce a change, because some users are already depending on the current structure. Would it make sense to create another template project and then switch to that new one, making the old one a no-op if a specific variable is passed, and just leave this older auto-deploy-app project as-is for the legacy users?
C
To me that makes perfect sense. We've floated this idea several times already; it seems like every time we want to make a change it's "how can we ever do this to this template?", and we've been doing it so far by masking changes behind options, but that just adds more complexity to the template.
C
It totally makes sense to do this, and my only reservation about just jumping on it is that you don't want to repeat the same thing with that template and then have to do it again with yet another template, because then you'll just have two templates that you have to support for a while. So figuring out how to do it in a more sustainable way would be good.
C
I think it's just that we didn't think of it in the beginning, or we wanted to iterate faster. Actually, I think Shinya knows better, because he was on the team when that decision was made, but I'm pretty sure it all just started out as a humongous GitLab CI template; I think that was Auto DevOps.
C
This current structure is a good example of iterative improvement, I think; we just haven't done the versioning yet. It should be pretty easy to introduce versioning by hard-coding, into the auto-deploy-image, the version of this chart that should be paired with it. That would solve some of the problems, but it still would not solve the problem for users who have not upgraded to the version of the auto-deploy-image that pins the chart version. So we would have to stop overwriting this chart at some point.
A
Another thing connected to this: when we're doing this deployment to ECS, which we have planned for this milestone, one of the things we wanted to do was introduce a new section to Auto Deploy, where we have a template on the side that does the deploy to ECS and the Auto Deploy template would call this template.
A
This, let's just call it a template again, deploys to ECS; otherwise it isn't included. And since we're planning on a bunch of targets (ECS is one of them, but also EC2, S3, Fargate for the AWS ones, and we'll probably do the same for additional cloud providers), we're going to have a bunch of different deployment targets where Kubernetes isn't one of them. The way I thought to do it was to take the Kubernetes part that came from Auto Deploy out as well and also make it an external template.
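A minimal sketch of the structure being described, assuming the main Auto Deploy template would include a side template like this and skip its Kubernetes jobs when the ECS target is selected; the job name, image, entrypoint, and variable are assumptions for illustration, not shipped behavior:

```yaml
# Hypothetical side template for deploying to ECS.
deploy_to_ecs:
  stage: deploy
  image: "registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs:latest"  # illustrative image
  script:
    - deploy-to-ecs   # illustrative entrypoint provided by the image
  rules:
    - if: '$AUTO_DEVOPS_PLATFORM_TARGET == "ECS"'   # hypothetical target switch
  environment:
    name: production
```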
C
I like the splitting; I think it makes sense to split by deployment type there. I've actually been reviewing that work and following along, so the whole original plan does make sense. The one thing that was unfortunate was that we were already planning things that would make it complicated.
C
The main thing that I had for that one, and it's more of a long-term concern, is that there are these two sections here: Operations, Environments, which is actually where deploy boards live, and then Operations, Kubernetes, which I would more generally call deployment platforms. But they are the Kubernetes section and the deploy boards section, and in here we have these advanced integrations.
C
I've deleted this cluster, so nothing is going to show up here, but it calls out to the cluster and so on. So, about adding support for this: my question was just whether there was a plan to introduce stuff like this for the new deployment platforms, or were we planning on doubling down on the CI-variable way of configuring them?
A
We have a bunch of stuff planned for deploy boards, but for deployment to AWS I didn't see anything specific that we wanted to add here. What we were relying on was the environment variables that were configured; that's pretty much how we figure out that the user is deploying to AWS. But there are other ways of deploying that aren't covered by this use case.
C
Yeah, okay. I was just thinking, for that one: if you wanted to make API calls out to the deployment platform, that would require the ability to fetch those secrets from the backend and then make some sort of call from in here, because here you would see pod rollout statuses and some other things.
C
So the CI variables would in principle be sufficient for the current deployment, but then any time the user changes the CI variables, for example if they want to retarget their deployment to review apps, they would break deploy boards for the active one. It's a concern that doesn't surface immediately but would hit you later on.
A
Looking at it, we try to avoid a collision: you're either using Auto DevOps with Kubernetes or you're using Auto DevOps with ECS. If you're using Kubernetes plus ECS, it's just going to initiate the regular Auto DevOps flow for Kubernetes. So at the moment we shouldn't see any problems in that sense, but at some point we'll probably need to combine them. Yeah.
C
Currently, with the templates, we wanted to change from the only/except syntax to rules, which should be a trivial change, especially for Auto DevOps. But then we realized people are including the template, and the only/except syntax is incompatible with rules, so it had to be done at a major milestone. Just little things like that turn out to be big things.
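For context on why that is breaking for projects that include the template: GitLab CI does not allow only/except and rules on the same job, so a project that overrides a template job with the old syntax stops validating as soon as the template's job switches to rules. A minimal illustration (the override itself is just an example):

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

# Fine while the included "production" job uses only/except.
# Once the template's job is rewritten with rules, the merged job would contain
# both keywords, which is invalid, so includers have to migrate at the same time.
production:
  only:
    refs:
      - master
```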
C
I thought that child templates were a really cool idea; I like child pipelines. And I think for Auto DevOps it would make it less customizable from the user's perspective, because, as far as I can tell, the user would not be able to override every single line of a CI job in a child pipeline. I think that would make Auto DevOps a lot easier to manage.
C
So I think it gives us more control: you can say "you have an ECS deployment, and this is how it works, and you can only configure it using CI variables", versus now, where you could include the template yourself and extend it yourself. So: include this ECS deployment template yourself, not as a child pipeline.
C
At the moment we only have monitoring on the implicit Auto DevOps pipelines. Unfortunately, we added this last year and never really extended it to the included ones, because the primary goal was just to notice when there were problems, like bugs baked into the pipeline. So here you see, the last one is total Auto DevOps pipelines, implicit only, per 12 hours over the last 30 days.
C
Tests are expected to fail while you're developing, and that's why we have this dashboard, which shows you the deviations and success ratios anyway. So this is all we have. We can also get the same information from the database, from the pipelines table, but the way the CI YAML works, the way it's stored...
C
Yeah, I just wanted to make sure we weren't repeating some of the things we had been working hard to fix. The floating version is something that really was a big pain point for us, so: making sure we don't have a floating version in something that we're just now adding, and making sure that if we introduce a new name to the template that the user can extend, it's the name that we want to be using for the long run.
C
Yeah, so let's look at the Auto DevOps docs. Where is this... here. This section here is about how you can include Auto DevOps like this, and the result of including it this way (we even document this kind of behavior) is that you can override individual attributes in every job. So every single thing in the Auto DevOps YAML template, and not just ours but every template that is in here, is basically public API.
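The documented pattern being referred to looks roughly like this; the specific attribute being overridden is just an example:

```yaml
include:
  - template: Auto-DevOps.gitlab-ci.yml

# Redeclaring a job from the included template overrides only the keys you set here,
# which is why every job name and attribute in the template is effectively public API.
production:
  variables:
    ADDITIONAL_HOSTS: "example.com"   # example override; any attribute can be changed this way
```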
C
So yeah, I actually really like that one, and I only had that one, well, minor-to-major concern: just don't use a floating version for it, if possible, because it will slow you down in your next iteration. It will let you land it sooner, but it will slow you down when you want to make any change that requires backend work. I'll write that on the merge request as well.
E
About the specific Docker images: I don't know much about that auto-deploy-image that you talked about and mentioned here, but from what I've seen I feel like this is an image that's platform-agnostic. On one hand we have the auto-deploy-image; on the other hand we have these AWS ECS and AWS base images that are very AWS-related, or AWS-focused, so I need to look into that a little bit more.
C
This was a comment that Mark also made on the parent issue as well. It was one of the ways in which we could actually introduce ECS support without really modifying the templates at all: all we would have to do is remove the restriction that the deployment jobs are Kubernetes-specific and say they accept any deployment platform. That just requires us to know that one of these Auto Deploy-supported deployment platforms is available. So yeah, I think that actually makes a lot of sense.
C
One very viable way forward is to extend the auto-deploy-image, or to build another one that merges both. As for the fate of the ECS images, I don't know. I think it makes sense, also for iteration's sake, to build out the functionality in the ECS image and try to, for example, mimic the API of Auto Deploy and see if there are any parts that are Kubernetes-specific.
C
One thing that you probably, well, I don't know if you will be able to have, is the implicit in-cluster database that the auto-deploy-app chart, or image, currently provisions. That has been very painful for us, actually, because we're not really qualified to be managing people's databases, so we shouldn't be offering that functionality for production. They really should be using a managed service for their databases instead of self-deployed Postgres, but it's there.
C
So
we
if
we
have
to
deal
with
it
and
it
does
demo
really
well
sorry
that
was
that
was
a
tangent,
so
yeah
I,
think
I,
think
building
it
outs
and
in
the
EECS
image,
while
you're
still
like
figuring
out.
All
the
details
makes
a
lot
of
sense
and
then
merging
it
back
into
auto.
Deploy
image,
yeah
should
be
a
problem,
would
be
a
really
cool
iteration.
A
So we discussed a little bit of what we have planned for deploy to ECS, and the fact that at the moment we're relying on environment variables to decide that that's the target. From a previous conversation that I had with Daniel, I understand that Auto DevOps in general also relies on environment variables. Is that correct?
C
So Auto DevOps, in the end, the whole architecture is: there are some extra things on the backend, like the deployment platform support, which in the end just inject CI variables into the pipelines. Once you get to the CI stage, everything is driven by CI variables, so starting out with CI variables for the first iteration does make a lot of sense.
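As a sketch of what "the backend just injects CI variables" means in practice: the cluster integration surfaces connection details as variables (names like KUBE_NAMESPACE and KUBE_INGRESS_BASE_DOMAIN are the commonly documented ones; the job below is illustrative), and the deploy jobs only ever consume those variables.

```yaml
# Illustrative deploy job; it sees the cluster only through injected CI variables.
production:
  stage: deploy
  script:
    - kubectl --namespace="$KUBE_NAMESPACE" get pods        # kubectl picks up $KUBECONFIG
    - echo "App will be served under *.$KUBE_INGRESS_BASE_DOMAIN"
  environment:
    name: production
```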
C
Let's go to this page. Everything that's in here, everything that reaches into the Kubernetes-specific integration, all of this cluster stuff, eventually becomes a CI variable; but for the sake of a more integrated experience for the user, we have this page here.

So that's all I'm saying: so that you can do these things, so you can tell that the project has Kubernetes and display different things in the UI, all of this should be in the database in a way that is meant to be retrieved by the user and to drive backend business logic. But the feature itself, this whole thing, is not a lot: it's database stuff, and then it drives CI variables.
A
We
have
another
issue
regarding
implementing
load,
balancers
and
annotations
on
nginx
yeah.
So
maybe
we
could
talk
a
little
bit
about
that.
What
our
current
plan
is
to
use
the
already
supported,
nginx
annotations
and
let
users
configure
in
some
way,
through
the
UI
and
through
API
change,
different
parameters
like
weight
and
things
like
that,
and
then
we
would
want
to
connect
it
back
to
Auto
them.
D
Currently, the way it works is that there is a variable called CANARY_ENABLED, and if this variable is specified, a canary deployment is created. What we currently have between the canary deployment and the production deployment is service-level load balancing: there is just one service and there are two deployments, one canary and one production, and the traffic is routed to each deployment. Actually, one contributor was trying to fix this problem, because it's kind of bothersome if you want to use canary ingress.
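For contrast, the canary-Ingress direction that keeps coming up would use the NGINX ingress controller's canary annotations to make the split explicit, instead of relying on the pod ratio behind a single Service. A minimal sketch, with illustrative names, host, and weight:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production-canary        # illustrative name
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"   # send roughly 10% of traffic to the canary
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: production-canary
                port:
                  number: 5000
```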
A
Okay, great. So, jumping back to ECS: another thing that we're discussing is adding a new parameter, or variable, called launch type, and the idea here is to make the logic easier once we're introducing a bunch of different targets. Auto DevOps needs to differentiate what the target ultimately is going to be, where the deployment is going.
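The "switch case" this describes could look roughly like the following in the template; the variable name and its values are assumptions still under discussion, not shipped behavior:

```yaml
# Hypothetical launch-type switch between deployment targets.
deploy_kubernetes:
  stage: deploy
  script: [echo "deploy via Helm to Kubernetes"]
  rules:
    - if: '$AUTO_DEVOPS_PLATFORM_TARGET == null && $KUBECONFIG'   # legacy/default path
    - if: '$AUTO_DEVOPS_PLATFORM_TARGET == "KUBERNETES"'

deploy_ecs:
  stage: deploy
  script: [echo "deploy to ECS"]
  rules:
    - if: '$AUTO_DEVOPS_PLATFORM_TARGET == "ECS"'

deploy_fargate:
  stage: deploy
  script: [echo "deploy to ECS on Fargate"]
  rules:
    - if: '$AUTO_DEVOPS_PLATFORM_TARGET == "FARGATE"'
```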
A
I know I asked a similar question before, but would this help us in terms of extracting the Kubernetes deployment or the ECS deployment from the master template, and would it help with the versioning at the end of the day? Because, basically, since we're introducing this new variable that doesn't exist in previous versions, it could help us determine what kind of version a user is using.
C
So yeah, I've been thinking about this as well, because honestly, up until this effort to add ECS, we really hadn't thought much about generalizing to different, what I like to call, deployment platforms. That abstraction is already halfway in the codebase, so there are already some abstractions there. I think a launch type for AWS makes a lot of sense, though I would encourage you to maybe try to see if it makes sense to make the abstraction less "ECS instead of Kubernetes". Although this one is really completely generic: as long as the platform is one where it makes sense to deploy in these different ways, like if staging makes sense, or incremental rollout makes sense, then I think it probably works for almost every deployment platform possible. So this is pretty generic, but yeah.
A
It's something that you would set manually, and it's nothing visual or anything new. Again, the idea behind it was just to make life easier for the logic. At the end of the day, when you think about it, it's like a switch case, and we have a bunch of different cases. The thing that scares me most is that there's some kind of deployment target that we haven't thought about, and I don't want to have to deal with it.
A
So I want to make sure that that case gets skipped cleanly, and on the other hand I want to make sure that the other elements of Auto DevOps keep working: if you're not using Auto Deploy, you can still use the other ones, so you can still use Auto Build, Auto Test, and everything else. That was the reasoning behind it.
B
Yeah, I think that makes total sense. I was just wondering, because right now pretty much everything is focused on Kubernetes: the messaging, the documentation, the UI, everything. So if we're going to create new deployment targets, it does create an opportunity. It creates a lot of opportunities, actually: like you're mentioning, there's an opportunity to break it up in a way that lets us add versioning, and an opportunity to break it up in a way that lets us add multiple deployment targets.
A
We have Dimitri here too, so he can start thinking about it as well, because it is very Kubernetes-targeted right now. Even for the idea of the load balancer with the NGINX weights, we don't know where we want to place it. I was thinking I don't want to hide it under Kubernetes, even though this one is starting from Kubernetes, but how do we visualize it? Maybe it's a really good opportunity to change everything; I don't know, and that's why we have Dimitri here.
E
...that exists today in our codebase for Kubernetes projects. From that point, once we have updated and improved the framework to be platform-agnostic, then maybe we can move on to implementing a new UI, and after that maybe we can even remove the CI variable. The new CI variable is what holds it all together, right? It's what enables us to have that second platform; it's a switch, in a way.
B
Yeah, I think that sounds great, and the Configure team totally stands ready to help in whatever way we can, to help plan or provide consulting to help that along. I think the only thing I'm going to ask is to just keep communication open, because somehow we didn't know about this until last week. The more communication and transparency there is, the more we can help make it great and avoid problems and things like that.
E
Totally agree. For example, the work that you guys have done on the implementation of the rules attribute: I think that's great, and that's something I pulled in so that I can work with it in my own tasks. And yeah, I feel like now there's more communication, and it's good and very helpful to have insight into what you guys have been doing.
C
You would really want to actually have control over that image as well. So there's the auto-deploy-image, there's the Auto Build image, and so on, and the Configure team maintains all of them. Who on the team is a maintainer depends on what they've worked on, but we were the primary maintainers, and yes, I would expect you to take that over if you're taking over the Auto Deploy feature; that's what we were expecting when discussing it. Okay.
C
Recently it feels like it's been happening more, which might be a signal that we're getting more users, or that people are bored in the pandemic. I think it's just that we're getting more users over time and people are starting to hit the limitations of the Helm chart. I would say we're getting maybe one or two significant contributions monthly, but then we also get these less significant ones that fall into a gray area of adding complexity to the chart that might not benefit many users, and we're not entirely sure about those.
C
Okay, so you know the two dashboards that I showed you. This is the main metric; this is just a Prometheus interface, and these are the two main live monitoring dashboards that we have. Oh, I'm not sharing my screen. But yeah, the dashboards that I showed you are the only dashboards we have for monitoring for errors.
C
We don't really look at them that much, because it's pretty stable, so it's more of a reactive thing: we look at them when people start complaining. They originally got added because we had the problem that Auto DevOps broke and we didn't know, and we had no way of really verifying it because it was semi-random. So then we added those dashboards.
B
So, Orit was just asking what our next action items for this should be. Well, it seems like we're going to need another meeting, potentially; as always, we should probably set that up. I'm also wondering if there are folks on the Progressive Delivery team that want to join these projects as trainee maintainers; we could kick-start the process of getting them bumped up towards being a maintainer.
C
That was merged last night by Tom, so okay, that is merged. Then the other one is moving the Postgres default version, which we are doing; it should be merged very soon, hopefully today or tomorrow. Those are kind of the main action items we have. For 13.0, my suggested plan would be that Configure identify the kind of work that we want to wrap up now during the handover stage, that your team help review that work, and that we pause any further feature development.
B
Yeah, and I guess from my perspective: if we met next week, my team could come up with a list of handover items, now that they have a better idea of the overall situation, a specific list of things that need to be handed over. Or we can communicate in the feature channel, and then maybe we don't need to meet regularly. I don't think we need many more meetings, but I guess maybe we could decide at the next meeting what...