From YouTube: 2023-03-08 - Delivery:System Sync and Demo
A
C
Okay, so it's added to the merge train. Okay, again.
A
E
For logistical purposes, I'm gonna move my specific discussion item to the very bottom of the agenda, just because it could either be one of those "oh, let's discuss this for the next three hours" items or one of those "let's discuss this for five minutes" items. Because of that variability, I don't want to interfere with other things that look more fun to me. So, Ahmad, why don't you go ahead and kick us off, and let's talk about some metrics.
D
Just making sure I don't have anything inappropriate open. Now I have Thanos. Can you see Thanos? Amazing. This week we merged a couple of metrics, and Vladimir visualized some of them. I want to talk about the issues we have with counters.
D
For example, if I take delivery_deployment_started_total: this is a metric which gives us the total number of deployments started per day.
E
A
A
D
What did you want me to do, Skarbek?
A
D
Then, delivery_deployment_started_total for gprd, right. If we show the graph, it should give us how many deployments started in the last one hour. If we increase the range here: nothing started today in the last one hour, and then six hours, 12 hours, one day, two days. I wanted it to be two weeks to show you this.
D
We have an issue here. The first one is: we have restarts in between. The started counter lives in the Prometheus metrics server we have on our Ops instance, so the counter does not keep its value; it actually starts counting from zero again. This is not a counter reset.
D
If I take this and execute it, it basically shows the metric as if it actually had a counter reset. So what I wanted to say is: this is a counter that has, at max, 15 as a value here for the last two weeks. So we have a handful, not even a handful, a very low number of things to count per day, or basically over the, I don't know, three days here from March 1 to March 3.
D
It goes up to 15 and then we reset again, so we just have a very low number of things to count. The second thing is: this is not a valuable thing to show. What we actually need to show is something like deployments failed. So, thanks for opening this: we can have delivery_deployment_completed.
D
This breaks; it doesn't show any meaningful data. One thing is, we don't actually have the deployment completed data for the last two weeks, because this was just merged, I don't know, two days ago. That's the first thing. The second thing is: this is a math operation, and math operations on counters do not really produce a meaningful value, so it makes this really hard to visualize. So counters in Prometheus are usually meant to be... go ahead, Vladimir, if you want to say anything.
B
Yeah. That was the reason, well, one of the reasons, why I wanted to have actual real deployment data, not counters of multiple deployments per time period or something like that, you know. We already discussed that with Reuben as well, I think, and the initial plan was to have a kind of mean time per merge request as a metric. But then I started thinking: okay, so we're gonna have mean time per merge request; what does it actually give us? What answer?
B
Everything that you can imagine, you can just build with this data. We need more data in order to have long-term aggregations and then the ability to drill down into a particular deployment, and then certain problems just disappear, you know, like these counter resets. You just have raw data, and you do everything you want with this raw data. That is my vision.
B
I suggested that, and I think that all these metrics can be calculated on the fly if we have data per deployment.
B
I already do that. At the end there will be a demo of the very beginning of this journey.
B
D
Thanks, Vladimir. Yeah, the point that I was also trying to make is that counters are usually used with rate and increase in Prometheus, so on their own they are not very helpful for aggregations. You need to aggregate them using other functions like rate, or to show a trend, and what we are trying to do here is not really going to work. I tried to visualize this for the last couple of days.
D
D
If we subtract, yeah, if we want to subtract the completed from the started, we would get the failed deployments. Unfortunately, this doesn't give the real number because, as I said, that's not the way Prometheus works, so we need to figure out how we actually need to collect the metrics somehow.
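For context, a minimal sketch of the two query shapes being contrasted in this exchange. The metric names follow the names read out above, but the exact spellings and label sets are assumptions:

```ruby
# Illustrative PromQL held as Ruby strings; the metric names are assumptions
# based on the discussion, not confirmed from the codebase.

# Naive subtraction of two raw counters. Each counter independently drops to
# zero whenever the metrics server restarts, so the difference can go negative
# or jump around, and it does not equal "failed deployments".
NAIVE_FAILED = <<~PROMQL
  delivery_deployment_started_total - delivery_deployment_completed_total
PROMQL

# increase() compensates for counter resets inside the range, so each
# per-window delta is sane, but subtracting the two windows still only
# approximates failures: a deployment started in one window can complete
# in the next.
WINDOWED_FAILED = <<~PROMQL
  increase(delivery_deployment_started_total[1d])
    - increase(delivery_deployment_completed_total[1d])
PROMQL
```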
D
A
B
Again, my first thought about metrics and about visualization: you need to keep in mind what the question is that you are trying to ask, and what the ultimate goal of this metric is. Do you want a percentage of failed jobs? Do you want a total amount of jobs per day? Or what exactly?
D
B
Then I don't see why we cannot do that by aggregating or subtracting the number of actual jobs that carry a status label of failed, or something like that. So...
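As a rough illustration of that status-label idea, a counter partitioned by a status label could look like the following with the Ruby prometheus-client gem. The metric name and label values here are hypothetical, not the team's actual instrumentation:

```ruby
require 'prometheus/client'

registry = Prometheus::Client.registry

# One counter partitioned by a status label, instead of separate
# started/completed metrics. Name and labels are hypothetical.
deployments = Prometheus::Client::Counter.new(
  :delivery_deployments_total,
  docstring: 'Deployments observed, partitioned by status',
  labels: [:environment, :status]
)
registry.register(deployments)

# At the instrumentation points in the pipeline:
deployments.increment(labels: { environment: 'gprd', status: 'started' })
deployments.increment(labels: { environment: 'gprd', status: 'failed' })

# A failure ratio could then be queried with something like:
#   sum(increase(delivery_deployments_total{status="failed"}[1d]))
#     / sum(increase(delivery_deployments_total{status="started"}[1d]))
```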
A
D
D
D
That's a good question. The thing is, yeah, you can of course do this, but it's not going to show meaningful data either. Like if I do increase here...
F
D
D
F
F
F
D
A
D
B
F
D
D
I was discussing this with Skarbek in our 1:1, and I was proposing to use a histogram with buckets: a failed bucket, like these statuses, failed or completed or started. Basically we can also do some math on this; it's going to be a little bit easier. But I would need to actually test it, because I'm not sure whether it's going to work or not. My main problem is that Prometheus is not really meant...
D
B
B
I saw that. Wait a second.
A
B
Sorry, I'm trying to find... sorry, that call.
B
D
Not really; this is a gauge. This goes up and down. What I actually wanted to have is buckets in a histogram, like...
C
B
Wait a second: duration. Yeah, the duration has a status label.
D
Yeah, anyway, I know this dragged the whole team into this discussion. It's more just that I don't think counters are the perfect fit. I also discussed this with Steve Azzopardi yesterday, because I was fighting against this for the last couple of days, and we reached the same conclusion: counters are meant to be used with the rate and increase functions, and they are not going to produce any meaningful data for us, unfortunately. So either we try with a histogram, or maybe...
D
F
So earlier we were not sure if we could collect, like, high-volume data (high-cardinality data, sorry) with every deployment ID and stuff. So if we want to collect it now, I'm still wondering how you would get the number of failed deployments out of that. Because, for example, say you record the start of a deployment with a particular ID...
F
D
The thing is, something I would play with: we can still use Prometheus as the database, but not its query language as the interface. Maybe we can use the API to actually get the numbers and, I don't know, calculate it with Ruby. That would be easier, right? If we get the deployments started and then subtract the completed, we would have a number; it's the Prometheus query language that would not get us the number we want.
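A rough sketch of that idea: query the Prometheus HTTP API and do the arithmetic in Ruby rather than in PromQL. The endpoint URL and metric names are placeholders:

```ruby
require 'net/http'
require 'json'
require 'uri'

# Placeholder endpoint; the real Ops Prometheus/Thanos URL would differ.
PROMETHEUS = 'https://prometheus.example.com'

# Run an instant query against the Prometheus HTTP API and return the first
# sample's value as a Float (nil if the result set is empty).
def instant_query(expr)
  uri = URI("#{PROMETHEUS}/api/v1/query")
  uri.query = URI.encode_www_form(query: expr)
  body = JSON.parse(Net::HTTP.get(uri))
  sample = body.dig('data', 'result', 0, 'value')
  sample && Float(sample[1])
end

started   = instant_query('sum(increase(delivery_deployment_started_total[1d]))')   || 0.0
completed = instant_query('sum(increase(delivery_deployment_completed_total[1d]))') || 0.0

# The subtraction happens in Ruby, where we control the semantics, rather
# than inside the PromQL engine.
puts "deployments not completed in the last day: #{(started - completed).round}"
```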
A
D
G
So with the metric, if we have it by ID, then aggregating it in a graph is fine, like using a sum, and then we can aggregate it. But then separating it by ID would prevent us from increasing the count using Ruby while trying to calculate the budget, right?
G
G
G
E
Are we concerned about cardinality? Because we're only deploying, we create at most, what, eight or nine pipelines a day. So even if we're adding, say, the package name to it, or a status field of some kind, the cardinality is still not going to be a lot. We're not storing that many metrics; we're not running 300 Sidekiq pods capturing 15 different labels for Sidekiq itself. We are well below that threshold. That's...
B
F
G
A lot more spots that we need to check to implement that status, unless that's easy; I'm not too sure. But I feel like capturing status started and capturing status completed is a very finite number of spots in our code, at least.
E
Well, I'm guessing there are two ways. I mean, Vladimir will probably correct me, because I get the sense that you know this very well compared to what my knowledge is, but my thought process takes me to what kube-state-metrics is doing, where everything has two statuses, effectively. So you've got your pipeline...
E
So we could query for all failures and get a count of all failures, regardless of the pipeline or the package ID, in that case. And then for every individual job we would basically have something similar, but we would have, like, the job...
E
ID would be another unique identifier, another label that we would have to take into account, and then with each of those you would probably see the cardinality start to blow up, because then you're asking for a lot of statuses across all of those pipelines and all of those jobs, and that's a lot of jobs in the entire pipeline, right?
B
I looked at the canonical report, and my initial question was what sort of cardinality it would be to send all data about merge requests, every single merge request. I want to see the lead time for each merge request, and I figured out how many we have since the very beginning of GitLab.
B
And so on, and so on. So in the whole world of metrics, our pipelines are just a tiny fish, and if we are talking about getting the data per pipeline and per deployment: you take these merge requests, and one deployment consists of, on average, like, five merge requests. So you take 83,000, you divide it by five, and then you have the number of deployments we have made since the very beginning of GitLab, and I don't really think that it's something...
B
...we should really worry about. And even the status: okay, so what do you have? You have an environment, you have a deployment ID, and you have a status, and probably start and stop times or something like that. Even with those labels, it's not gonna explode the metrics.
E
B
Was it jobs? Just jobs. Jobs, yes; jobs would start to increase it, I think so, yes. But again, it really depends what you compare it with. I have experience in the past enabling Istio metrics for each connection (from where to where it goes, and for which endpoint), and it's all exported to Prometheus, and this is really a lot. That's like a few hundred million metrics per couple of days.
B
B
Because our Thanos and observability infrastructure is huge. What is really important is the amount of metrics the local Prometheus keeps in memory, because it has to keep everything in memory, and it keeps only 12 hours of metrics; the rest goes to Thanos, and Thanos just stores those metrics in the bucket. It costs nothing, and it can de-duplicate, it can downsample, and so on.
F
I don't think storage cost is the concern; it's more whether Prometheus is able to process a time range that is longer and contains a lot of metrics. But even assuming that that is fine... sorry, Skarbek, you wanted to say something.
F
F
So the two are not exactly the same thing, because a failed job can be retried and can succeed.
F
So let's start with the failed deployment. A failed job is easier, because once it fails you know it's failed; even if you retry and it succeeds, the previous failure event is still there. But with a deployment pipeline, if you retry and it succeeds, that same event has, you know, changed, if you get my meaning. That same pipeline is now not failed but succeeded.
B
If we set, sorry, if we set the metric... So in the metrics server there is a set and there is an observe method, I think. If we set, and if the ID of the pipeline doesn't change (and it doesn't change), it will set, it will rewrite the metric. So it will... yeah, I see the problem, right. If a job failed and then it retries and it succeeds, the metric will be rewritten. That's the thing.
F
I don't think a counter has a set method; I think that's just on a gauge. But yeah, that should work with the...
E
Gauges, yeah. It's a gauge, yes, and this sounds like it would work very well, because our ID would probably be something like the name of the package that we want to deploy, because that's kind of our starting point: all of our processes start with the tag, right? So that could be the ID that we use across the board, for all of us.
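A minimal sketch of that gauge idea with the Ruby prometheus-client gem, keyed by the package version so that a retried pipeline overwrites its own sample. The metric name, status encoding, and version string are illustrative assumptions:

```ruby
require 'prometheus/client'

registry = Prometheus::Client.registry

# Hypothetical gauge: one time series per package version, with the status
# encoded as the sample value.
deployment_status = Prometheus::Client::Gauge.new(
  :delivery_deployment_status,
  docstring: 'Last known deployment status (0=started, 1=completed, 2=failed)',
  labels: [:environment, :version]
)
registry.register(deployment_status)

labels = { environment: 'gprd', version: '15.10.202303080800' } # hypothetical

# First attempt fails ...
deployment_status.set(2, labels: labels)

# ... and a retry of the same pipeline succeeds. Because the label set is
# identical, set() overwrites the sample instead of creating a new series,
# which is exactly the rewrite behaviour discussed above.
deployment_status.set(1, labels: labels)
```

Encoding the status as the value keeps this at one series per version; encoding it as a label would instead create one series per version and status.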
D
D
I can play with this: I can basically make a gauge and try to visualize it. I would also like to raise this again: I still don't think Prometheus is the correct tool for our analytics data. I think we mainly want to analyze data, and it could be relational data, I don't know. Maybe Reuben has an idea whether Ops Trace has a different backend to store this kind of data, other than Prometheus.
F
So Ops Trace uses ClickHouse as their backend, and ClickHouse is a very generic column-based database, but on top of ClickHouse they run the same Prometheus query language. So...
A
D
E
Excellent. Any further questions before we go on to Vladimir?
B
B
I'm going to show you the dashboard that I actually worked on. It's a metric, it's part of the pipeline, and then at the end there's a dashboard that shows you the result, which I'm going to show... well, basically... oops, sorry.
B
This is the working dashboard that I created. I did it manually; I haven't succeeded yet with the jsonnet version, but the manual version works very well, I think. So you can select the environment, and then it will, yeah, it will...
B
It will get all the deployment IDs from the last one day per environment, and then you can just select one, and you see those are all the merge requests per deployment. These bars are merge request lead time, so you can see the lead time for each merge request on each bar. By lead time I mean the amount of time it took from merge to production, so to the end of the deployment pipeline.
C
And then you can select whatever, like staging, and select other deployments, and...
B
Yeah, that's how it looks. And my idea again, my approach to observability, is to collect as much as possible and take whatever you need. Observability should be on-demand observability, right? So you need to have data, and you need to have enough data in order to crystallize the things that you want to know from that data.
B
My initial idea was to have a deployment metrics dashboard, and by deployment metrics I mean the metrics per each deployment. We need to know what the merge time is. We need to know what the status of the deployment is. We need to know how long it took. We need to know, I don't know, what the logs from the pipelines are, etc., etc.
B
All possible information on that dashboard, where we can have as much data as we can get, and the visualizations will answer the questions we want to ask, right? What was the issue with this deployment? Why did it take so long to deploy? Etc., etc.
B
All the questions that you can possibly ask, you should be able to answer by looking at the data. And here my question about that particular dashboard, that particular bar chart, is: what are the thresholds for the lead time? What are our SLOs, or something like that, for it? What are we expecting from a merge request; how long do we tolerate the lead time for a merge request?
B
That means I wanted to discuss the thresholds, because here, for now: if it's less than three days, then it's kind of a warning; if it's more than three days, it's critical; if it's less than two days, it's green. Somewhere we had a green pipeline.
A
B
The test data is pulled from actual deployments, yes. I ran my rake task from my machine with the real version. So thresholds are my first question. The second question is: I have a feeling that these deployment IDs are useless. I would replace them with the version number.
E
B
E
Okay, so it's just a matter of figuring out how to supplement what your data is looking for, to use the tag of our packages, right?
B
Okay. Then, instead of having the deployment ID, I will change it to this version number. And any ideas about thresholds?
G
I think we're about to say the same thing, maybe. I was under the impression that our lead time was more in terms of hours, not three-plus days. Is this from the deployments over the last few days, because of the PCL, or is this just a regular deployment prior to that? Do you know, for these deployment IDs?
B
So, just to clarify the dates: the data here might be wrong, because this rake task is supposed to run as part of the deployment, right after the track-deployment job. How it calculates this time, this amount of seconds: it's like a gauge with seconds. It takes the merged-at date...
B
Oh, sorry. It takes the time now and subtracts the merged-at timestamp from the time now. And as I ran these things manually, the time now might be different from the actual time now during the deployment.
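In code form, the calculation described here could look roughly like the following; the merge request details and the gauge name are stand-ins, not the actual release-tools implementation:

```ruby
require 'prometheus/client'
require 'time'

registry = Prometheus::Client.registry

# Hypothetical gauge recording per-MR lead time in seconds.
lead_time = Prometheus::Client::Gauge.new(
  :delivery_merge_request_lead_time_seconds,
  docstring: 'Seconds from merge until reaching production',
  labels: [:environment, :version, :merge_request]
)
registry.register(lead_time)

merged_at = Time.parse('2023-03-06T12:00:00Z') # stand-in for merge_request.merged_at

# Lead time is "now" minus merged_at. Note the caveat from the discussion:
# when the rake task runs manually instead of inside the deployment pipeline,
# Time.now is later than the real deploy time, inflating the value.
seconds = Time.now.utc - merged_at

lead_time.set(seconds, labels: {
  environment: 'gprd',            # hypothetical
  version: '15.10.202303080800',  # hypothetical package version
  merge_request: '12345'          # hypothetical MR iid
})
```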
B
G
F
Also, I think that version number you used is from Monday, so maybe it was deploying MRs from, like, Friday.
G
Yeah, yes.
G
Instead of using the time now, is it possible to use the other metric? Like, I don't know if the completed metric that we were looking at before has a timestamp associated with it, because that would make sense, maybe.
B
I think you're right... no, actually, it's not gonna work. You know why? Because the deployment itself and the pipeline do not have a... since this rake task will be executed within the pipeline, while it's executing, the pipeline doesn't have a finished-at yet.
G
E
E
G
F
So there is a record, I think, of when an MR reaches production. I think it's in the database, but I think they don't show it in the API. So that would have been very useful.
F
Because I think I've seen jobs fail because of a large number, a large deployment, so the job takes longer than the timeout.
F
F
F
I remember using that attribute value, the time when it reached production. I think it was in ChatOps. I'll try finding it and see if it can be useful for you. Thank you.
E
Okay, Vladimir, you were asking about thresholds earlier. We've got a goal as a team where MTTP should be below 12 hours, so I think that should be your starting point for what you're using as your thresholds.
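Restated as a quick sketch, thresholds anchored on that 12-hour MTTP goal might look like this; the warning and critical cutoffs are placeholders still to be agreed on:

```ruby
# Hypothetical severity classification for merge request lead time,
# using the team's 12-hour MTTP goal as the green boundary.
HOUR = 3600

def lead_time_severity(seconds)
  case seconds
  when 0...(12 * HOUR)           then :green    # within the MTTP goal
  when (12 * HOUR)...(24 * HOUR) then :warning  # placeholder cutoff
  else                                :critical
  end
end

puts lead_time_severity(6 * HOUR)  # => green
puts lead_time_severity(30 * HOUR) # => critical
```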
E
E
F
E
Thank you. I do question what this chart looks like, because we're seeing, what, 15-ish merge requests here, but it's common for us to see 50-plus merge requests in a given package. So I think this...
B
Say it again? That's what we have here, right? So that's what we... basically, all these merge requests are taken from, they're taken from my very poor...
B
C
B
These things are added by APIs? No, at least not these things. Let me just... I'm very... yeah, these things, basically, yes. You see, it says it's created by API. So we have a track-deployment job that runs at the end of the deployment, and what this job does is populate the job data, since those jobs are running outside of gitlab.com; they run on Ops or on dev.
B
We don't have this data there, but we have a job that actually creates the objects for that particular dashboard, right. And these merge requests that we are showing: basically, I take the object for this deployment and I list all the merge requests associated with that object.
B
F
F
E
Okay, well, I'll finish off with, instead of a discussion, just a question, just an overall question. So maybe keep this in the back of your minds as you work throughout the next couple of weeks, and maybe next time we have this meeting I'll add it to the agenda. I would like to know what you all think is holding us up from proceeding with trying to implement a blue-green deployment style of system.
E
Today there is an issue I spun up, I think it was either last year or the year before, where I explored this, and we ultimately turned it down just due to lack of the ability to work on it. But I'm kind of curious as to what everyone else here thinks. I'm especially keen on hearing from Vladimir, given that you're new and just now ramping up with release tools, or release management rather, so I'm kind of curious as to what your thoughts may be.
E
So yeah, just think about this, and next time let's have a larger chat about it if we can. Cool, excellent. Well, thank you all; everyone have a lovely day, enjoy the rest of your day.