From YouTube: 2022-10-19 Delivery:System Sync and Demo
A
And we are live. So this is the 19th of October, 2022, Delivery sync and demo. We have a couple of actual discussion points in our agenda, centered around our OKRs. OKR season is approaching fast: in 11 days — actually we have 12 days — Q4 starts with our OKRs, so we have this issue open where we collect a lot of ideas.
A
The idea is to have an OKR that is going to be group-wide — we are together with team Orchestration and team Systems — but we are also going to have some Systems-specific OKR or OKRs. They could be an iteration on top of what we did before, or they could be something new that we can work on, since we did some work that enabled us to take a step further.
A
In particular, I wanted to touch base on one OKR that came out of the discussion together today with Reuben. You might know it as well, Karbach, because I touched on this with you in our 1:1 earlier today; and just to bring you up to speed on it, Ahmad — I think we already spoke about it. In any case, work is ongoing on the POC on metrics; Reuben especially is involved in the part where we work on manual job retries and how to measure manual retries.
A
The POC itself is not only about the manual job retries; it also tries to define the right foundations for how we can handle metrics and traces better within Delivery: how to collect metrics, but also where to send those metrics. Is delivery-metrics, our in-house-built solution, good enough for what we need to do right now? Reuben already has some good findings there.
A
In addition to that, pipeline stage duration will be kind of the next iteration there, and for that we would like to look a bit at what the GitLab Observability stack is offering. We already had some meetings with them. I see this as very powerful, because once we move in the direction of independent deployments, we will have to provide kind of already-instrumented pipelines for the individual components you want to deliver, and if all the work has already been done on our side, we already know, okay:
A
these are the meaningful metrics we want to collect from a pipeline; this is meaningful for understanding if this pipeline takes too long in one direction or the other. So already instrumenting these pipelines with this kind of metrics and traces is definitely a need that we'll have later on for independent components.
A
In addition to that, in our discussion with the Observability group it came up that they're thinking a lot about traces and error tracking, but they're still very focused on error tracking across clusters or services, and not so much around pipelines. And I guess, you know, if we have this problem, a lot of other companies are going to have the same problem that we are having. So I guess:
A
this is something where, in one direction, we are kind of probing their capabilities for traces, and in the other direction we are probably providing them good food for thought on which features they could implement within the product — and this is extremely powerful. So I asked Reuben today to summarize this OKR a bit; it's already listed — you already added it today — in the OKR issue, and I just wanted to know: what are your thoughts around this?
C
So I do think that this OKR is beneficial. I remember Reuben had done some work on providing traces — I think it was using Jaeger at the time. He was able to import some of the data for our pipelines, and not just the release coordinator pipeline but all the pipelines that are part of it as well, which was really cool to look at.
C
How do I say this? So, like, you know, Reuben has proven we could get Jaeger traces for individual jobs inside of a pipeline. I think what would be kind of cool to add on to this — and Reuben, you know about this, because I was asking you about it before I went on vacation — is that we've got the ability to mark or put specific statements inside of the shell output such that it groups blocks of output together, and you can have that block automatically hidden or automatically expanded depending on your use case. I was just playing around with this, but it'd be really cool
C
if we could make that feature also play into the instrumentation. Because now you could take your pipeline — say you have two jobs in your pipeline, so you have two traces, one for each of those jobs — and inside of each job you have the individual code blocks that may each take a period of time. So now you're instrumenting the actual code that's running inside of your job, and you have trace information
C
that goes all the way down to, like, the line of code that is being executed in GitLab CI, which I think would be tremendously valuable long term. Because — assume this is a complete feature — we could then go into CNG and be like, hey, this one particular step always takes 45 minutes.
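For reference, the collapsible blocks being described here are GitLab CI's section_start/section_end log markers. Below is a minimal sketch of emitting them from a job step, assuming the documented marker format; the section name and the work being timed are purely illustrative.

```python
# Sketch: emit GitLab CI collapsible-section markers from a job step so that a
# block of work is grouped (and timed) in the job log. The marker format
# (section_start/section_end plus a Unix timestamp) follows GitLab's documented
# custom collapsible sections; the "db_migrations" section is just an example.
import contextlib
import sys
import time


@contextlib.contextmanager
def ci_section(name: str, header: str, collapsed: bool = True):
    """Wrap a block of work in section_start/section_end markers."""
    opts = "[collapsed=true]" if collapsed else ""
    sys.stdout.write(f"\x1b[0Ksection_start:{int(time.time())}:{name}{opts}\r\x1b[0K{header}\n")
    sys.stdout.flush()
    try:
        yield
    finally:
        sys.stdout.write(f"\x1b[0Ksection_end:{int(time.time())}:{name}\r\x1b[0K\n")
        sys.stdout.flush()


if __name__ == "__main__":
    with ci_section("db_migrations", "Running database migrations"):
        time.sleep(2)  # placeholder for the real work done in this step
```

Because the runner stores the start and end timestamps alongside the section name, anything that can read the raw log can also recover how long each block took.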
B
Yeah, that is possible, because of the section start and section end, right. The cool thing is that next to the section start they put the Unix timestamp, and the same next to the section end, so you already have the timestamps; you can just parse the logs and get the durations. So yeah, it is possible. But just a note here: products like CircleCI emit events for each command in the script that the job is running.
B
So that's like the ultimate granularity, because you can measure each line. Here we'd be measuring groups, but it's still better than what we have now.
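Since the markers carry Unix timestamps, per-section durations can be recovered by parsing the raw job log, as described above. A minimal sketch of that parsing, under the same assumptions about the marker format; the sample log text is made up.

```python
# Sketch: derive per-section durations by parsing a raw job log for the
# section_start/section_end markers. The timestamp and section name come
# straight from the marker lines; the sample log below is illustrative.
import re

MARKER = re.compile(r"section_(start|end):(\d+):([\w.]+)")


def section_durations(log_text: str) -> dict[str, int]:
    """Return seconds spent in each named section of a job log."""
    starts: dict[str, int] = {}
    durations: dict[str, int] = {}
    for kind, ts, name in MARKER.findall(log_text):
        if kind == "start":
            starts[name] = int(ts)
        elif name in starts:
            durations[name] = int(ts) - starts.pop(name)
    return durations


if __name__ == "__main__":
    sample = (
        "\x1b[0Ksection_start:1666180000:db_migrations\r\x1b[0KRunning migrations\n"
        "...job output...\n"
        "\x1b[0Ksection_end:1666180123:db_migrations\r\x1b[0K\n"
    )
    print(section_durations(sample))  # {'db_migrations': 123}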
A
D
I extremely, highly, very much support anything that would actually make a release manager's life easier, and this is going to make our life easier somehow, so I like it. I had this in my mind some months ago — that we actually need to have job retries visualized somehow. (I think I have noise around me, sorry for that.) And yeah, I couldn't think of anything back then, so Reuben, really kudos to you. And yeah, let's check it out.
A
So if you have any extra thoughts about this metrics OKR, there is the comment left by Reuben on the OKR issue. Please go there, comment, add your upvote or something like that. Any thoughts, any feedback is going to be extremely useful, also because we will probably need to refine this further to understand, you know, the exact scope of this OKR — what to work on and so on. So, anything that comes to mind.
D
A
Yeah, yeah. Thank you, thank you for that. The OKR proposed about improving visibility into our deployment pipelines — I guess this one, Reuben, is one of your suggestions as well, if I recall correctly from the OKR issue, and it's about providing to stage groups the right visibility into what our deployment pipelines are doing.
D
A
B
Oh, you were talking about visibility of the pipelines themselves — because most of the developers don't have access to release-tools and ops, yes. So I'm not sure if we should work on that first, solving the visibility problem, or continue with the POC first and then worry about visibility.
C
A
Yeah, that probably makes sense. I was also thinking about that. Actually, I wanted to touch base on that: in the case that we are going to have independent pipelines for independent components to be deployed, there is probably the question of what kind of access level and what kind of visibility there will be — but correct me if I'm wrong — because we probably want to provide visibility into their own pipelines, but we probably don't want to provide visibility into everything, or maybe we do want to, for some people.
A
You know, to, say, configure — I don't know — group Package pipelines so that they can see them; if they both have visibility into each other, I don't see any problem in having full transparency there. I think this is something we probably need to refine in that direction; there is a lot to understand. So we are going to build this pipeline for independent components before thinking about which kind of visibility we can provide into them.
A
Let me know if I'm wrong here, but there were also some issues about shared secrets that could be seen in pipeline logs. I'm not sure where we are with that, so I think it is also something we should bring into consideration when we speak about this.
C
I don't think it's fully solved, because we do have one or two secrets — they're of lower priority, but they're still stored within our git repo, which is kind of unfortunate. Eventually they'll get pushed out into an appropriate mechanism, but because they're kind of POC-style things, we haven't really put a lot of effort towards fixing that, which I think is fine; it's just that long term we need to avoid this, and I think that would be a good exercise as a way to validate that we have solved that issue.
C
The other thing, related to your prior statement, is that we don't precisely know exactly how these pipelines are going to be executed. This is a question that Graham posed in response to some of my statements about the current POC as it stands. So there's more thought that needs to go into how we want to accomplish this, which might change some of the access restrictions that are currently preventing developers from being able to see the results of jobs and pipelines.
C
So I think there still needs to be more conversation held. I still need to catch up on Graeme's comment about this, and then I think we can continue the conversation. I don't know what we could chat about today that might bring us closer towards a resolution for that kind of thing. Okay.
B
Also, some time back — some months back, I think —
A
C
One idea I've had for this — and again, this is an implementation detail and there are other considerations to take into account — but one of the thoughts I've had was that we could create some sort of set of jobs inside of the canonical repo that developers work inside of, where each job effectively mirrors a job wherever we're doing that deploy from, and instead of getting the job output, the purpose of the job in the canonical repo is just to ping the associated job
C
that's performing the deploy, and report some sort of status, or maybe where it is in the process. That does not give them full visibility, but at least they know where the job is located, so they could quickly link us to it, and it provides them with at least where they are in the process. So, like: it's reached out to staging, the deploy is occurring, we're waiting for pods to come up, for example, and —
B
C
B
C
B
A way to give developers visibility into a pipeline without giving them access to the job logs would be metrics and traces. For example, if they have access to the metrics and traces but not the pipeline itself — say, traces: if they have access to the trace, they can see exactly which job it terminated on. So that could also give the same visibility.
B
Of course, there was another way that I was trying to explore many months back, when I tried to get the job logs open to everyone. That would be to introduce a new permission into the GitLab product which gives people permission to see the pipeline but not the job logs. I think I had found that the permission already exists; it's just that we don't have anything in the UI allowing users to, you know, give someone that permission or take it away.
A
So you're suggesting — if I understand correctly — that they would be able to see the pipeline, not to actually see the logs, but they could still see, let's say, the status of the pipeline, whether it failed, and maybe at the stage and job level, right? They'll —
B
A
Yeah, I understand. I think, you know, we're also having a lot of those problems that a lot of other companies using our product probably have, right. So we could also make a proposal to Product to say: how do you plan to solve this problem? Because, you know, you are the expert — or there are many experts, at least, on this — so we probably need to find someone who is willing to take this into consideration as well.
A
I mean, I guess at this point we should probably draft our roadmap with independent deployments in mind, see where we get and what we can get with the product as it stands, and then we can see if we can influence other stage groups' roadmaps with our inputs and so on, right? Because we're going to try to solve a pretty complex problem, especially company-wide, if we're going to deliver independent components.
A
Thank you for the inputs. Point number three — this was actually your input — was about allowing risky changes in production, right? This was proposed, I guess, as a group OKR and not a Systems OKR, but I see, you know, a big part of the effort also being on the Systems side: how to provide the right environment to do so. It would be amazing to have the ability to do that.
A
Do you think this is — it's definitely worth multiple OKRs, from the way I've seen it, but maybe, you know, the scope of this is too big. Do you see that we could have a minimal iteration that can get us closer to a target state and that's going to bring some value to, I don't know, a Ruby rollout or the like?
C
So I guess a few things. One: this OKR is very ill-defined at the moment — or this idea is very ill-defined at the moment. I think the concept is there, but we need to figure out some of the details that allow us to iteratively put some sort of solution in place.
C
I think some of the work that Ahmad is doing with cluster rebuilds could help with this, because my initial thought is that we're not going to be introducing risky MRs at all times, but the ability to bring online something temporary, like a quick cluster, and send traffic to it with a modified weight, will be beneficial. So I think the work that Ahmad is doing might be able to feed into the work we do here, because we could spin up something temporarily and tear it down really quickly with minimal interruption.
C
A
B
Just a note: the ability to rebuild a cluster might also help with what Graham was talking about — how to spin up a new environment quickly.
B
C
Yes, so that is related to extending our patch policy to three versions back, and I —
B
And if we manage to do that, that seems like a first step towards this feature that we are talking about right now. Because you won't be able to send production data to it — or maybe you would, I'm not sure.
C
We would. So the idea would be that we spin up a cluster that contains this specific code change but is still somehow in alignment with the code base — maybe just the Ruby version is different — and it sees a very small portion of traffic. We monitor for errors, compare the performance of that cluster to others, and, you know, derive results out of that testing. And we would need the ability to quickly turn the traffic to that cluster on and off.
C
For example, stuff like that. There's a lot of work that needs to go into this, yeah.
C
And, like, we're stuck on this really ancient version of HAProxy that we don't have any sort of dynamic configuration for, and in our Terraform at the moment we reserve static IP addresses and put those IP addresses manually inside of our configurations. All of that needs to be more dynamic to enable something like this to be operational.
D
C
B
C
Effectively, automating cluster rebuilds gets us even closer, and I think that will be a good target for one of the OKRs that Ahmad is currently the DRI on, yeah.
A
So, about the next iteration of cluster rebuilds: we'll have this runbook — hopefully by the end of the quarter — and we have our change request where we can replace a cluster in our staging environment. So our next iteration on top of that will actually be to understand what we will need to fully automate that part, right. So, Karbach —
A
this was one of your comments in the OKR issue. One part that I really liked is about understanding, at the time that we replace a cluster — once we start doing it in an automated way — the current traffic saturation: whether we are at a level where we can tear down a cluster and bring up a new one. I mean, if it's —
A
if it's, you know, a disaster event where a cluster just simply goes down and so on, that's one thing; but if it is something that we plan — because we plan to take a cluster out of rotation for upgrading, something like that — having automation around understanding traffic saturation and load, and the ability to actually replace it or not replace it, I think would be a great step ahead in that kind of automation.
A
The same way that we are probably doing now with the rollback, where we just ask, okay, can we roll back now — you know, we check a status there — we'd ask: can we replace a cluster now, and so on? That would probably be a great step ahead. I guess this is the most natural one from the work that we've done so far. It doesn't mean that we need to pick only one of these OKRs; we can even pick more. We just need to understand the scope we want to bring in.
A
Let's keep both in mind. At the beginning of next quarter we'll also have release management duties, so Delivery Systems will also be at half power here; we have a new starter at the end of November as well. So we have a few things to figure out to understand what we can take onto our plates or not.
A
Does anyone want to add anything about this cluster rebuild? Karbach, do you think that we can find — well, you are the one who has been working on it — a minimal iteration that can bring us a step closer to kind of fully automating these rebuilds, without considering, you know, the HAProxy configuration and all the manual parts that we know are there?
C
So, with the assumption that the upcoming test that Ahmad is planning will be completely successful, in that we've got a fully usable runbook, I think our next step would be to take that runbook and figure out what is manual and try to automate it. And the first thing that screams out at me is two things. One: the network choices that we have to make are very static. There's nothing dynamic about that, and there is some reasoning for that.
C
But I think if we can remove how static that is, that should enable us to quickly create clusters — we should be able to create clusters while lowering the amount of planning that needs to go in ahead of time. But we need to figure out how the network traffic is going to be routed to the various clusters, because we do some peering so that our monitoring and observability stack works correctly. So that's item one that we need to figure out, and I think we can figure that out.
C
The second thing I see is trying to figure out how to automate the deployment of stuff into our clusters. We could create clusters via Terraform like we do today — I don't think we need to change that, yet at least — but being able to take a cluster from having nothing on it to having the entire observability stack plus GitLab, and making sure GitLab is up to date and at the same level as all the other clusters, will be the next thing to work on. And then, I think,
C
anything that's associated with that inside of our CI configurations gets updated in some way, shape or form, such that when Jenny is the release manager and is saying, hey, let's deploy now, she doesn't get blocked on the fact that she has a red pipeline because this is a new cluster and it's failing to deploy — because there's an authentication issue, or it doesn't know how to reach it, etc., stuff like that. So there are two things that I think are good targeted items
C
we could look into; the ordering could probably change depending on, you know, what's easier to work on and such, but —
D
Thanks, Karbach, for summarizing it. I'd just, like, query that we will still need some Terraform code to be able to build the cluster, so that part I'm not sure how we can automate. We could basically say to Terraform: hey, build a cluster with this variable from CI, but I'm not sure if we are going down this path at all. The other thing — yeah, the workloads are worth looking at, and also the networking, as Karbach said.
C
Something I haven't considered at all is what happens with our metrics systems. We're really tied to the region label to tell us which cluster, or where, a workload is running. I would hope that we don't have anything hard-coded that says, hey, everything is going to be in us-east1.
C
We distinguish our regional and zonal clusters by that label: if it's our regional cluster, it's us-east1; if it's our zonal clusters, it's us-east1-d, us-east1-c, us-east1-b, etc. I hope that we don't have anything important inside of our metrics that limits us with that label. So if we deploy a cluster in us-central1-a, you know, hopefully that doesn't break things, but I know immediately that a lot of our dashboards are probably locked to that region label.
C
So it's going to make looking at metrics very difficult, or if we want to compare metrics between clusters, we may not have the necessary label selector as a variable that we could use in our dashboards. We could certainly do this in Thanos, probably, but there are going to be issues with looking at metrics and such. But that's more geared towards expanding our clusters out of where we are today, so hopefully that's kind of a non-issue until later.
C
I just worry what's going to happen if we add additional clusters versus what we have today.
A
If everyone is keen — I have some extra minutes and I don't want to cut this one short — I would also give space to the demo from Reuben about using webhooks to measure job retries. So if someone is keen to stay a few more minutes, I would say let's move to that.
A
B
GitLab the product has a feature called webhook events: you put in a webhook URL, and that URL will be called every time a certain event happens. So, for example, suppose we want to count the number of retries — we can leverage job events to increment a counter every time a job is retried.
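A minimal sketch of what such a webhook receiver could look like, assuming the job-event payload fields (object_kind, build_id, build_name, build_status, pipeline_id) and the X-Gitlab-Token header; the port, token, and the idea of counting "created" events per job as a rough retry signal are illustrative, not the exact POC shown in the demo.

```python
# Sketch: a minimal webhook receiver that counts GitLab job events. Counting how
# often the same job name reaches the "created" state within one pipeline is one
# rough way to spot retries. Port and secret token are placeholder values.
import json
from collections import Counter
from http.server import BaseHTTPRequestHandler, HTTPServer

SECRET_TOKEN = "change-me"           # compared against the X-Gitlab-Token header
created_counts: Counter = Counter()  # (pipeline_id, build_name) -> times created


class JobEventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.headers.get("X-Gitlab-Token") != SECRET_TOKEN:
            self.send_response(403)
            self.end_headers()
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        event = json.loads(body or "{}")
        if event.get("object_kind") == "build" and event.get("build_status") == "created":
            key = (event.get("pipeline_id"), event.get("build_name"))
            created_counts[key] += 1
            # a count of 1 is the first run; anything above that looks like a retry
            print(f"{key} created {created_counts[key]} time(s)")
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), JobEventHandler).serve_forever()
```

Pointing the webhook URL at a receiver like this and selecting job events would print a growing count each time the same job in the same pipeline is created again.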
B
You can put a token here — this should be random — and then choose which events you want to be notified on; we want to be notified on job events. I've actually already added the webhook, so I won't add it again, but this is how you add it, and since it's localhost and not HTTPS, you can turn off SSL verification.
B
And you can see I received a webhook event over here: that's job ID 602, auto-deploy — sorry, my niece wanted to say hi — and status success. Okay, so let's start a new pipeline.
B
A
B
Yeah, so auto-deploy start, created state — the count returned one. Here is a bit of a problem, because you can get multiple calls for the same state: "created" — we've got two calls — then "running". So we can store this in memory so that we don't count the same event twice, and hopefully any duplicates like this should be very close together, so you don't have to store it in memory for very long either. And now it's successful. Now, if I retry again, this should increment to two.
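A minimal sketch of the in-memory de-duplication described here, keyed on (build_id, build_status) with a short TTL; the 300-second window is an arbitrary choice, and this just extends the receiver sketch above.

```python
# Sketch: in-memory de-duplication for the receiver above. Duplicate deliveries
# for the same (build_id, build_status) tend to arrive close together, so a
# short TTL is enough; the 300-second window is illustrative.
import time

DEDUPE_TTL_SECONDS = 300
_seen: dict[tuple, float] = {}  # (build_id, build_status) -> first-seen time


def is_duplicate(build_id: int, build_status: str) -> bool:
    """Return True if this exact event was already seen within the TTL."""
    now = time.time()
    # drop entries older than the TTL so the map stays small
    for key, seen_at in list(_seen.items()):
        if now - seen_at > DEDUPE_TTL_SECONDS:
            del _seen[key]
    key = (build_id, build_status)
    if key in _seen:
        return True
    _seen[key] = now
    return False
```

The handler would call is_duplicate(event["build_id"], event["build_status"]) before incrementing its counter, so repeated deliveries of the same state within the window are ignored.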
B
By the way, you can also see the payload right in the GitLab UI, so —