From YouTube: 2022-10-05 - Delivery: System Sync and Demo
A: This is the 5th of October, 2022, Delivery: System sync and demo. We have an item in the discussion, and it is around our Q4 OKRs. Yesterday we opened an issue with Amy, where we decided to have one group of KRs that is going to be around independent deployments. Right now we still have a lot of preparation work that we've done this quarter around that, both on the orchestration system, and at the same time next quarter is probably going to be still more preparation work.

That needs to be done, but it's good to start to have a shared KR here, since we are going towards this as a single group. In addition to that, we probably still have some space for some team-specific KRs.

So please, let's start putting together some ideas over there, or maybe we can also use today to pull up some ideas and have a brief discussion before our demo section. I already added the one-click cluster reconstruction. This actually came out of a Scarbeck 1:1 document; you put it there a few weeks ago. So the idea would be to have one click, or at least close to one click, cluster reconstruction.

Akma did a lot of work there, and we realized that we have some parts that are not easy to automate. Let's say that if we need to replace a cluster in the same region where we are right now, I think that is actually something that is feasible. Especially, the IP addresses for that need to go into HAProxy; this one is something where we still need to do more manually. The blockers we found are around connectivity to Vault and, I think, the Postgres load balancer — correct me if I'm wrong here. So this matters because the cluster that we are trying to build right now from scratch is actually not even in the same region as our zonal clusters. So that could be an acceleration of one KR here: if we already work on it now, it's definitely going to bring great value, not only for us but also for the disaster recovery working group, which is also looking at business recovery scenarios.
B: A couple of things come to my mind. One would be related to our current OKR of trying to do independent deploys. Michaela, you and I have already shared — it's on our 1:1 list — that this is probably a very closely shared OKR with the orchestration team. I'm not precisely sure what the language would be, but it's something that would enable us to drive closer towards enabling a target service to be independently deployed in some way, shape or form. I think we'll have a better idea as to what we need to include in the language of that OKR.

Another good one is how to make risky changes easier, especially when they are very, very large — think a Rails version upgrade or a Ruby version upgrade. I created a draft issue that discusses some of this a little bit. It doesn't encompass everything yet, obviously, but I think it would be really neat if we could figure out a way to work with Distribution to create a mechanism that allows us to build both Ruby versions, and a deployment mechanism that would enable us to deploy a targeted Ruby version — let's say we're targeting just a specific Ruby upgrade, being able to deploy that targeted Ruby version in a specific environment.

I don't know what this would look like. Today we have canary and we have a main stage. I'm thinking, if we get to the point where we are able to quickly build and tear down clusters, we might need to do some sort of blue-green style deployment intertwined with a canary stage, in order to provide the ability to deploy one-off images that contain something different from the rest.

That would enable us to validate certain capabilities within our code changes. This still requires a lot of thought — it's a very early, rough idea — but I think this is something that would be beneficial for the development teams.
A: Okay, I mean, this definitely is going to be very useful and bring a lot of value. So just to clarify for my understanding, the idea would be: let's say that we have a way to get a new image, based on Ruby 3, that has been tested and has been built by CNG, and we need to have a new deployment strategy where we could deploy this image on a portion of an environment.
B: And the ultimate goal here is that we enable dev teams to keep doing the feature work they're already doing. We've got a runbook for deploying something in isolation; that's not my goal here. My goal is to enable dev teams to continue developing what they currently do, and they just have this magical "hey, let's test this Ruby 3 upgrade", or something to that effect, that enables us to target a very small portion of our infrastructure to run that code.

Hypothetically — specifically for a Ruby upgrade — the same code exists; it's just a difference of which version is running that code. I know there are going to be challenges with that from a development standpoint, but I think if it's at a point where it's buildable and runnable, we should have the ability to deploy it, and that would help gather the necessary data to help teams understand and better plan for larger upgrades. Let me try to find the issue that I created for this.
A: This image would have to go through at least some kind of testing on our side, like the testing we run right now for the image in our pipeline before we deploy; then we have the QA and smoke tests and everything. If everything goes well, then we would have the chance.

We would like the capability of deploying a portion of an environment with that. It means extra routing capabilities that we are probably missing right now, and the ability to shift the portion of traffic — meaning the number of pods running that image — higher or lower in case we see errors.

Yeah, I think it's actually a good idea. It would allow us to introduce a lot of these very risky changes with much more confidence, and the ability to switch back and forth immediately, in case we see an error threshold increasing, would definitely make us sleep a bit tighter at night when we go through changes like these, maybe.
B: Here's another analogy that I think works well: we've got review apps for the GitLab code base. I think of this as a variety of review app that deploys a small portion of code to a very small portion of our infrastructure, but it's still receiving legitimate user traffic and can quickly be switched off if it's not performing as desired.
A: So, having kind of review apps, but at the infrastructure level — at the environment level, let's say. Yeah, it would be a really nice next step. I think it's something where we could add some details, simply to get to a state where we can understand: okay, is it feasible, and what could be the first iteration out of that, right? I didn't see the issue yet — well, I think I saw the issue, definitely, but...
C: Another benefit I can see is that the problem with canary that we have now is, if something does go wrong, we can certainly turn it off — we can drain canary — but then you cannot promote that, and you cannot deploy anything to production until that problem is fixed, so you're kind of stuck. While with this, you test something, something goes wrong, you turn it off, and deployment continues the whole time; nothing stops.
B: I agree. Because the code is not yet merged — if we could figure that out — we're not impacting the default branch, and that should shorten the time it takes for certain rollback procedures that we would currently have to suffer through.
C: Also, is it really necessary to have the cluster build completely automated in order to do something like this? Or can we have one cluster built and sort of disabled until it's required, and then you simply use it — but of course, if you do that, then I suppose you can only test one thing at a time.
B: I think that boils down to how we want to route traffic around, and currently the HAProxies are a method of doing that — and currently HAProxy is expecting a single cluster endpoint. So we could toy around with the idea of using a different namespace instead of a dynamic cluster for that kind of situation. It would be up to our tooling and capabilities. So no, I don't think having a dedicated cluster for that is a requirement; it's just one of the thoughts I had.
A: It is a very big chunk of work — it's maybe multiple KRs' worth — but I think it's a great idea that we should probably refine more, so that the moment something like that comes onto the horizon, we know which are the first steps, at least to get us closer to this target state.

Any other ideas for OKRs?

I mean, we don't have to talk about them now — we just opened the issue — so we can definitely continue the discussion there, but let's definitely think about that. Also, with your work around the metrics and so on, if you see anything fitting into any of these, that is definitely going to be valuable as well.
B: My internet connection is unstable, so if I'm breaking up, my apologies. All right, so I wanted to showcase the work that I've done with CNG, and this is mostly discovery and documentation. The reason it is mostly discovery is that I had a conversation with Michael — I can't remember how far this has been broadcasted — but the Distribution team are busy people. I feel like if we want to make changes to the way CNG operates, we're either going to wait a few quarters or do that work entirely ourselves and hope that they've got time to review the work that we want to accomplish.

Thus, what I've been doing lately is discovering how CNG currently operates and creating the necessary bits of documentation that I saw were missing. Then, for the purposes of what we're going to do today, I'm going to showcase how it works and what our expectation is out of CNG as the final end product for what we're trying to chase, and then hopefully we'll have a better view as to what we can currently expect from CNG as it is today.
B: So I am currently on a CNG pipeline — specifically, I'm using CNG-mirror. As far as I understand it, CNG and CNG-mirror are identical; CNG-mirror just does not yell at people if you create a red pipeline. I got yelled at by CNG — well, by Distribution — because I kicked off a pipeline on CNG, I set the wrong variable, a build failed, and they got a notification that master was red. Oops. I feel like that's silly, but let's use CNG for the rest of the thought experiment. So this pipeline looks effectively — go ahead.

The majority of these were kicked off in the last, say, ten minutes or so; here's the one I kicked off right before the meeting started. I guess this gets run quite a bit, so I'm trying to avoid interfering with normal CNG development practices. But anyway, CNG is divided into multiple phases.
B: Not having to re-pull the image, for example, reduces the build time for all of these, and the same kind of thing repeats itself all the way down the pipeline for everything inside of phase two and, I think, phase three. These are all mostly dependencies that we include in all of our containers. Phase four is where we start seeing most of our final images. So, for example — well, I guess phase four and phase five are very similar — but thank you.
B: An example back here in phase six is most of our final images — you know, Workhorse, the webservice. These are all the common images that run inside of our pods, the ones that run our Rails code base, Sidekiq, etc. Those include dependencies on, say, the gitlab-exporter, because this is what our processes are measured with, and on gitlab-logger, because this is what takes all of our logs and pumps them out in a way that Kubernetes understands. So all of these are our dependencies.

They'll get built based on all of the upstream build jobs. So the gitlab-exporter, I believe, is Go, so that probably depends on the GitLab Go image, and gitlab-logger, I think, is also Go, so it depends on the GitLab Go image as well. gitlab-rails is where we actually build the GitLab code base, and it's obviously Ruby.
B: So it's going to depend on gitlab-ruby, which is somewhere in here — yeah, right here — and so on, over and over. And then, because the Rails code base is shared between the Geo log cursor, Sidekiq, toolbox, webservice and Workhorse, those obviously depend on all of this being built. So, for the purposes of today, we're trying to chase down implementing an independent deploy for KAS.

We don't need this entire pipeline. What we really need is the gitlab-kas image, but currently CNG is not configured to let us run a pipeline that says "only build me gitlab-kas and anything it depends on" — there's just no capability inside of the CNG pipeline to do that. So some of my discovery was figuring out what we minimally need in order to build, say, gitlab-kas, which is quite easy, because we can go and look at the Dockerfile.
B: So we can see that gitlab-kas depends on Go, and there's currently a dependency on which version of Bazel we want to use for building gitlab-kas, which is specific to that component, and then we need to tell CNG which version of gitlab-kas we want. All of this is documented, to go through this process.

We'll make it say we want a specific Go version — we know that the version of Go is 1.18.6, based on what is in a specific file inside of the cluster integration repository, so I will continue to use that in this experiment. And then, by default, CNG wants to build the latest version, so it's going to pick the latest commit on the default branch.
B: Let's copy this one, because it has a pipeline associated with it, so we can assume that it was a green pipeline. When we get around to initiating the build we will probably check for that, making sure all the testing passed. So we'll just go in here — I just happened to memorize that the variable is GITLAB_KAS_VERSION, but it's documented — and I want to talk about the pipeline.
B: So, just like I showed you before, this is going to create a massive pipeline that contains a lot of information. Everything after phase four we technically don't care about for this particular component, and there are some items, or rather some phases, that we don't care about at all. I know gitlab-kas for some reason uses its own base image, so from a technical aspect we really don't care about either of these.

KAS is a Golang application, so we care about the Go image, and I'm pretty sure it does not do any templating by itself within the image. So I think this might be our only dependency for gitlab-kas, but we'll circle back around to what we can do with that as far as the output goes. So this is how to trigger a pipeline, and effectively that's all.
B: We'll see that gitlab-kas has completed. I guess, because we depend on the Go image, let's go there and see if there's anything super exciting to talk about. I don't think there is; in fact, I'm going to guess that we probably used a lot of shared or cached data, simply because I've been playing around with this a lot, so I would imagine this version would have been built already. But it looks like...

Okay, so there's not really much to look at, I feel like, so let's do this: we have our image — where is it, the registry tag? This one. This is going to be our final image. You'll notice that the SHA here doesn't really provide us any beneficial information, and in fact it certainly does not match the version of KAS that we asked to build in any way, shape or form — and that's expected. In order to figure out what version of KAS got built...
B: ...we need the result surfaced for us such that we know what to deploy, because all we've got is this random SHA. We probably want to figure out how to memorialize this in some way, shape or form, but at the moment we can look at the artifacts for this pipeline, and we are provided with a number of pieces of information that we can leverage. So inside of this file — which did not open in the browser before, but now it came open...

The only information we have in that file is the tag that was used in that job. So if I go back to the build job again: this is a simple text file that contains one piece of information, which is this right here. So we have this information available to us. So now let's take the results of this and figure out what things look like inside of those images.
B: Luckily, we've created small intermediate images, so this doesn't take terribly long. We're expecting to see Go version 1.18 once something got built — and we did. So we built our intermediate image with the appropriate item that we wanted, and then, lastly, we want to see which version of KAS got built. So this is from the build job.
A: Great progress, definitely. I mean, the demo was a lot to follow, but I think we actually got to a stable point where we have a lot of findings on the KAS and CNG side. For these, did you get any support from someone from CNG directly, or was it just...?
B: Effectively, yes. But luckily it's not terribly difficult to work through, because their CI is very well defined. They've got some very well established patterns inside of CI that make it relatively easy to look through.
C: I had a question about the CI. Like you said, KAS has a dependency on the Go job — is that defined in the CI, so that if...
B: There's no "dynamically build a CI pipeline for these requirements" — they don't have any sort of configuration file that says KAS needs X, Y and Z in order to have a successfully built image. That part, I think, is missing, and because of that, they don't know what needs to be built ahead of time. On top of that, you've got other requirements; for example, the Go intermediate image depends on, I think, probably either the Debian image — such that it installs all the right stuff — or Go...
B: I don't recall if we're compiling Go or just downloading it, but if the upstream Debian image gets updated, or if there's a security release for the Debian image, we want to make sure that gets updated, and that way it gets fed downstream to all of our images as well. So I think it's just been easier to say, hey, let's just build these in phases — in the various stages — and the dependencies will naturally be available to us.

That's my suspicion. I think that's one of my proposed improvements for CNG: to limit how much we build depending on what we want to build. If we only want to build KAS — just give me the KAS image and build everything it depends on automatically — but something needs to tell them how to figure that information out.
B: I'm sure there are other improvements that we could think of, but I've got at least two in place, and I hope I tucked those in with the correct epics for the Distribution team. So I've got two improvement issues, and I've got three or four merge requests that are strictly documentation for CNG, to help us as well as anyone else who wants to contribute to CNG.
B: Those are obviously sitting in Distribution's workflow for review, so I don't know when they'll actually get pulled for review, but I think that should be enough to close these issues. Then we can move on to the next step, which I'm kind of eager to work on, which would be putting together some sort of POC that connects the dots of the work we were doing earlier in the quarter — the metrics work — and tying that into sending information to CNG to produce a build.
D: We're going to see, hopefully soon. Yes, the workloads are now deployed. So if I do something like a kubectl port-forward of the GitLab webservice to 8080, as suggested by Scarbeck, to show this part, and we go to localhost:8080...
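A rough sketch of the port-forward being described — the service name and namespace here are assumptions that depend on the actual Helm release, not values confirmed in the meeting:

    # Hypothetical sketch: forward the GitLab webservice Service to localhost:8080.
    # Service name and namespace are assumptions; check `kubectl get svc` for the real ones.
    kubectl --namespace gitlab port-forward service/gitlab-webservice-default 8080:8080
    # then browse to http://localhost:8080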
D: Yeah, unfortunately it rejects the request, but anyways. All of this is on the first cluster, so the first cluster is working and the workloads are there. The next step is basically to document this in our runbook and destroy everything. Actually, we can destroy it before reconstructing, rather than keeping this cluster in place, because it costs some money and it's not really usable — it's not used by anything. So yeah, any questions?
A: I have a few questions. So the images are not loaded because we're kind of missing the CDN in front for the content delivery?
D: I think no — I'm not sure, actually. I think it's broken because I get something in the logs; I can show you quickly, sure.
B: HAProxy does a lot of work — HAProxy serves our CDN content. So more than likely what's happening is that these web requests are going to localhost and then they don't know where to go.
A: Okay, makes sense. And in addition to that, when you logged in you got a 422, which I think is "Unprocessable Entity" or something like that, if I remember correctly. That one is a 4xx, so it's a client error code. You're logging in to that instance with your staging credentials, right? (Yep.) Okay — and do we know why that doesn't work?
D: I think Scarbeck mentioned to me today there would be some issues — CSP, I think; I'm not sure if this is it. Yeah, okay, so I think this could be it. I'm not sure, actually, why it gets rejected, but I assume it has something to do with the headers, basically.
B: Okay, so I have forked the GitLab agent — this is a fork of the gitlab-agent project — and I have made sure that I've made changes only to the CI configuration. Prior to this, we would only have had the test stage and the push-image stage; I added the build and the deploy stages. So I created a CNG build job that depends on the testing job. The tests take, I think, upwards of 45 minutes.
B: This is strictly shell inside of our GitLab CI, so I'm just doing a curl statement, using the job API token, to talk to — this is the project ID for CNG — and you can see that we are triggering a specific gitlab-kas version and we're also triggering the specific Go version, which I showed you all earlier. We're getting the information from that trigger — the pipeline ID — and then I'm sitting here with a giant while loop asking: where are we on this?
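A minimal sketch of what that trigger call might look like, assuming GitLab's pipeline-trigger API and variable names along the lines of GITLAB_KAS_VERSION and GOLANG_VERSION; the project ID variable and exact variable names are assumptions, not confirmed values from the demo:

    # Hypothetical sketch: trigger a CNG(-mirror) pipeline from CI with the versions we want.
    # CNG_PROJECT_ID, KAS_COMMIT_SHA, and the variable names are assumptions for illustration.
    PIPELINE_ID=$(curl --silent --request POST \
      --form "token=${CI_JOB_TOKEN}" \
      --form "ref=master" \
      --form "variables[GITLAB_KAS_VERSION]=${KAS_COMMIT_SHA}" \
      --form "variables[GOLANG_VERSION]=1.18.6" \
      "https://gitlab.com/api/v4/projects/${CNG_PROJECT_ID}/trigger/pipeline" | jq -r '.id')
    echo "Triggered CNG pipeline ${PIPELINE_ID}"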
B: Obviously, this needs to be improved — there's no pretending this works in a production-worthy manner yet; probably polling the pipeline status works for now. But for the most part, we wait for a specific job. In this case, like I pointed out earlier, we only care about the gitlab-kas build job inside of CNG; we don't care about the rest of them. As long as we have that, we have all the information we need to do an independent deploy, so we wait for that job status to complete — job success.
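A rough sketch of the kind of wait loop being described, using the standard pipeline-jobs API; the literal job name "gitlab-kas" and the API_TOKEN variable are assumptions here:

    # Hypothetical sketch: poll the triggered pipeline until the gitlab-kas build job finishes.
    # API_TOKEN, CNG_PROJECT_ID, and the literal job name "gitlab-kas" are assumptions.
    STATUS="pending"
    while [ "${STATUS}" != "success" ] && [ "${STATUS}" != "failed" ]; do
      sleep 30
      STATUS=$(curl --silent --header "PRIVATE-TOKEN: ${API_TOKEN}" \
        "https://gitlab.com/api/v4/projects/${CNG_PROJECT_ID}/pipelines/${PIPELINE_ID}/jobs" \
        | jq -r '.[] | select(.name == "gitlab-kas") | .status')
      echo "gitlab-kas build job status: ${STATUS}"
    done
    [ "${STATUS}" = "success" ] || exit 1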
B: It does, and then that job writes to our artifacts, and one of those artifacts is the gitlab-kas tag. As I mentioned earlier, if we just print out that piece of information, we now have the tag that we want to use for the deploy. So we have all the information we need after we've built the image — that's perfect — and we save that as an artifact.
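A minimal sketch of pulling that tag back out of the build job's artifacts; the artifact file name below is purely an assumption, the real path comes from the CNG job definition:

    # Hypothetical sketch: read the tag artifact written by the gitlab-kas build job.
    # The artifact file path is an assumption; the real one is defined in the CNG job.
    JOB_ID=$(curl --silent --header "PRIVATE-TOKEN: ${API_TOKEN}" \
      "https://gitlab.com/api/v4/projects/${CNG_PROJECT_ID}/pipelines/${PIPELINE_ID}/jobs" \
      | jq -r '.[] | select(.name == "gitlab-kas") | .id')
    KAS_TAG=$(curl --silent --header "PRIVATE-TOKEN: ${API_TOKEN}" \
      "https://gitlab.com/api/v4/projects/${CNG_PROJECT_ID}/jobs/${JOB_ID}/artifacts/gitlab-kas-tag.txt")
    echo "Container tag to deploy: ${KAS_TAG}"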
B: So let's go back to our build pipeline. This is just using our standard triggers — naturally, because I'm using the API, we get the CNG-mirror pipeline linked to that job, which is kind of cool — and I created two deploy jobs. These are fake, so don't worry about me accidentally breaking anything, but we're doing something similar here: we're getting our artifacts from somewhere, so those are the artifacts we're grabbing that way.
B: It contains the container tag that we want to use to deploy — which is that guy — and then we create a trigger pipeline that reaches out to a special project, which I'll show you in a second. We're sending a few pieces of information: in this case the KAS tag, which is up here; the environment we're deploying to, which is staging, so environment: staging; and KAS is known by the type "kas" across everything, like our metrics catalog and everything else.
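A minimal sketch of that downstream trigger, following the same trigger-API pattern as the CNG call; the deploy project ID and the variable names (KAS_TAG, DEPLOY_ENVIRONMENT, SERVICE_TYPE) are assumptions for illustration:

    # Hypothetical sketch: hand the freshly built tag to a downstream deploy project.
    # DEPLOY_PROJECT_ID and the variable names are assumptions, not the real configuration.
    curl --silent --request POST \
      --form "token=${CI_JOB_TOKEN}" \
      --form "ref=main" \
      --form "variables[KAS_TAG]=${KAS_TAG}" \
      --form "variables[DEPLOY_ENVIRONMENT]=staging" \
      --form "variables[SERVICE_TYPE]=kas" \
      "https://gitlab.com/api/v4/projects/${DEPLOY_PROJECT_ID}/trigger/pipeline"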
B: We just use the type "kas" for it, and the stage is main, because there's no such thing as canary — well, there is, but it's not complete; there's no canary stage for KAS yet, so that's a work in progress. But we're doing the same thing: we're curling it and retrieving our pipeline, and I'm doing the same exact thing where I'm looking for the status of the entire pipeline — running, running — and then it just completes. So let's go to that actual job.
B: So I've got a fake repository — in this case "testify", because I'm very unique in my naming — and we are doing effectively a very short, quick status check, followed by a deploy, followed by checking it again, and if that check fails we would do a rollback. These are very quick and easy. "start-health" is just my naming for it: let's grab the health of the environment before we begin anything, and you can see here I'm just echoing out all the variables.
B: This is the shell script, with minor modifications, that I worked on for our metrics issue at the beginning of the quarter. As you can see, we're getting a healthy response, and the actual response from Prometheus is this JSON object where the result of the query is effectively one. In our case, when we get a response of one back on this query, that means we're in a good and healthy spot; if we were to get zero or no value, we're not.
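A rough sketch of that kind of health probe; the Prometheus URL and the query below are placeholders standing in for the real service-level query from the metrics work, not the actual expression used in the script:

    # Hypothetical sketch: ask Prometheus whether the kas service looks healthy (1 = healthy).
    # PROMETHEUS_URL and the query are placeholders; the real query comes from our metrics work.
    QUERY='gitlab_service_errors:ratio{type="kas", environment="gstg"} < bool 0.01'
    RESULT=$(curl --silent --get "${PROMETHEUS_URL}/api/v1/query" \
      --data-urlencode "query=${QUERY}" | jq -r '.data.result[0].value[1]')
    if [ "${RESULT}" = "1" ]; then
      echo "kas looks healthy, continuing"
    else
      echo "kas does not look healthy, aborting" && exit 1
    fi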
B: I've never seen the rollback run, because KAS is always healthy, so there's that. So that is what I've put together as a POC that just connects the work we have done in our team so far. I'm going to start the necessary conversations — I think, Reuben, you probably saw this already; I tagged you on an issue where we want to build a greater POC that is more comprehensive.
B: My thought is that we develop the necessary scripting and CI files that are needed for this, but they don't get maintained inside of the KAS repo; we maintain those, and they simply include them into their project for execution — something along those lines. It's precisely how container scanning or code scanning works today; I think I'm doing kind of the same thing. At least, that's the end goal I feel we should shoot for, but we'll see.
C: Not a question, really — more like a note. I think the templates for container scanning and such things are in the main GitLab repo, so we might need to see how we can do that such that we don't have to keep adding stuff into the main GitLab repo.
B: Well, includes are a feature, so we can point it at whatever repo we want. I was simply explaining that I'm trying to use the same pattern that dependency scanning, container scanning and code scanning use. That's all.
A: Thank you very much, John. I think we should sync with Amy on this, on the POC, as we spoke in our 1:1 yesterday, and I think Graham is going to be the counterpart from the orchestration side on this effort. So please also sync with him — but I guess you're going to be off next week, if I recall correctly, so I guess...
A: Well, thank you very much — today was a very nice demonstration. We went way over time; I extended our next calls to 50 minutes instead of 40, so maybe, hopefully, that will help.