From YouTube: Container registry online GC QA testing
A: Hello, we are here to discuss a proof of concept to test online garbage collection on the container registry. My name is Juan Pared, I'm from the Package team, and Sofia is here as well. So, I've put together a proof of concept; let me share my screen.
A: Can you see this? Yeah. So this is the MR. The code is in really bad shape but, as I say here, the intention was just to make sure that the recipe for the test would work, and to see how we can make this compatible with the GitLab CI pipelines and, ideally, the QA tests as well.
A: So I've started to write down some documentation for this. Maybe we can go through it, because it's small enough to understand. As background, it is important to look at the online garbage collection specification, so that you have a good overview of how online garbage collection works on the registry.
A: We provide some background information as well, so even if you don't have in-depth knowledge of the registry, you should be able to get a sense of how it works just by reading the spec. Basically, the idea was to create a tool that could generate a bunch of artifacts: namely repositories, layers, images with multiple tags, and manifests as well. The way I did it is with Go. I've built a tool that is able to do this, and the first thing it will do is generate repositories.
A: We will only use a dedicated set of repositories, two of them, named a and b. These will live under a parent repository.
A: So that's why we do it. And finally, jobs: the tests can be separated into multiple jobs and stages in GitLab CI. The way I did it for this proof of concept is that, as I go, I build up a context, like the list of layers, the list of images, their tags, all that information, and when I'm done with processing...
A: ...I write that information to a file on the file system, so that the subsequent jobs and stages can read that file and get the information that was created on previous runs. We can do that easily by using CI caching: we can cache that file and make sure that it is propagated across jobs and stages. And for brevity, from now onwards, whenever you see this prefix, it will mean... [inaudible].
A: Otherwise a layer may end up not being deleted, because there is some image in another repository, maybe even a repository from a user if you are testing this against the live registry, and the behavior will not be deterministic. So the first thing the tool will do is generate a set of layers. Right now it's hardcoded to generate five layers. They are unique and randomly generated, and they are one megabyte in size. There is no specific reason for one megabyte; the size is irrelevant for asserting the behavior of online GC.
A: We just need to make sure that they are small enough to be fast to push and pull, but not so small that there is a risk of them not being unique across builds, because the random data used to generate them was not that big. So one megabyte should do it. When we generate the layers, we name them l1, l2, l3, l4 and l5, and the tool will generate these and save them to the local file system of the runner, or of your local machine.
A: ...if we are running this locally. And finally, internally they are not referred to as l1, l2, l3, l4, l5 and so forth; they are referred to using their checksum digest. Compressing the layer file with the gzip algorithm and then taking the hash of that gzipped file gives us the digest of the layer that we will see on the registry once the image is pushed.
A: But for simplicity, we will refer to the layers as l1, l2 and so forth. The digest is only important internally, for the test tool and the registry, but we will see them printed in the logs soon. As I already said, the tool internally keeps a mapping between the name of the layer that is known to us and the digest that we will see on the registry side.
A: So, for the images, we will generate five of them, and here in this table we can see the relationship between those images and the layers. For example, the image with tag 1.0.0 in repository a references layers l1 and l2. Then we have tag 2.0.0, which kind of simulates the next version after 1.0.0, adding another layer, l3, on top of l1 and l2, in the same repository.
A: This one has a single tag and it references l1, l4 and l5, and finally there is a pending tag in the a repository which only references layer l1. The composition of these images is hardcoded; what changes is the content of the layers, but for us, using the tool, it's always l1 and l2 for this tag, and so forth.
A: So here we can see that correlation. I chose this correlation because it gives us the possibility to test multiple scenarios: for example, deleting the tag latest and making sure that neither l1, l2 nor l3 was deleted, because there is still tag 2.0.0, which references those same layers. We will see that later on. I already explained this.
A: Just like layers, manifests are referred to using their checksum digest, but for the sake of simplicity we will use the same notation, m1 to m4, to refer to those manifests. You can see the correlation between each tag and the manifest it points to, so we know that, for example, tag 1.0.0 in repository a points to manifest m1, and therefore we know that manifest m1 references layer 1 and layer 2, and so forth.
A: So, looking at these two tables, we can make that relationship to validate the tests. As I said, 2.0.0 and latest point to the same manifest, m2: they are composed of the exact same layers, they have the same manifest. Finally, the last thing to be generated is the Dockerfiles, and the Dockerfiles are very, very simple.
A: They are built from scratch, so they don't have any data, and then we use one instruction for each layer. I actually show an example here; basically, this is what these Dockerfiles look like. If I go here, you would have l1 here and l2, and by referring to l1 and l2 like this, this would be the Dockerfile for m1, which points to layers 1 and 2. The Dockerfile for m2 would be identical but with an extra line to add layer 3 as well, and so forth.
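A Dockerfile of the kind described above might look like the following. The exact instruction name was lost in the transcript; COPY is an assumption (it is the usual way to add a plain file as a layer in a FROM scratch image), and the destination paths are illustrative.

```dockerfile
# Hypothetical Dockerfile for manifest m1: empty base, one layer per COPY.
FROM scratch
COPY l1 /l1
COPY l2 /l2
```

The Dockerfile for m2 would be identical plus a `COPY l3 /l3` line, producing one extra layer.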
A: Okay, and then come the stages that the tool offers. Basically, the tool can be used as a CLI, a compiled binary that supports multiple commands, and each command will be a separate stage. Ideally these would be separate stages on GitLab CI as well, but they can be executed in sequence, all in the same stage or the same job. They are separated in this proof of concept just to make sure that they are granular enough and allow us to execute step by step.
A: So we generate all of the layers, and then we generate all the Dockerfiles using those layers, as I explained before.
A: And finally there is this information here. As I explained before, after executing, each one of the stages will persist the build context, the data it has generated, into a file on the file system, so that subsequent stages can read it and know the data they have to validate, or push to the registry, or pull from the registry. So in this case, this is it.
A
Build.Com
file
is
just
a
google
data
structure,
serializes
using
a
the
incognito
library
and
then
written
to
a
file
on
the
file
system,
and
this
will
contain
for
the
generate
stage.
It
will
contain
the
lists
of
layers
that
were
generated
and
the
list
of
docker
files,
so
that's
mostly
the
digest
for
the
layers,
the
pad
for
each
one
of
them
and
the
pad
for
each
one
of
the
soccer
files
as
well.
So
we
generate
them.
A
So
after
generate
we
have
the
build
stage,
and
here
the
idea
is
to
grab
those
layers
and
those
docker
files
from
the
previous
stage
and
build
the
docker
images
using
them.
So
build
will
start
by
reading
the
build.cob
file
from
the
generate
stage
and
then
it
will
iterate
over
the
list
of
locker
files
and
generate
the
corresponding
images.
So
we
can
see
that
here
it
is.
It
is
building
an
image
automatically
with
the
right
path
and
using
the
correct
soccer
files.
A: When we build the images here, the Docker engine will build the image payloads, and that will lead to the creation of a configuration. We capture the digest from that, because it will be important for the future stages. So we capture that information, append it to build.gob, and we are done with the build stage. Of course, in this case I'm executing these on my local machine.
A
So
I'm
performing
the
tests
manually
so
to
make
this
work,
I
had
to
manually
define
these
environment
variables
which
are
predefined
for
hitler
ci.
So
this
is
only
needed
for
a
local
test
and
then
we
move
on
to
the
next
one,
which
is
the
push
stage,
and
here
we
will
push
the
images
that
were
built
on
the
build
stage
into
the
container
registry
that
that
we're
using
so
the
tool.
A
Basically
starts
by
reading
the
build.gob
file,
finding
the
list
of
images
that
were
built
on
the
previous
stage
and
then
pushing
each
one
of
them
to
the
to
the
container
registry.
Now
the
target
container
registry
is
inferred
from
the
ci
registry
image
environment
variable
which
contains
the
registry
address
as
as
a
prefix,
so
for
production
this
would
be
registry.github.com
for
pre-production,
it
will
registry.3.registry.github.com.
A
So
there
is
no
need
to
do
anything
besides
that
and
internally,
we
use
that
information
is
missing
here,
but
we
use
the
docker
engine
sdk
to
to
push
those
images.
This
is
basically
just
a
go
sdk
to
interact
with
the
docker
engine
api.
A
The
alternative
would
be
to
invoke
the
docker
cli
from
go
like
executing
a
shell
command.
This
is
a
bit
easier
because
it
gives
us
a
bit
more
control,
but
the
effect
is
exactly
the
same.
We
are
giving
a
comment
to
the
locker
engine
to
push
the
image
to
the
to
the
registry
and
when
it
is
pushed,
then
the
registry
will
give
us
the
digest
the
of
the
manifest,
and
we
are
going
to
save
that
information
on
the
build
context,
because
it
will
be
needed
for
later.
A
Okay
and,
as
I
was
saying,
to
interact
with
the
registry
here,
we
do
use
credentials
and
those
credentials
come
from
the
predefined
ci
environment
variable.
So
that's
a
ci
registry
user.
I
think
and
ci
registry
token
or
registry
password,
I'm
not
sure
about
that.
But
those
are
already
predefined
on
gitlab
ci
as
well.
A
On
the
registry
side,
so
the
only
thing
that
it
does
is
read
that
build
cop
file
find
the
images
that
were
built
and
then
pull
each
one
of
them
from
the
registry,
making
sure
that
all
those
pools
finish
successfully
and
the
build
context
does
not
change
in
this
stage.
And
finally,
the
the
last
stage
is
test,
and
here
we
want
to
do
three
things
one.
We
want
to
invoke
the
registry
api
in
a
certain
manner
that
will
trigger
some
actions
from
online
garbage
collector.
A
Then
we
want
to
sleep
for
enough
time
to
let
online
gc
workers
pick
those
tasks
and
do
whatever
they
need
to
do.
And
finally,
we
want
to
validate
that.
The
online
category
collector
either
deleted
artifacts
that
were
dangling,
so
they
were
garbage
collected
and
also
make
sure
that
stuff
that
shouldn't
be
deleted
was
not
deleted,
which
is
even
more
important.
A
So,
as
I
say
here,
the
required
sleep
time
between
triggering
those
requests
and
waiting
to
perform
the
validations
for
rise
per
environment
depending
on
the
registry
configurations
and
the
workloads
from
other
clients
that
are
using
that
registry.
Please
refer
to
the
online
capture,
clicks
and
inspect,
to
understand
better,
what's
involved
in
that
process
in
calculating
those
delays,
but
let's
not
focus
on
that
here
and
to
make
that
work.
The
tool
provides
two
ways
so
for
an
interactive
validation.
A
That
would
be
an
engineer
like
myself
performing
a
test
and
manually
validating
the
outcome
against
the,
for
example,
the
prepro
registry,
by
default.
The
tool
will
wait
indefinitely
between
triggering
those
requests
against
the
registry
and
making
the
validations.
So
that
means
that
once
we
start
a
test,
it
will
make
the
requests.
A
It
will
wait
indefinitely
for
me
to
press
the
key
on
the
keyboard
that
should
give
me
enough
time
to
go
and
check
the
online
gc
metrics
and
queues
to
make
sure
that
the
tasks
were
created
processes
successfully
and
then,
when
the
queue
is
empty.
I'm
confident
that
I
can
press
the
key
and
the
registry
will
do
the
validations
when
everything
was
already
dealt
with
on
the
registry
side
and
for
non-interactive
validations.
A
We
can
also
pass
a
delay
flag
to
the
cli
and
we
can
give
it
as
arguments
an
amount
of
time
to
wait
a
fixed
amount
of
time
to
wait
before
performing
those
validations.
So,
for
example,
in
gitlab
ci
we
could
define
an
environment
variable
which
is
different
for
each
environment,
for
example,
depending
on
the
pre-prod
registry
configurations.
A
We
may
think
that
we
need
to
wait
10
15
minutes
before
performing
validations
in
most
cases,
so
we
can
define
an
environment
of
arrival
like
registry
weights
dla,
something
like
that
and
use
that
as
an
input
for
the
delay
flag
when
triggering
this
command
on
the
ci
file
and
finally,
for
for
granularity.
A
This
test
stage
was
split
in
five
sub
stages,
but
they
can
be
evenly
together
or
can
be
separated,
but
we
will
look
at
them
separately,
so
the
first
one
and
we
can
control
that
with
a
stage
flag,
so
stage
equals
one
means
that
it
will
execute
the
first
substage
of
the
tests,
and
we
start
by
deleting
the
latest
stack
in
in
repository
a
and
when
we
do
that.
A
This
will
not
cause
the
deletion
of
anything
else
because
the
underlying
manifest,
which
is
m2,
we
can
look
at
the
tables
above
to
see
that
relation,
but
it's
m2
and
the
layers
that
are
referenced
by
that
manifest
one.
Two
and
three
are
still
referenced
by
the
tag
2.0.0
within
that
same
repository,
so
the
manufacture
shouldn't
be
deleted,
neither
the
layers,
so
the
only
thing
that
we
need
to
do
is
trigger
stage
one
with
the
tool.
A
So
in
here
I
did
the
manu
the
manual
validations.
I
checked
the
registry
metrics
to
make
sure
that
tasks
were
possessive.
The
queues
for
online
igc
are
empty,
so
we're
good.
Then
I
press
the
enter
key
and
the
the
two
will
perform
the
validations
as
as
we
can
see
here,
it
will
look
for
the
m2
manifest.
A
When
we
do
this
there
are,
there
will
be
no
other
text
pointing
to
the
m2
manifests.
This
would
be
the
last
one.
That
means
that
m2
should
be
considered
dangling
and
garbage
collected,
and
once
that
happens-
and
you
can
understand
that
by
reading
the
spec
when
a
manifest
is
deleted
either
through
the
api,
because
it
was
garbage
collected
automatically,
the
registry
will
schedule
a
review
for
the
layers
referenced
by
that
manifest.
A
So
that
means
that
l1,
l2
and
3
will
be
scheduled
for
review
and
when
done,
l3
should
be
deleted,
because
it
is
unique.
It
was
unique
to
the
2.0.0
tag,
as
we
can
see
here.
So
we
can
see
that
tree
was
unique
because
we
already
deleted
latest
right
and
the
only
two
images
using
the
l3
were
latest
already
deleted
and
2.0.0,
which
we
are
deleting
so
l3
should
be
garbage
collected
and
l1
and
l2
should
be
preserved
because
they
are
still
used
elsewhere.
A
So
if
we
get
back
here,
you
can
see
we
trigger
statute
tag
was
deleted
slapped
for
enough
time.
I
pressed
enter
once
everything
was
validated
and,
and
then
the
tool
will
make
sure
that
the
m2
manifest
is
gone.
So
money
has
not
found,
that's
what
we
want
and
it
will
make
sure
that
l1
and
l2
are
still
there
because
they
are
needed,
but
it
will
make
sure
that
l3
is
gone
because
it's
no
longer
referenced.
A
Finally,
we
move
on
to
deleting
the
manifest
entry.
We
are
not
deleting
the
tag
in
this
specific
test
case.
We
delete
the
manifest
directly
and
by
doing
so,
as
I
said
that
will
cause
the
registry
to
review
the
underlying
layers
and
for
the
manifest
m3.
That's
l1,
l4
and
l5,
and
out
of
these
l1
is
the
only
one
that
should
not
be
garbage
collected
because
it
is
still
reference
it
somewhere,
but
we
can
see
here
that
l4
and
l5
were
the
only
ones
that
were
used
by
this
manifest.
A
So
m3
is
used
in
this
tag
and
this
tag
references
all
these
layers,
so
by
deleting
that
manifests,
a
task
will
be
scheduled
to
review
this
one.
This
one
is
one.
This
one
is
still
needed
by
everyone
else,
but
these
two
are
no
longer
needed,
so
they
should
be
garbage
collected.
So
that's
what
we
are
going
to
test
again
stage
three
after
validation,
I
triggered
the
the
the
two
validations
and
we
make
sure
that
the
manifest
is
gone.
A
L1
should
still
be
there
because
it's
needed,
but
l4
and
l5
should
be
gone.
Okay
and
if
we
need
to
look
back
to
see
the
correspondence
between
digest
and
the
layer
name,
we
can
look
at
the
logs
from
the
from
the
from
the
generate
stage
and
we
can
see
the
digest
here
and
l5.
So
we
can
make
that
correlation
by
looking
at
the
locks,
but
that
should
only
be
needed
if
the
test
fails
for
some
reason,
but
everything
went
okay.
A
This
time
we
move
on
to
the
next
sub-stage,
which
is
simulating
a
text
switch
and
attack
switch,
for
example,
happens
quite
frequently
with
the
latest
tech,
for
example,
because
latest
is
usually
used
to
point
to
the
latest
version
of
your
image.
But
of
course
the
latest
version
changes,
for
example,
for
gitlab.
It
changes
with
ever
release,
so
it
will
start
pointing
to
13.9,
and
now
it
will
point
to
14.0.
A
So
that's
a
text
switch.
We
are
moving
the
latest
tag
from
pointing
to
the
manifest
that
was
used
to
build
the
image
for
13.9
to
the
manifest
that
is
used
to
build
the
image
for
14.0.
So
here
we
we
do
something
similar,
but
we
do
it.
With
this
stack,
it
was
initially
pointing
to
m1,
as
we
saw
before,
and
now
we
are
going
to
point
it
to
m4,
which,
which
is
the
manifest
of
that
painting
image
that
we
had
before,
and
we
didn't
push
the
registry
just
yet.
A
We
build
an
image
with
it
and
we
tag
it
with
a
1.2.0,
and
then
we
push
it
to
the
registry
and
when
we
do
so,
the
registry
will
see
okay.
This
tag
already
exists,
so
1.000
already
exists
in
the
repository,
but
it's
pointing
to
m1.
A
So
let's
switch
that
pointer
from
m1
to
m4
and
that's
it,
and
when
this
happens,
this
is
also
covered
on
the
spec.
But
when
it
happens,
the
registry
will
automatically
schedule
a
review
for
the
previous
manifests
that
will
be
m1.
A
It
will
detect
that
there
was
a
switch,
so
maybe
m1
longer
has
a
tag
pointing
to
it.
So
a
task
is
scheduled
to
to
review
m1
and
in
this
case
yeah
it's
no
longer
tagged.
We
left
m1
orphan.
There
are
no
other
texts
pointing
to
it,
so
it
should
be
garbage
collected
and
as
response
to
that,
the
layers
references
by
m1
will
also
be
reviewed
and
l2
will
no
longer
be
referenced
by
any
other
image
on
the
registry,
so
it
should
also
be
coverage
collected.
A
So
we
trigger
the
stage
number
four
it
does.
All
of
that
builds
the
builds
the
the
new
version
of
the
a
1.0.0
image
pushes
it
to
the
registry.
Of
course
the
config
digest
changes,
so
we
save
that
information
and,
and
then
it
pushes
to
the
registry
slips,
I
do
the
validations
and
then
it
will
make
sure
that
m1
is
gone.
A
L1
should
still
be
there,
but
l2
should
not
so
everything
went
okay
here
and
finally,
the
the
last
substage
is
deleting
the
last
tag
that
we
have
on
the
registry,
which
is
a
1.0.0,
and
by
doing
this,
keep
in
mind
that
we
have
switches
1.0.0
from
m1
to
m4
here.
So
it
now
only
points
to
l1.
A
So
that's
the
same
manifest
here.
It
only
points
to
m1,
and
that
means
that
if
we
delete
the
latest
stack,
this
will
call
the
this
will
cause.
The
deletion
of
the
latest
of
the
last
manifest,
which
is
m4
and
by
deleting
that
manifest
it
will
cause
the
deletion
of
the
layer
that
it
referenced,
which
is
l1
because
there
are
no
other
images
on
the
registry.
So
we
trigger
stage
five.
The
tag
is
deleted.
A: So that's the rough idea of the recipe. It's an artifact that describes the set of images, the set of layers, how all those images are built, and also the test cases. These are fixed test cases meant to test specific behaviors of online GC, so we can always change them, add more sub-stages to test different scenarios, add more images to test some combination that we are missing, but this is the main idea.
A
I'll
just
show
it
here
quickly,
because
I've
tested
this
on
github
ci
quickly
using
this,
and
in
this
case
I've
bundled
it.
I
have
a
separate
stage
for
generates
with
a
single
job.
A
I
have
a
separate
stage
for
builds
with
another
single
jaw
and
then
I've
bundled
the
tests
and
builds
push
and
pull
so
build
push
and
pull
or
bundle
it
with
tests
in
a
single
stage
which
is
validate,
and
I
only
did
that
because
it
was
easy
because
with
builds
when
we
generate
an
image
using
the
gitlab
ci
using
dockering
docker,
the
generated
images
will
only
be
local
to
that
runner.
A
Instance
to
that
to
that
container,
where
the
images
were
built,
and
that
means
I
wanted
to
be
able
to
reuse
those
images
for
the
for
the
push
stage
because
they
were
built
in
a
separate
job,
so
they
are
lost.
Of
course
we
can
cache
build
images
I
get
into
them
here.
My
only
purpose
here
was
just
to
make
sure
that
this
was
feasible
to
execute
on
on
gitlab,
ci
and
yeah.
A
So
we
build
and
push
the
images
we
played
the
contacts
and
you
will
see
some
differences
on
the
logs
because
already
made
some
changes
to
the
two
that
are
not
shown
here
on
the
pipeline
and
then
finally,
it
will
pull
the
images
and
will
and
will
and
will
perform
the
tests
on
on
the
validation
and
yeah.
That's
all
what
are
your
thoughts
about.
A: Let me unmute myself. Yeah, I've used the container registry repository for this, because it was the easiest way to create this PoC, since everything is set up already. But if you see here, let me collapse this: I actually reuse the whole container registry .gitlab-ci.yml file and created a couple of customizations. For example, here you can see that I'm caching the build context and I'm caching Go modules, and then we can see that I have the variables defined here.
A
I
have
the
two
stages
generate
and
builds
and
then
for
each
one
of
them,
I'm
going
to
run
the
tool
in
this
case
I
run
it
from
source,
but
it
could
be
compile
it
and
run
it
like
that.
So
yeah,
you
can
basically
separate
each
one
of
the
stages
that
I
documented
here
in
separate
stages
for
gitlab
and
then
use
one
or
multiple
jobs
to
to
execute
them.
B: Yeah, I see. Basically, as far as I can understand, with our existing pre-prod pipeline, we could incorporate this whole GitLab CI setup inside the tests, as we are doing already for the container registry tests. Where we do docker build and docker push, we would incorporate this tool, and then we would have that pipeline inside the test that goes through all those stages and verifies they are passing. But I'm wondering how we would do it: can you tell me a little bit more about that interactive mode?
B
Would
we
still
be
able
to
get
something
of
this
test?
Even
if
we
don't
get
to
press
the
keys
I
didn't
get.
Maybe
that
part.
A
Yeah
yeah,
so
so
by
by
default,
and
let
me
show
it
here,
you
can,
can
you
read
this
yeah,
so
I'm
going
to
run
it
from
from
source,
I
could
have
compiled
the
binary,
but
that
doesn't
matter
I'm
running
from
source,
and
here
let's
for
example,
let
me
just
remove
everything
that
I've
built
so
far
and
let's
trigger
the
generate
staging
once
again
and
test
builds.
A: Yeah, okay, so you can run test and then pass the stage flag, stage one, and by default (of course, there are no tags left to delete now) this will wait for me to press the key to continue, as I showed in the logs. But if you pass this...
B: Yeah, and each test, just to see if I got this right: each test stage has its own wait of the delay, right? Times the delay. Yeah. Is there a way that we could batch these, you know, in a way that we would wait that time just once?
A: Yeah, we can do that. For example, we could run all the triggering operations, like deleting tags and deleting manifests for all these sub-stages, in bulk, in a single operation, then sleep once, for 20 minutes or so, and then execute all the validations for each one.
A: Yeah, it would likely be a problem, because, for example, if you're parallelizing the deletion of two tags for the same manifest, in one of them you are going to assume that there are no tags left for that manifest, but that concurrent pipeline maybe hasn't executed yet. So it's better to make sure that these are sequential, either split like this, using the stage flag, or executed one after the other without intervention, but with a fixed sequence, because otherwise it won't be deterministic.
B: Yeah, so perhaps we could go with the fixed sequence instead, just so we don't have, like, a three-hour pipeline. Not three hours, but you know. So, as I see it, I think this is possible as long as we have access to the tool, the binary, from the command line. Is there a way that we can install it and then run it?
B
So,
even
if
we
import-
because
I
see
that
you
have
online
gc
tester
slash
main
go
so
there
is
like
some
files.
B
A: Yeah, we can do this in multiple ways. As I said, I have this in the container registry repository. I'm not sure if it should stay there or not, but it's the easiest place to start with. So we can do this in multiple ways.
A
So
we
can
fetch
the
registry
repository
as
it's
done
right
now,
refresh
the
container
registry
git
repository
and
build
the
binaries
from
source,
and
then
we
can
execute
them
like
I
have
here
so
don't
and
the
name
of
the
binary
or
we
can
just
fetch
the
git
repository
and
execute
them
like
I'm
doing
here
so
calling
the
the
main
file
format
for
that
winery
and
execute
it.
It
takes
a
bit
more
because
it
has
to
compile
the
the
code,
so
it's
better
to
to
to
build
the
binaries,
and
that
should
be
easier.
A
Fetch.
The
github
story
execute
a
make
task
which
will
create
the
binary,
and
then
you
will
have
the
binary
available
to
you
in
the
bin
folder.
The
alternative
would
be
to
do
this
using
the
docker
image
where
you
on
the
ci.
Instead
of
using
a
base
image
of
golang
or
ubuntu
or
whatever
you
would
use
a
base,
you
would
use
an
image
that
already
has
the
the
binary
layer,
so
yeah
that
you
will
have
to
do
is
execute
the
the
binary
and
that's
it.
So
there
are
multiple
ways
to.
B
Do
it
I'll
probably
start
by
doing
the
same
way
that
you
have
here
for
the
container
registry
project
and
yeah,
and
I
think
like,
are
we
going
to
keep
this
tool
within
this
repository?
Do
we
want
to
separate
this
somewhere
or
it's
just
yeah?
I'm.
A
Not
yet
sure
about
that,
we
should
ping
as
well
to
see
what
she
thinks
about
it
right
now.
It
makes
sense
to
be
on
the
registry
because
it's
related
to
the
registry
but
at
the
same
time
we're
doing
something:
that's
not
related
with
the
registry
operation
as
a
service.
Yes,
so
it's
more
a
tool
on
top
of
the
registry,
so
it
might
make
sense
to
pull
it
from
the
registry
and
and
have
it
somewhere
else.
I
don't
know
if
there
is
any
dedicated
group
for
quality
tools.
B
Yeah
we
have,
we
have
a
few.
We
have
a
few
quality
tools.
I
think
perhaps
this
could
be
a
candidate
to
be
stored.
There,
perhaps
maybe
not
now,
as
we
are
doing
the
plc
but
but
something
to
consider
later,
and
I'm
thinking
that
this
is
not
intended
to
be
running
in
any
gitlab
ci
container
registry
release
right.
I
know
that
there
are
some
tests
for
go
versions,
for
example,
but
this
is
not
meant
to
be
run,
then
this
is
just
to
be
running
against
pre-part
right.
A
Yeah
yeah,
I
think
we
we,
we
will
run
this
manually
in
the
interactive
way
when
we
change
something
about
online
gce,
and
we
want
that.
It
is
okay
in
preprocess,
for
example,
before
going
on
with
the
production,
deploy
or
then
have
it
automated
in
the
keyway
pipelines
in
pre-prod
as
well
and
run
after
each
deployment
to
to
pre-proc.
A: No, no, the validation happens automatically. So, for example, here, when we delete the tag, it will sleep, or wait for my keypress, and then it will perform a fixed set of validations. I can show it to you; the code is terrible, as I said, but let's see the tests. Here we can see it will break the tests into multiple stages and, for example, for stage one, it will load the build context.
A
Then
it
will
build
a
registry
client
in
google.
Basically,
this
this
is
just
a
go
client
for
the
registry
http
api,
and
then
we
are
going
to
invoke
the
api
using
this
client
to
delete
the
tag.
The
tag
was
deleted
and
then
we
are
going
to
perform
the
validations
here.
So
we
want
to
make
sure
that
the
manifest
still
exists
or
not,
and
we
also
want
to
make
sure
that
logs
exist
or
not.
So
we
do
those
obligations
here
automatically,
so
the
job
will
either
succeed.
A
If
all
the
validations
were
okay
or
it
will
fail
with
an
exit.
One
status
like
it
did
here
if
something
if
something
goes
wrong,
so
the
ci
pipeline
will
either
succeed
and
then
we
know
that
online
gce
is
okay
or
it
will
fail,
and
we
need
to
investigate.
What's
going
on.
B
Okay,
I
think
I
think,
by
the
end
of
today.
Hopefully
I
could
already
have
something
an
mr
open
with
with
all
of
this
integrated
and
then
we
could
try
to
run
this
against
pre-prod
and
see
what
happens
and
where
can
I?
Where
can
I
find
this
documentation
is
in
the
it's
in
the
container
registry.
A
Docs
right,
I
just
finished-
writing
this
just
before
our
meeting,
so
I'm
what
I'm
going
to
do
I'll
push
the
the
minor
few
changes
that
I
did
on
the
code
to
the
branch.
I'll
add
this
document
there
as
well
and
for
now
will
not
merge
this
to
the
to
the
main
branch.
Until
we
know
if
this
is
going
to
stay
here
or
in
a
separate
repository
so
to
test
it,
we
can.
A
We
can
just
fetch
the
repository
the
specific
range
from
the
registry
repository
and
and
go
from
there
I'll
add
this
into
the
docs
folder
dot
docs
gitlab
folder.
So
it
will
be
here
this
document
that
I
have
here
I'll
place
it
here
and
I
will
also
add
a
make
a
make
task
to
compile
the
online
gc
tester
binary
so
that
it
is
available
in
the
bin
folder.
B: As well, definitely. And also, if we want to continue adding cases for this, it would be good. So, regarding the settings of container registry garbage collection in pre-prod: which modifications are we going to make to the registry, and when? What would we reasonably set the delay at? Would it be 15 minutes? Would it be five?
A
Yeah
so
yeah,
that's
the
the
tricky
part
that
can
lead
to
that
non-deterministic
behavior
that
we
have
been
discussing.
So
it's
not
impossible
that
we
will
have
some
false
positives
or
negatives,
but
we
have
to
be
very
try
before
giving
up.
At
least
we
should
do
that
so
in
prepro.
What
we
need
to
do.
We
need
to
tweak
a
few
settings.
A
So
the
first
thing
that
we
need
to
tweak
is
the
review
after,
and
this
is
going
to
define
the
amount
of
time
that
will
be
used
to
postpone
a
review.
So,
for
example,
when
you,
when
you
delete
a
tag,
we
know
that
a
review
will
be
scheduled
for
the
manifest
that
was
below
that
type
and
by
default
the
registry
will
schedule
that
review
for
24
hours
a
year.
So
if
I
delete
it
now,
the
review
will
only
happen
after
24
hours.
A
So
for
this
test
we
need
to
make
sure
that
that
happens,
much
quicker,
let's
say
five
minutes,
but
we
can't
make
it
too
small,
because
then
that
is
the
possibility
of
garbage
collecting
stuff
like
on
ongoing
image,
uploads
think
about
an
image
that
takes
like
10
minutes
to
upload:
it's
not
impossible,
it
might
happen
if,
if
a
blog
is
pushed,
a
a
task
will
be
scheduled
to
review
that
blog
and
that
will
remain
untight,
possibly
in
text
until
the
very
end
of
the
image
of
the
image
push.
A: So I think it's fine to lower this to, say, three minutes or five minutes; we can start with that and see how it goes. And then the other thing that we need to do is turn the no-idle-backoff flag to true, and this is because online GC is a background process; it runs in intervals.
A
Let's
say
five
seconds:
every
five
seconds
the
the
worker
will
kick
in
to
see
if
there
are
tasks
to
be
reviewed
or
not,
and
if
there
are
no
tasks
to
be
reviewed
instead
of
keep
hammering
the
the
database
every
five
seconds
by
default,
the
garbage
collector
will
back
off
exponentially
when
there
are
no
tasks
to
be
done
like
if
there
are
no
tests
to
be
done
now,
instead
of
running
in
five
seconds,
we
will
only
run
in
10
seconds
and
each
time
it
will
take
more
and
more
time
if
there
are
still
no
tasks
to
be
to
be
deleted.
A
So
we
want
to
turn
that
off
so
that
we
are
sure
that
the
work
will
run
successfully
once
sequentially
every
five
seconds
or
so
and
the
other
one
is
the
the
review
interval
the
kicking
interval,
which
by
default,
is
five
seconds,
so
that
should
be
fine.
We
can
load
it,
but
five
seconds
should
be
enough.
That
means
that
every
five
seconds,
every
five
seconds,
the
gc
workers
will
look
for
available
tasks
for
review.
B: We are running tests in pre-prod, so there would be an additional notification there with these jobs, and yeah, it's going to be marked as allowed-to-fail at the beginning. And another thing that we can consider: right now they are going to be scheduled, but I was checking the infrastructure code for Delivery, and I found out where they are triggered in the deployment path. So later on, perhaps, once we want to automate it... I'm not sure if we want to trigger this every time the container registry releases.
B
It's
also
it's
also
possible.
It
doesn't
seem
like
such
a
big
change,
but
so
we
are
going
to
start
with
the
scheduled
test
so
every
day
or
every
two
times
a
day,
we're
going
to
be
running
this
garbage
collection
test,
not
sure
if
I
hope
it
doesn't.
A
Yes,
we
want
to
test
yeah
yeah.
I
think
the
the
way
to
go
is
trigger
that
pipeline
right
after
the
deployment
to
pre-brought,
so
that
we
can
wait
for
the
deployments
to
staging
and
production
for
that
pipeline
to
finish
successfully
before
moving
on
with
the
deployment
stage
staging
and
production.
But
yet
to
start
with
it,
it's
better
to
make
this
an
unblocking
failure
and
possibly
manually
even
manually,
trigger
it.
B
Sure
yeah,
so
I
think
I
think
I'll
be
able
to
get
back
to
you
in
a
couple
of
hours
just
with
something,
and
then
we
just
comment
in
dmr
and
go
from
there.
That's
okay,.
A
Yeah
I'll
push
the
changes
and
the
dock
and
I'll
also
ping
ellie
on
the
mrnu,
so
that
we
can
discuss
a
bit
where
they
should
live,
and
then
we
can
continue
with
the
the
integration
for
keyway
tests
and
even
if
you
want,
with
the
with
a
pairing
or
an
end
off
to
work
on
the
calls
move,
this
somewhere
else,
make
it
better
implement
it
properly.
Because,
as
I
said,
this
is
poorly
implemented
right
now.