A: They're different from our trigger tests, so our production deploys... sorry, our deployment pipelines trigger smoke tests, and those are passing, so we're depending on them and they're reliable. But I'm trying to find a full suite run that was successful, and I haven't been able to, so I'm pinging Quality right now to figure out who's actually monitoring this, or whether I'm looking at the wrong thing.
B: He's excited to see this kind of work done, because he wants to see Sidekiq moved over to Kubernetes, so I think we both have a vested interest in making sure we've got metrics going. At this moment I'm just waiting for those merge requests, and then I'm going to go through and make sure that everything still works as expected, and that I haven't broken anything or introduced anything new that could mislead you.
A: FYI for you: the Scalability team got two new members, and we are starting next week with full-on work. Not to say that Rob and Andrew haven't been working at full capacity, but now we have almost 14 people. So I'm planning on doing demos the same way we are doing the demos here for Kubernetes. Which means: is this too early for you? Is it possible for you to meet an hour earlier than this time? Yeah?
A: I'm planning to do it most likely one hour earlier, so that we have a Scalability demo where we're going to review Sidekiq changes and how we're planning to do things. I want to invite the Delivery team as well, and then we can just continue with our Kubernetes demo, and the two will feed each other, basically. So, yes.
B: Yeah, that works, that's fine. The next goal I wanted to work on was making sure we have a solid deploy process, because right now there's no such thing as auto-deploy in CNG or the Helm charts. So I've been working on that all week, and I haven't really made too much progress. I've got one merge request that's geared towards making sure that we have the ability to create the stable branches for CNG. So that's one start. The next portion is sending all the components and their versions to CNG.
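As a rough illustration of that second portion, here is a minimal sketch of shaping component versions into the kind of pipeline variables a downstream CNG build could consume. The variable-name convention and the helper itself are assumptions for illustration, not the actual release tooling.

```python
def component_variables(versions: dict) -> dict:
    """Turn a component->version mapping into CI trigger variables.

    Hypothetical sketch: e.g. {"gitlab-shell": "10.3.0"} becomes
    {"GITLAB_SHELL_VERSION": "10.3.0"}. The naming scheme is assumed.
    """
    return {
        f"{name.upper().replace('-', '_')}_VERSION": version
        for name, version in versions.items()
    }
```
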
A: Here's a problem I have: are you aware of the branching structure in the charts? No? That is your problem number one, because you went off to build this out, but the charts have a completely different branching system. No, not a different branching system, sorry, I'm going to retract that: the branch naming system is completely different in the charts. They follow their own major.minor version branching. So if you have chart 2.4.1, that is going to be used in the 2-4 stable update.
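The naming scheme described above can be sketched as a small helper; this is an illustration of the major.minor branch convention, not the actual release tooling.

```python
def stable_branch(chart_version: str) -> str:
    """Derive the stable branch name from a chart version.

    Illustrative sketch of the charts' major.minor branch naming:
    chart 2.4.1 belongs to the 2-4-stable branch.
    """
    major, minor, *_patch = chart_version.split(".")
    return f"{major}-{minor}-stable"
```
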
A: During very active development we needed a very independent way of doing major releases, so we could break things quicker and fix them as well. But I don't know whether we are still at that stage in the charts; I believe we need to reevaluate that. That might actually be throwing a spanner in the works, but I think it's first important for you to inform yourself on how that system looks in there.
A: Well, it wouldn't... actually, it wouldn't do anything. No, again, I need to rephrase. You would be able to use it for your own purposes, but then if they need to backport changes into stable versions, they will have to do it either in two places, or whatever we use would be different, which in itself might not be the worst thing in the world. There are ways of keeping those branches in sync.
A: But that's something we need to think through and discuss, whether we want to go down that route. There is something to be said about the fact that we are now actually focusing quite a lot of effort on improving the charts, and in order for us to actually be able to use that on .com, we need to kind of wait for the release. What we can end up doing is this:
A: We can just say that we're going to create the auto-deploy branches the same way we create them right now, and use that as our way of handling it. That's also how we can break away from tagged chart releases: you would have the same system, with you creating the same auto-deploy branch in the charts.
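Reusing today's naming scheme in the charts could look something like the sketch below; the exact branch name format (and the date stamp in particular) is an assumption for illustration.

```python
from datetime import datetime, timezone
from typing import Optional

def auto_deploy_branch(major: int, minor: int,
                       now: Optional[datetime] = None) -> str:
    """Illustrative auto-deploy branch name, e.g. '12-6-auto-deploy-20200114'.

    Sketch of applying the existing auto-deploy naming convention to the
    charts repository; the timestamp suffix format is assumed.
    """
    now = now or datetime.now(timezone.utc)
    return f"{major}-{minor}-auto-deploy-{now:%Y%m%d}"
```
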
A: How that would trigger an image build, that's something you need to figure out; I forget how that works. But then you don't have to deal with stable branch creation, or any of that. You can leave it to the Distribution team to improve how they create 2-4-stable. Maybe they can create it from the auto-deploy branch, the same way we do from stable branches, right? So maybe that's where we need to hook things in.
A: You say it's putting a dent in your work; I don't think it does, for two reasons. First of all, you went into release-tools to understand how all of this is tied together, so you can implement it in another place where it can be reused; the branch name can be different. Number two: you now actually know that there are differences between the charts, Omnibus, and the whole release process, and maybe between the two you can find a common solution for all of it.
B: Okay, all right. That will probably be something I start working on next week: working with Distribution to figure out the best way to go about doing that. Because I really want to solve this. We're already behind on the pre environment for Sidekiq: it's on a 12 stable release and we're at 12.6, and we're not updating it when we're doing a deploy. That's really hurting us, and we can't keep doing that.
A: I agree, and I kind of knew, but I still think it was a good exercise. It is a good exercise because it allows you to focus on a specific queue, see how it all connects, and fix some things in the charts, so you unblocked yourself sooner. Now, what you could do is just bump that in pre right now, today, and see whether something broke. And we can have that as a task, maybe weekly in the demo, until we automate it: let's bump it in.
B: The gist being that crons only add one item to the queue, and whatever worker is assigned to that queue will pick it up. So in pre, our best effort is going to pick up specifically the import/export project cleanup worker, which is not helpful for the Kubernetes pods running exports: they're not going to clean anything up, because each pod has its own disk for its shared directory. Yeah.
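The behaviour described above can be sketched in a few lines: a cron tick enqueues exactly one job onto a named queue, and only a worker watching that queue drains it. This is an illustrative toy model, not the real Sidekiq implementation.

```python
from collections import defaultdict

class QueueBroker:
    """Toy sketch of cron enqueue + per-queue worker pickup."""

    def __init__(self):
        self.queues = defaultdict(list)

    def cron_enqueue(self, queue: str, job: str) -> None:
        # A cron tick adds exactly one item to its queue.
        self.queues[queue].append(job)

    def pick_up(self, watched_queues: set) -> list:
        # A worker drains only the queues it is assigned to.
        jobs = []
        for q in watched_queues:
            jobs.extend(self.queues.pop(q, []))
        return jobs
```

A pod watching only the export queue never sees the cleanup job, which is the problem described above: the cleanup work lands on whichever worker owns the cleanup queue, not on the pods holding the dirty disks.
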
B: I'm looking at my screen; I'm just outputting the size of the shared directory before and after an export. Before, it was essentially empty: four kilobytes, just directory metadata. It ramped up to one and a half gigs, and then it shrunk back down to 32K. So it's leaving a few more directories behind, but it cleaned up all the data inside of them. So for the export queue I think we're perfectly fine, safe, and ready to go with that; in Kubernetes the problem will be the import.
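The before/after check described above amounts to summing file sizes under the shared directory (what `du -sb` reports). A minimal sketch:

```python
import os

def dir_size_bytes(path: str) -> int:
    """Total size in bytes of all regular files under `path`.

    Rough equivalent of `du -sb`, for comparing a shared directory
    before and after an export run.
    """
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            file_path = os.path.join(root, name)
            if not os.path.islink(file_path):
                total += os.path.getsize(file_path)
    return total
```
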
A: If we already know that the import queue is not cleaning itself up properly, please create an issue in the GitLab issue tracker. Go to the product categories page, find who owns import, apply those labels there, and explain the impact. Basically, you want to explain that this is not blocking us right now, but it is going to block us very soon.
B: In the documentation they don't recommend using the Sidekiq all-in-one setup, which runs every single queue inside one or more pods as it scales. They recommend a more production-style deployment, which is what we're attempting to do, where you assign specific queues to a dedicated set of Sidekiq pods. But I'm not against it.
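As a sketch of what that per-queue assignment could look like in the GitLab Helm chart's Sidekiq values, assuming the chart's `pods` mechanism; the pod names, queue names, and the exact value format are illustrative and should be checked against the chart documentation:

```yaml
# Hypothetical values.yaml fragment: dedicated Sidekiq pods per queue set.
gitlab:
  sidekiq:
    pods:
      - name: import-export        # illustrative pod name
        concurrency: 10
        queues: project_export,project_import   # illustrative queue names
      - name: catchall
        concurrency: 25
```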
B: The pre upgrade: we got our deploy changing the image again, and it looks like it was successful. I'm looking for the right messages... "deployment not ready".