From YouTube: 2022-04-14 GitLab.com k8s migration EMEA/AMER
A: Okay, so no one has joined us. So the only discussion item I had was to discuss GitLab Shell just a wee bit, because I feel like this project is getting close to attempt number two. Since you're here, Amy, and you've been involved in the same conversations that I've seen, I was hoping we could provide a generic status update on what our intentions are as far as scheduling when this work might happen. As far as I know, the last update we had was that we wanted to complete testing the rollback procedure, which was done, I think, one or two weeks ago; at this point it was last week.
A: The readiness review is in progress. I just saw that not this Igor, the other Igor (I forget his last name), was still working on some final improvements, and he asked me for a final review of that readiness review. I did that this morning and had, I think, three final items that I wanted to see improvements on, so I left that feedback. But as far as I know, that's the last thing that's preventing us from doing that take two, correct?
B: I agree, yeah, I think that's exactly right. I think that the second the readiness review is done, we don't have to roll straight to production, so we can fit it around our other projects. I know Ahmad today has been fully on Registry all day, and we have a few other things like that floating around. So we can't do anything until we have the readiness review, but once we get it, yeah, we can schedule it in as makes sense for us.
A: So from my perspective, it sounds like we should be able to have the readiness review completed by the end of this week, if all goes well, because Igor is working on this very tightly; he's keeping track of everything very closely, and I think they're eager to see this get rolled in. So maybe we could try to schedule this in for next week, perhaps.
A: Perfect. Do you know of anything that would prevent us from attempting this next week, outside of completing the readiness review of this particular project, in general?
B: Only the monthly release, depending on how deployments go. I mean, we'll have other changes for sure, but if we don't want to add another change to the pile, that's the only reason I can think of. Otherwise, I think it should be good to go.
A: So hopefully it wouldn't be too bad, and we could always delay a week if necessary; it isn't that pressing. Well, that's the only thing I wanted to bring up. I've been kind of far away from Kubernetes land lately anyway, so it's been kind of hard for me to keep up with other projects. So I'm curious, Amy, if there's anything you wanted to add to our agenda to discuss.

B: Thanks. A couple of things, maybe, that I had in mind; they're both a bit hazy.
B: I'm starting to think I'll open up an issue so we can discuss this, but I'm starting to think about Q2 and beyond: two projects that would be amazing for us to be able to get into in the next month or so, and I'm gauging how comfortable you feel. One is to start working on the cluster recreation, and do enough that we could recreate a cluster and get some more IPs.
A: The long-term goal would be to figure out how to build a new NAT device and then attach a cluster behind it. For that, realistically, what we would need to accomplish first is probably rebuilding a cluster, and in our last demo meeting we decided that we should just rebuild a test cluster, rebuild a cluster without any changes, just to make sure that at least our documentation for rebuilding a cluster is legit and that we're not missing anything in it.
B: Exactly, yeah, okay, that makes sense. And where would you see, in terms of that project, the simplifying of the cluster rebuild? Would that be a phase following that: once we've done it, we could start to identify stuff?
A
So
the
step
of
us
performing
that
initial
rebuild
in
staging
would
feed
knowledge
that
we
gain
from
that
process
will
feed
into
what
we
need
to
prioritize
yeah.
Okay,
so
you
know
that
might
become
you
know.
We
might
learn
something
that
might
block
us
from
wanting
to
rebuild
a
cluster
in
production,
for
example,
and
we
would
want
to
mitigate
that.
Obviously.
B: Yeah, exactly, okay, that makes sense. And then the other one, that's in a maybe more hazy state, that Graham's been thinking about, is all of the problems around K8s workloads: trying to actually get a handle on what all the pain points are that people feel around that, and how we can start turning some of those into a project as well.
A: Yeah, we have a lot of ideas, and we've got a lot of situations where, if we could solve some of those problems, that would enable a lot of other teams to start being able to automatically deploy. If we could work on some of those... I feel like we have a lot of battling priorities at the moment.
B: Exactly, yeah. I'm hoping that some of this stuff might be a case of: if we can find a way to pull some of it apart, it might make it easier for us to start optimizing for specific cases. It's so interwoven right now that it's quite difficult to take one piece and say "we've just improved this bit" and kind of ignore the others. So I'm hoping that we can get that into a project as well, that we could then start working on next quarter.
B: So he's still working on that; I haven't read the latest bit. Oh great, so he's actually put quite a lot of extra stuff on this. I need to read it closer, but my guess will be that we need to start grouping these problems and figuring them out, because I think sometimes we have related problems that we can hopefully solve in one project, and then I think some of them are sort of other problems that we can prioritize.
B: There's a lot in here, so we're certainly going to need to have this in some other format; this isn't going to end up being a single epic where we can just say "here are three issues and off we go," right.
B: This is probably a blueprint; I'll see what Graham thinks about that, but this is probably going to end up being some sort of statement of "here's where we currently are, here's our situation."
B: But for now I would say: don't worry too much about this. I know Graham is actively working on it; I'll spend a bit of time with him and see if we can get this into something workable.
B: What I'm wondering is whether we maybe try to get these into sort of problem buckets, and then we can do some prioritization, which we've done in the past and thought about, because some of these things are like "this really helps somebody in Distribution," or "this really helps somebody in Reliability," or "this really helps our deployments," right? But they're going to have very different use cases, so I think we might need to play around with some of the problems as well.
B: What do people actually want to see, and how do we achieve that? So I'm not expecting this to end up being a single epic. I think this is probably, I don't know, a year of work; it feels like it could be some fairly big pieces. So let's see if we can get that into a slightly more digestible format, so that people can drop in and add comments, maybe on specific pieces.
A: Those particular areas might be worth prioritizing first, but I do understand that there's a lot of various items in here that we want to improve, so there might be different priorities depending on who you talk to. For me in particular, chart upgrades have been very painful. I'm glad that we've got something automating a portion of that, but the review process for that is still not kind to us.
B
It's
not
great
yeah.
I
think
that
could
be
its
own
project
because
I,
I
think,
there's
a
whole
sort
of
turn
chart
upgrades
into
a
deployment
approach
right.
We
have
zero
visibility.
In
fact,
I
have
an
issue
to
write
up
because
I
had
a
heap
of
fun
the
other
week,
because
I
got
pinged
on
an
issue
from
distribution
and
it
was
such
a
great
example
because
I
got
pinged
on
a
issue
which
someone
had
a
specific
commit
that
they
wanted
to
know
which,
when
specifically
did
that
go
out.
B
Really
tricky
so
there's
a
really
big
gap
and
that's
a
question
which
everyone
who
is
deploying
in
the
monolith
just
gets
for
free,
so
we've
got
those
sorts
of
clear
gaps
that
we've
implemented
before
that.
Other
people
also
want
to
be
able
to
have,
I
think
so,
there's
probably
a
whole
project
of
make
chart
bumps
a
deployment
approach.
A
Tooling,
inside
of
release
tools
that
says:
hey
your
commit
made
it
to
production
right
right.
Well,
the
same
thing
should
be
able
to
apply
for
chart
bumps
because
it's
the
same
thing,
it's
just
a
different
method
of
executing
that
actual
chart
by
modifying
which
repository
and
modifying
you
know
where
to
look
for
that
data
right.
These
tools
has
the
logic
built
into
it
for
gitlab
or
gitlab.
We
could
probably
carry
the
same
style
of
using.
B: Yeah, and there's no process. Well, there is a little bit of a process as of about a week ago, but at the moment it's basically "if you need one..." We started thinking about this because Ahmad got a rough deal at the beginning of the year, and then Philippe got the ultimate worst deal, where he needed to get a chart bump out and it picked up a load of other changes along with it.
C: Yeah, doing that automated on every change makes a lot of sense and is basically what we do. The other example that came to my mind was Gitaly, like bumping the Gitaly version in gitlab-org/gitlab.
A: Ideally, if we could figure out how to automate this (it's going to sound silly): if we could automate how we look at the diffs and automatically ignore certain things that we don't care about, or, on the other side, figure out the things that change all the time because of the chart upgrade, the stuff that we don't care about, and just ignore that in some way.
B: I'm going to create one. I know Graham just completed the first phase, but it didn't specifically say that so much; I am going to create an epic, so we can put these in. I feel like there's probably another project as well, which is the pulling apart of the two sides of K8s workloads. If we separate out "I'm an SRE and I want to push a config change through a K8s workload pipeline" from "I am, you know, a Registry developer and I want to bump Registry," those things feel so totally different that having them intertwined, and also hooked up with auto-deploys in a slightly unusual way, doesn't make it easy for anyone to work with.
C: Yeah, thinking about the review side of things, where we have all this garbage in the diff: is there a hacky way that we can sed-replace all of that stuff and just get a diff without that crap in it, without having to necessarily remove the versions from the...
A: And then there's sometimes a lot of whitespace being added or removed, just because of the way Go templating works. We could certainly make fancy sed statements that do some fancy regex to remove those and make the diff easier to look at, right. We could certainly do something like that.
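The sed-style cleanup being described could be sketched as a tiny filter. This is a hypothetical illustration, not the team's actual tooling: it drops added or removed diff lines that contain nothing but whitespace, which is the churn Go templating tends to produce.

```python
import re

# Matches diff lines that add or remove only whitespace, e.g. "+   " or "-".
# ("---"/"+++" file headers don't match: they have more than one -/+ char.)
WHITESPACE_CHANGE = re.compile(r"^[+-][ \t]*$")

def strip_whitespace_churn(diff_text: str) -> str:
    """Drop whitespace-only +/- lines from a unified diff.

    The result is easier on reviewers, but it is no longer an applyable
    patch, since hunk line counts are left untouched.
    """
    return "\n".join(line for line in diff_text.splitlines()
                     if not WHITESPACE_CHANGE.match(line))
```

The same idea works as a shell one-liner, `sed -E '/^[+-][ \t]*$/d'`, piped after whatever produces the diff.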
A
So
like
I
would
rather
tackle
stuff
like
this
at
our
helm.
Chart
because
our
helm
chart
is
what
generates
this
information.
So
if
we
could
figure
out
how
like,
if
there's
a
procedure
in
the
development
of
the
distribution
team
that
says
hey,
this
is
going
to
add
a
white
space
because
you're
adding
this,
we
should
do
something
inside
the
helm
chart
that
would
remove
that
white
space.
Instead,
I
would
prefer
that
route,
because
you're
removing
that
from
the
entire
helm
chart
as
a
whole
and
that
kind
of
benefit
would
be
seen
across
anyone.
A: ...anyone who consumes our chart. And then the other item, the changing of the chart version: that would actually be very simple, "hey, look for helm.io/chart_version and ignore the actual version everywhere else." But, you know, if that's something that we rely on...
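That "ignore the version annotation everywhere else" idea could be a filter of the same shape. The annotation key below is taken as spoken in the meeting (helm.io/chart_version) and is an assumption; substitute whatever key the chart really emits.

```python
import re

# Key name is an assumption taken from the conversation; adjust as needed.
CHART_VERSION_LINE = re.compile(r"^[+-]\s*helm\.io/chart_version:")

def drop_chart_version_bumps(diff_text: str) -> str:
    """Hide chart-version annotation churn from a diff (human review only)."""
    return "\n".join(line for line in diff_text.splitlines()
                     if not CHART_VERSION_LINE.match(line))
```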
A
So
the
one
thing
I
know
that
we're
doing
in
helm,
diff
to
show
the
diff
is
just
controlling
how
many
lines
above
and
below
the
actual
change
it
shows
us.
I
don't
know
if
there's
a
way
in
helmimdev
to
be
like
hey.
If
you
see
this
particular
change,
just
don't
show
me.
I
don't
think
that
option
exists
because.
B: Yeah, I know Graham has a lot of ideas. I think we're going to end up needing to sort out the easy things and the kind of bigger things, some of the stuff that Graham is quite keen on. Sorry, it's just super old school, but Chris is printing; that's the sound of a printer, just so you know. We're going back to the 90s.
B
I
don't
know
what
expensive
we
have
to
clean
the
printer
heads
every
time
we
use
the
printer
because
we
don't
print
often
enough
for
the
ink
not
to
dry
up
anyway.
B
So
so
I
know,
graham's
got
lots
of
ideas,
but
some
of
them
are
quite
radical,
so
things
like,
for
example,
like
moving
away
from
helm,
or
you
know
like
putting
everything
into
tanker,
and
I
I
know
I
already
know
even
without
us
having
that
suspect
issue,
I
know
there's
going
to
be
lots
of
different
opinions,
so
there'll
be
some
of
those
things.
I
think
we
can
get
set
up
as
discussion
issues
and
make
like
for
the
future.
B
We
can
make
a
decision
on
this
and
do
future
stuff,
but
I'm
also
hoping
there'll
be
enough
stuff
that
we
can
just
say
in
q2
like
here
is
the
project
and
we
can
actually
get
working
to
start
improving
stuff.
I
would
love
to
not
have
like
we're
doing
a
work
at
the
moment
to
try
and
get
auto
deploys
to
be
hands
off.
B: So I think, for now, in terms of what to do: if you have time to read that issue, go for it, but don't feel like you have to; it's not stalled waiting on you to read it, if that makes sense. There are lots of other things going on on that issue, and I think we'll switch into other formats to gather ideas.
B
The
cluster
one,
the
cluster
recreation
stuff.
If
we
could
get
issues
like
if
we
already
know
enough
to
have
issues
and
epics
for
that,
that
would
be
helpful
and
then
we
can
just
start
planning
that
stuff
in.
C: Yeah. Because I was waiting for a Gitaly change to go out, I discovered that the automated merge requests on gitlab-org/gitlab to bump the Gitaly server version have been blocked on this one MR, for one. I don't know whose responsibility that is, whether that's on you or on the Gitaly team, but it seems like something that kind of nobody was aware of, and I'm wondering whether we need to have a way of detecting that.
B: So it's a super hard one, yeah, we definitely have... This is a really interesting one, because this was the best deployment approach available.
B
Well,
it
is
the
best
deployment
approach
available
for
getaly
right
now,
because
it's
not
on
kubernetes,
but
I
think
it's
a
really
interesting
model
that
we
need
to
be
aware
of
when
we
do
start
moving
things
like
registry
and
other
things
like
charts
and
things
into
deployments,
because
we
end
up
this
kind
of
hybrid
that
doesn't
really
work
for
either
side
and
that
the
because
goodly
don't
own
it
end
to
end
they
actually
don't
their
day-to-day,
doesn't
really
include
much
attention
on
deployments
but
at
the
same
time
their
actions
block
their
own
deployment
pipeline.
B: So, yes, I will go and have a chat and see what we can do about getting visibility on that. So yeah, thanks, Igor.
B: I was missing that piece, yeah. So in the good days, everything just happens; it's all completely automated. But when it doesn't quite work out... yeah, four days is quite a long time. So, thank you for raising that; I'll see what we can do about working with them on that one.
B: Oh yeah, of course. Oh, speaking of which: Reliability are going to have a look and see if there's someone who can pick up that disk space issue, Igor, so hopefully we'll get that. But otherwise, if we don't get someone on that today, I propose that we pause all of our projects and pick it up as Delivery, and that will take...
B: Well, I mean, we can maybe try a bit around auto-deploys if we can, but it will certainly impact things like Registry and sshd, and I'm sure we have a few other things queued on us as well. But getting this solved this week would be very good. I would actually rather pause auto-deploys tomorrow and get this fixed, versus going into next week prepping the monthly release and having to deal with this as well; that would be worse than not deploying tomorrow.
B: I was thinking more about people; I'm thinking more about you, actually. When I say people, I actually know there aren't very many; we're so short on people this week, because it's only you and Ahmad. So if we get to your day starting tomorrow and we haven't solved this, I think we should just make a call, pause auto-deploys, and solve this instead.
B: Awesome. How is Redis going, Igor?
C: So we're trying to enable hostnames; we're still trying to enable hostname support. The new issue that popped up is that those can currently only be enabled by setting a hostname as the replica-announce-ip.
C
And
that
means
we
need
to
set
explicitly
the
host
name
per
redis
host
and
so
we'd
have
to
have
a
separate
chef
config
for
red,
so
one
red,
so
two
red
iso,
three,
which
we
don't
want
that
right.
We
want
to
just
set
a
single
roll
and
we
actually
don't
have
a
good
way
of
doing
that.
Currently,
so
what
I
started
working
on
is
adding
a
little
hack
that
allows
you
to
set
a
flag
in
chef.
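The flag idea, letting one Chef role serve every Redis host, might look something like the sketch below: each node derives its announce value from its own FQDN instead of a hard-coded per-host config. The replica-announce-ip and Sentinel hostname settings are real Redis 6.2+ options; the plumbing around them is assumed, not the actual cookbook.

```python
import socket

def announce_settings(fqdn: str = "") -> dict:
    """Derive Redis/Sentinel hostname-announce settings from the node's
    own FQDN, so a single role works for every redis-NN host."""
    fqdn = fqdn or socket.getfqdn()
    return {
        # redis.conf on each replica: announce a resolvable name, not an IP
        "replica-announce-ip": fqdn,
        # sentinel.conf: let Sentinel resolve and announce hostnames
        "sentinel resolve-hostnames": "yes",
        "sentinel announce-hostnames": "yes",
    }
```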
C
So
that's
an
omnibus
emma
that
ahmad
is
debugging
the
spec
failures
on
and
yeah
once
once
we
get
that
in.
Hopefully
that's
the
last
blocker
before
we
can
roll
through
with
enabling
sentinel
host
names
everywhere,
the
the
script
henry
prepared
the
script
and
that's
ready
to
go
so
once
that's
in.
We
can
run
through
pre
and
then
run
through
all
of
the
other
environments.
B
Nice,
okay,
great,
does
does
like
need
any
like
back
end
engineer.
Help
like
is
this:
are
these
spec
failures
sort
of
obvious
ones,
or
would
it
be
useful
to
get
engineer
on
here
as
well?
A
back-end
engineer
in
here
as
well.
C: I think we're good for now; we've had Ahmad on it today, and he seems more capable than I am at debugging this stuff. So I...