From YouTube: 2021-03-11 GitLab.com k8s migration EMEA
A
Good morning. Cool, so let's begin. Skarbek, I'll hand it over to you.
D
When it changes which file it's describing, it outputs that file name and then outputs the actual log data, and prior to logging the name of the file it drops an empty line. Tail is meant for the command line; it's meant for you to see stuff locally. I mean, it's meant for you to say, "hey, tail this log file, show me the data on the screen." It's not meant for consumption into Elasticsearch, so we're abusing a tool. And it works fine, don't get me wrong, it does its job, but the logs just don't look great.
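For reference, this is the tail behavior being described: when following multiple files, GNU tail prints a "==> filename <==" header each time it switches files, preceded by a blank line, and every one of those lines gets shipped as a log event. The file names below are only illustrative:

    $ tail -f production_json.log application.log
    ==> production_json.log <==
    {"method":"GET","path":"/-/readiness","status":200}

    ==> application.log <==
    2021-03-11T10:00:00Z: application is healthy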
D
What I'm expecting to see is... perfect. So we see a new object called "subcomponent", so we know that this particular log message came from the production JSON log file, which is great, because now we have stuff like the readiness already inside of its own object, in this case json.path.
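As a sketch, the resulting Elasticsearch document would look something like this (all field values here are made up for illustration):

    {
      "subcomponent": "production_json",
      "json": {
        "method": "GET",
        "path": "/-/readiness",
        "status": 200
      }
    }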
D
Subcomponent, so yeah, okay. So our logging is working as desired; the message comes through, the entire thing. We have some items that are still not structured like I would have expected: I would have expected to see json.path equal to /metrics here, but I think because this is coming from a different log that's not JSON structured, we're still losing some of that information. So that was json.message. What are these empty ones?
D
They don't have the attribute json.message. The ones with blank lines we should no longer see, and the reason they're blank lines is, like I said, tail will output a blank line prior to going to the next file that's outputting data. So, you know, we're receiving all of this data for blank messages into Elasticsearch.
D
So do we have "component" in here? "Subcomponent", rather. I don't see it. Oh, no: Workhorse is not impacted. It's not using gitlab-logger; Workhorse is already outputting structured logs. Only... so sorry, I was incorrect.
D
So this is a log message that we need to somehow avoid. Jarv determined that this is output so often in production that we add six whopping megabytes of unnecessary data. Stan looks to be forking the project that is refusing to fix this quickly and bring...
D
So I'm expecting to see subcomponent in here as well, which I do not, for some reason. So that's perfect, that's great! That gives me something to look into, which is irritating. But Sidekiq wasn't always pretty; disgusting, even, because the logs didn't change often. Like, we had a lot of log files, but I'm not really sure what to look for inside of the Sidekiq logs, but...
E
Skarbek, were you expecting to look at the pre log? Because maybe you thought you were looking at another instance.
D
Yeah, I only enabled it in pre-prod for the time being. So GitLab Shell was the last one I want to look at. Where is Shell?
D
Okay, cool. Okay, so obviously there's more work to look into before I close this issue. I don't know why there are no logs for GitLab Shell!
D
I also know that the team members in Observability have been messing with logs as well lately, so it may just be a clash of me working alongside them with this particular change, but I'm not really sure what else to showcase, because it was literally just merged. So I need to go back and look at it. But that concludes my demo. I know it wasn't exciting, my apologies, but you know, the goal here was to see if gitlab-logger is working for us today and not causing issues, with the goal that it improves log data and searchability inside of Elasticsearch. I'll do a more thorough analysis outside this meeting, so I'm not wasting the time in here.
D
The change was a simple environment variable: gitlab-logger is enabled via an environment variable that gets sent to our containers upon startup. Otherwise they'll default to going, or falling back to, or, excuse me, the default is to use tail or xtail, depending on the component, or depending on the container, but this is what enables gitlab-logger.
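A minimal sketch of what such a toggle could look like in the Helm values; the key and the variable name here are assumptions for illustration, not the actual ones:

    gitlab:
      webservice:
        extraEnv:
          # hypothetical toggle; when unset, the container entrypoint
          # falls back to tail/xtail to ship logs
          USE_GITLAB_LOGGER: "true"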
D
Beneficial. So from a technical standpoint we're unblocked, but I would like to complete a little bit more thorough analysis before I'm confident with that. And the last thing we need to deal with is the immense logging from the "URL obsolete" message, which we already know we need to handle in some way, shape or form; so whether that gets merged prior to us going to production, or whether I figure out how to create a filter in fluentd...
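If it comes to a fluentd filter, a minimal sketch could use the built-in grep filter to drop the noisy line before it reaches Elasticsearch; the tag and pattern below are illustrative assumptions:

    <filter kubernetes.**>
      @type grep
      <exclude>
        # drop the high-volume obsolete-URL lines
        key message
        pattern /is obsolete/
      </exclude>
    </filter>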
D
Sidekiq is a little light on issues, because it's one of those things where you're not going to know what the next issue is until you complete the evaluation. So I didn't want to waste my time trying to be like, "oh yeah, next batch, move to production," because I figured we would just create a change request for that.
B
The Prometheus scraping of NFS metrics, right? And you can see that we have a lot of queues still sending NFS... requesting NFS. Have you already looked at that data?

D
I haven't looked at it yet. I just looked at it when I merged it: I looked into the Thanos graph for that, and it looked like a lot of queues are showing up there still using NFS.
B
Do you know what we called this metric?
D
Okay, okay, I got it. If you want me to share my screen instead? Yeah, I'll... just tell me the name, I'll do that on my screen. Okay, share, and then we'll just limit this to... limit this to...
D
So yeah, there's quite a few. So Pages is still making calls: we see Pages update configuration is still making NFS calls.
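A sketch of the kind of Thanos query in play here; the metric and label names are hypothetical stand-ins, since the actual metric name isn't given:

    # queues that touched NFS over the last day (illustrative names)
    sum by (worker) (rate(gitlab_sidekiq_nfs_operations_total[1d])) > 0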
D
But the frequency is very low, right? I mean... oh yeah, the frequency is a lot less than expected, which is really good. I'm curious as to why group SAML group sync, which is also a weird name for a queue, is making NFS calls. The cron job user status cleanup batch... Pages is still showing up here. So we still... issue rebalancing: what is issue rebalancing? What is... what do you...
B
In this case, trying to rebalance this, if I remember correctly. So we have a Sidekiq job for just taking care... if we have too many issues in a project, then we could reach an ID number which is getting too high, so we need to rebalance for some...
B
I forgot the details, but for some...
B
There could be some things which are just running weekly or monthly, which are hard to check.
D
But we do have some that are at a rate of zero or "not a number," so we do have some metrics that are being gathered for items that we could potentially move. So I think we have enough information: we have a whole list we could go through, and for any that are recording zero or not a number for a lengthy period of time...
D
I do know that the issue is still open for removing some stuff related to NFS, so maybe there are some queues that are still doing some work that are behind a feature flag, because I think there's a feature flag that needs to be set to fully disable the NFS calls. But there are like four or five issues all related to that work. So it's kind of... unless I read through every single one... I did not read through every single one of them, so I don't know what the status of all that work in general is.
A
Yeah, it has. I can find out for you, but yeah, it's still there. I think the migration finished, but I don't think they've done the final switch-off, just in case they need to go back. So yeah, I could dig that up.
A
Do we know why there are so many queues? I mean, I know the answer to this is "no, we don't know why," but there's a lot more here than we saw last year, right? Is that correct?
D
Yeah, I don't know the answer to that, because I distinctly recall creating a list of everything that was left running on these nodes, creating an issue for all the ones that I saw were problematic, and migrating all the rest off. So I don't... because...
F
...of people who didn't read where their queue needs to end up, so they didn't classify what their queue requires resource-wise: whether it's CPU-intensive, whether it's memory-intensive, and so on. So yeah, that's...
F
There is a proposal in the Scalability team to completely revamp this again, because these numbers, like the high number of queues, are actually causing huge Redis issues. Yeah.
F
This is an architecture problem, unfortunately. This was a workaround for another workaround that allowed us to breathe for a year and a half, so I think we'll have to... we have already planned this in Scalability; however, again, priorities, and because it's major surgery, we're not tackling it until we have to.
D
Yeah, I do think we have the information necessary so that we can continue forward with the batch seven evaluation, go ahead, and start moving stuff over. It's just a matter of doing the work.
A
Awesome, sounds good. So that's something which I'm gonna ask Graham to see if he can have a go at when he joins us next week.
D
So, my next bullet point item: I already discussed this with Amy and Marin, but I'm hoping that we could still work on both of these things at the same time. So my goal here, just to reiterate this in the meeting, was... I would help Henry; you and I, I'm hoping, could work on the API together, and then I would also work on the Sidekiq work, and work with Graham on that particular work.
B
Nothing like... there's no dependency between the API work and the Sidekiq work, right? They're independent; they just need to be done, right?
A
Yep, yeah, nice, that sounds like a great idea. Henry, do you know how much registry work you've got coming up in the next, like, few days?
B
Good question. So I created a lot of MRs today, which should help to get the first iteration of the registry DB out. So once they're merged, I get the secret that I need for configuring Kubernetes to actually be configured to use the registry DB.
A
Yeah, so, but in terms of, like, this step: so at the moment, pre has almost got a database. And did you do the connection as well? Did you, with the charts change that came in?
B
Yeah, that's a thing that I wanted to get ready, but I first need to get the Terraform MR merged, because that will generate the database secret, and then I can take that and put it into the Kubernetes charts configuration, and then we can see if the registry really is able to use it, if it works. But I need to first merge the Terraform MR. But yesterday Skarbek helped me to get the charts config right, so I think it should work once I get the secret.
A
Fantastic. So on the blockers list, there's not too much to talk about, really; the blockers are all moving along pretty nicely at the moment, which is super. The only one I did want to maybe talk about was the "set the different shutdown blackout seconds for different deployments" one.
A
So this change has been merged in. I'm wondering: do we actually have some follow-up work that we need to now plan out to actually make use of this setting?
D
It's just a matter of upgrading our chart and then determining what value to set this to. I think to start we would probably just set it to something similar to our VMs, which I think is five seconds.
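For illustration, assuming this is the webservice chart's shutdown.blackoutSeconds setting, the values change might look roughly like:

    gitlab:
      webservice:
        shutdown:
          # match the blackout period used on the VM fleet
          blackoutSeconds: 5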
D
Will you link to the chart issue? But do we have an issue to bring this in, or to evaluate this ourselves?
A
This one is kind of, like... we can definitely put them together. That one is... that's after we've done the migration, to make sure we have a bit of a view of how much time, hopefully some time, has been cut off deployments. But yeah, absolutely fine if you want to add this piece on.
A
Nice, thank you. Cool, great, that's the end of our agenda. Is there anything else anyone wants to discuss, a demo or a question?
A
Do you want to scope it? Do you want to talk to us about the change you made for deploying Kubernetes and using the cached... used cached charts? I don't know what the words are. What was the change you made yesterday following the incident?
D
Well, firstly, the problem: during an incident, GitLab Pages was down, and because of that... our Helm chart still has a dependency on GitLab Pages, because we need to bring in the GitLab Runner chart, and that's hard-coded inside of our chart. I don't think there's a way to override that at all.
D
The change that I made was to create an artifact and store it; that way, it could be used for the rest of that pipeline. And now we've further improved that: it will create the artifact, store it, and if there's an existing artifact that matches, it'll pull that artifact down, validate it's the right version, and utilize that instead of trying to build it first. If it fails to pull down the right version, it'll proceed to build.
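A rough sketch of that fetch-or-build logic; the variable names, artifact location, and version check are assumptions for illustration:

    # expected chart version for this pipeline (see the versioning note below)
    VERSION="0.0.0+${CI_COMMIT_SHA}"
    # try the cached artifact first; fall back to building the chart
    if curl -fsSL -o gitlab.tgz "${ARTIFACT_URL}/gitlab-${VERSION}.tgz" &&
       helm show chart gitlab.tgz | grep -q "version: ${VERSION}"; then
      echo "using cached chart ${VERSION}"
    else
      helm dependency update gitlab/ &&
      helm package gitlab/ --version "${VERSION}"
    fi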
D
Currently, the version we set is "gitlab-0.0.0" plus the SHA, but if we for some reason need to build the chart for a specific environment, it's going to build... the version is going to change to whatever the latest version of that chart is that's stored at that moment in time. So, like, we're at version 4.5, I think, point zero, so we'll see version 4.5.0 be changed in, like, every object inside of Kubernetes, it being that it's only a version change to the annotations.
A
Nice, sounds good, sounds good. If there are any kind of things that we are, in six months' time, gonna have question marks over, maybe just drop some notes down in a guide or something like that; like, let's write those thoughts down so that we can remember them. But yeah, it sounds like a better place anyway, so that's good.
A
Perfect, yeah, great, nice, cool. Is there anything else that we want to go through today?
A
Fantastic. Thank you very much for the demos and discussions; enjoy the rest of your day. All right, take care, have...