From YouTube: 2022-01-20 GitLab.com k8s migration APAC/EMEA
B: Awesome, welcome everyone. So this is the 20th of January APAC-timed Kubernetes demo. Is there anything anyone wants to demo?
B: Awesome. So let's talk briefly about dev; this might be slightly different, I've just dropped it in there. I'm going to hijack the agenda slightly and give a small update. I've just heard, just a few minutes ago, that Distribution are willing, longer term, to take ownership of the dev machine, which makes sense, right?
B: Let's get an issue, presumably in the Distribution tracker. We can use the Slack channel, but let's coordinate how we want to do this and when we want to do this, and basically figure out when it will be safe for us to take dev offline for a bit to do this, and what would break. The scaling is obviously the big risk here, right, with dev offline.
C: This is interesting. So that would mean that the Distribution team takes ownership of the whole instance and everything?
B: ...it manually, and whether we need to do anything. So I think that's the thing: let's figure out with Distribution how much they can do. But yeah, let's get it set up so that they can own the entire thing and it's not linked to our stuff, so they don't have a dependency on Infra for this stuff.
C: I only have one concern here, because this machine is needed for production loads: we need it for scaling up our Kubernetes clusters, and we need to have the same kind of observability into it as we have for the rest of our infrastructure. Because if anything with scaling, or pulling images and stuff like that, is failing during a production incident, we need to be able to see that this is coming from this machine and what's going on there, right, to be able to identify that. If it becomes a kind of blind spot for Infrastructure, that would not be great. So we should make sure that we work together with them to integrate with our monitoring. And for that, maybe the question is whether we should follow the same kind of Terraform and Chef structure for setting it up as we do for our other infrastructure; that might be helpful.
B
Yeah
happy
to
discuss
that
with
them.
I
mean,
I
think,
like
let's
aim
for
this
short
term:
let's
get
it
better
than
dev
right.
We
don't
have
anything
at
the
moment,
so
I
think
it
doesn't
matter
too
much
what
we
have
initially
but
yeah
sure,
let's
figure
out
with
value
what
like
what
this,
how
this
makes
sense
to
look
like.
D: So there's two parts to this, then. I'm going to try not to take things too far off from what we're focusing on here. I agree with Henry's assessment on the dev registry and image pulling; that is a serious concern to me whether it's in GCP or not. I still think there's more work, or a side effort here, that we need to circle back on and do something better about.
D: If it sounds like we're walking away a little bit from management of it, I think it would make sense for us, not right now, but definitely almost immediately after, to have a conversation about how we make our Docker pulls for scaling more reliable. I just don't think it's fair for dev to be that single VM where everything is pulled from. I just don't think it's sensible, even if it's just a proxy or something, so that, you know, the proxy stores them, because they're just mostly blobs, right, or something like that.
C: What I like about this approach is that the main use of the machine is really for Distribution, right, because they run most of the jobs, they tune them, and they know how they are working. And we saw that they produced this problem by having massive jobs pulling and pushing a lot of stuff to the single machine, which is just overloading it. So if they are in control of how they do this, and also see the pain of it, I think that would improve things.
D: We would take the actual VHD disk images, somehow transfer them to GCP, somehow spin up a VM with those disk images attached, and we might have to... I mean, there might be a little bit more work than that, but in theory, right, the ideal solution would be just literally taking those disk images, pulling them across, and putting them into a new VM, and it would just quote-unquote magically work.
D: I know it's probably not that simple, so I guess the question is: does anyone have any ideas whether that's possible, or should we do something like just rsyncing the data directly, like what you did when you migrated the disks? Do we just spin up a whole new VM and rsync the data across instead? I'm not knowledgeable enough about what options we have for migrating between clouds.
D: I spent some time today looking at GCP. A lot of the cloud providers have their own migrate-from-one-cloud-to-another tool, and they're very invested in those tools, because it's money, right? Like, you say, oh, we're stuck on Azure, we can't give you money and come to GCP, and they're like, nah, just use this tool and it'll transfer all the VMs across. You know, it's very important to them.
D: I had a look at the tool and it seemed like it was okay, but you had to set up a VPN to Azure, and you had to give the migration tooling Azure security credentials, obviously, so it could talk to the APIs to turn off the machine. It looked like it would be a possibility, but it also looked like a lot of hassle, which is once again why I'm kind of throwing the conversation open here, because if that machine is under Chef as well, then in theory we just bring up a new VM.
C: Yeah, we'd have to sync stuff from the OS disk as well, because we install into /opt/gitlab, which is not on the data disk. So the installation itself is in /opt/gitlab, while the data is in /var/opt/gitlab, so...
C: I think, because you have this in Chef, and Chef is configuring Omnibus, using Chef is a good idea in general, and also to not have this kind of one-off problem, right? If we delete this machine by accident, how can we rebuild it? If we have Terraform and Chef, then we can do that; if not, it can be more problematic. But this is a question that Distribution also needs to answer. And for the transfer of the images...
C: I don't know how well that would work. I just think that, you know, we have these kinds of scripts in GCP as well, which set up metadata and stuff like that, and I wonder if just taking the image from Azure would work as well. So maybe rsyncing would be the better solution, but I'm not an expert in this; never did it, yeah.
D: It is the riskier option. So in theory, then, if I'm understanding correctly what we can do now and what we can't do now: we could possibly get the new VM up and running now, we could possibly let Chef run on it, and it would try and start the GitLab services on that box.
D
I
assume
they
would
fail
because
they're
missing
the
data
right
that'll
be
empty
or
what
have
you
we'd
have
to
then
turn
chef
off,
stop
all
the
services,
and
now
we
more
or
less
could
start
running
rsync,
even
while
the
old
dev
docket
lab
on
org
is
being
used
just
like
async,
a
bulk
of
the
data
right.
So.
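A rough sketch of that sequence, with a placeholder hostname for the existing Azure box and the standard Omnibus paths (the real dev layout may differ):

    # On the new GCP VM, after the first Chef run: stop Chef and the GitLab services
    sudo systemctl stop chef-client   # or however chef-client is scheduled on dev
    sudo gitlab-ctl stop

    # Pre-seed the bulk of the data while the old dev box is still serving traffic
    # ("dev-azure" is a placeholder for the existing Azure VM)
    sudo rsync -aHAX dev-azure:/etc/gitlab/     /etc/gitlab/
    sudo rsync -aHAX dev-azure:/opt/gitlab/     /opt/gitlab/
    sudo rsync -aHAX dev-azure:/var/opt/gitlab/ /var/opt/gitlab/

    # During the cutover window: stop GitLab on the old box, run a final rsync
    # with --delete for the remaining delta, then bring the new VM up
    sudo gitlab-ctl reconfigure && sudo gitlab-ctl start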
C: I think mostly, I mean, we need to get the accounts right and stuff like that in GCP, and then see if we move the registry stuff from sv over to GCP, maybe. But maybe this can be done later.
B: Awesome. So in terms of next steps, shall we say that we... well, I suppose, do we need to do this over a weekend?
C: We need it to deploy as well, yeah, deploy as well, but if the downtime...
D: Yeah, that scaling issue, yeah, that's a really big problem, because we scale off that. Could we do something silly, like temporarily increase our pod counts to something very high, so that we're basically over-provisioned temporarily, then kind of do the work on dev, and, unless, I don't know, someone DDoSes us or something so that we have to scale up massively, just hope that by preemptively scaling we absorb any extra capacity? Yeah, that's...
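A minimal sketch of that over-provisioning idea; the namespace, HPA, and deployment names here are placeholders for the real workloads:

    # Raise the HPA floor so the fleet is over-provisioned before taking dev offline
    kubectl -n gitlab patch hpa gitlab-webservice-default \
      --type merge -p '{"spec":{"minReplicas":100}}'

    # Or, for a deployment without an HPA, scale it directly
    kubectl -n gitlab scale deployment/gitlab-sidekiq --replicas=50

    # Revert minReplicas/replicas once dev is back and image pulls work again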
C: Or we change all of our Kubernetes deployments with a sed script to go to ops or something to pull images, instead of going to dev.
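Something along those lines could look like this, where the registry hostnames are placeholders for the real dev and ops registries:

    # Rewrite image references from the dev registry to the ops registry
    kubectl -n gitlab get deployments -o yaml \
      | sed 's|dev.gitlab.org:5005|registry.ops.gitlab.net|g' \
      | kubectl apply -f -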
B: So, not needing to figure all this stuff out now, but in terms of timelines of when we would want to ask value, value has got sort of one to two days to help us. So how about, Henry,
B: on Monday, when you're out of release management, you set up a plan, get an issue somewhere, or an epic, with the plan for how this might look, and then we can get input from other people, like job, who will have some ideas. We can ask value about this stuff as well, and then figure out a timeline of what we need to do.
B: Mostly in your time zone, right? So yeah, awesome. And then use the Slack channel to keep everyone updated, so people can follow along. But I think it would be good to figure out quite soon how long we think the disruption would be and how we would schedule this, so that we know: are we able to do it in the morning? Do we have to do it at a weekend?
B: Cool, okay, great. Graham, next one.
D: Oh yeah, it's a very quick note, it's just interesting. I see that there's a new working group, or a pretty old working group, within the Kubernetes SIGs and the Kubernetes community, trying to define what GitOps is, and I just found that very interesting. And seeing as this is the k8s migration demo meeting, I thought I'd pass it along for general perusal if people are interested.
D: It also kind of resonated with me a little bit, thinking about next steps for what we're trying to do in terms of deployments, and some of the Kubernetes deployments, the k8s workloads, all of that stuff that we have now. It's a very short page, like they're still just bringing things together, but it was interesting that they actually are trying to articulate and define it. What I found most interesting was one statement, that
D: GitOps is pulling: you run an agent in the cluster that pulls and reconciles. So yeah, you spin up any new clusters, they pull from your public git repo. You don't have to have a CI job that fires off to talk to these clusters. There are pros and cons to both approaches, absolutely, for sure, but yeah, I can see now, as they start to articulate more of that, it's a very interesting proposition and could...
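A toy illustration of the pull model being described, to contrast it with a push-style CI job; real agents like Flux or Argo CD do this with proper drift detection, and the repo path and label here are made up:

    # Push model: a CI job reaches out to each cluster on every pipeline run
    #   kubectl --context "$CLUSTER" apply -f manifests/

    # Pull model: an in-cluster agent keeps reconciling against the repo on its own
    while true; do
      git -C /srv/cluster-config pull --quiet origin main
      kubectl apply --prune -l app.kubernetes.io/managed-by=gitops \
        -f /srv/cluster-config/manifests/
      sleep 60
    done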
D: So it's a little bit funny, I discovered this today. Every few weeks I kind of poke to make sure the upgrades keep going, and I noticed that they haven't run at all since, like, December last year, and I tried to figure out why. I thought maybe we had some exclusion periods for the PCL, and maybe they were configured wrong or we had to remove them.
D: No, it just turns out, after talking to Google, that Google does their own PCL, where they just turn off auto-upgrades for you in the background without telling you, and they said that that finished January 3rd or something. The theory now is that they've got such a backlog that, you know, whenever it'll happen, it'll happen, because you set a maintenance window, but there's no guarantee that they will do any upgrades in it, right? It's really just Google saying: we will try and do it.
D: You know, over the coming weeks they send a note out saying clusters that are set to auto-upgrade will be upgraded to this version, but there's certainly no solid time frame. They know that there's a backlog, and our clusters will be upgraded whenever, so I'm kind of like, we can just wait.
D: We can click a button to force the upgrade, kind of thing. I don't really know what to do. It's not too bad at the moment; if it gets to halfway through February, though, because they are still talking about March as when the 1.22 upgrade will probably get promoted to the auto-upgrade channel...
D: So we just need to make sure we do the due diligence on the API deprecations for that. I don't want it to be like we're still sitting on 1.20 when 1.22 is right out there, so I don't want to leave it too long. I'm happy, personally, unless anyone's got any better ideas or particularly wants to jump in and start upgrading things,
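For that due diligence, one way to check is the apiserver's own deprecated-API metric, plus a grep of our manifests for API versions 1.22 removes (for example the v1beta1 Ingress groups); this is a sketch, not an exhaustive list:

    # Which deprecated API versions are still being requested against the cluster?
    kubectl get --raw /metrics | grep apiserver_requested_deprecated_apis

    # And which of our own manifests still use API groups removed in 1.22?
    grep -Rn --include='*.yaml' 'extensions/v1beta1\|networking.k8s.io/v1beta1' .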
D
I'm
happy
to
just
see
if,
over
the
next
two
or
three
weeks,
whether
the
auto
upgrades
kick
back
in
and
we
get
upgraded,
but
if
anyone
else
has
any
thoughts.
All
that
is.
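For reference, a hedged sketch of how to check what GKE has (and hasn't) done, and to force the upgrade manually if it never comes; cluster, zone, and pool names are placeholders:

    # When did GKE last run an upgrade operation, and what versions/policy are set?
    gcloud container operations list | grep -i upgrade
    gcloud container clusters describe CLUSTER_NAME --zone ZONE \
      --format="yaml(currentMasterVersion, currentNodeVersion, maintenancePolicy)"

    # If the auto-upgrade never kicks in, trigger it by hand (staging first)
    gcloud container clusters upgrade CLUSTER_NAME --zone ZONE --master --cluster-version 1.21
    gcloud container clusters upgrade CLUSTER_NAME --zone ZONE --node-pool POOL_NAME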
C: It's funny, because I thought exactly that: that maybe they just have their own holiday maintenance freeze or something like that, if nothing is happening, so they have a smooth start to the year like we do. And the other issue, about the changed monitoring IP addresses, also sounded like they will need some time to get the fix deployed through GKE. So it sounds like they really have some kind of backlog to get things through and need more time for everything. So maybe they are just overloaded with a lot of things, and for the auto-upgrades here,
C: I think we should maybe wait. But if you're curious, or if anything breaks, we could also just push the button somewhere on staging, right, to see if we see issues. But waiting, if something then happens, will tell us that auto-upgrades are working again. So let's wait; the maintenance policy we have set in gstg and gprd is anyway over, so we should upgrade as soon as it works again.
D
Yeah
yeah
yeah,
we
yeah,
I
think,
yeah
the
maintenance
exclusion,
I
think,
was
a
red
harry
and
I
just
assumed
that
was.
I
didn't
even
occur
to
me
that
google
themselves
would
be
stopping
them,
so
I
just
thought
it
was
a
problem
on
our
side
but
yeah.
I
agree,
I
think,
if
we
get
through
like
halfway
through
february,
we
should
at
least
force
staging
just
so
we're
like
testing
that
and
what
have
you
but
yeah?
I
don't
think
we
I
agree.
D
I
don't
think
we
need
to
rush
into
it
right
now,
but
it's
certainly
like
and
unfortunately
this
is.
We
need
to
keep
an
eye
on
this
manually
at
the
moment,
because
we
have
no
automation
or
monitoring
around
it.
B: So how about we just add a comment on there and say, you know, put a "by end of February" or whatever on it: let's make sure that this has auto-upgraded, and if not, let's make a plan to manually trigger it.
B: As a comment for the team who are going to pick up the disruptive upgrade, so on 1 4, so to Graham's comment that we don't want to not have run this auto-upgrade before we get to the 1.22 upgrade: we could add a comment on the "prepare for the 1.22 upgrade" issue that, by whatever date, mid or end of Feb, we make sure we've definitely had an upgrade.
C: I was thinking maybe we set a reminder, like a due date, on this issue that we have open, so that we check in, I don't know, two weeks whether something changed, and we don't forget about it.
B
Yeah
we
can
do
that
as
well.
I
guess
I
is
there,
like,
I
suppose,
the
the
change
we're
hoping
to
move
to
in
q1
is
to
get
reliability
more
involved
in
kind
of
understanding
how
these
things
work
and
sort
of
running
them,
which
is
kind
of
the
point
of
the
having
reliability
work
with
us
on
the
disruptive
one.
So
if
you're
wondering
like,
is
there
any
benefit,
also
looping
this
stuff
together?
So
they
have
that
visibility
as
well,
but
otherwise
yeah.
B: Cool. Is there anything else that we need to go through, or anyone would like to go through, today?