From YouTube: 2022-01-19 GitLab.com k8s migration EMEA/AMER
C
So welcome to the 19th of January EMEA/AMER Kubernetes demo. So, Skarbek, you have the first demo item.
D
I do have a demo item, though I don't know how beneficial it is to the people watching, but I'll share a few things, and then I think either I or Igor could share the problems that we're running into. I have been having issues with screen sharing this morning, so if my screen looks like it's frozen, just let me know and I'll restart it. Igor at one point in time, or maybe Sean, he'll correct me.
D
There's a redis chart sandbox repository that was put together that contains a bunch of information, so that we could quickly create a test environment and tear it down as necessary. I've very much butchered this repository: I've got all the primary stuff that I care about, plus a lot of scripts I've been adding. The important piece here is that I'm provided with a set of virtual machines and a Kubernetes cluster.
D
I did not mean to run that one. I have a command called get-roles, and this just goes through all the virtual machines and all the pods and says: hey, what's your role? So we get basic information, such as the node that the command is currently running on and the version of redis, because that's important for our latest variety of testing, and we get that for each one of them. So we can see in this particular case right now.
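The get-roles helper itself isn't shown in full here, but per the description above its core is asking each VM and pod for its replication role and redis version. A minimal sketch of that parsing step, assuming standard `redis-cli INFO` output (the sample values are hypothetical):

```python
def parse_info(info_text):
    """Extract role and version from the output of `redis-cli INFO`."""
    fields = {}
    for line in info_text.splitlines():
        line = line.strip()
        # Skip section headers like "# Replication" and blank lines.
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        fields[key] = value
    return {"role": fields.get("role"), "version": fields.get("redis_version")}

sample = """# Server
redis_version:6.2.6

# Replication
role:master
connected_slaves:2
"""
print(parse_info(sample))  # {'role': 'master', 'version': '6.2.6'}
```

In the real script this would be run against each host in turn over `redis-cli -h <host>`.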
D
I've got a two-pod deployment inside of Kubernetes and we've got three virtual machines. So we'll see all three virtual machines, nodes zero, one and two, and they're all replicas; and then we've also got our pods. So we've got, where does it start, pod zero, which is a secondary, a replica, and we've got pod one, which is our current primary, and it has the list of all of the replicas available to it.
D
Another fun script: I have something called nodes-ready. This just tells me whether those are ready or not, because the init script takes a while. I'm showing things out of order, my apologies.
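The nodes-ready script itself isn't shown; a minimal sketch of the check it performs, assuming the JSON shape that `kubectl get pods -o json` returns (pod names are hypothetical):

```python
def pods_ready(pod_list):
    """Given the dict form of `kubectl get pods -o json`, return
    {pod_name: True/False} based on each pod's Ready condition."""
    out = {}
    for pod in pod_list["items"]:
        conds = pod.get("status", {}).get("conditions", [])
        ready = any(c["type"] == "Ready" and c["status"] == "True" for c in conds)
        out[pod["metadata"]["name"]] = ready
    return out

sample = {"items": [
    {"metadata": {"name": "redis-node-0"},
     "status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"metadata": {"name": "redis-node-1"},
     "status": {"conditions": [{"type": "Ready", "status": "False"}]}},
]}
print(pods_ready(sample))  # {'redis-node-0': True, 'redis-node-1': False}
```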
D
If I remove our pods, I then need to make sure the sentinels on the virtual machines don't have any knowledge of those pods. This effectively enables me to quickly reset the virtual machine infrastructure to a known state where only those virtual machines know about each other. So I've got a reset-vm-replicas command in here, so we now have a situation, or rather, a repository.
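The reset-vm-replicas command isn't shown on screen; one way to sketch the idea, using the real `SENTINEL RESET <pattern>` command, which makes a sentinel discard its known replica and sentinel state for a master and rediscover from scratch (the VM names, master name, and port here are assumptions, not taken from the repository):

```python
# Hypothetical VM names; the real repository drives this via its own scripts.
VMS = ["redis-node-01", "redis-node-02", "redis-node-03"]

def reset_commands(vms, master_name="mymaster", sentinel_port=26379):
    """Build the redis-cli invocations that make each VM's sentinel
    forget previously discovered replicas for the named master."""
    return [
        f"redis-cli -h {vm} -p {sentinel_port} sentinel reset {master_name}"
        for vm in vms
    ]

for cmd in reset_commands(VMS):
    print(cmd)
```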
E
We started looking into this IP address business, which I'm kind of suspecting might be an edge case, or some kind of bug in redis sentinel itself, and we'll likely need to dig into the code for that a little bit and see if we can figure out which sequence of events is causing that to be an IP address and not a host name, which is what we'd much rather have in that case. So yeah, I feel like we're on a good path.
D
Now we need to keep in mind that the use of host names is a relatively new addition to redis. It was introduced in 6.2, which is what we're currently trying to target and utilize here. And I guess some of the reasoning behind some of these decisions is the current version of the helm chart we use. I think, Igor, you found out that when there's a failover, bad things occur, or rather the right things do not occur; in certain use cases, failover does not properly work.
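For reference, the redis 6.2 host name support discussed here is opt-in via a few directives; a minimal sketch of the settings involved (the announced name is a hypothetical example):

```conf
# sentinel.conf: both settings default to "no" in redis 6.2
sentinel resolve-hostnames yes
sentinel announce-hostnames yes

# redis.conf on each replica: announce a name instead of an IP
replica-announce-ip redis-node-0.example.net
```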
D
However, in the last week the latest version of the helm chart introduced a new bug where you can't spin up a new cluster. So there's another fun piece of information related to that, but I created an issue for it.
D
The
reason
why
we
want
to
go
with
host
names
instead
of
using
ip
addresses
for
everything
is,
I
guess,
there's
two
reasons
for
this
one.
We
know
what
we're
talking
to
at
all
times,
because
we
have
a
host
name
instead
of
an
ip
address.
We
know
through
as
an
sre
what
are
redis
nodes
and
what
are
not,
because
we
allocate
network
address
space
for
this,
but
that's
going
to
be
a
little
bit
more
difficult
inside
of
kubernetes
land,
where
we
observe
one
huge
block
of
ip
addresses
for
all
pods.
D
But if you go back and look at the issues associated with it, they were seeing problems with being able to talk to pods appropriately, so they decided to go the route of using host names instead. They've hard-coded the options to force you to use host names everywhere, which is fine, but it's not configurable, so we're kind of stuck with it. We could probably try to make it configurable.
D
But I don't know enough about the history to feel comfortable enough with making the change, making it configurable, and then testing all the things, to present that as a solution for the bitnami engineers to review. So I think going with host names is viable; we just need to make sure it works properly for us, which is what we're currently testing.
D
But this also leads me to the other thought process: are there other methods that we should be talking about? Last year, before the holiday break, we were talking about other methods, such as using Consul, and there's another proposed option which I forget off the top of my head. But, you know, I couldn't get Consul working at the time.
C
I mean, are we at the point where we don't have other tasks or other things? Like, we're not ready to make the shift anyway for at least a few weeks, right? We've got to do the redis upgrade, and we have the bitnami chart. So if we just kind of sat on this for another week or so, would we have other useful work we could be doing?
D
We're certainly not sitting idle; there's still plenty of work for us to work towards. I just want to make sure the priorities, in my mind, are the right ones to be working on. So, you know, I'm keeping track of the pull requests we've got going on with bitnami for this change that we're pursuing. We need to upgrade redis, and Igor was working on a redis upgrade procedure in general.
D
We need to push that through all of production, you know, through all of our environments, and we also need an omnibus upgrade as well, because omnibus is currently pointing at 6.0. And then we also have a configuration change that is required to enable this to work properly, which our omnibus configurations do not have; so that's another merge request inside of omnibus that needs to be brought in. So no, we're not at a point where we're out of work to do; it's more of...
E
I can't do that right now. So, from what I've seen, this looks like the most promising and viable option. And the host names: well, for one, it's kind of the direction that bitnami are pushing towards, but I think there's other reasons why we want to use host names as well.
E
The reachability of redis from outside of Kubernetes is the main one that I'm thinking of. So I think this solves more than one problem that we have. And it's not like, if this doesn't work, we don't have other options; I think we can probably find some other workarounds if needed. But I think this is the most promising path that we have right now, so I think we should explore it further and hopefully it'll work out. How's that for a confidence boost?
C
Yeah, that sounds good. I think, especially, maybe if we didn't have the redis upgrade, or, you know, we were getting much closer to wanting to take action on this stuff, then we could sort of review. But at this stage I think we've got a bit of time.
F
I have a question. Okay, sorry, go ahead. Go ahead, me first! Okay, so I have a question about the host name usage. When we talk about using host names, are we talking about assigning meaningful host names, or is it more something like redis plus a random hash? So are we using short-lived DNS entries that would just be regenerated, and it will be sentinel's business to make sure we are talking to the right host name?
D
Something to that effect, followed by some identifier. Our helm chart, or rather the bitnami helm chart, puts the word node followed by a number, and because it's a stateful set, it's going to be incremental, so it's 0 through however many replicas we have. So in this case the replicas will be a redis-node-0, a redis-node-1, a redis-node-2, etc.
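That ordinal naming can be sketched in one line (the prefix here is an assumption; the bitnami chart derives it from the helm release name):

```python
def statefulset_pod_names(prefix="redis-node", replicas=3):
    """StatefulSet pods get stable ordinal names: <prefix>-0 .. <prefix>-(N-1)."""
    return [f"{prefix}-{i}" for i in range(replicas)]

print(statefulset_pod_names())  # ['redis-node-0', 'redis-node-1', 'redis-node-2']
```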
D
It gets the host, it gets the name of the pod, adds the name of the deployment, followed by that suffix. So we'll have a redis-node-0.redis.gke.predicatelab.net, and those IP addresses get updated by the external-dns project. So if a pod gets removed, deleted, purged in any way, shape or form, we have to wait; well, one, we have to wait for a replacement pod to come up, and it will pretty much be guaranteed to have a different IP address. So as that pod is booting...
D
If
external
dns
hasn't
come
around
and
updated
that
dns
entry,
that
pod
will
probably
keep
failing
until
that
happens.
From
my
experience
so
far,
external
dns
runs
on
a
one
minute
cycle,
so
every
minute
it's
keeping
itself
up
to
date.
That's
just
based
on
looking
at
logs.
I
don't.
I
haven't
actually
looked
at
the
configuration,
so
there
could
be
a
solid
point
in
time
where,
for
a
full
minute,
a
pod
might
be
in
a
failing
state
before
external
dns.
D
Finally it kicks in and says: hey, go update the DNS entry, and then the pod will start magically working. And I have seen that problem, where I've seen a pod get restarted three or four times before it finally gets up and ready to go. And then there's a third thing I wanted to say, but it took me so long to get to it, I forgot.
F
Okay, so I can continue with a question on the things that you told me. So thanks for sharing this. The question is: do we interact with pods using those externally provided DNS names only from VMs, or even within Kubernetes do we interact using those external DNS names? So the thing is: are we talking about this not being reachable?
E
I can also add something to that. So the way that we connect to redis is by using service discovery through redis sentinel: sentinel decides who the current primary is and is the authoritative source of that information. And so the client will first connect to sentinel and get that from one of the sentinels.
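The lookup described here is redis's `SENTINEL GET-MASTER-ADDR-BY-NAME`; a minimal sketch of what a client does with the reply (the master name and reply values are hypothetical):

```python
def parse_master_addr(reply):
    """SENTINEL GET-MASTER-ADDR-BY-NAME returns a two-element array:
    the announced address (an IP, or a host name when announce-hostnames
    is enabled) and the port, both as bulk strings."""
    addr, port = reply
    if isinstance(addr, bytes):
        addr = addr.decode()
    return addr, int(port)

# With announce-hostnames enabled we expect a name here, not an IP.
reply = [b"redis-node-1.example.net", b"6379"]
print(parse_master_addr(reply))  # ('redis-node-1.example.net', 6379)
```

Client libraries wrap exactly this: ask a sentinel for the current primary's address, then open a normal redis connection to it.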
F
Okay, so that was exactly the thing that I had in mind: they announce themselves with the external DNS entry, correct? So everything is bound to this refreshing thing, and if that goes okay... But I'm just thinking now, this is not a real problem, because if it was a master that's unreachable, then we have a failover event, so the master will switch to a new one, which is supposed to be healthy. And yeah, if it's a secondary...
E
Yeah, I think so too, and the pods should be long-lived, right? So as long as the pod has been around for a minute, the DNS should at that point reflect the actual IP of that pod. So it's really more during maintenance events, where we're recreating pods, that I'd expect us to run into some challenges with this; but yeah, during normal operations...
E
I agree with you, it's not so concerning.
F
Yeah, because I was thinking about the other aspect of this that was kind of concerning; that's a strong word, but it's what's catching my attention. It's the TTL of those entries, as well as the load on the DNS system. Because, to be honest, I've been in a situation with this, where I had a service discovery system that was basically killed by the number of DNS requests, because they had to keep the TTL low for the service discovery to work.
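To put rough numbers on that concern: the worst-case window in which a client can still resolve a stale address is roughly the external-dns sync interval plus the record TTL. A back-of-the-envelope sketch (the one-minute interval is from the logs mentioned earlier; the TTL value is a hypothetical example):

```python
def worst_case_staleness(sync_interval_s, ttl_s):
    """Upper bound on how long a stale record can survive: the pod dies
    just after a sync, the next sync corrects the record, and a resolver
    may cache the old answer for up to the TTL on top of that."""
    return sync_interval_s + ttl_s

print(worst_case_staleness(60, 300))  # 360 seconds
```

Lowering the TTL shrinks that window but raises query load on the DNS system, which is exactly the trade-off being described.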
D
The one thing I did decide to do, because we are now adding external-dns as a dependency for this project, was to create an issue such that we ensure we complete a readiness review for external-dns, because that was never done when it was introduced to our clusters. It was also introduced for something that's not as important as .com.
D
So
I
am
continuing
to
test
migration
strategies,
I'm
going
to
try
to
work
with
igor
as
much
as
possible
to
figure
out
if
we
need
to
do
anything
regarding
this
random
ip
address
issue
that
we're
running
into.
But
we've
created
an
issue
for
that.
E
Yeah, I want to see if we can make some progress on this IP address thing, and I'm going to be nudging the 6.2 omnibus bump along, because I think that's something we just need to get ahead of; it's going to be a hard requirement.
B
I started with the observability for gitlab-sshd today; I got onboarded by Andrew to basically build the panels, and I opened a merge request. The performance piece is going to be done by Igor from the Source Code team. So basically we are waiting; I think we have a deadline next week to complete both, and hopefully it will be done, the observability and the performance, and then the next step is to take it to production, I think. So fingers crossed.
B
This
is
a
question
I
don't
know
how
to
answer.
To
be
honest,
I
think
they're
in
this
review
skype
left
comment
regards
the
readiness
review
on
the
issue.
I
will
check
it
and
maybe
like
I
will,
I
don't
know
the
answer
for
this
question.
D
Yeah, I just wanted to make sure that it gets completed. There was at least one open thread about performance, and, you know, I want to make sure that we are not going to drown production and kill SSH access to gitlab.com.
D
I
haven't
seen
any
results
of
any
sort
of
testing
that
was
done
in
sort
of
staging
at
all,
so
it
would
be
it'll
be
wise
to
have
that
information
as
part
of
the
readiness
review
prior
to
us
going
to
production-
and
I
think,
there's
one
other
item
still
left
to
do
related
to
monitoring
what
shock.
Might
I
think
you
said
you
were
working
on
so.
C
And I think, to answer the question generally: yes, I think Source Code should own this as much as they can. There may be things on there that they are unable to complete, in which case they can ask for help, but I think we should set the expectation that, as much as possible, particularly for things like performance, they should be completing as much of this as they can.
C
Cool. Next week as well, you've got release manager duty, so if you need help, please just ask, because you don't have to try and juggle projects and release management. You know, we have other people who can help; we can either pause this project or bring other people in to help if there's too much going on.
C
Henry, I was going to ask you about the dev discussion: are there other people on this call we specifically want who might have expertise? Because we want to avoid having duplicate meetings, since we have the APAC demo tomorrow.
A
I just put this in because Graham faced those questions and he wanted to bring this up in the Create meeting.
A
Because it's for tomorrow, and I'm not sure if everybody would be there.
A
I thought I could just put it in here already, so we can start to look at this at least; but maybe the best would be to asynchronously answer or discuss those questions you had in the issue. I just copied into the discussion here what I put in the issue; I just wanted to make you aware of the things that you need to consider now if you want to move the registry out from dev to GKE.
C
It's on the agenda below; we've got the APAC agenda just below this one. So...
C
The comments are there, so tomorrow, well, my morning, UTC morning, when we have the APAC/EMEA demo, we can discuss. So yeah: add questions, add comments there.
A
I didn't see that you have this already on the APAC one. Okay, cool.
C
Cool. Is there anything anyone wants to discuss? Otherwise...
C
The other thing is: since we do have the APAC demo tomorrow, if there's anything that you would like to see demoed or discussed with Graham, then please feel free to add it on here. There are some things going on around improving K8s workloads; you may have questions or suggestions, or things that we could demo or discuss tomorrow, even if you're going to be async. So have a think about whether there's anything else that we can add into this agenda.