From YouTube: 2022-03-30 GitLab.com k8s migration EMEA/AMER
B: I guess Vlad will not join, so I will just tell you about camo proxy. Basically, we set up the tasks that we think we need to do in the epic, so we can have a look at that if we want. What we did so far is mostly talking through with Vlad how this is all working: how do we use our Kubernetes deployment repositories?
B: How are the pipelines working, and so on? He was looking into that, and we also met with Craig Miskell, who originally set up camo proxy, just to get some background on some of the decisions they made and to get a better understanding. The good thing is that camo proxy is straightforward. It's just a Go binary, so it's not really complicated.
B: The only little thing that they additionally added to it is HAProxy, because of rate limiting — or, let's say, because of blacklisting; we say denylisting now, sorry. So that is the one special thing. The current setup works the same way we add blocked IPs to our other HAProxy deployments, which is that odd security repository where there's a file you can put an IP into.
B: So this is an interesting feature that we need to think about rebuilding in Kubernetes, because it can't work automatically like this there. On the other hand, because we never used it, we think this can maybe just be an afterthought: get camo out right now, and then think about how we can build something similar, for example by updating some kind of ConfigMap with a list of URLs that need to be blocked. That should be fairly easy.
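For reference, a minimal sketch of that ConfigMap-based denylist idea — the name, namespace, and file key here are hypothetical, not the actual setup:

```yaml
# Hypothetical ConfigMap holding the denylist; names are illustrative only.
apiVersion: v1
kind: ConfigMap
metadata:
  name: camo-denylist
  namespace: camo
data:
  # One blocked source per line, mirroring the file-of-IPs approach
  # used in the existing HAProxy deployments.
  blocked.txt: |
    198.51.100.23
    bad-images.example.com
```

Updating the ConfigMap would then roll the new list out to whatever consumes it (the proxy itself, or the ingress mentioned below) without rebuilding anything.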
B: I think we just need to adjust our runbooks for that. The other thing is we also need to get an ingress in place, which can then take this value and actually block something.
B: So that's the only special thing. The next step: we talked about how to deploy this. We first thought we would go with Tanka, but then we decided to go with Helmfile, because most of our deployments are done with Helm, so that's kind of our standard. And even if we don't have camo in Omnibus right now, or in our Helm charts, maybe we'll add it in the future, so having a Helm chart maybe isn't a bad idea, and maybe it's even simpler to get started. Tanka would also have been nice, but we made the decision for Helmfile right now, and Vlad is just about to work on that.
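As a rough illustration of the Helmfile direction — not the actual repo contents; the chart source and values layout are assumptions:

```yaml
# helmfile.yaml — hypothetical sketch of a camo release managed via Helmfile.
repositories:
  - name: example            # placeholder repository hosting a camo chart
    url: https://charts.example.com
releases:
  - name: camo
    namespace: camo
    chart: example/camo
    version: 1.0.0           # pin the chart version explicitly
    values:
      - values/camo.yaml     # environment-specific settings live here
```

`helmfile apply` would then diff and reconcile the release, which matches how the other Helm-based deployments are driven.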
C: I think it was raised, but I don't think it's been fully decided. I think we should probably review what that would actually mean. I know there was an issue that did this, or it was a suggestion in amongst an issue. There are definite downsides — so there are some positives, but there are also some downsides, which is that it's all new: everyone will have to learn Jsonnet, which is not trivial.
C: If I remember correctly, Skarbek, that was in one of the many discussions around what we do with the K8s workloads — is that correct, the blocking nature? Yeah? Okay, because that's an epic which I think we should get set up, so we're able to evaluate it, because I don't think it's going to be a small change to resolve this. But I do think we need to get it resolved in the next couple of quarters.
C: So if there's a way we can contribute — if we have an epic already, or if we can contribute to an epic — to get a project set up so that we can actually resolve that state, that would be great.
A: The transition will be the part that takes the longest, just because we have so much in the gitlab-helmfiles repo.
D: One of the things that I've run into a few times with camo proxy is that the logging is not great, and if we have capacity, improving that side of things while we're touching camo proxy might be something worth looking at.
D: There's one change merged about a year ago that would already help quite a bit, but the main thing that's still missing is that there's no correlation ID, and the request and response are separate log lines. So we get the request and we get the response, but we don't know which one belongs to which.
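A hypothetical illustration of the missing correlation — the field names here are invented, not camo's actual log schema:

```yaml
# Two structured log events; a shared correlation_id is what would let us
# match a response line back to its request line.
- event: request
  correlation_id: 01FZ8YV4K7   # hypothetical ID propagated across both lines
  url: https://example.com/image.png
- event: response
  correlation_id: 01FZ8YV4K7
  status: 200
  duration_ms: 42
```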
B: Yeah, that's not easy. But I mean, the question here is just to get the newest version deployed, and I don't see big changes in the functionality that should break anything.
C: Yeah, I was going to say: Vlad's around until about the beginning of June, so depending on how much time we have after the migration, we could certainly ask him to help with some of this stuff.
B: For the Redis rate-limiting migration, the current state is that I'm working right now on the automation for our real environments — the first one is pre — to switch the configuration over in a rolling fashion, the same way we did it in our testing. We just need to adjust this a little bit. I mean, most of it is already there, but we need to adjust it to how our clusters are set up and the naming conventions we have in our production, staging, and pre clusters.
B: I'm currently working on that, and I hope it should be working today or tomorrow. Then we can do the switchover, and I guess I will rework the CR for that — the one you already created — with a few adjustments. The plan is to get this over into the runbooks repository, because right now most of the code is living in the sandbox testing repository.
B: That's really nice for testing, and it's tested there, but once it's working we should have this in the runbooks repository, because that is what we will use in the future to run any kind of cluster config migration.
D: Sorry about that, I think my headphones are acting up, yeah. So the next milestone that we're working towards is getting the hostnames setting rolled out to gprod, which we will need.
C: Awesome. So after we've got this on pre, is the next step then just a case of rolling it through staging and production?
C: Great. And then on point four, I just wanted to mention that I cancelled the APAC demo, which was scheduled for tomorrow — Graham's out sick. Hopefully that's fine. I think in the next few months that might be more difficult, but I'm expecting Vlad's time will be better spent on the camo proxy stuff than on that demo, and I don't think there was anything planned for tomorrow's demo. So hopefully that's fine.
C: Yeah, I can give you a quick update. I chatted with Graham this morning, and there are a few bits that need to be resolved; I believe Philippe and Distribution are working on those. It looks like there is a bug on the Distribution side. So at the moment I don't think we need to be involved — I'm not aware that anything is with us.
C: Graham wasn't working actively on anything, so I can check in later today, but my expectation is that it's linked to that, yeah.
B: I'll check the state — and I think he's going out of office.
C: Yeah, awesome. Let me see what I can do about getting some updates, but my expectation at the moment is that Philippe and Distribution are working on it.
C: The main thing is the bug around the Patroni cluster.
C: He doesn't think that's going to be an issue, from what I understood in the Slack channel, so that bit looks like it might be fine, but I'll get him to check it out. I think it's just those two bits, which he seems to have a good handle on — so, excellent, I'm hopeful that this will go through. Do you know, Henry, when Registry needs to do the bump?
B: So yeah, we had one comment for testing in staging — I think they want to bump something.
C: Of course, yeah — that makes sense. Okay, I'll see what I can do to get an update on this one, and I'll update today, so we can get this unblocked in the next few days.
B: Yeah, cool. I mean, it's just good that we'll soon have this weekly update of the charts instead of these massive updates. Unfortunately there was this really big change with the renaming, which I think is really making things hard.
C: Exactly, yeah, exactly. So I chatted with Graham a bit this morning, and I think there are probably a couple of things here. Once we have a more frequent, weekly update, then hopefully when we get something like an incident that needs a chart bump, the person doing that change will be able to just push it straight through to production and not get caught up in all these extra changes.
C: There may be a future thing, though, that we want to act on. The chart bump process is really different from our auto-deploy process. In the auto-deploy process, a developer makes a change, checks it as best they can, and pushes it in, but if they have problems on the deployment, we just revert it straight back out and hand it back to them. The chart bumps are a bit different, in that we seem to be the ones ending up resolving things to make the change work.
C: So I think we also want to look at that process and work with Distribution — not just Distribution, though they're obviously making a lot of the changes — to see how we can work with the people who make the change, so that if something's not as expected, we actually get to hand it back to them.
C: The codebase — I think we probably do, you know.
C: I think we should aim for that, though. Actually, you might have a thought on this: is there a "too many chart bumps" problem? Like, if this were continuous deployment, would that be painful for people using the chart?
A: There could potentially be that situation, but if we could set it up like we do with auto-deploy, I would probably encourage it to be its own auto-deploy process, just because.
A: And realistically, we really need to figure out a way to enable auto-deploys, chart bumps, and configuration changes to get queued up in a way that we're not running on top of each other when we have others — the container registry, once we start auto-deploying that, and, hey, what's the other one that we wanted to start on, KAS?
C: Exactly, yeah, absolutely. Do you have enough of a feel for — so we've got a few kind of similar-but-different problems. We have the blocking nature of the K8s workloads, where they're pushing application changes and config changes in the same pipeline.
C: We have the chart bumps needing to become auto-deploys, and then we have things like Registry — the kind of pain we see particularly with Registry at the moment, because they're actively developing but doing the manual bumps. In terms of those projects, what would be your preference on which is the most painful problem right now?
B: Chart bumps. Chart bumps are super heavy, and it's hard to predict what is breaking, because just from the Helm diffs you can't really determine what will happen. You need to check and understand what's going on and what Kubernetes objects this is generating, so it really always feels like a big risk, and it's really hard to review.
A: Yeah, to an extent. I think we could figure out a way, because there are certain changes that show up in the Helm diff during a chart bump that are very unnecessary, and I think we would need to work with Distribution to figure out if there's a way to work around that. One example of this is that the version gets bumped inside the values of each of our charts, which, for us — because we're trying to deploy the latest — means we don't really care which version is running.
A: What in our Helm charts is relying on that version number? And if we do have some reliance on it, then we might not be able to — so maybe we need to do some sort of development inside of our Helm charts to say, hey, use this version, or this is an auto-deploy because of this thing.
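A tiny sketch of that idea — the field names are illustrative, not the real chart structure: if the deployment supplies the version-bearing values itself, a chart bump's change to the bundled defaults no longer shows up as a spurious diff.

```yaml
# Hypothetical values override: we set the image tag ourselves, so the
# default-version bump inside the chart produces no change in the rendered
# manifests for this field.
gitlab:
  webservice:
    image:
      tag: auto-deploy-20220330   # set by our pipeline, not the chart default
```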
C: Okay, so I think it sounds like — this is kind of interesting as we're thinking about the future of Delivery and sort of splitting the team. We kind of have two problems that run in parallel and maybe fit with the areas for splitting. One is the chart bumps, which is super painful for us in terms of the clusters and how we actually get changes there.
C: And the other one that's super painful is the K8s workloads blocking on the auto-deploys, which is painful for release managers even though it's just a retry. That's going to become more problematic when we get to automated deployments, because the great thing about automated deployments is that they'll just run and you don't need to watch them. But what we're going to see is a lot more unexpected failures, which will become more visible, because you won't necessarily be watching the pipeline.
A: I think at some point, to address that — or at least one of the options to address that — we may need to consider splitting out our deployments. So, for example, we could remove some of the blocking nature by ensuring the GitLab Registry is deployed all by itself, because then it could run on a completely different schedule than anything else. Yeah.
B: I think one problem is that our deployments take pretty long already, because we have very big deployments for api and web and Sidekiq. It really takes a long time until a new state is deployed there, and if we could reduce this time by splitting it up in some way — I don't know if that would make things too complex, but if we were able to just be faster, instead of waiting half an hour until something gets deployed, then we could do it more often.
B: Right now we really can't do too many config changes and auto-deploys and so on at the same time, because they take a long time and then there's no gap in between where you could fit something in. If there were a way to do this in smaller pieces, and faster, that would be cool. I'm not sure what would be a good solution there — splitting it up into more clusters, maybe, or something like that.
A: Those take between 10 and 20 minutes per cluster, so it's not terrible. I think the majority of what takes the time is just the rotation of the pods, because some of our workloads, like Sidekiq, run 170-ish pods, so rolling through all of those just takes a decent enough amount of time. That's not concerning in itself, but it's something we can't easily control, because it's Sidekiq and we don't want to kill too many at the same time.
A: We probably could, yeah, because we've already modified it — we're straying away from the default so that we're spinning up more new pods before we start tearing some down. So maybe one option is to ask Kubernetes to spin up a few more extra pods initially, and then we could start tearing down a few more, faster. That's potentially worth experimenting with. I think that would still be a little bit difficult for Sidekiq, just due to the nature of how Sidekiq works, especially for stuff like project imports and exports.
A: I'm sure there are other Sidekiq workloads that fall into the category of "we should not do it this way", but at least our web services and such are prime candidates for that exact solution.
C: In terms of pulling all this stuff apart and getting it to become projects that we can prioritize, it seems like this is quite tangled and difficult. Is that everyone else's sense — that we have a lot of epics and issues, and we don't necessarily have a clear set of projects that we can just prioritize for solving these?
A: I would love for us to — I don't know, everything kind of competes with everything else. The desire to make this one repo better doesn't help us with other projects, for example, but we've wanted to do that stuff for a really long time. And now this new thing that we're discussing here is coming up, and that might change the face of how we approach things. So I'm not really sure how.
C: I know you can do it in an MR, but I think we're churning so much that it might be quicker just to do a first draft dropping all of our thoughts in a doc and actually sort out: what is the problem, where are all the pieces, what do we have as ideas or pain points — no particular structure necessarily needed — and see if we can start to piece these together into projects. The chart bumps feels like a distinct project, which is great. The K8s workloads feels much more messy, and I know we have lots of epics that cover that, but there are probably some dependencies between these pieces as well.
C: Awesome — let's start that. I'll put one together with some context, so that Graham can also have the context, and Igor, you're very, very welcome to also contribute in there. It's not a Delivery-exclusive thing; I will open it up, and I'm sure Jeff will have some thoughts. The goal is: how do we untangle these pieces that we all regularly work on and that we know are quite painful?
A: Amy, with that in mind, a question regarding priorities: I noticed on our build board we had the desire to rebuild a bunch of node pools because they're missing the appropriate labels. This was an epic that was spun up by Andrew maybe almost a year ago at this point. I'm wondering if we should remove that from Delivery and push it towards Reliability.
C: We certainly can. It has been requested that Delivery and Scalability pick up some of this stuff, and it's nice to share — I'm not against anyone else doing it, but I know Reliability have a lot of their own priorities to handle. So it is something that we should be working on across Delivery and Scalability.
C: Having said that, I have it on the board as kind of a "if you want to pick up a small thing, there is something there". We probably don't want to just neglect all the future-thinking stuff — the board will actually run out of items relatively soon — and we should not neglect Q2, which is coming up fairly rapidly at the beginning of May. So don't feel like you always have to be picking up issues and pushing them through.
C: Certainly, I think for everyone here, I'd expect a decent bit of your week is probably on longer-term thinking, setting up projects, and helping with this stuff.
C: They will probably remain in Delivery, but it's fine if they don't get picked up right this minute.
C: I know we have lots of things in progress, but if Q2 started on Monday, I don't think we'd have two or three projects that are just set up and ready to go. So we definitely do need to spend a bit of time getting there.
C: Yes, thanks for the chat, everyone — that was actually super interesting.