From YouTube: 2022-06-22 GitLab.com k8s migration EMEA/AMER
A
Good morning, everyone. Welcome to the June 22nd Kubernetes demo meeting. I wanted to do a little demo of Tamland after we had some node pool updates. Unfortunately, when we made some changes to our Git node pool, we dropped some labels, so Tamland is missing information from its last run and I don't have anything to demo. So I'm just going to remove this from our agenda.
A
And we'll skip to the discussion items. Item number one is walking through our HAProxy tooling changes. This is related to the IP saturation for our clusters. The goal here was that we would have the ability, via a tool called set-server-state, to tell all of our HAProxy frontends that we want a backend to go down on purpose. This should enable us to send all of the traffic for a given cluster to our other backends.
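
A minimal sketch of what that flow could look like, assuming HAProxy's runtime API ("set server <backend>/<server> state ...") spoken over each frontend's admin socket; the hostnames, port, and backend/server names here are illustrative, not the real set-server-state tooling:

```python
import socket

# Hypothetical HAProxy frontend admin endpoints; illustrative only.
HAPROXY_FRONTENDS = [("fe-01.example.internal", 9999),
                     ("fe-02.example.internal", 9999)]

def set_server_state(backend: str, server: str, state: str) -> None:
    """Ask every HAProxy frontend to put backend/server into the given
    runtime state ('ready', 'drain', or 'maint')."""
    assert state in ("ready", "drain", "maint")
    for host, port in HAPROXY_FRONTENDS:
        with socket.create_connection((host, port), timeout=5) as sock:
            sock.sendall(f"set server {backend}/{server} state {state}\n".encode())
            print(host, sock.recv(4096).decode().strip())

# Deliberately take one cluster's backend out of rotation ahead of maintenance:
set_server_state("canary_web", "gke-cluster-c", "maint")
```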
A
Currently, I cannot demo my changes: I introduced a bug while working on this and haven't figured out how to fix it yet, so that's still a work in progress. But before I go on to item number two, does anyone have any questions about my goals for this particular issue?
B
Sorry, I do; I'd just like to press my button. Sorry, yes. I haven't been following this epic super closely, but what's the goal we're trying to achieve with this improvement?
A
I want to limit the amount of change to the traffic going to canary; I don't want to change the HAProxy landscape across the entire board. I still want to rely on all auto-deployments running tests inside of canary, and I want to make sure QA is still running against canary, so that I'm not a blocker for auto-deployments. This change is going to enable us to be more incremental about which clusters are shut down, hopefully preventing auto-deployment blocking situations.
A
So when it comes time for deploying, we have a method to ensure that if a cluster is down, we don't bail in our deployer pipeline: we would set a specific environment variable. I've got it in this issue; let me pull it up just so I can read it out loud. We have an environment variable in the deployer called allow-k8s-failure, and all it does applies when we send the trigger to the k8s-workloads pipeline.
A
We wait for the pipeline to return a success or a failure, and the deployer will bubble that information up to us. If we set allow-k8s-failure to true, we ignore all failures in that pipeline, which is not precisely what I want here. If there's a problem in cluster B but we're performing maintenance on cluster C, I don't want us to just skip over that failure or let it go unacknowledged; I still want the deployer to tell me that there is a problem somewhere.
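
In pseudocode, the current all-or-nothing behavior amounts to something like the sketch below; the variable spelling and helper function are assumptions, not the actual deployer code:

```python
import os

# Assumed spelling of the flag; the real deployer's name may differ.
ALLOW_K8S_FAILURE = os.environ.get("ALLOW_K8S_FAILURE", "false") == "true"

def handle_k8s_pipeline_result(cluster: str, succeeded: bool) -> None:
    """Bubble the k8s-workloads pipeline result up to the deployer."""
    if succeeded:
        return
    if ALLOW_K8S_FAILURE:
        # The catch-all problem: this swallows failures in *every* cluster,
        # including ones that are not under maintenance.
        print(f"ignoring failure in {cluster} (ALLOW_K8S_FAILURE=true)")
        return
    raise RuntimeError(f"k8s-workloads pipeline failed for {cluster}")
```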
A
So what I want to propose here is setting some new environment variable that may change the way our auto-deployment pipeline gets created, or perhaps changing the way our k-ctl script operates, such that if we are targeting a cluster for maintenance, we allow it to fail gracefully.
B
As part of that subject, I would also consider whether we just take away allow-k8s-failure and replace it with the new thing, or whether you modify it, because we only kept it around for the possibility that a cluster is unavailable and we would want to skip it, which sounds like the behavior you're describing here. Although we don't have it now, it sounds like we would want the same thing, where we can deliberately say zone C is known to be unavailable, but everything else still counts.
A
Yeah, the allow-k8s-failure environment variable is kind of a catch-all, and I don't want this to be a catch-all; this is going to be for planned maintenance. Assuming we don't have a cluster fail on us, which luckily for us has yet to happen, and since this is planned maintenance, I want to be able to have the mechanism in place ahead of time. That way, the release managers operating at that moment in time don't have to worry about things failing unnecessarily.
A
We're not pinging people unnecessarily; we're managing this ahead of time as part of our procedure when we go do some sort of routine maintenance. That's my goal here, without hiding failures in other realms. Implementation aside, since I haven't yet picked up this issue: I'm going to work on the HAProxy tooling improvement first, but this will be the next item I pick up for this workload, and after these two items are done, the next goal is to actually perform some sort of test on staging.
C
Maybe a simple way to implement it is to make the variable a string instead of a bool, or allow it to accept the name of the cluster that we want to allow to fail.
A
I just haven't worked out the implementation details yet, but I was thinking of something that allows us to specify an array, because there could be a situation where we operate on two clusters at a time. But I figured I could iterate on that later, because I don't imagine us wanting to do that; it might be a little too much. So far, the research I've done just targets over-saturation if one cluster goes down.
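
A sketch of that proposal, assuming a hypothetical comma-separated variable naming the clusters whose failures are expected; only those clusters get a pass, and a failure anywhere else still stops the pipeline loudly:

```python
import os

# Hypothetical variable name; a comma-separated list also covers the
# "two clusters at a time" case without a catch-all boolean.
MAINTENANCE_CLUSTERS = {
    name.strip()
    for name in os.environ.get("CLUSTERS_IN_MAINTENANCE", "").split(",")
    if name.strip()
}

def failure_is_expected(cluster: str) -> bool:
    """True only for clusters deliberately targeted for maintenance."""
    return cluster in MAINTENANCE_CLUSTERS

# CLUSTERS_IN_MAINTENANCE="gke-cluster-c" tolerates a failed deploy to
# gke-cluster-c, while a failure in gke-cluster-b still fails the deployer.
```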
B
Do you want to get that issue scoped? I know you mentioned you'd pick it up as the next thing, and Matt and Jenny, I know you two are both busy on stuff at the moment, but it would be worth getting it ready and sitting on the board, so that whoever becomes available, if you fancy having a shot at it, can take it. It's there if anyone wants it.
B
You'll need to ask a load of questions, for sure; I know there's not that much context. But we're around to answer questions if you have any.
A
Sorry, okay. I was looking at the wrong label when I asked that question. Okay, so that's all I've got. Does anyone have any further questions before we move on?
A
All right, Ahmad, tell us the good news, man.
C
Thank you. The rollout of gitlab-sshd finished yesterday, so gitlab-sshd is fully on production and working, no issues so far. So that's the good news, and there is no bad news, unless anybody has any questions.
C
Thank you, yeah. As I said, I think I told you yesterday: ever since I joined GitLab, gitlab-sshd was the thing, and finally it's on production. If anybody has any questions, just shout out.
C
The second item is Camo proxy. Camo proxy is also on production; Vlad and Henry worked on the steps and everything, so kudos to them. It's now in production. Kudos also to Skarbek, who rolled it out. And to address your question, Amy: I don't think it's behind Cloudflare. It's just on Kubernetes, working with the internal ingress.
A
I do want to send a special thanks to Vlad, as well as Henry, even though he's no longer with us. The fact that the change request was written in a way that I could follow, without having to really worry about what could potentially go wrong, is pretty fantastic. If you all were hit by a bus, I could have taken this change issue and run it with minimal questions.
A
There was one typo, but aside from that, I was able to roll through this without any issues, despite the fact that I wasn't part of the testing in staging or of thinking about what needed to be done ahead of time in production. I greatly thank you for writing a procedure as precise as this; it was fantastic to roll through.

Thanks for saying that, Skarbek; that's great feedback, I'm happy to hear it.
B
Yeah, well done all round, and kudos for everything as well. Vlad, I'll leave this one with you, but just as a consideration for the deny-list option: if there were a way to get Camo sitting behind Cloudflare, that could be a solution, because we currently do all of our deny-listing at Cloudflare.
B
So it might actually be a nicer and more obvious place to look: if you're the SRE on call and you get paged on something, I think Cloudflare is normally where we'd look to do deny-listing. So it could be worth seeing whether Camo could go behind it and what that might look like.
A
That might actually increase the performance of Camo proxy, because more data would be served from behind Cloudflare rather than by our services responding directly; that could bring some benefit in terms of network costs and bandwidth.
A
So I think that's a good follow-up issue for a future improvement. I did have the question of what we missed, and while this isn't something we necessarily missed, it is another future improvement: we currently deploy Camo proxy to a regional cluster. As small as this deployment is, it's probably not entirely worth the effort, but having it deployed to the zonal clusters makes more sense in terms of being able to divvy up some of that network traffic.
A
That traffic passes between us either way, but then again, because we're pointing directly at Google's GKE load balancer, it might be less of an issue. I'm not really sure, since this is using a different network load-balancing mechanism than what was available when we first started pushing our front-end workloads into Kubernetes. So it might be something we want to consider looking into.
A
Ahmad created an issue for this already; it's linked to the epic. In terms of something else that I know we did miss, and Ahmad already worked on this yesterday and today and pushed it out: we did miss labeling our pods. As a result, our metric system was missing some of the data it needed to calculate things like requests per minute. Ahmad pushed a fix for that, so hopefully our metrics are going to be good from this point. But I do want to put the question to Ahmad and Vlad.
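
For illustration, a Prometheus-based metrics pipeline typically keys on pod-template labels, so unlabeled pods silently vanish from the dashboards; the label keys and query below are assumptions, not GitLab's exact configuration:

```python
# Assumed pod-template labels; recording rules aggregate by labels like
# these, so pods shipped without them drop out of queries such as
# requests-per-minute.
pod_template_labels = {
    "app": "camoproxy",
    "type": "camoproxy",
    "stage": "main",
    "shard": "default",
}

# A PromQL-style RPM query that returns nothing for unlabeled pods:
rpm_query = 'sum(rate(http_requests_total{type="camoproxy"}[1m])) * 60'
```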
C
These few days, yeah. I think there is one item that Sean was talking about: we are missing the metrics for the container, like the CPU and the memory of the container. I think the image we use for Camo proxy doesn't expose these kinds of metrics. I think we could use cAdvisor, but then, if we are using cAdvisor, we need to integrate it with the application itself to basically make cAdvisor capture all of the metrics.
C
So I don't know the approach to actually doing this yet; I still haven't thought about it. But we are still missing some metrics there for the container itself.
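
For reference, once cAdvisor (or the kubelet's built-in cAdvisor endpoint) is being scraped, container CPU and memory show up under standard metric names; a sketch of querying them, with the Prometheus URL and pod name pattern assumed:

```python
import json
import urllib.parse
import urllib.request

# Assumed Prometheus endpoint; illustrative only.
PROM = "http://prometheus.example.internal:9090/api/v1/query"

# Standard cAdvisor metric names for container CPU and working-set memory.
QUERIES = {
    "cpu": 'rate(container_cpu_usage_seconds_total{pod=~"camoproxy-.*"}[5m])',
    "memory": 'container_memory_working_set_bytes{pod=~"camoproxy-.*"}',
}

for name, query in QUERIES.items():
    url = PROM + "?" + urllib.parse.urlencode({"query": query})
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(name, json.load(resp)["data"]["result"])
```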
C
I'm not entirely sure if we are missing anything else, but there are still a couple of issues on the epic, some minor ones, so I think in the future Matt or Jenny could pick up some of these issues.