From YouTube: 2022-06-08 GitLab.com k8s migration EMEA/AMER
B
Good morning, welcome to the June 8th EMEA Kubernetes demo meeting. There are just a few discussion points on the agenda today, so let's start off with Camo proxy. Ahmad, I know you've been working on this lately. Do you want to give us a little update as to what's going on?
C
Yes, I've been pairing with Skarbek since yesterday to add a network policy to secure the Camo proxy in Kubernetes, so the MR is ready. I think I just need to tackle the comments on the MR, and Graeme, I think, is going to review it, and then we are good to go.
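A minimal sketch of the kind of NetworkPolicy being described here; the namespace, labels, and ports below are placeholders, not the values in the actual MR. The idea is to limit who can reach the proxy inside the cluster and to keep the proxy from fetching from internal addresses:

```yaml
# Sketch only: namespace, labels, and ports are assumptions, not the actual MR.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: camo-proxy
  namespace: camo            # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: camo-proxy        # hypothetical pod label
  policyTypes:
    - Ingress
    - Egress
  ingress:
    # Only accept proxy requests from the in-cluster ingress namespace,
    # not from arbitrary pods.
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: nginx-ingress
      ports:
        - protocol: TCP
          port: 8080
  egress:
    # Allow fetching external images over HTTPS, but exclude private ranges
    # so the proxy cannot be used to reach internal services.
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 443
```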
B
So before we move on to the next item, I do want to clarify that the CR that was originally created, I kind of ripped it apart and created segregated issues. So we now have an issue that's dedicated to tracking the deployment of Camo proxy to production, not necessarily the pushing of traffic into Camo proxy just yet.
B
Sorry, I was being told my internet connection was unstable, so I don't know where we dropped off, but I've mucked with the Camo proxy epic and the issues to make things easier to track.
B
Okay, gitlab-sshd: let's talk about this for a hot second.
B
If we look at the latest update provided by, let's see, Sean, there appears to be a little bit of confusion around what 100% of traffic on canary actually means.
B
So I'm just going to share my screen, because I'm looking at this stuff: GitLab canary with a rollout of 50%, and then a rollout of gitlab-sshd on canary at 100%.
B
I think Marin was speaking in different terminology than what we as the technical infrastructure engineers understand. With us, GitLab canary, or excuse me, the canary stage, has been running with gitlab-sshd toggled on. Because it's a toggle, it's not a feature flag: it's not serving a percentage of traffic; canary is serving a percentage of production traffic. The configuration of gitlab-sshd has been turned on for all traffic going into the canary stage since we rolled it out, and we have rolled it to canary multiple times and rolled it back multiple times. But since, was it, Ahmad, two weeks ago? We rolled it out and it has sat that way, finally, at this point.
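In other words, the toggle is a stage-wide chart setting rather than a percentage rollout. A hedged sketch of what the canary-stage values might look like; the exact key path in GitLab's Helm values is an assumption here:

```yaml
# Sketch of canary-stage Helm values; the key layout is an assumption,
# but the point stands: this is an on/off daemon selection for the whole
# stage, not a feature flag served to a percentage of requests.
gitlab:
  gitlab-shell:
    sshDaemon: gitlab-sshd   # toggled on for 100% of canary-stage traffic
    # sshDaemon: openssh     # the previous default; rolling back means flipping this
```

How many users this affects is determined by how much production traffic is routed into the canary stage, not by this value.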
C
Yeah, and then you actually did the 100% rollout after that.
B
That was another test, so that test is separate. Canary has been running with gitlab-sshd 100% of the time since we rolled it out. So I think from this standpoint we're done. I think this is either a miscommunication or maybe a different thought that Marin had, so I think there's a little bit of miscommunication happening.
B
I do not want to push 100% of traffic onto canary. That would be a dangerous thing, because it messes with auto-deploy and increases risk when it comes to dealing with incidents. Our natural tendency with incidents is to identify whether canary is the root cause, and if it is, to drop traffic from it as quickly as possible.
B
You know, we've made that process really easy, and what I would hate for us to do is push 100% of the traffic to canary, suffer an incident, turn off traffic to canary, and now GitLab is just completely gone for anyone who's using the Git command line interface at all. So I don't want to do that. Our next step, in my opinion, is to push to the main stage.
B
Enable gitlab-sshd on the main stage. McKelly, you mentioned in a conversation between you and me that we need to make sure that gets updated down here, so I will certainly do that at the end of this meeting.
B
The
which
means,
if
that
is
the
case,
you
know
this
has
already
been
accomplished.
I
don't
know
why
that's
still
on
the
implementation
plan,
this,
as
far
as
I
know,
should
be
active,
so
I
don't
know
why
we
have
a
tbd
status.
B
This is not going to happen. This is technically incorrect from my viewpoint, so I think it should be removed.
A
I think the only thing that may be good for this audience to understand: if you're rolling out to production tomorrow, and staging is tonight or tomorrow, do we want to set the date for tomorrow so we can communicate it in the issue?
B
All right, good. So from my vantage point it sounds like we're all in agreement with where we are. Do we need to do anything with this issue to ensure that Sean and Igor and everyone else who's following the rapid action — do we need to do anything to make sure that everyone else is on the same page as we are?
A
No, I think I'll make sure with Sean that we communicate the stuff we just spoke about right now. Feel free to highlight the risk around the point that we're not going to go to 100 percent on canary. But apart from that, the date for the change request for production is the release tomorrow. I think we're set on that, and I don't see any other blockers, so I think we're good to go.
B
Okay, all right, so I'll follow up after this meeting like I said earlier, and then it sounds like you're good to go. Okay, cool. Any other questions related to gitlab-sshd before we move on?
B
It appears that during a deploy we get a lot of error messages coming from NGINX indicating that it is unable to connect to a given pod. And is this API traffic or is this web traffic? This is API traffic. So this is something that we experienced two years ago, after we first moved the API over into Kubernetes, and we identified at the time that NGINX was seeing an awkwardly load-balanced amount of traffic to some pods. NGINX receives traffic slightly differently than the rest of our services.
B
The traffic gets evenly routed to all the nodes in the cluster, versus all the traffic being sent to the service endpoint and then Kubernetes load balancing within itself. And what we discovered, either last year or the year before, was that because of this imbalance, sometimes we had two pods running on a node and sometimes we had one pod running on a node, and because the traffic was evenly balanced at the Google load balancer to the nodes, one pod would be suffering while the other two were just like:
B
Oh yeah, I can handle this load, it's fine. And while the deploy is occurring, NGINX is a little sluggish to update, because it's churning so much trying to serve that traffic that it is failing, or sluggish rather, to update its connection table and connect to the appropriate pods. So this is something that we investigated quite keenly.
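For context, the routing difference being described roughly corresponds to the Service's external traffic policy. A sketch with placeholder names, not the actual chart values:

```yaml
# Sketch of the distinction being described; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  # Local: the cloud load balancer spreads traffic evenly across nodes, and
  # each node only forwards to the pods running on it, so a node with one pod
  # receives the same share of traffic as a node with two.
  # Cluster: kube-proxy re-balances across every endpoint in the cluster,
  # at the cost of an extra hop and losing the original client IP.
  externalTrafficPolicy: Local
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```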
B
I'm going to try to shepherd the investigation for this, but not actively participate in it, just in an effort to spread the knowledge of Kubernetes amongst the rest of infrastructure. But this is also a tough problem, so I could understand if they need to come to us with further questions.
C
Is it related to the issue that Henry was investigating a couple of months ago?
B
This is a public meeting, so I should be cautious about that, but I don't know. I'm going to wait for Mayna to come back before we try to do anything else with this issue, but I'm not looking forward to it if it's going to be something that we need to spend a lot of effort on.
B
So, a relatively short agenda today. I guess, to end the meeting: does anyone have any interesting Kubernetes tidbits of information that they recently ran across in the last few weeks or so that they want to share, that might be fun and interesting to share with others?
B
All right, well, in that case we can end the meeting. So thanks for joining, enjoy the rest of your day, and I'll see you all later.