Description
To support issue https://gitlab.com/gitlab-org/gitlab-shell/-/issues/563
We performed a packet capture on a single Pod servicing GitLab Shell in Canary.
Our intention here is to do a packet capture on gitlab-shell in Canary during a time when it is receiving a very tiny amount of traffic. So the first step I want to take is to edit our ConfigMap, and the reason I'm doing this is that I want to add debug logging. This is currently not a configurable option inside of our Helm chart, so I'm just adding it manually.
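A minimal sketch of that manual edit, assuming the release lives in a `gitlab` namespace and the ConfigMap is named `gitlab-shell` (both names are illustrative, and the exact config key depends on the chart version):

```shell
# Open the ConfigMap in an editor; namespace and name are assumptions.
kubectl -n gitlab edit configmap gitlab-shell

# Then, in the gitlab-shell config section, raise the log level by hand, e.g.:
#   log_level: debug
# The pod needs to restart (or re-read its config) for this to take effect.
```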
So currently our HPA is set to a min replica count of eight and a maximum replica count of 150. We're going to patch this so that both of those values are one. That way our current pod count gets scaled down to just one. It should now be patched. So if we do a get, we can see the min and max are set to one respectively.
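The patch described above could look roughly like this (the HPA name and namespace are assumptions for illustration):

```shell
# Force both bounds of the autoscaler down to a single replica.
kubectl -n gitlab patch hpa gitlab-shell \
  --patch '{"spec": {"minReplicas": 1, "maxReplicas": 1}}'

# Verify: MINPODS and MAXPODS should both show 1.
kubectl -n gitlab get hpa gitlab-shell
```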
That is what I want to see. Just for further validation, let's exec into that pod.
tcpdump is now installed, so we are going to do a full host capture; we're going to capture every piece of traffic going in and out of the pod. This also includes my SSH traffic, which is fine. Our goal here is to capture as much as possible, so that we can interrogate the capture file afterwards.
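The full host capture might be run along these lines (the pod name and output path are illustrative):

```shell
# Exec into the single remaining canary pod.
kubectl -n gitlab exec -it gitlab-shell-canary-0 -- /bin/bash

# Inside the pod: capture all traffic on every interface, full packets
# (-s 0 disables snaplen truncation), written to a pcap for later analysis.
tcpdump -i any -s 0 -w /tmp/gitlab-shell.pcap
```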
So the saturation of our HPA being 100% is expected, because I set it to a value of one so it doesn't autoscale.
We're below our latency threshold; we do see, you know, 0.04 errors.
We could look at our network traffic to determine whether or not something goofy is happening. Ideally, it would be best if we had a packet capture for our front-end node as well, the HAProxy node that's sending the traffic, but that's going to contain a lot more traffic, because it's seeing a lot more requests than the one percent of traffic that it's sending over to this pod. And then, ideally, we would also have a packet capture running for every Gitaly node that this pod is talking to.
But we run a lot of file servers, and I don't want to try to figure out how to coordinate getting that together. It's not impossible; I could probably sit here and do a simple knife command to start a tcpdump on all the file servers, but I think gathering all those files together and pushing them into Wireshark would be a little difficult.
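For what it's worth, a hypothetical knife invocation for that fan-out might look like the sketch below; the search query, timeout, and output path are all guesses:

```shell
# Run a time-boxed tcpdump on every file server via knife ssh
# (role name is an assumption; 300s cap keeps files bounded).
knife ssh 'role:file-server' \
  'sudo timeout 300 tcpdump -i any -s 0 -w /tmp/$(hostname).pcap'
```

The hard part, as noted above, is not starting the captures but collecting the resulting per-host files for joint analysis.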
So what I will probably do is store this file somewhere. That way Stan, who knows how to look at packet captures far better than I do, can take a look. We've also got a few other infrastructure engineers that know how to interrogate captures; maybe they could find something that looks suspicious.
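One way to get the capture off the pod and stored somewhere shareable, reusing the illustrative names from earlier:

```shell
# Copy the pcap out of the pod to the local machine, then upload it to
# wherever the team shares artifacts.
kubectl -n gitlab cp gitlab-shell-canary-0:/tmp/gitlab-shell.pcap \
  ./gitlab-shell-canary.pcap
```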
That's a great idea, and in fact my next follow-up question was going to be: okay, I guess we've got to watch the file size, but is this something we might consider running when we do the next rollout?
So maybe there's some extra filtering we could do, like maybe capture only packet headers, or maybe the first couple of bytes of a packet. But I suspect this is made difficult because our traffic is encrypted, obviously. But if there are trails inside of packet flows that we could use to our benefit, it would behoove us to try to capture the entire packet.
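Header-only capture is what tcpdump's snaplen flag gives us; a sketch, with 128 bytes as an arbitrary example size:

```shell
# Keep only the first 128 bytes of each packet: enough for Ethernet/IP/TCP
# headers plus a little payload, which keeps the capture file small.
tcpdump -i any -s 128 -w /tmp/gitlab-shell-headers.pcap
```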
I think the current work that Igor is doing is certainly worthwhile looking into, so when that gets deployed, I think, regardless of the state of us interrogating this capture, we should go ahead and proceed to try another rollout.
Yeah, so that's in review and expected to merge pretty soon. So I guess we need to coordinate on when the next attempt might be. I think you're off tomorrow, right, John?
So inside of HAProxy we're just going to set the weight back to zero. I'll get the weight first; it should be one right now.
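Via the HAProxy admin socket, that get/set could look like this (the backend and server names and the socket path are assumptions):

```shell
# Check the current weight of the canary server; expect 1.
echo 'get weight gitlab-shell/canary' | socat stdio /run/haproxy/admin.sock

# Drain it: set the weight back to zero so no new traffic is routed there.
echo 'set weight gitlab-shell/canary 0' | socat stdio /run/haproxy/admin.sock
```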
John, sorry, about the recording: maybe you can link the recording to the rollout issue for the CR issue. Thank you so much.