From YouTube: Distribution Team Demo - 2021-01-28
Today's edition of the distribution demo will show how to get proxy protocol working correctly in an environment such as AWS, or any other case where you're using Kubernetes for the deployment. There are some intricacies to be aware of, and we're going to explore the methods for getting these configured in a way that functions appropriately.
You can get it to do that, but that's not what it does by default. As a result, those load balancers, if not properly configured inside the application, will do termination and proxying. Now, why does this matter? It means a different address is submitting the request: your web browser hits the ELB, the ELB adds headers to the HTTP requests, and then forwards them down to nginx. Properly configured, nginx will see that, trust those ELBs, and then pass that additional header down to the application.
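The trust decision nginx makes here can be sketched like this (a hypothetical helper, not nginx's actual code; addresses are illustrative):

```python
def effective_client_ip(headers: dict, peer_ip: str, trusted_proxies: set) -> str:
    """Return the real client IP: trust X-Forwarded-For only when the
    directly connected peer (the ELB) is a known, trusted proxy."""
    xff = headers.get("X-Forwarded-For")
    if xff and peer_ip in trusted_proxies:
        # The left-most entry is the original client the ELB saw.
        return xff.split(",")[0].strip()
    # Untrusted peer: the header could be spoofed, so use the socket address.
    return peer_ip
```

If the header were trusted unconditionally, any client could forge its address, which is why nginx only honors it from configured proxy ranges.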
Very commonly, instead of using X-Forwarded-For, you want to turn on proxy protocol, specifically proxy protocol v2, which is, as shown here, the service.beta.kubernetes.io annotation. Then you set the proxy protocol value: you literally just provide it with a star and it turns it on. It's on or it's off. The problem with this is that in order to get proxy protocol to work, you actually turn it on for all ports or no ports. You can't set it on a given port when you're doing this with an NLB.
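The annotation in question looks roughly like this on the ingress controller's Service (the annotation key and the `"*"` value are as described; the Service name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller   # name is illustrative
  annotations:
    # "*" enables PROXY protocol v2 on every port of the NLB.
    # It is all-or-nothing: it cannot be enabled per port.
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  ports:
    - name: ssh
      port: 22
```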
That implementation is exact to the protocol definition of the RFCs, which means proxy protocol is not implemented. So when it sees the proxy protocol header in front of the TCP connection, it literally goes, "I don't know who you are or what you're talking about. This is not my handshake. Go away," and terminates the connection.
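For reference, this is roughly what sits in front of the TCP payload when proxy protocol is enabled (a sketch per the HAProxy proxy protocol spec; the addresses are illustrative):

```python
# v1 is a single human-readable line prepended before any application data:
V1_HEADER = b"PROXY TCP4 203.0.113.7 10.0.0.5 51234 22\r\n"

# v2 is binary and begins with a fixed 12-byte signature:
V2_SIGNATURE = b"\r\n\r\n\x00\r\nQUIT\n"

# OpenSSH speaks the SSH protocol from the first byte and has no notion of
# either header, so a connection that leads with one of these is rejected.
print(V1_HEADER.startswith(b"PROXY"))  # True: not a valid start of SSH traffic
```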
Well, with a candidate yesterday, we actually went on a deep dive into how this actually works, because of how vague that was. What we found, deep in the settings controls of the nginx ingress controller itself, was the actual definition of what this format should be. We then looked through the logic present there and found that, essentially, you take this string and you split it on colons.
Thus, you can actually get it to take that header off and then pass the rest of the TCP connection over to SSH. If you add PROXY to the second optional field, it will now add that proxy protocol header back: it will take it off for its own termination and then add it again before passing it on. The point here, though, is that for those instances where traffic is coming in, you need to strip it off for SSH, because of how we're currently using OpenSSH.
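Concretely, that colon-separated format lives in the ingress controller's tcp-services ConfigMap. A sketch, with namespace and service names assumed for illustration:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services        # the ConfigMap the controller is started with
  namespace: ingress-nginx  # namespace/service names here are illustrative
data:
  # Format, split on colons: <namespace>/<service>:<port>:[PROXY]:[PROXY]
  # First optional PROXY  = decode: strip the proxy protocol header on listen.
  # Second optional PROXY = encode: add the header back before the backend.
  "22": "gitlab/gitlab-shell:22:PROXY"          # strip only (what OpenSSH needs)
  # "22": "gitlab/gitlab-shell:22:PROXY:PROXY"  # strip, then re-add downstream
```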
If you turn this on where it shouldn't be, it will break, because nginx is looking for a proxy protocol header and it's not there. So in GKE (where, unfortunately, I'm having an issue with my environment at the moment), if you add this to the TCP definition, it will break incoming SSH, but it will not necessarily break HTTP. This is due to how TCP versus HTTP is handled as part of nginx when adding the TLS load balancer as part of AWS.
So what I'm going to do here is take an existing instance that we have deployed, edit the service for nginx to add the annotation that tells AWS to do this with proxy protocol v2, and then edit the ConfigMap for SSH on that particular instance to tell it to add proxy protocol termination. That way we can see what happens when SSH traffic is passed through this load balancer with and without that header in front of it, and where the add and remove occur.
Now I'm going to stop the scroll here and see if I can find... that's the reload. So let's go to the other container, pull up the logs, and stop the scroll here. You'll notice that shows the IP address coming from the load balancer, right. One of the things I did not do when I added this in: I didn't add the VPC CIDR for the real IP address.
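That step would go in the controller's main ConfigMap. A sketch, assuming illustrative names and an example VPC range:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration   # the controller's main ConfigMap; name illustrative
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
  # Trust the load balancer's addresses (the VPC CIDR; 10.0.0.0/16 is an
  # example range) when recovering the real client IP from the header.
  proxy-real-ip-cidr: "10.0.0.0/16"
```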
That has to be done through an extension to the SSH server, either by replacing it with one that does support it, or by putting some sort of extra port wrapper on it that does some of that termination. The way that we've chosen to do this going forward is to actually implement an SSH server in GitLab Shell, so that instead of having sshd terminate the connection and then make the calls into GitLab Shell, we instead have a server that is GitLab Shell.
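A port wrapper of the kind just described essentially peels the header off before handing the byte stream to sshd. A minimal sketch of the stripping step (v1 text form only; a real wrapper would also handle the binary v2 header and the socket plumbing):

```python
def strip_proxy_v1(data: bytes):
    """If data begins with a PROXY protocol v1 header, remove it and return
    (header_line, remaining_payload); otherwise return (None, data)."""
    if data.startswith(b"PROXY "):
        line, sep, rest = data.partition(b"\r\n")
        if sep:  # complete header line found
            return line.decode("ascii"), rest
    return None, data
```

The returned header line carries the original client address, which the wrapper could log or export, while sshd receives only the clean SSH byte stream.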
I was intending to show you what happens if you turn this on in an environment where it doesn't work; that is, if I did it in GKE and turned on just ":PROXY", what you would end up having is a broken connection, because there's no proxy protocol header involved on the way down, since the cloud IP address is directly exposed.
That needs to happen. So the good news is I've already been speaking with the teams responsible for GitLab Shell for some time, and we're already in process: we have merged a basically alpha-level SSH server into the GitLab Shell binary sources. So from the project itself, you can now generate a GitLab Shell SSH daemon.
A
A
A
So I've got that further down, but production is involved, because this is something that we need. The teams responsible for GitLab Shell are involved, they're actively working towards this, and we have a roadmap from one of our staff engineers that's going to make sure this is a viable option come this time next year. So if I click here, the current target is by...
But we need to go over this carefully, because if we're not 100% sure of how this works and don't clearly document it, there are going to be bug reports, most likely due to configuration or misunderstanding of environment behaviors, things like that. This is an alteration to the way the TCP socket itself is handled.
I'm seeing a couple of head shakes; I can't see everybody. So there, I hit stop share... nope. Okay then. Despite my inability to get GKE to cooperate with me today, because who knows, I would say that was fairly successful in walking through the impact of what this setting actually does in the underlying Kubernetes and how it actually works. So I would say pretty good: we've got a little bit of work to do.