From YouTube: 2022-05-18 GitLab.com k8s migration EMEA/AMER
C
Okay, good morning everyone, welcome to the 18th of May. Let's start off with Vlad; let's get an update on what's happening with camo proxy.
A
Sorry, what was that? Yeah, we made some great progress earlier this week. We decided to shift direction a little bit. We were initially thinking of using network endpoint groups: creating those in Kubernetes, then grabbing them in Terraform, adding them to a backend service, and then feeding them to a load balancer, which, as I talk about it, sounds a little bit complicated. And I think that was the whole point: the complexity of it and the possible repercussions.
A
So we decided to scrap that and go for a classic ingress that will go into the AWS DNS entry. I mean, we can just switch over from the old deployment, the VMs, to the Kubernetes deployment at the DNS level, and I think we have a comment somewhere about that. I was just looking for the schematic; I'll share my screen.
A
Sorry, so this is the old approach, the one that we scrapped. It was called the standalone NEG, network endpoint group, and we are going for the green one, the boring solution, so basically an ingress. The disadvantage of going with the green one
A
is that we're not going to be able to do both VMs and pods at the same time. The initial idea was the standalone NEG solution, because that would allow us to load balance between VMs and pods, so we could run those in parallel for whatever time we needed, but the drawback was the complexity.
A
So if we go for the green approach, the boring solution, we'll just cut over from the old infrastructure to Kubernetes. That's kind of what I wanted to say about that. We also have a demo, and I think Henry or Ahmad will need to help me with that, because my access doesn't allow me to do all the things that we need.
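A minimal sketch of how a DNS-level cutover like this can be sanity-checked, assuming hypothetical hostnames and load balancer addresses (the real ones are not named in the meeting):

    import socket

    # Hypothetical values for illustration only.
    CAMO_HOST = "user-content.example.com"
    OLD_LB_IPS = {"203.0.113.10"}   # VM-based deployment
    NEW_LB_IPS = {"198.51.100.20"}  # Kubernetes ingress

    def current_backends(host: str) -> set[str]:
        """Resolve the hostname and return the set of addresses it points at."""
        infos = socket.getaddrinfo(host, 443, proto=socket.IPPROTO_TCP)
        return {info[4][0] for info in infos}

    ips = current_backends(CAMO_HOST)
    if ips <= NEW_LB_IPS:
        print("DNS now points at the Kubernetes ingress:", ips)
    elif ips <= OLD_LB_IPS:
        print("DNS still points at the old VM load balancer:", ips)
    else:
        print("Mixed or unexpected records (cutover in progress?):", ips)

Because the switch happens purely at the DNS level, rolling back is the same operation in reverse, which is the simplicity being traded for not running VMs and pods behind one load balancer at the same time.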
D
Yeah, I can't show much, but I can show that we just applied a change to pre to deploy camo proxy on Kubernetes there, with an ingress, and we configured pre to use it for camo proxy.
D
It kind of works. The only thing, one small detail which we still need to figure out, is that I think our Content-Security-Policy still needs to be adjusted, because inside an issue or something, pre will not show images, because they're embedded as external URLs. If we fix that, they should be visible, but you can click them and see them in a new tab. So you can see that camo proxy actually is working, and I can show that, maybe.
D
Hopefully... can you see it? Yeah. I'm creating a new issue here, and what we do is link to an external image, which we want to serve via camo proxy, just to prevent any kind of misuse or security issues.
And if you go into preview, we should normally see it here, but I think it's the Content-Security-Policy which prevents us from showing it directly. But we can just open it, and this is pointing to the internal IP of camo proxy and showing it fine.
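A minimal sketch of the kind of Content-Security-Policy adjustment being discussed, in Python/Flask purely for illustration, assuming a hypothetical camo hostname; GitLab's real CSP is configured inside the application, not in a separate service like this:

    from flask import Flask

    app = Flask(__name__)

    # Hypothetical hostname for the Kubernetes-hosted camo proxy.
    CAMO_HOST = "user-content.example.com"

    @app.after_request
    def add_csp(response):
        # Allow images from the page's own origin and from the camo proxy.
        # External image URLs get rewritten to point at CAMO_HOST, so they
        # stay inside this allow-list instead of being blocked by img-src.
        response.headers["Content-Security-Policy"] = (
            f"img-src 'self' https://{CAMO_HOST}"
        )
        return response

    @app.route("/")
    def index():
        # Placeholder path segments; see the signing sketch further below.
        return f'<img src="https://{CAMO_HOST}/<signed-digest>/<hex-encoded-url>">'

If the proxy hostname (or its internal IP, as in the demo) is not covered by img-src, the browser refuses to render the image inline even though opening it in a new tab works, which matches the behaviour seen on pre.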
D
The way camo proxy works is that we generate a URL which is encrypted with some kind of secret, so that it's not guessable, and this way we can point to external images via camo proxy. So this is working. What needs to be done is setting up the Kubernetes change to actually create a certificate and use it.
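A minimal sketch of the camo-style URL signing scheme described here, with a placeholder secret and a hypothetical camo hostname; the exact digest algorithm and path layout of the deployed camo proxy may differ:

    import hmac
    import hashlib

    # Shared secret known to the application and to the camo proxy.
    # Placeholder value for illustration only.
    CAMO_SECRET = b"not-the-real-secret"
    CAMO_BASE = "https://user-content.example.com"  # hypothetical camo hostname

    def camo_url(external_image_url: str) -> str:
        """Build a camo-style proxied URL: HMAC digest plus hex-encoded target.

        Because the digest depends on the shared secret, a client cannot
        forge valid proxy URLs for arbitrary targets ("not guessable").
        """
        digest = hmac.new(
            CAMO_SECRET, external_image_url.encode(), hashlib.sha1
        ).hexdigest()
        encoded = external_image_url.encode().hex()
        return f"{CAMO_BASE}/{digest}/{encoded}"

    print(camo_url("https://example.org/cat.png"))

The proxy recomputes the digest for the decoded target URL and only fetches it if the digests match, so it never acts as an open proxy for unsigned URLs.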
D
I know that Vlad just made it work in his proof of concept locally, so this is coming with the chart already, and there is basically not much left to finish here. And what Vlad just said before about switching over: because we decided not to go the way of network endpoint groups and load balancing both at the same time and then switching over, we will probably just switch this DNS entry over from this IP on the old load balancer to the new one for the ingress.
D
That will be the switchover procedure that we will use. It should be fairly easy and hopefully fast enough, and if we see issues we can just switch back. That's the idea of how we want to do it in staging and production.
D
For the switchover, yeah, I mean we would test, of course, whether it works as it is and whether it's forwarding and so on. We of course need to somehow make sure that the Content-Security-Policy and everything is working exactly as it should, but it should, at least in staging and production, because we already use camo proxy there right now. So we know it's working there right now. We only have an issue with pre, where we just added something new.
D
Yes, we need.
A
That could be. We are using the same CSP as before, so I don't see how that would make a difference, because it's the same, but we need to investigate it a little bit more to figure it out. And, as Henry said, maybe it's also because we didn't add the certificate and stuff like that, so it's not HTTPS.
C
Okay. When we migrate over... I haven't seen any merge requests associated with the new Kubernetes deployment of camo proxy. I'm curious if we are running a horizontal pod autoscaler.
A
We don't have a clear idea yet, but there's an issue where we think we would use metrics to autoscale. That's also something that we are considering. So we just need to feed those back to the horizontal pod autoscaler and, yeah, make it autoscale based on the number of requests.
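A minimal sketch of the horizontal pod autoscaler calculation this implies, using a made-up requests-per-second target; the real target values and the metric plumbing were still undecided at this point:

    import math

    def desired_replicas(current_replicas: int,
                         current_rps_per_pod: float,
                         target_rps_per_pod: float,
                         min_replicas: int = 2,
                         max_replicas: int = 10) -> int:
        """Kubernetes HPA-style scaling rule:
        desired = ceil(current * currentMetric / targetMetric),
        clamped to the configured min/max bounds."""
        desired = math.ceil(
            current_replicas * current_rps_per_pod / target_rps_per_pod
        )
        return max(min_replicas, min(max_replicas, desired))

    # Example: 3 pods each seeing 40 req/s against a 25 req/s target -> 5 pods.
    print(desired_replicas(3, 40.0, 25.0))

Feeding the request rate back as a custom metric is what lets the autoscaler react to traffic rather than only to CPU or memory.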
D
Yeah, I mean, we see right now that we don't have too many requests to camo proxy in production, so it's really not a lot of traffic. But yeah, we should adjust so that we are able to scale if there's something coming in or if we have a really big traffic spike or something.
C
Oh, that's not very much at all. It seems like a relatively small service then. Okay, excellent. Off-the-wall question: since we are building a Kubernetes deployment for this, I'm curious if there's any interest in us providing a Helm chart for this.
D
I think that was the main idea of going with the Helm chart, and right now we deploy via the gitlab files repository.
D
It's sitting in there, and that's the chart, which was built there locally, all done by Vlad, so it can easily be taken from there and then put over into GitLab charts.
A
I think this was a design direction as well, if I remember correctly.
A
And then we decided to go this other route with a chart for that reason.
C
Alrighty, any further questions before we move on?
A
Sorry, for what? The right link for...
C
All right, Ahmad, let's talk about gitlab-sshd. Tell us about how today went; it sounded like it went well.
E
Yeah, quite awesome. So today we did the rollout to canary with a weight of one, like setting the weight in HAProxy to one, and luckily we didn't see the same error as we have seen in the last four attempts, or let's say two attempts, because the last two attempts had the "context canceled" error there, something like that, today.
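A minimal sketch of setting a canary weight like this through the HAProxy runtime API, assuming a hypothetical backend/server name and admin socket path; the team's actual change goes through their chef-managed HAProxy configuration rather than a one-off script:

    import socket

    # Hypothetical values for illustration; real backend/server names and
    # the admin socket path depend on the HAProxy configuration.
    HAPROXY_SOCKET = "/run/haproxy/admin.sock"
    BACKEND_SERVER = "ssh/gitlab-sshd-canary"

    def set_canary_weight(weight: int) -> str:
        """Send a 'set server ... weight' command over the HAProxy runtime API."""
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(HAPROXY_SOCKET)
            sock.sendall(f"set server {BACKEND_SERVER} weight {weight}\n".encode())
            return sock.recv(4096).decode()

    # Weight 1 routes a small share of traffic to the canary server (relative
    # to the other servers' weights); weight 0 drains it again on rollback.
    print(set_canary_weight(1))

Setting the weight back to zero is what makes the rollback mentioned below quick.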
E
Still, we had issues, like we had errors above the normal threshold, but I think there's an MR to adjust these, because they are like 404s and expected errors, let's put it like that. So basically it was a success, but we rolled back because of the error rate, because this is still alerting the EOCs.
E
So we rolled back, and yeah, Igor opened the MR, and I think this excludes the current errors we are seeing, basically, for canary, production canary. So for the next rollout, maybe we will keep it. There is no plan yet, but I think we will keep it accepting traffic for gitlab-sshd for a while before going to production, like we did for staging.
E
Also, me and Skarbek today merged the MRs from Hendrick, and we also had an attempt to basically clone a repository from staging, and it was a success.
C
Can we, since we are now waiting on that, I guess only that one merge request: can we go ahead and prepare the CR to, yeah, roll it out yet again, and hopefully we're at a point where we could stay that way? And I think, because we're increasing the confidence level, perhaps we should create this change request where the ultimate end goal is that we sit at five percent traffic if all continues to be well; we've got a merge request in our chef repo.
E
Yeah, we added the status to the issue and to Slack. I think Hendrick is also doing some tests right now; he will also post them after he finishes, so I think they are aware.
C
Do you have a sense as to when that merge request might get fully deployed in production? Is that something that might be accomplished by the end of this week, so maybe we could roll out gitlab-sshd in canary by the end of this week, potentially?
C
Maybe that will provide the necessary visibility to help encourage getting it reviewed sooner, potentially, and maybe help encourage getting it deployed. I know delivery is having a rough time with deployments the past few days, but I think anything that at least gets that merged quicker will enable us to be unblocked faster, theoretically.
C
Cool. Does anyone else have any more questions related to gitlab-sshd?