From YouTube: 2020-06-19 GitLab.com k8s migration EMEA
Description
Discussing multicluster installation https://gitlab.com/gitlab-com/gl-infra/delivery/-/issues/932 and point of access to the cluster
C
There is no demo for today. I think the first discussion point is mine, so I'll just get started. This is something that I started discussing, I think two weeks ago, in the Delivery weekly call. Basically, after reading several issues on several topics, and after some discussion we had with Myron I think months ago, all of this started clicking in my head and I wrote down this issue. It's more of a long-term idea of how, and if, we should leverage a multi-cluster, multi-stage deployment, and what I'm trying to get out of it is a bunch of good improvements in the way we are working right now.

I'll try to explain it very briefly. Right now we have one cluster and we have two stages, which are main and canary.
There are basically three kinds of changes that we can deploy: configuration changes, changes to the Kubernetes cluster itself, or application changes. Everything starts from a merge request and gets reviewed, then we merge it. If it's a code change we also package it, and then we are ready to ship it, which means we go to staging, then we do QA, then canary, QA again, and production, and we hope that everything is good. Not everything can go through this process, and the later we spot an error, the harder rolling back is; these are all pain points in our process. So my idea here, starting just with the multi-stage deployment, is that we can create a kind of disposable or short-lived stage where we test a specific kind of change. It could be a new chart deployment with a different configuration for Helm, and we can route traffic to it.
C
We
can
do
the
same
thing
with
the
cookie,
so
we
do
QA
on
this
specific
new
stage
and
make
sure
that
it
passed
the
automated
QA
and
then
we
can
do
one
step
more,
which
is
doing
incremental
routing
a
percentage
of
production
traffic.
So
let's
say
we
want
to
change
something
in
the
configuration.
We
are
not
sure
about
what
will
happen.
We
do
this.
C
We
deploy
this
in
the
helm,
cannery
stage,
we
run
the
QA
with
with
the
cookie
and
then
we
say
@h
a
proxy
level
route,
two
percent
of
production
traffic
to
this
new
stage,
and
we
monitor
every
rates
and
things
like
that.
We
can
increase
the
number
and
when
we
are
happy
with
the
results
we
say.
Okay,
the
experiment
is
correct.
These
things
can
really
be
merged
into
master
deployed,
and
can
it's
safe
to
to
do
this?
C
This
things
obviously
doesn't
work
for
production
wide
changes,
so
we
want
to
change
something
that
is
not
namespace,
scoped
or
some
or
even
if
you
want
to
test
a
new
version
of
kubernetes.
So
on
top
of
that
we
can
consider
having
a
multi
cluster
deployment
and
basically,
we
have
one
or
more
cluster.
Let's
say
one
that
is
designed
for
hosting
only
this
short-legged
stages,
or
things
like
that,
so
that
we
want
to
test
kubernetes
upgrade.
We
can
do
on
this
on
this
cluster
and
then
at
HJ
proxy.
Assuming there are no migrations involved, or things like that, we can basically extend review apps so that they get deployed as one of these disposable stages, and we can run QA on that with production data in a production-like environment. If it works well, then we can start routing real traffic, say 2% of production traffic, and confirm that the change really is working well, so we are fine.
C
We
can
merge
it,
so
this
is
basically
where
I
would
like
to
go,
so
the
ability
to
run
not
just
to
version
or
full
version
of
gitlab
at
the
same
time
in
the
cluster,
but
the
ability
to
run
multiple
version
of
it
that
are
just
isolating
a
specific
change.
Either
code
change,
configuration
change
or
whatever
there
are
some
pain
point.
C
One
of
one
of
this
is
migration
are
kind
of
impossible
to
test.
In
this
scenario,
the
other
one
is
the
second
topic
that
is
in
my
list,
which
is
sidekick
affinity.
So
we
need
a
way
for
make
sure
that
if
we
schedule
a
sidekick
job
from
a
stage,
he
has
to
end
up
in
sat
in
a
sidekick
cluster
running
in
the
same
stage,
so
that
we
can
test
also
a
synchronous
job,
but
I
mean
it's
big
before.
A
You mentioned that this obviously breaks down when we are changing actual data, as in database schema changes, and I'm assuming also in cases where repositories would change, or the format of how we pull persistent data from wherever it is stored. In order for us to get where you are suggesting, we have another big cultural shift to make, which is: how do we ensure that our development is done in a way where the data-set changes are isolated independently from the code?
C
My idea is that if we isolate this only for, let's say, testing Helm changes, it involves a smaller set of developers: only the Distribution team, really only the part of the Distribution team working on the Helm charts, and it gives us great value and a way of testing those changes. It doesn't even require us to run a second cluster; it can just be another namespace. But it's a first step, right, so we start experimenting with the idea.
Well, off the top of my head: if we could route Sidekiq jobs to the canary stage as well... The problem that I see with canary, the way it's configured right now, is that it's kind of a hybrid, because several paths are defaulted to canary unless you opt out, and everything else is opt-in. So it's really hard to understand the percentage of traffic that we are really using when experimenting on canary. Also, every project that we are working on is configured in more or less the same way.
We don't do fast-forward merging, we rely heavily on CI, and all of our projects look more or less the same, so we end up testing only the things that we ourselves use. But there are other people on GitLab.com that may be using different settings, different project settings, or things like that. So it would help to have a more flexible canary where we can increase the percentage of traffic to it, not just saying, now we are fine with canary and we just roll out to production.
C
I don't know, because a lot has changed for Sidekiq in the last year. What I was hoping to get is this: we have this open issue about decorating, at the load balancer level, which stage the request is for. Basically, because we had the outage last week or two weeks ago, I don't even remember, where we had mixed traffic. If we get that in place, then we can, for instance, just as an example, do Redis namespacing with a middleware.
A
By resolving problems at the infrastructure level, we don't know what we are introducing into the application, because the application still works the way it works. Yeah, so I guess, to make your proposal actionable, it would be nice to see how much work, and what kind of work, we need to do to get Sidekiq affinity or Sidekiq namespacing done properly at least once.
A
Okay, I don't know what the alternative is. Once you put it on the same level as the main stage, then we can start thinking about how to do multiples of this, because I know there are teams in this company that are really, really dependent on experiments for GitLab.com, and if we could have those two stages done properly first, and then focus on another team that could leverage this, we could provide real value really quickly without going through the whole of your proposal immediately.
B
There might be other benefits too, but I think we can achieve the same with namespaces inside the cluster. If we wanted to do something like you're describing, which is blue-green deployments, you can do that within a single cluster as well for just application updates. Right now, of our two namespaces, the canary namespace isn't being used a whole lot; it's just being used for registry, and it's really not getting a whole lot of traffic. There was an issue open, I think there is an epic, about increasing the traffic for registry by just taking a percentage instead of using request paths, and we all agree that that's what we want to do; we just haven't had time to do it.

One thing I have been worried about is that we know right now we have trouble scaling up and down quickly, and if we're going to be doing something like a blue-green deployment with Kubernetes clusters, tearing down a cluster and bringing up a new one, it means we have to scale quickly, and it doesn't seem like the monolith is ever going to be able to scale very quickly given how slow it is to boot Rails.
C
Yeah, this is why I was focusing more on having disposable stages instead of real blue-green deployments. This was never about having two things running in parallel and then removing the old one and starting to use the other one; it's more about flagging something that is slightly different and routing some production traffic to it. It's more about collecting information and validating than an actual deployment strategy, yeah.
B
In that case, I think we could play around with how much it would take to spin up a separate, second cluster. I don't know if this would be difficult or not; actually, I think it'll be easy. I mean, we basically just duplicate what we have in Terraform, spin up a new cluster, and then we add a backend in HAProxy to start routing traffic to it. Maybe there are things that I'm missing, but can you think of anything that would be difficult about that?
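As a rough sketch of that "just duplicate what we have in Terraform" step; the directory, module, and backend names are made up for illustration, and the HAProxy side is only described in a comment:

    # Hedged sketch: standing up a second GKE cluster by duplicating an existing
    # Terraform module definition. Paths and module names are placeholders.
    cd terraform/environments/gprd
    # 1. Copy the existing cluster module block in main.tf to a new block, e.g.
    #    module "gke-experimental", adjusting the cluster name and node pools.
    terraform init
    terraform plan  -target=module.gke-experimental   # review only the new resources
    terraform apply -target=module.gke-experimental   # create the second cluster
    # 2. Add the new cluster's ingress addresses as an extra backend in the
    #    HAProxy configuration (managed via config management) and reload HAProxy.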
A
But this is a good time to actually try to probe a bit how much work it is. We haven't moved all of the workloads; if we had, then it would be way more work now. Given that, on the majority of things, I'm assuming we are currently blocked on the migration: yes, we can do certain things, but for the actual, proper migration we are properly blocked, I would say, and this might be a good time to think about it.
B
Yeah, we could think about it a bit more, but Skarbek is right: on the infrastructure side I think it's fairly straightforward. On the configuration side, you know, we make a lot of assumptions about things, and also on the monitoring side we make a lot of assumptions. We'd have to account for the second cluster in all of our labeling, monitoring, and configuration. But let's think about it; maybe I'll just create an issue and brainstorm, yeah.
B
Last night, okay; I actually didn't check on it this morning. What we can do now, unless you have done this already, Skarbek: I have the change issue for this, and we just need to execute it to bring the replicas back online. So let me get linked to that change issue and I will get going on it.
A
This is one of the steps for them to start deduplicating some of these jobs, right? They needed to change the way it works, and they changed its priority, and now, if I understood it correctly, they're in a position where they can start thinking about deduplicating these hundreds of thousands of jobs that get created.
B
It's dependent on the NFS story, but what we could do, and I think we discussed this before, is creating NFS shards for both of the two that are remaining and starting to move over queues. With metrics we can definitively say whether something uses NFS or not by just looking at the NFS client metrics, so that's fairly easy. We can move over what we think is NFS-dependent, and then we should be left with the stuff that isn't, and then we can migrate that.
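A minimal sketch of the kind of metrics check that statement assumes; the Prometheus address is a placeholder, and the metric name is the node_exporter NFS client counter, which may differ in practice:

    # Hedged sketch: asking Prometheus which nodes show actual NFS client I/O.
    # URL and metric/label names are illustrative.
    curl -sG 'http://prometheus.example.internal:9090/api/v1/query' \
      --data-urlencode 'query=sum by (instance) (rate(node_nfs_requests_total[1h])) > 0'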
B
Part of me really hates to spend time on this when I assume that people are actively looking into traces, you know, removing our dependency on traces, and it just sort of feels like we should be focusing on that rather than playing around with shuffling queues around. Moving some of the queues to Kubernetes doesn't really buy us a whole lot; we've already experienced the migration, we kind of know what to expect.
A
To answer your question of what we are buying: we're buying knowledge. We absolutely know how traces behave; we absolutely do not know anything else about that shard, what it actually has and what kind of traffic it generates. So by doing that, we actually parallelize our work with the work that is going on for traces.
We can do some of our other work in the meantime. But let me ask a different question then: if we are not focusing on the two remaining shards, because we know that there is an NFS dependency in them, what are we looking into doing short term, apart from the things we already discussed, which was: let's look for some ways to improve and harden what we have right now? That, on its own, is actually a good thing to do, but is there anything else? Yeah.
B
The alternative things we could work on are finishing up the console work, which is on our side, and the work that needs to ship with the charts before we can migrate WebSockets; there are those two issues. I think, though, that the Distribution team is already on top of them and is planning to have them done for the next release, so I guess those are what we could work on.
B
I mean, this is something I need to finish up, which is to start failing the k8s deploy job when it fails, and I've been monitoring it, and lately it's been looking OK. Occasionally we have failures because some configuration change causes the deployment to fail, but other than that I haven't seen anything related to CNG that has caused it to fail, so I think we're pretty close to making that change.
A
This week was pretty horrific, actually, to tell you the truth. So, you know, you might complain about all the PagerDuty alerts, and that's fine: we have to fight the system, and sometimes we have to fight humans as well, which is the really, really hard problem. And it's not only that something is failing; when it's your PagerDuty shift, it's our own stuff and other people's stuff.
A
Yeah, what I wanted to say is: Jeff, we had some discussion last week or the week before, I don't even remember, where we recognized there are a couple of items we would like to do right now to harden the cluster and to pay off some technical debt that we accumulated in the past two months while rushing some of these Sidekiq queues. I know that you created those issues, but for the life of me right now I cannot remember what they were.
F
...get very far. We talked briefly about whether or not to run tools locally. Before closing that box, I am totally happy to tie a bow on that, like, today; I don't want to burn too much oxygen, but there was the call this morning, and I think I might be the last holdout on running tools on your laptop, so it would be good to just spend a bit of time on it.
B
Right now it's sort of like you have to run tools on your laptop, because there's no other way to do it, like for Helm; and for kubectl I think we're all running it from the console host, and we're kind of annoyed by the wrapper and by having to paste issues. I think what was suggested, if you saw the video and read the comments, was that we should probably have read-only by default, and there should be very open permissions for read-only, like everyone should be able to have read-only everywhere.
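A minimal sketch of what "read-only for everyone, everywhere" could look like with plain Kubernetes RBAC and the built-in view ClusterRole; the group name is an assumption:

    # Hedged sketch: cluster-wide read-only access via the built-in "view"
    # ClusterRole. The group is illustrative and would come from the identity
    # provider the clusters trust.
    kubectl create clusterrolebinding sre-read-only \
      --clusterrole=view \
      --group=sre@example.com

    # Sanity check: reads should be allowed, mutations should not.
    kubectl auth can-i list pods --all-namespaces
    kubectl auth can-i delete deployments --all-namespaces

Note that the built-in view role deliberately excludes Secrets, which is probably what you want for a broad default.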
F
Absolutely. The one thing that's front of my mind, that I would really like to push quite hard on today, is: while we develop such a system, I really want to endorse the use of SSH tunnels to access the API. I get why we're not opening it publicly. In fact, I think none of us disagree on any of the facts or points in the pro and con columns; we just weigh them differently.
But I would like to come out of this meeting with some resolution not to put our tooling on non-prod console boxes and awkwardly juggle kube contexts between all the clusters that don't have console boxes; there is no one-to-one mapping of console to cluster there. I did watch the video and I did read the comments, but it's a lot to digest. It's possible that I'm treading ground that's already been trod, but what do people think of that?
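For concreteness, a sketch of the tunnel workflow being endorsed here; the bastion host, the private API address, and the TLS server name are placeholders:

    # Hedged sketch: reaching a private cluster API endpoint through an SSH tunnel
    # instead of running tools on a console box. Credentials still come from the
    # local kubeconfig/gcloud auth; the TLS server name must match a SAN on the
    # API server certificate.
    ssh -N -L 8443:KUBE_API_PRIVATE_IP:443 bastion.example.internal &

    kubectl --server=https://127.0.0.1:8443 --tls-server-name=kubernetes get nodes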
B
Well, I'll start by saying I think you're right, there isn't a one-to-one mapping of consoles to clusters, and I think where we landed this morning was that using the console boxes probably just doesn't make sense. What we probably want to have are per-user console boxes, possibly per-user, per-environment console boxes, or at least have the division between production and non-production.
F
Yep, that sounds totally reasonable, as does the read-only part, and there was lots of discussion of various ways to implement a kind of sudo for Kubernetes on there. I know this meeting is running out of time, so I'm not going to put my two cents down on the specific implementations, but yeah, that direction sounds absolutely fine. It sounds a lot better than running as cluster-admin, which I think we do today whenever we access the kube API, and so on.
F
When it comes to accepting something until we have a better solution, don't forget: using an SSH tunnel and then authenticating to GCP through it is, access-wise, exactly the same as using a console box, but without leaving your private key on that box. Now, footgun-wise, like targeting the wrong cluster, it's not quite the same. It's also not quite the same when it comes to endpoint security, if you assume the laptop is compromised, but I...
F
It's not perfect. I'm not sitting here stubbornly arguing that this is what we should be doing in a year's time, but I am strongly arguing that we have two systems today, right: SSH in and leave your key on the box, or open the tunnel and use the tunnel. These systems both work today, and they both work via identical authentication mechanisms. I'm advocating we use the one that I'm arguing is objectively superior in most ways. Yes, I suppose.
If we're talking about the footgun, it's maybe not quite as good as using the console box, because there you have to get a shell on the console, see the prompt, and then paste in the commands after logging in. So I'm trying not to fall back on a clunky, insecure system because we're making the wrong assumptions about it somehow being more secure.
B
I think at that time we kind of realized it was not ideal from a security perspective, but we thought we would probably figure out a better solution soon, and we just didn't; it sounds like now we are. I think it's important for consistency that we're all doing the same thing, and right now the console is the only way we should be accessing prod; I mean, that's what we all agreed to.
F
On the danger of running the wrong Helm version: if we set up with asdf and are in the directory with the .tool-versions file, then the tunnel method with your local environment is safer than pulling the correct version of Helm every single time you use a fresh console. I suppose the current ones are per environment, right: one on staging, one on prod, but then they lag behind, etc., etc. You get my point, yeah.
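A minimal sketch of the asdf setup being referred to; the pinned versions are examples only, assuming the community asdf plugins for helm and kubectl:

    # Hedged sketch: pinning tool versions with asdf so local tooling over a
    # tunnel always uses the intended version.
    asdf plugin add helm
    asdf plugin add kubectl
    cat > .tool-versions <<'EOF'
    helm 3.2.4
    kubectl 1.16.11
    EOF
    asdf install           # installs exactly the pinned versions
    helm version --short   # resolved via asdf shims while inside this directory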
B
I see your point. I guess I'm just most interested in the next thing we're going to do to improve this, rather than changing what we do now. I mean, I understand that there are some benefits to changing the way we do things now, because we'll all be doing the same thing and, you know, maybe have more flexibility, but...
F
Let me give you an example. Earlier today I had reason to check how many Prometheus pods were actually running in the production GKE cluster, not how many were declared, and I had to go find that value; that seems like a good use case for read-only API access. I know we should be trusting automation and not doing a lot of out-of-band manual API access, I totally buy that, but we're just not there today, and things like that are quite a common part of our workflow at the moment. They could be massively accelerated by just giving the thumbs up to the tunnel; the only thing we're stopped on is this, even though access-wise it is identical.
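The kind of check described above might look like this; the namespace, StatefulSet name, and label selector are assumptions:

    # Hedged sketch: comparing declared vs actually-running Prometheus pods.
    kubectl -n monitoring get statefulset prometheus-k8s \
      -o jsonpath='{.spec.replicas} declared, {.status.readyReplicas} ready{"\n"}'
    kubectl -n monitoring get pods -l app=prometheus --field-selector=status.phase=Running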
D
...complain, because it's not that smart of a wrapper script. But just to add another comment to this: we have an established workflow, and I know you're wanting to change it, but it was put in place to prevent the shotgun-to-your-foot situation, and if we run into that situation again despite our current workflow, you're going to find yourself in hot water, and I don't want to see that happening.
F
Just one final clarifying point: the change would just be giving a thumbs up, not actually implementing or testing any change. But yeah, let's move off of this and on to the next thing. There was a lot of stuff discussed in the video about implementing sort of better consoles; what were some of the frontrunners? I kind of lost the thread a few times.
B
I thought maybe, as a first iteration, we could just start spinning up per-user console machines for SREs. They could even just be permanent for now, and we'd make it so that each SRE only has SSH access to their own server. We'd do the necessary peering to have maybe a production and a non-production set. I don't know where these would live; they could live in the ops instance, and then we could do the VPC peering, because ops has already been VPC-peered with everything, so we can just use firewall rules.
F
We're getting dangerously close to implementation detail in this meeting, but I would worry about latency in getting one of these consoles if we spin them up on demand: if the box is freshly provisioned and has to be Chef'd so that all the shell access works, with the correct version of helm, helmfile, and kubectl, and, I guess, put a pin in whether or not it would even have credentials in place. That's probably not really workable.
We'd have to, like, dump all my credentials into a Chef secret or something, and even that is a bit worrying. If there are plans to... would it be insane to have, and I know this came up in the video as well, one of our Kubernetes clusters, probably not ops but some access cluster, have public internet API access, and use that to provision console pods from a Docker image with all the correct tooling? Or would that be a kind of Trojan horse that undoes... yeah.
F
Yeah,
so
we
built
some,
we
built
various
types
of
developer
console
at
my
last
company,
which
is
a
fin
set
company.
We
built
two
classes,
one
more
traditional
one
that
we're
talking
about
now
to
be
used
by
developers
just
to
get
a
shell
with
direct
network
access
to
various
resources
like
a
database,
another
fun,
dangerous
stuff
like
that
and
another
for
people
whose
job
it
was
to
do
like
auditing
for
bank
stuff,
and
we
implemented
those
as
interactive
kubernetes
pods.
But
that
was
because
our
production
cluster
had
open
API
access.
B
So I would say the most boring first iteration I can think of would be to spin up a bunch of console nodes with Terraform in the ops project, have two of them, one for production and one for everything that's not production, set up firewall rules appropriately, and then that is the place where we do our Kubernetes stuff, on that terminal. Maybe we also use it for Rails, though the Rails console is tricky, right, because the Rails console depends on the application.
F
Yeah
so
the
way
I
mean
don't
forget
that
once
you're
in
a
network
that
has
API
access
that
once
you've
already
gone
to
some
bastion
and
you're
looking
at
the
cube
API
a
kubernetes,
interactive
pod
with
the
tty
is
not
transported
over
SSH
or
anything.
It's
just
forwarding
standard
out
and
standard
in
directly
over
TCP,
so
you
can
implement,
you
can
implement
rails
console
is
just
keep
CTL
run.
You.
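Roughly what that kubectl run console could look like; the namespace, image, and path to the Rails binary inside the image are assumptions:

    # Hedged sketch: an interactive Rails console as an ephemeral pod.
    # Namespace, image, and the console entrypoint are illustrative only.
    kubectl -n gitlab run "rails-console-$USER" --rm -it --restart=Never \
      --image=registry.gitlab.com/gitlab-org/build/cng/gitlab-task-runner-ee:latest \
      -- /srv/gitlab/bin/rails console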
F
You're still running your own tooling over TCP through this: it's opening a tunnel to transport encrypted kubectl traffic, and then the tooling you're running is remote, not local, right.
B
So, anyway, I think this is kind of where we landed on the call this morning, and we kind of decided, okay, this might work as a first iteration, but then we have to find time to do it, or discuss with Reliability who's going to work on it, because I think there's some overlap here for console access.