From YouTube: Remote development via GitLab Agent code walkthrough
A: Let's do that. Thank you. All right, so let me first give you an overview of the functionality. I'll just do a quick demo of what we have so far, and then I can quickly walk you through the code. Because you understand the codebase so well, I don't think it will take too long, but let's go through it anyway.
A: So basically, the idea behind GitLab Workspaces is that you should be able to spin up a workspace to do your development in. What we expect in the future is that every repository will have this devfile, a YAML file, which defines what your environment will look like.
A: In this case, if you see, we've got a very simple devfile; basically all it has is this image, and it says this is just the golang:latest image, nothing special. What we can define is things like how much memory you want, how much CPU, and things like that. The other thing you can define in your devfile is which endpoints you want to open up.
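As a reference point, a minimal devfile along these lines might look like the following. This is a sketch assuming the devfile 2.x schema; the image, the resource limits, and the endpoint are illustrative values, not the ones from the demo:

```yaml
schemaVersion: 2.2.0
components:
  - name: tooling
    container:
      image: golang:latest   # the image the workspace runs in
      memoryLimit: "2Gi"     # illustrative resource limits
      cpuLimit: "1"
      endpoints:             # ports to open up from the workspace
        - name: web
          targetPort: 8080
```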
A: So these are things you define in the devfile, and I'm sure you've heard of devfiles, because we've been talking about them quite a lot. You can also see I've got this new workspaces screen, where you can see a list of your workspaces; there's currently nothing there. From this repository, because I have, hopefully, this agent running, you can see this agent is connected and running, and because I have this agent connected, I can go ahead and create a workspace.
A: All I need to do there is say create workspace. I give it a ref of the repository to create the workspace from, I select a name, and I also select an IDE to inject, in case I don't already have an IDE in the image that I'm using. I'm going to say inject VS Code, because here I'm using a standard Go image, so I can go ahead and say open VS Code, and that's going to inject VS Code.
A: What this does is start the provisioning process. As you can see, it's created this workspace-40, and it's going to start provisioning this workspace. If I keep refreshing the screen, you can see that the status keeps changing, and it keeps reflecting what is there in Kubernetes. So if I actually go to my Kubernetes, if I look at Kubernetes itself...
A: Actually, let me show you the namespaces first: it creates a new, isolated namespace for this workspace. If I go into that namespace and say k get po, you can see that it's actually created a pod, which is now running, and if I say k get deployment, you'll see there's a corresponding deployment, and there's a service. So we create a bunch of objects like that: we create the service, and we create a bunch of other services for the open ports.
A: We create something for SSH as well, which is a bit beyond the scope of what we're talking about today, and then we also create an Ingress object; so we create a bunch of objects to support the workspace. Now, if I go back to my workspace here and refresh my screen, what you can see is that the workspace is now running; it reflects what's in Kubernetes. I can go ahead and open the workspace, and as I'm opening it, I go through this authentication routine.
A: We go through that authentication and authorization, and once that happens, I'm in my workspace. As you can see, because I selected VS Code as my IDE, I'm dropped into VS Code. From here I can go ahead, and, I didn't show you this, but we're also attaching persistent volumes, so that you can store your actual code.
A: So I can go to this basic Go example; I can click OK, and that should open up my repository. From my repository I can go ahead and start a new terminal, and I've got this very basic Go program here; if you see, it just prints hello GitLab. So I can just say go run main.go, and that should just print hello GitLab. Oops, I don't think I have Go on the path for some reason. So...
A: I may have made some changes recently which messed up the path, but otherwise you basically have whatever is installed on the image, and you should be able to run the code. You can also pre-install extensions and things like that, if you wanted to; if I wanted to install the Go extension, for example, I could do that, so that I don't have to do this and everything comes pre-installed.
A: So that's the basic functionality. There's also additional functionality: we can do web development, so I can open up ports and things like that; that's also possible with this. The other thing we can do is change the state of the workspace. Right now it's running, so I'm being charged for this workspace, once we actually start charging for usage and things like that, and I can go ahead and stop my workspace at any time.
A: So if I stop my workspace and start refreshing my screen, you can see the status is stopped. If I actually go into Kubernetes and say k get po...
A: No, it just injects it; it should detach as well. To install the editor, all we do is run an init container which installs the editor, so that shouldn't be a problem. So now, if you see, it's fully terminated, and if you see here, I should update this status. So: stopped. This is now fully terminated, right.
A: So that's one thing; we are obviously looking at the status there. From Rails, you should be able to fire events to change the status, so I'm doing start here, but I can also terminate, for example, so I could completely remove that workspace. The other thing I wanted to enable: if I were to change the state in Kubernetes directly, for example for this deployment, where I have the replicas set to...
A: Yeah, so the deployment will be set to zero replicas; if I set it to one, then the state in Kubernetes has changed. This could be, for example, a node became unavailable, my workspace got disabled or whatever, but now it's running again. We wanted to reflect the state in Kubernetes in the workspace, so you can see it now says starting up, because I changed the state in Kubernetes.
A: It reflects that same state of the developer workspace in Rails as well, so that we're always reflecting the right state. This is going to be useful in the future when we do things like use it in billing: if you do usage billing, you know how much time the workspace was actually up for, and we only charge for that much. We need to be able to enable those use cases. Good morning!
A: So now you can see it's running, and if I say k get po now, you can see the workspace is back up and running. Same for termination: if we were to terminate this workspace, you could just go ahead and hit terminate, and what this does is clean up all the resources. As you can see, it's in a terminating status, it's not fully terminated yet, and if I list the namespaces this time, you can see that the workspace-40 namespace is now terminating.
A: Okay, perfect. All right, so I'll just start from server.go; I think that's the most useful part, and I'll get into some details. This first part, which I want to talk to you about, is perhaps the most controversial, but we can discuss better ways to make this happen.
A: So the first thing is that, on the Rails side, it's very simple: we fire events, and those events are basically sent to KAS. The way we do that is via this gRPC method called create event, and create event is exposed through the KAS gRPC gem. In Rails itself we have this method here, which just triggers an event, and then, based on the event...
A: These events are start, stop, terminate; it's basically an event triggered from Rails, and that makes a gRPC call to KAS via that gem. Let me show you the proto definition, maybe that will help. In the proto we've got three methods. The first one, which I'm talking about right now, is the create event method: it's a unary call, which makes a request and gets a response back. Similarly, we've got some other calls, which I'll get into; one is the call from agentk: get work is the API which is called from agentk into KAS, and then update workspace status is called from agentk into KAS as well, and that's basically to update the statuses of things.
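A hedged sketch of what that proto surface might look like; the service, message, and field names here are assumptions rather than the actual ones in the GitLab repositories. The point is the three RPCs and their shapes: two unary calls and one server-streaming call.

```protobuf
syntax = "proto3";

service RemoteDevelopment {
  // Rails -> KAS: fire a start/stop/terminate event for a workspace.
  rpc CreateEvent(CreateEventRequest) returns (CreateEventResponse);

  // agentk -> KAS: long-lived stream of work items to apply.
  rpc GetWork(GetWorkRequest) returns (stream GetWorkResponse);

  // agentk -> KAS: report the observed workspace statuses back.
  rpc UpdateWorkspaceStatus(UpdateWorkspaceStatusRequest)
      returns (UpdateWorkspaceStatusResponse);
}
```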
A: So if you look at the definition of the request, it's quite simple: it's got some information about, sorry, what event type it is, which workspace it's for, and some generic payload details; the payload is basically JSON-encoded payload information. That's what's sent here, along with some other information, like which IDE to inject, which workspace it is, the IDs, and things like that.
A: So that's what is in the base event request, and that is where server.go comes in. It handles create event from Rails, marshals that event into a string, and all it does is publish that event to a Redis queue. It puts this into a Redis queue, and it's just doing an RPUSH into Redis to actually push the details in. What it does is create one event queue per agent: based on the agent ID we have a separate queue, so that the agents are not getting the wrong messages from each other. So we've got a queue per agent, basically, and what we're doing is an RPUSH into that, and that's pretty much it; that's the end of the event cycle. The event has been written into Redis, and we stop there.
A: I had a question here: is there a chance that there will be no Redis, and then I'd have to fall back to using channels instead of Redis? Are there KAS deployments where Redis is optional?

C: No.
A: Okay, okay, so that's fine, because I was thinking that in case it's not there, the fallback would be to use a channel to just transfer...
A: A single-instance KAS, okay. So once that is done, the second cycle continues. Simultaneously, agents are coming up, and they invoke this get work API. Get work is a gRPC call from agentk into KAS, and if you see, it's a streaming call: when agentk makes a request, it gets multiple responses back.
A: Each response is basically just an event, saying: hey, here's something, apply this; here's something else, apply this. That's what we send into this. So most of the logic of what...
A: All right, so we're calling this and getting the agent info, so we're getting the agent ID, which is important because we then listen to that queue. As soon as get work is called, KAS starts listening to the queue to send messages to the agent.
A: It uses the same Redis key to be able to send, and then, once that happens, I'm just using a Go template, and I'll show you that later, to form the thing that needs to be applied. I'll show you what that Go template looks like slightly later, but basically I start this infinite loop.
A: And then, if it gets a message back, it unmarshals it and tries to figure out what to do with it. So it gets...
A: It makes a call to Gitaly to get some information, and then, finally, what it actually does here is update the status, saying hey, message received, and things like that. But what's most important here is that it calls the parse devfile function first. What the parse devfile function does is call the devfile library to take the information from the devfile and convert it into a deployment.
A: Okay, and there are some complexities here: we create some init containers to inject the IDE. Basically, injecting the IDE is as simple as taking the IDE binaries, for example the web IDE binaries, or if it's VS Code, the VS Code binaries, and copying them from what we have into the container that's running, so that you don't really need to have the IDE installed in your container.
A: We also create the containers, we mount volumes within them, and then we're creating a security context; there are some other bits there, readiness probes, and we're adding some deployment parameters as well. It's more complex than I remembered, we have added a few things there, but basically, finally, what it's doing is calling the devfile generator to create a deployment, passing in a number of parameters describing what the containers should look like.
A: It's also generating services from the devfile library, so it generates which services need to be exposed, and finally it also generates Ingresses for this, based on the host and things like that. This is a bit more complex: we needed to see whether they're public, because the devfile can be a pretty complex thing, and you need to see whether a path is publicly exposed to be able to expose the service completely, and things like that.
A: So we're doing that, but basically, in the end, we're just creating an Ingress object and returning that. So at the end of this, we parse the devfile and we get some objects from it. If I go back to server.go...
A: What you can see is that we parse that file, we get those objects, and then we parse a template. We've got this Go template which decides how to render all the objects that should be applied; let me show you that. It's just a very simple embedded template, and in it we've got the things that should be created, like a persistent volume claim and a service.
A: It's a template which you can edit anytime to decide which objects should be created. And then, finally, once we've done that, we push that response to the client, so to agentk: we go to agentk and say, hey, can you please apply these changes.
A: I'll show you the agentk side as well, very quickly. On the agentk side, if you see, it comes into the run context. It starts running, it tries to get the config, and there are a number of things we've included in the config. What I didn't show you earlier was the config of the workspace; there are a few things that we include, so I've added a number of things to the config.
A: One is what domain this needs to be exposed on, and what suffix needs to be given to the domain for every workspace, because for each workspace we generate a new domain name; whether we should inject the auth proxy or not, which is the authentication layer; and, this is an interesting one, a redirect port for authentication. We need to start an HTTP server; I'll explain that in more detail, it does need a bit more explanation. And then a secret as well.
A: We need secrets to be able to do authentication, like the client ID, client secret, and all those things, so which secrets are used goes in as well. All of that goes into the configuration, and then there are a few more config parameters which I don't have included here, but basically those config parameters are used by both the KAS element and the agentk element to figure out what to provision. Okay, I'm going over this really quickly; I think it's a bit complex, but you know.
A: Okay, perfect. So we wait for the config to be found; if the config is not found, we don't start. If there's no remote dev config, we just don't start anything, because until the agent has config, it doesn't make any sense. Once it does get the config, we start three things, three loops. The first one is just to start getting work.
A: These are push elements coming in from Rails: Rails is just telling us, hey, start this workspace, stop this workspace, whatever. So that's the core loop. We also start an HTTP server for authentication redirection, and we need this because of an issue with wildcards; I'll explain that in more detail later, but we have to start an HTTP server as well. And then, finally, we also start an informer, and I think...
A: Perhaps we spoke about this briefly, but basically we're listening to Kubernetes events, so that in case Kubernetes cannot run the workspace, or it stops the workspace for whatever reason, we reflect the same status in Rails. So we start that Kubernetes informer to be able to do that, and the informer is very simple.
A: Should we go over the informer? Let's go over it very quickly first, I suppose. Basically, what it does is create a new shared informer, and it only selects those deployments whose label matches the agent ID. Because you could run more than one agent in a cluster, we want to make sure each agent is only looking at the deployments that it manages, so we're labeling each of the deployments with the agent ID, so that we know this agent is only managing this deployment.
A: So it only looks at those deployments, and things like that. Now, as part of the informer, we do a resync. This is a config parameter which is passed; it was not there in that repository I showed you, but basically, if you set that config parameter, when the agent comes up for the first time, it'll sync all the workspace statuses with what's on the server.
A: Let's say the agent went down for some time, but you want to make sure all the correct workspace statuses are reflected in Rails. What you can do is set this resync-all flag; when the agent comes up, it will read all the deployment statuses from the cache and then go and update Rails with everything. This is just to make sure that the latest status is reflected in Rails if it goes out of sync, or the agent goes down for some time, for whatever reason.
A: Yeah, so it stores the status of the deployment in the database. Now, one of the things I had considered is that, using the same publish mechanism, we could actually fetch the status in real time as well: we could say, hey, give me the status, and fetch it in real time. But I think just persisting it seemed simpler.
A: Yeah, okay. The other thing we do: if a workspace has been terminated, we also look at those. If a workspace has been terminated, you won't have any...
A: You won't have any record of it in Kubernetes anymore, because there's no workspace anymore and the agent has missed those events. So we handle that as well, by looking at which workspaces are running according to Rails, and it reconciles those too. And that's all the informer really does.
A: Now, I guess I could have directly called, you know, I saw that you had this GitLab access helper thing. Yeah, gitlab_access; I use that somewhere else, but I saw it much later, and I suppose I could have called that. I am actually using it here for fetching existing workspaces; I use polling with backoff, and then...
A: ...that, but not for the status update. Anyway, I suppose that's some refactoring we can do. And then the actual, this is the main work loop: this is the informer loop that I showed you, and the last thing is this get work loop.
A: What the get work loop is doing: as soon as the agent starts up, it needs to start getting work, but even before that, it needs to read some secrets. The secrets contain information like the client ID and client secret, and which certs to use for HTTPS, because the workspaces need to be served over HTTPS, and things like that.
A: So what it does is, first, fetch the secrets when it comes up; then it starts getting work from the server and starts that stream, because it's a push, and then it keeps receiving messages in an endless loop. Now, if the context closes, it will close this and so on, but that's different; let's say it's running successfully: it will get some work, it will check the type of event, and then, mostly, what it does...
A: It applies. There are only small differences between start, stop, and provision; for destroy, the only difference is that it destroys the whole namespace, so it's just a delete of the namespace so that everything is cleaned up. In the case of provision, what it does is create a new namespace, and it creates a secret; the secret information never leaves the cluster.
A: What we do is, in the config, always store which secret it is, and then we just copy this information from the agent itself. So the secret information never actually leaves the cluster: none of the client IDs, client secrets, or any of those things travel back to KAS.
A: All we do is copy those secrets across: first the TLS secrets, the certs and the key, to serve the workspaces over TLS.
A: We create a cookie secret, which is needed for the auth proxy, and then we do a bunch of things like the redirect URIs and the whitelisted domains; those are also copied. So a bunch of things happen in provision which are a bit different, and then, finally, it just applies that change and changes the state from provisioning to running once the apply is complete. And our apply, again, is very, very simple, and I've...
A: It's basically the cli-utils sync apply, and I basically copied that shamelessly, because it made sense; you've got that whole inventory object going on as well, right, so I copied that too. So, things like that; I think that's it. The only other part is this start HTTP server thing. This is needed, and let me try to show you this...
A: No, no worries at all. It's early for you, late for me; my daughter just fell asleep, luckily. So, anyway, this is the process of how redirection works for the IDE. Basically, when a user tries to access the IDE: we inject a sidecar into every workspace.
A: If you saw, there were two containers running in the pod even though we had defined only one, the Go container; what we do is actually inject a sidecar for every workspace. This does the actual authentication: it checks the cookie, and if I'm already logged in, I don't need to do anything. If the cookie is not present, it redirects to GitLab; GitLab prompts the user for credentials; the user enters credentials; and we redirect with the auth code. Now, this is interesting.
A: This redirect with the auth code has to go to agentk. The reason is that every workspace is served on a different domain: you'll have, let's say, workspace-40 at workspace-40.<something>.localdev.me or whatever, so every IDE is served on a different domain, and because of that we can't redirect to it directly; wildcard domains are not allowed in redirect URIs, so we need to redirect to a centralized place.
A: Hence we redirect to agentk. Agentk serves a very small, lightweight HTTP server, and that HTTP server then figures out, based on the state, which workspace to actually redirect to. It does that, and then, finally, it's just standard OAuth after that. Perhaps the most complex part was the fact that we had to redirect to agentk to be able to do the authentication.
A: Yeah, and that's the only way I could figure out how to do it. I must admit this was fairly complex, getting this authentication flow working, but basically that's what it is. It's a very simple Go server: it starts an HTTP server, handles a redirect, looks at the state, and then just redirects to the correct host. That's all it's doing.
A: It's quite a simple sort of server function; nothing special, just context handling and nothing else.
A: So I think that's about it. There's not a lot going on: the agentk side is pretty generic, it just has three things, which are the apply loop, starting the HTTP server, and the informer. And the server side, all it really does is read the devfile, and from the devfile it figures out the shape of the workspace, then generates the Kubernetes manifests and applies them. So that's pretty much it.
B: I mean, I think that's, I guess, how to say it: what's different between this and before?
A: That's true. So one of my plans is that, from the Rails side, if a workspace is stuck in a status, for example, let's say we were expecting it to be provisioned and it got stuck in a provisioning state because maybe the event was written to Redis and never picked up, we can re-trigger the event from the Rails side. That was one thing I was thinking of doing.
C: Yeah. Then another thing is, as you probably know, with events it's...
A: We are persisting the desired state, and Kubernetes tries to make that happen. Our desired state is: hey, we want this service, this Ingress, and this deployment. The only thing is that when the desired state changes, we need to inform it; until then, it will keep trying to reach its desired state. So the only difference is that we are triggering things to let it know, hey, now the workspace should be stopped, or the workspace should be running, or whatever, and I...
A: ...am just informing the desired state to Kubernetes. Kubernetes still makes that state happen and decides what it needs to do, so it's still running.
C: I mean, what would be an alternative way? Without... so there are probably several alternative ways. To be clear, I'm not saying this is not good or anything; it's just brainstorming. You asked, sorry, so I'm answering. This is kind of an indirect way to make an API call to agentk, basically, to make it do things. We could just make that call instead: make an API call into agentk and tell it to do stuff, or...
C: Without Redis, yeah: you wouldn't need Redis here to temporarily persist data and then wait for agentk to poll it. Like with the Kubernetes reverse proxy: we accept an HTTP call, do some processing, wrap it into a gRPC stream, and then send it to the correct agentk via a reverse tunnel. So, interestingly, what we could do here is use the private API server, the one that's called by Rails, to do that transformation.
C: And then your state would be persisted in Kubernetes.
C: Yes, the server proxy; there's a lot of stuff here, but basically, this is just plumbing. All of that... you basically make a request to the Kubernetes API.
C: Yes, it all works via reverse tunneling: the agent establishes a tunnel, then KAS finds another KAS which has that tunnel, sends the traffic there, and then that's...
C: But the data is not, it doesn't need to be, persisted there; it keeps being refreshed. Every n seconds, each KAS ensures that the information about the connected tunnels it has is there, and then another KAS can see that, and...
A: Sorry, yeah, go ahead.
C: The people who are doing the same thing in a parallel universe, kind of. So what I suggested to them is to persist the desired state in the database, and then the agent just fetches that, through KAS, from the database.
C: Very similar, yeah. So I think we need to just reconcile the two implementations, maybe.
C: And another difference is that they use the operator and you use the library.
C: So I think using the library is a better approach, because the user doesn't have to install anything. But I know that they want to use the operator because it's more mature, more up to date, and has other benefits, I guess. And that's also not a critical difference, because, my understanding is, they would also like to migrate to the library eventually. It's in everybody's interest, especially the user's, because that's fewer things to manage, and that's good, yeah.
A: We'll make sure we bring the code together. The thing is, we were working on separate approaches, but we're kind of converging on the same approach, and once the approaches mature, we will combine the code and probably stick to that approach of polling for now, as long as it's effective. The only difference is that we'll have to keep the poll interval short, because typically, when users are creating workspaces, they don't want to wait too long.
C: Correct. So then, when something pops up, all the KASes can get that notification, and the agent, which is waiting for a reply from KAS, then KAS can immediately fetch the data, or...
C: The same kind of watch, plus a periodic re-list for safety, yeah.
A: Yeah, it's a good approach to have both options, where, in case you miss an event, you still have the option to go poll and get that event.
C: I think it actually solves the problem of having to think very hard about what happens if this doesn't work, or that doesn't work: they will just poll later and eventually succeed, if there was a transient failure, like a network issue or whatever.
A: True, that's absolutely true. Okay, I will think about that as well. I'll also look into the synchronous approach that you've got; I'll look deeper into the code to understand it a bit more. It looks pretty cool, actually, so I'll see how it works. It feels like magic, opening up a reverse tunnel and then being able to make calls; it's interesting.
A: The other thing I wanted to ask you, if you have just five more minutes: one of the things I want to support is the ability to stream logs from the workspace. The reason is that the workspace may be misbehaving, or something like that: you may trigger it off an image, and the workspace may actually have problems because of the image, or the IDE is not starting up, or something like that.
A: So I want to be able to stream logs from the workspace onto the Rails UI. One of the things I was thinking of, which is not the best method, is to make a similar synchronous call. But I think that if I have this Kubernetes client which I can call from KAS, which talks to the agent, then I'm guessing I can also fetch the logs from a deployment using the same mechanism.
C: I think you will not need to do anything at all, basically; it's all on the UI side. Someone is currently working on making it possible to make Kubernetes API requests from the browser. It's mostly authentication; everything is there already except authentication, so he's working...
C: ...on authentication and setting cookies and all of that. Once it's there, I think from the UI you can just get the pods in that workspace via the Kubernetes API, and then, for each pod, call the log endpoint and get the stream of logs. All of that is in the frontend, and none of it requires any code in your module.
C: We just need to set the user's cookie on the KAS domain, the domain that exposes it.
A: Yes, okay, that makes sense. Okay, that was it. Thank you for your feedback, Michael. I'm going to have a deeper look into some of this code; it was really useful for me.
C: Oh yeah, thanks for the chat; it's really cool to see this happen, and I'm really looking forward to it. I just tried the Web Editor several times, and it's excellent, and this stuff will be even better. You can feel the evolution of how people develop.
A: No, thank you. All right, that's pretty much it; I'll let you get on with your day. Thank you so much.