From YouTube: Distribution Team Demo 2021-06-17 - loft-sh/vcluster
A: Welcome to the weekly distribution team demo. This week I didn't really have anything too exciting to bring; I just came back from being gone for quite a while and I'm still catching up with a lot of stuff. But I did want to demonstrate a tool that I have learned about, one that sort of has some promise and some interesting applications for us, not for all the time, but once in a while for some of our uses. Let me bring up the web page real quick.
A: It's a Go tool that allows you to simulate multiple Kubernetes clusters within a cluster, so I played with it a little bit while I was gone. It is still beta software, and it shows: once in a great while it locks up, disconnects from the virtual cluster, and has some problems reconnecting, but I've only had that happen once or twice while playing with it.
A: But anyway, let's get into it. I've got a pretty standard Kubernetes cluster running here, three nodes instead of my normal two, just because I want to make it a little bit bigger since I'm bringing up two copies of GitLab in it. That's what we're seeing in the large terminal on the left. I'm going to go ahead and start creating clusters; actually, I'll start with the bigger window so it's easier to see.
The command line is fairly simple: there's a create, connect, delete, and list.
A: We tell it what namespace we want the virtual cluster to be in. Since I was playing with this earlier, I may have one left over; let's take a quick look.
A: And then we give it a cluster name; we'll just call it cl1. What that does is actually spawn off a specific Helm chart, and the parameters let you, if you modify their Helm chart a bit, specify another Helm chart to use, another repo to use, and all that. As we can see, it's spinning up a new pod in cl1 right now.
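The create step described here might look like the following sketch; it is based on the 0.3.x beta CLI, and the chart-override flag names are assumptions that may differ by version:

```shell
# Create a host namespace, then create the virtual cluster inside it
# (vcluster deploys its Helm chart into that namespace):
kubectl create namespace cl1
vcluster create cl1 -n cl1

# To point it at a modified chart, the CLI takes chart overrides at create
# time; these flag names are assumptions and may differ by version:
#   vcluster create cl2 -n cl2 --chart-repo <repo-url> --chart-name <chart> --chart-version <ver>
```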
A: No problem, no problem at all. All right, so we have the first cluster up, and the second cluster is starting to come up. Okay, so in order to connect to it, you have to run the connect command, but it actually runs as a proxy and creates a kubeconfig for you in a file. So basically you'll have to have a window where you just have vcluster running in the background, so you can talk to that cluster. So then...
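The connect-as-a-proxy flow might look like this sketch; the `--kube-config` flag name is an assumption, and the output filename is whatever you choose:

```shell
# Terminal 1: start the proxy; it keeps running and writes a kubeconfig file.
vcluster connect cl1 -n cl1 --kube-config ./kubeconfig-cl1.yaml

# Terminal 2: point kubectl at the generated file and talk to the vcluster.
export KUBECONFIG=./kubeconfig-cl1.yaml
kubectl get namespaces
```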
A: And I'm sorry, I forget all the parameters here, but just namespace... yeah, just name.
A: Let's call it kube1 for now for the kubeconfig, so I don't confuse myself at some point.
Okay, so that will install GitLab in the first virtual cluster, and we'll see that start to populate here in a moment. We've connected to the second cluster; we're just going to set our variable.
A: That's interesting... oh, actually, this is something I had not tried yet: using two vclusters at the same time. I kept switching back and forth when I was playing with this the other day, or a couple of weeks ago. Let's see what we can find; there's got to be a way of specifying ports.
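One way to run two connect proxies side by side is to give each its own local port; `--local-port` is an assumed flag name for this beta CLI:

```shell
# Run both proxies in the background on distinct local ports
# (--local-port is an assumption; by default connect uses a single fixed port):
vcluster connect cl1 -n cl1 --local-port 8443 --kube-config ./kubeconfig-cl1.yaml &
vcluster connect cl2 -n cl2 --local-port 8444 --kube-config ./kubeconfig-cl2.yaml &
```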
A: And now we see just the single CoreDNS there, and on the full cluster view, the actual cluster, we can see GitLab starting in cl1.
A: Yeah, I should have changed to the kube-system namespace there too, but oh well. All right, so that will start up external-dns there. Now, I did experiment with this a little bit last night, playing around trying to get things working a little bit better, and one of the things I did find, which I'm still puzzled about and will investigate a bit more, though I wonder if it could partly be because it's still beta software: if I install GitLab in the original cluster, everything works fine, comes up, and you can get to the instance just like we normally do.
A
If
you
install
get
lab
in
the
virtual
cluster,
everything
comes
up,
looks
good,
except
for
let's
encrypt
is
not
able,
but
actually
the
cert
manager
is
not
able
to
get
certificates,
negotiate
certificates
for
the
web
services,
and
what
I
see
is
that
specifically,
the
ingress
controllers
having
problems
getting
the
end
points
off
the
ingresses
and
services
so
we'll
see
if
that's
still
occurring
actually
by
now,
we
should
be
able
to
start
to
see
that.
A: What I was seeing in the output of the ingress controller is... yep, we're seeing the same thing here. There we go, we're starting to see the errors right here: "error obtaining endpoints". I haven't quite figured that out, because if you look at the endpoints, everything looks good, and if you look at the service or the ingress, everything looks good. So there's something there.
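The checks being described could be reproduced roughly like this; the `gitlab` namespace and the ingress-controller deployment name are assumptions for illustration:

```shell
# Inside the virtual cluster, the objects themselves look healthy:
kubectl get endpoints -n gitlab
kubectl describe ingress -n gitlab

# ...while the ingress controller keeps logging the failure
# (deployment name is a guess; adjust to your release name):
kubectl logs -n gitlab deploy/gitlab-nginx-ingress-controller | grep -i "obtaining endpoints"
```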
A: Yeah, so for some reason those pods are either rotating, or somewhere along the line they're getting bad information. I don't know why that's the case yet. So, unfortunately, at this point getting GitLab up and running inside the virtual cluster is not working so well, but it seems like it's getting close.
A: From what I've seen, vcluster just manipulates the name of the pod to encode the namespace and things like that for the virtual cluster, and...
A
There
we
go
that's
running
we'll
start
popping
up
here
in
a
second
and
where
I
see
this
really
being
useful
is
things
at
least
at
this
point.
Given
the
again,
I
haven't
tried
a
whole
lot
of
things
yet,
where
I
see
this
being
really
useful,
is
bring
up,
get
lab
in
your
normal
cluster
and
then
create
a
virtual
cluster.
A
If
you
want
to
create
the
stereo
using
something
like
get
get
lab
runner
or
kaz,
or
something
like
that
in
a
separate
cluster,
or
you
want
to
create
an
environment
where
you
have
four
different
clusters
that
without
having
to
go,
create
four
separate
clusters
and
then
have
a
variety
of
the
agents
and
so
forth,
running
in
different
clusters
and
testing
the
being
able
to
connect
across
clusters,
I
sort
of
was
envisioning
this
for
geo.
A
You
know
where
you
could
create
two
or
three
clusters
and
try
to
get
geo
up
and
running
without
having
to
go
hassle
or
create
multiple
clusters
and
the
extra
expense
of
the
extra
clusters
and
so
forth.
But
since
gitlab's
having
an
issue
running
at
this
point
inside
the
virtual
cluster,
that's
not
too
yeah.
A: I was getting this last night quite a bit because it's not getting a certificate, so we can try and try and try, and it's not getting there. We could certainly do a port-forward and probably get it up and running, just ignoring the certificate, but that's sort of a stopgap one. Yeah, go ahead.
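The port-forward workaround mentioned here might look like the following; the service name and port are assumptions based on a typical GitLab chart install:

```shell
# Bypass the ingress and certificate entirely and talk to the web service
# directly (service name/port assumed from a typical chart install):
kubectl port-forward -n gitlab svc/gitlab-webservice-default 8080:8080
# then browse http://localhost:8080 and ignore the certificate problem.
```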
A: Well, mostly because I just have the environments usually getting Let's Encrypt certificates. In theory, self-signed would probably work just fine, so that would be a good experiment. But that's really what I wanted to demonstrate: vcluster. I'll just throw it out there and let people think about it for a bit, and see if there are any other interesting uses for something like this, where instead of creating multiple clusters we can create clusters quickly, on the fly, within a cluster. And here we're looking at a virtual cluster; it looks like a normal installation without all the additional encoded data that vcluster puts on it.
B: So, crazy question: if you pop into, say, the task-runner pod, since it's not falling over, what does its internal name look like? If you pop into it, we'll see what its hostname is according to itself.
A: You know... okay, yeah. I mean, from what I'm seeing, it seems to be pretty good. I'm still a little puzzled; I might create a bug report to see if the developers have seen or know about this issue of having problems seeing the endpoints. Again, I don't know if that's just the way ingress is looking at stuff and trying to parse through the data, or... I mean, ideally, it should just be doing a Kubernetes API call that says "give me the ingresses", and it should come back correctly.
A
So
is
it
that
v
cluster
is
not
representing
the
api
call
correctly
and
given
something
that
the
ingress
doesn't
like.
I
don't
know
yet,
I'm
not
sure
I'm
gonna
spend
the
time
to
really
dig
into
it
that
to
that
level,
it's
it
looks
like
a
useful
tool,
but
I
haven't
really
delved
into
it
to
where
I'm
gonna
go
yeah.
Let's
go,
spend
a
couple
hours
to
get
through
this
and
figure
it
out.
B: So, quickly, something I noticed: vcluster makes use of k3s to run its API. Well, it's actually an API server, but we'll call it an emulation for the sake of understanding the separation.
A: If you're looking to do any testing where you need some latency between the clusters to simulate geographic distance, yeah, it won't do that. You'll still have to create multiple clusters for that. But otherwise I'm finding it reasonably good, and I'm hoping that when they do get beyond beta, some of these issues will be solved.
A
So
by
the
way,
if
you,
I
wrote
last
night,
the
asdf
plug-in
for
vcluster
and
it
actually
got
accepted
earlier
this
morning.
So
if
you
want
to
play
with
this,
you
can
just
bring
it
in
through
astf.
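Installing it through asdf would follow the usual plugin flow; the plugin name "vcluster" is an assumption:

```shell
# Standard asdf flow (plugin name assumed to be "vcluster"):
asdf plugin add vcluster
asdf install vcluster latest
asdf global vcluster latest
vcluster --version
```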
A
So
anything
else
or
any
other
questions.
People
got.
A
It's
not
gonna
like
cut
our
costs
in
half
or
anything,
but
if
you're
doing
something
where
you
need
a
couple
clusters,
it
can
save
a
couple
nodes
like
right
now,
I'm
running
on
three
nodes
which
it's
actually
pretty
reasonable
in
three
notes.
If
I
were
to
go
back
to
my
normal
two
nodes
and
try
to
run
two
instances
of
gitlab,
I
start
to
get
a
little
bit
uglier,
but
I
think
that's
sort
of
doable.
A
It's
just
gonna,
be
wicked,
slow
and
may
see
some
pods
faulting
once
a
while
for
memory
issues
but
yeah.
It
keeps
me
from
having
to
bring
up
another
note
or
two
for
multiple
clusters.
A
So
all
right,
I
think
that's
it.
I
I
don't
have
any
anything
else.
Unless
someone
has
any
other
questions.
A
I
I
thought
about
that
a
couple
weeks
ago,
when
I
started
playing
with
this,
can
we
use
it
in
ocp
to
create
multiple
open
shift
or
I
don't
think
they
look
to
the
level
really
gave
fully
to
openshift
yet,
but
because
it
k3s
doesn't
really
support
all
the
openshift
pieces
yet.
But
it
would
be
interesting
to
see
what
happens.
B: Agreed. Knowing Rancher and the k3s folks as I do, that's not going to happen anytime soon, but maybe they can find somebody who will pick that up on the Red Hat side.
A: Yeah, I saw it announced probably about four weeks ago. I don't know what version of the code I was actually playing with at the time, but, as I said, I wrote the asdf plugin last night against the 0.3.0-beta.2 installation and did my testing on that. And then this morning I woke up and there's a beta.3 now. Not saying that they're iterating really fast, but they are at least still working on the code and pushing it forward.
A
So
hopefully
we'll
see
a
full
release
here
soon
and
maybe
a
little
bit
more
stability
and
a
little
bit
more
support
for
some
of
the
objects
but
yeah.
C: Not to cut anyone off, but it feels like we're at a good spot.
A
So
again,
that's
loft,
sh,
v,
cluster
or
loft
shp
cluster.
You
want
to
go
look
at
that.
Oh
and
I
I
just
realized.
I
didn't
put
anything
in
the
notes.
I'll
go
put
some
notes
in
the
document
and
put
that
link
in
there
and
so
forth.
C: I think some of us got those, so you can take a look. But okay, I think we've got most of it, at least the links and whatnot, in there.