B: Thanks, Steven. Hello everyone, and welcome to the Distribution team demo for the week of November 8th. We moved the demo to Monday this week. We're wanting to get out a QA test of our CentOS package, or rather our Enterprise package, running on AlmaLinux. We want to get a QA test done, and I thought that would be a really great demo to walk through what a local run of GitLab QA looks like.
B: We wanted to do it early in the week, and it seemed like a good demo, so that's why we're running a little earlier in the week this week. I have an instance set up, but before we jump into that, let's talk a little bit about the GitLab QA test runs, for those who aren't familiar with them.
B: For the most part we run GitLab QA using CI pipelines, and those are triggered in multiple places. There are roughly two areas they run. One is on work-in-progress changes in merge requests; those can be triggered from the main Rails branch, from our CNG project, from our Omnibus project (by triggering a package build), and also from the charts, all of which make use of the GitLab QA project to run end-to-end tests on the package suite. The second place these run is our instance environment tests, which run against our staging environment and a couple of other environments that are set up by QA.
B: In terms of the work-in-progress merge requests, they tend to use our Docker images for running the tests; part of that is even the test suite booting those images and setting up the instances within those Docker images. Then there's another set of tests, similar to how we run against staging, which are designed to run against an already-existing setup.
B: Here on our team, every couple of months we might get a request that involves running the QA tests against a platform that we don't have in automation. Part of that is because right now our Docker images are pretty much all Debian-based. For this AlmaLinux package (we're specifically talking about our Omnibus Linux packages here), we only have an Ubuntu Docker image, and all of our automated tests are based around those Docker images.
B
So
we
don't
currently
have
a
ci
pipeline
that
can
run
this
test
suite
for
me,
and
I
I
guess
I
would
have
had
to
set
up
even
if
we
did
have
it
probably
wouldn't
have
been
at
this
point
running
all
the
linux
yet
for
this
particular
test,
but
every
couple
months
we
have
something
like
this.
That
comes
up
recently.
It
was
a
matter
of
on
slash
linux.
B
We
had
an
old
package
that
was
uploaded
poorly
on
an
old
gitlab
version,
for
we
had
the
correct
package
for
an
older
version
of
slash
and
a
bad
package
for
a
newer
version
and
we
needed
to
validate,
and
it
was
so
far
back.
We
couldn't
rebuild
these
packages.
We
didn't
learn
that
it
was
corrupted
for
many
months
later,
so
we
needed
to
run
a
test
to
confirm
that
that
older
package
passed
all
of
our
validation
on
the
newer
sls
install.
B: I just booted it with the default community AMI from AlmaLinux, as specified on AlmaLinux's wiki; I looked up the AMI for my region and used it. Other than that, I just opened the typical ports you would for GitLab for the basic web service, so port 80 and port 443, in the security group wizard. Beyond that it's pretty basic, not a lot of additional settings. It's a, what is it, a c5 8-core instance?
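For reference, the security group step from the console wizard has a CLI equivalent; this is a hypothetical sketch, and the security group ID is a placeholder, not a value from the demo:

```shell
# Open the basic web ports described above (placeholder group ID)
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 80 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
```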
B
I
think
a
pretty
beefy
instance
just
to
make
sure
that
we're
able
to
run
these
tests
quickly
and
I
think
it
has
a
50
gig
ssd.
Oh
yeah,
it
was
a
c5
2x
launch
in
terms
of
customization
on
once
the
instance
was
running.
I
installed
gitlab
onto
it
and
did
customization
on
the
instance
to
the
config.
The
only
two
things
I
did
was
I
linked
to
them.
B
I
linked
to
them
in
the
our
notes.
Is
I
enabled
fast
look
up
via
gitlab
shell
in
the
essence
sshd
config
file?
I
don't
know
if
this
is
necessary,
but
I
was
in
there
anyway,
so
I
went
ahead
and
did
this
for
fast
looked
up
so
in
case
any
qa
tests
are
testing
against
this
feature.
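For reference, the fast lookup change boils down to pointing sshd at gitlab-shell's authorized-keys check. This is a sketch based on the GitLab docs for an Omnibus install; the exact path can vary by version:

```
# Apply only to the git user; let gitlab-shell answer SSH key lookups
Match User git
  AuthorizedKeysCommand /opt/gitlab/embedded/service/gitlab-shell/bin/gitlab-shell-authorized-keys-check git %u %k
  AuthorizedKeysCommandUser git
Match all
```

Restart sshd after editing for the change to take effect.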
B
It's
now
in
there
and
then
the
other
thing
I
did
was
this
is
in
the
same
file.
I
did
configuration
of
git
protocol
v2
in
the
hd
sshd,
and
this
is
from
prior
experience
running
these
qa
tests
that
we
do
have
tests
that
are
testing
that
basically
things
like
get
push
options,
work
which
I
believe
are
part
of
the
protocol
version
two
or
that
may
not
be
the
case.
Maybe
push
options
are
backwards
compatible,
but
we
do.
We
definitely
do
have
tests
that
are
specifically
for
git
protocol
version
2..
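Per the GitLab docs, the Git protocol v2 piece is a one-line sshd_config addition, so the client's requested protocol version survives the SSH hop:

```
# Pass the client's GIT_PROTOCOL environment variable through sshd
AcceptEnv GIT_PROTOCOL
```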
B
I
thought
it
was
good
for
the
coffee,
too,
that's
possible,
so
it's
been
the
last
time
I
ran
these
tests
was
like
two
months
ago
and
they
weren't
quarantined
then
so
this
will
be.
My
first
test
run
against
the
this
alma
box.
I
haven't
run
the
qa
test
against
it,
so
this
this
change
was
based
on
that
run
from
two
months
ago.
B
Basically,
that
I
ran
into
this
cool,
so
it's
possible
that
it's
not
required,
and
so
that's
probably
so,
I'm
gonna
go
ahead
and
get
the
test
run
started
and
probably
and
it'll
probably
take
longer
than
this
column.
We
won't
go
through
everything,
but
in
my
experience,
when
doing
a
local
test
run,
particularly
when
using
in
this
case,
because
this
build
I
installed
of
all
the
linux-
is
from
this
morning-
basically
off
a
master
and
I'll
be
using
the
nightly
qa
image
and
the
master
branch
of
the
gitlab
qa
project.
B
Because
of
all
those
things,
it's
very
unlikely
that
my
first
test
run
will
work
like
my
first
test
run,
will
even
work
in
the
way
that
I
expect,
and
it
will
probably
fail
in
some
way
that
I
have
to
explore
what
to
toggle
or
what
config
to
pass.
B
So
there
will
be
that
and
it'll
take
a
while
once
we
do
get
it
going,
it'll
take
a
while
to
run
the
test.
So
probably
the
more
interesting
part
of
the
demo
today
is
getting
getting
it
set
up.
B
So
I
have
my
instance
set
up.
B
Perfect
interesting
I'll
come
back
to
that
in
a
second
that
that
kind
of
was
a
tangent.
I
got
distracted
by
that
banner.
B
So
in
terms
of
the
qa
project,
there
are
docs
in
the
qa
folder
on
the
main
gitlab
project
that
are
kind
of
a
good
introduction
to.
B
The
end
to
end
tests,
they
largely
focus
on
running
them
in
the
gdk,
though,
which
is
a
little
bit
different
than
what
we're
doing
here,
but
they're,
not
too
long.
So
it's
worth
a
if
you
aren't
familiar
with
it
at
all.
It's
kind
of
a
good
place
to
start,
and
it
has
links
to
other
pages
within
the
gitlab
qa
project
itself,
like
this
all
supported
environment
variables,
and
that's
particularly
what
we're
very
interested
in
here
is
when
setting
up
our
test,
what
environment
variables
we're
going
to
grab
or
need
to
set
up.
B: What we will be doing is running one of the scenarios on this page. There are lots of different tests you can run depending on what situation you're in, and you'll notice most of these take an image address for a Docker image. In this case, because we have an already-running instance, we're limited to the scenarios that we can provide an instance to. If you look at this list, it's this one right here.
B: This is Test::Instance::Any. As part of the setup, it basically gives an example of what this would look like: depending on your environment, you would export the GitLab username and password that you want the tests to run as, and then you would run the qa binary, pass it this Test::Instance::Any scenario and a CE or EE flag, and give it the image, meaning the QA image to run against, since the QA end-to-end tests themselves aren't versioned in this repo.
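A sketch of what such a run can look like against an already-running instance; the URL and credentials here are placeholders, not the demo's actual values:

```shell
# Credentials the tests should log in with (placeholders)
export GITLAB_USERNAME=root
export GITLAB_PASSWORD='<your-password>'

# Run the nightly EE QA image against the existing instance
gitlab-qa Test::Instance::Any EE:nightly https://gitlab.example.com
```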
B: So I've already set up the username and password in a source file, a bash file that we will source, but you'll notice there are also a forker username and password that are required. I haven't set those up yet; I thought it might be interesting to show that setup. When setting up for a GitLab QA run, as far as I know, we don't yet have any easy script for adding a bunch of these users.
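In the absence of such a script, one hedged way to create the extra users is from the Rails console on the instance; the username and email below are illustrative, not values required by gitlab-qa:

```shell
# Create a QA user non-interactively (illustrative values)
sudo gitlab-rails runner '
  pw = ENV.fetch("GITLAB_FORKER_PASSWORD", "change-me-12345")
  u  = User.new(username: "qa-forker", name: "QA Forker",
                email: "qa-forker@example.com",
                password: pw, password_confirmation: pw)
  u.skip_confirmation!
  u.save!
'
```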
B: Now, in order for the tests to work, when you first log into this user: (a) we're going to have to set a password, but (b) even once we set the password, when you first log in there's an extra screen for changing your password and so on that the tests aren't expecting, so we have to go that far in as well. To add the password as an admin, I have to go in here and add a password.
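A possible shortcut for the password step, instead of clicking through the admin UI, is the password-reset Rake task that ships with recent Omnibus versions (it prompts interactively). Whether this also skips the first-login password-change screen depends on how the password was originally set, so the UI check above is still worth doing:

```shell
# Reset a user's password from the instance itself (interactive prompt)
sudo gitlab-rake "gitlab:password:reset[root]"
```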
B: We saw that sign-up was enabled in my case, so the tests should be able to auto-create them.
B: Currently we have "requires admin approval" on, so it's possible we may have to turn that off temporarily during the test run, but we'll find that out. After the various QA users there's LDAP; we're not going to test LDAP. There's the admin username; I just set that to the same as my regular root user. The sandbox name we can leave the same.
B
And
that
isn't
required-
and
this
is
not
ce.
This
is
not
an
ee
instance
at
the
moment.
B
I
do
want
to
run
the
test
on
the
ee,
and,
since
I
just
didn't
have
the
the
package
for
eu
was
still
running
at
the
time
that
I
set
this
up
so
shortly
after
this
I'll
be
rerunning
the
test.
Once
I
get
the
test
set
up,
it'll
be
pretty
quick
to
just
reinstall
the
ee
package
and
rerun
it
once.
I
know
that
the
tests
are
actually
running.
B: We do need one more option when running, at least on my system, and I need to put a note in the docs for it. In the gitlab-qa project there is an option for Chrome to disable /dev/shm usage. On my machine, in my experience, if I don't disable it, then maybe 15 minutes into the test...
B
All
of
my
tokens
get
all
all
my
tests
are
failing,
so
I
need
to
enable
it
unfortunately,
and-
and
I
need
to
go
in
and
open
up
a
merge
request
for
this.
The
only
way
currently
to
get
the
test
to
disable
it
is
by
making
them
think
that
you're
running
in
ci.
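So the current workaround for a local run, as described above, is simply to pretend to be CI before invoking gitlab-qa:

```shell
# The QA tooling only passes Chrome's --disable-dev-shm-usage flag
# when it believes it is running in CI, so fake that locally
export CI=true
```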
B: So that's essentially it, and then what happens is, as things break, you need to determine whether the error is with your local QA setup or whether it's an actual error. If it's not with your local QA setup, you then have to determine whether it's something that's already failing in GitLab QA master or something specific to this instance.
B
So
I
have
the
general
set
up
here,
correct
at
least
we'll
see
as
test
failing
start
failing,
if
they're
like
testing
for
things
that
they
shouldn't
be
testing
for
on
this
instance
or
or
not
like,
for
example,
up
here
this
I
haven't
seen
this
before,
but
this
looks.
B: Yeah, so we're at the half-hour mark. Does anyone have any questions or comments or anything like that?
B: I guess one thing I can show, and it's documented on that page: in /tmp there's now a gitlab-qa folder, which is going to have, if there is an error... so apparently there was at least one error; I must have missed it going by. This is where you're going to see your screenshots of errors, depending on the test. For this test it'll basically just be the screenshots of any errors. I didn't set up the GitHub import, so I actually expected it to skip that.
B: If not, people are welcome to drop off. I'm going to keep the recording going for this test run, just in case there's any debugging or something else I want to show when it fails later, just to capture it in the recording if people are interested.
B: Were you running in the GDK, or were you running against an external instance?
B: Yeah, so I have the advantage that this is also a subset of the tests. It makes some assumptions that it can't do certain things, because it's a live running instance, so it runs more than our smoke tests, but less than it could.
B: This one skips any tests that require an external component, and also any that require reconfiguring and restarting GitLab.
B: Yeah, that's it. Like I said, people are welcome to drop off. I'll be investigating the failures and waiting to see if it actually runs through fully, or if I need to make a few changes and run it a couple more times.
B: Oh yeah, that's it. Of course, anyone's welcome to stay on the call, but it's going to get real boring for the next half hour or 45 minutes if everything went well. Thanks, everyone.
B: So it has a check: if you have a Docker network set up called "test", it will use it, but it checks on every test, and you get this error in the logs. In the past I've sometimes explicitly created the Docker network before running a test, just to get the message out of the log, but it doesn't actually impact anything. Well, it depends, unless you want to run other stuff on your default network.
B: If you do want to run other things at the same time as the tests execute, then it's a good idea to separately create the "test" network. I didn't this time, but I'm not running anything else at the moment, so it's just annoying that it's displaying. But that's the thing you can do: create a network called "test", and as long as it's called "test", it'll use that and not complain so much.
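Creating it ahead of time is a one-liner; this assumes the orchestrator only checks for the network by the literal name "test":

```shell
# Pre-create the Docker network gitlab-qa looks for, silencing the warning
docker network create test
```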
A
Even
in
the
pipelines
you're
not
guaranteed,
networking
will
always
work
when
doing
pipeline
qa
geo
testing.
This
can
fail
at
random.
Anything
we'll
have
intermittent
periods
of
like
one
to
two
months
where
everything
will
just
slow
down
and
break
or
fail
and
it'll
just
all
of
a
sudden
go
away,
and
a
lot
of
that's
due
to
how
the
network
construction
is
built
between
the
containers.
A
There's
not
it's
not
well
understood
how
that's
all
done,
and
it's
out
of.
I
looked
into
it
at
one
point
I
guess
a
year
ago,
but
no
one
that
I
talked
to
had
access
to
look
at
the
back
side
of
where
it
was
all
done,
because
the
networking
just
fails
so
the
geo
runners
can't
see
they
literally
can't
see
each
other.
B: So there we go: the tests have now completed in 52 minutes, 35 seconds.
B
And
we
have
four
failures
to
look
into,
so
that's
few
enough
that
I
can
probably
manually
look
into
them
and
you
know:
try
to
reproduce
them
manually
and
or
run
just
that
single
test
using
gitlab
qa.
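Re-running a single spec uses the same scenario with the spec path passed through to RSpec; the URL is a placeholder and the path below is illustrative of a transfer test, not its exact location:

```shell
# Re-run one spec against the same instance; spec path is illustrative
gitlab-qa Test::Instance::Any EE:nightly https://gitlab.example.com -- \
  qa/specs/features/browser_ui/1_manage/group/transfer_group_spec.rb
```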
B
On
manage
subgroup
transfers
to
a
subgroup
to
another
group
to
look
into,
we
got
managed
project
imports,
import,
gitlab,
github,
repo
via
api.
This
one's
actually
expected
because
I
did
not
provide
github
api
token.
I
must
have
missed
the
flag
to
turn
off
this
test,
because
I
did
explicitly
intend
to
turn
off
this
test
so
that
one's
fine.
B: Same with this one, "import large project from GitHub via API". So we actually only have...
B: Oh okay, we need to look higher up because of the retry logic.
A: No. It would help if I muted myself. Can you go back to the actual test cleanup for a second?
B: Anyways, I don't think it's an issue with this instance. I'm going to log an issue for the tests, though. Yep. But other than that, like I said, it's just in this section that it's failing, and this expect message actually is showing up.
B
I
think
I
think
we're
good
here.
I'm
gonna
this
would
end
up
being
a
ce
test.
I'm
gonna
rerun
it
on
ee,
but
I'll
post.
These
results
and
log
this
issue,
but
I
think
that's
it
for
the
recording
that
went
better
than
expected.