A
Hello, everyone: this is the Distribution demo for Thursday, March 31st, 2022. We have a special guest this week: Grant Young with the Quality team will be showing us an overview of the GET (GitLab Environment Toolkit) tooling. Take it away, Grant.
B
Okay, thanks Steven. So yeah, this is going to be an interesting one, because I'm very much preaching to the choir here with the Distribution team, who are very much in this space and very much aware of all the things that GET will be dealing with. But I'll...
B
I have put some slides in the document, and I'll just call out a few highlights in the slides that are relevant here, and then I'll go over to the running builds, which, not that exciting obviously, will just be Terraform and Ansible output. But at that point we can do a bit of Q&A. So let me get those slides up.
B
So I won't bore you by reading this out, of course, but that slide just calls out what GET is doing; it does quite a few bits. The key callout for this group, though, is more the next slide: that GET is designed to be as boring as possible.
B
It's designed to be just a straightforward, simple kind of layer over Omnibus and/or the Helm charts: it just puts them in the right configuration, in the right order, on a cloud provider, in a way that makes GitLab run at scale. It's not a replacement for Omnibus, and it's not a replacement for Helm. We try to be very strict about that: if someone suggests "we should do this"...
B
It's
like
actually,
that's
more
or
less
more
appropriate
to
do
that
on
the
bus
or
helm
charts.
You
know
we
try
and
just
we
take
kind
of
direction
from
the
two
to
the
distributions
that
you
do,
which
are
just
so
you
know
getting
accessible
on
the
bus
or
help
those
things
just
are
the
the
cores
of
git
lab
across
the
board,
so
guys
just
coming
in
just
to
try
and
get
that
middle
there.
B
Now
I
want
to
call
it
this
diagram
here,
which
kind
of
calls
out
where
get
kind
of
fits
in.
I
guess
in
the
in
the
kind
of
grand
scheme
of
things
so
yeah,
it's
exhausting
helm,
but
it's
on
the
right
order
on
the
cloud
fighters.
Obviously,
though,
to
get
them
on
the
cliff
fighters,
get
used
to
the
various
stuff
with
the
target
cloud
provider
to
get
the
right
infrastructure
up
so
vms
or
kubernetes
clusters
or
networking
disks
object.
Storage.
B
All
that
stuff
networking,
that's
actually
quite
difficult
that
one
and
that's
where
we
try
and
keep
it
as
simple
as
possible.
But
networking
yeah,
it's
not
it's
not
it's
not
simple
itself,
so
the
it
can
get
quite
involved
there.
So
it
touches
on
a
lot
of
points
and
to
right
get.
You
need
to
have
kind
of
knowledge
on
gitlab
on
the
bus,
helm,
terraform,
ansible,
aws,
gcp,
azure
kubernetes.
B
It
touches
on
a
lot
of
things,
it's
quite
a
vibrant
kind
of
area,
so
it's
quite
fascinating,
fascinating
to
work
with,
but
but
yeah
it's
a
it's
an
interesting
one
and
the
last
one
just
cause
just
just
a
slide
about
why
we
made
it
and
why
quality
are
kind
of
dealing
with
it.
Quality
engineering
in
quality
you'll
hear
the
phrase
a
lot
and
what
doesn't
happen
in
my
environment.
B
I
think
I
heard
that
my
first
day,
I
think
something
close
to
that
classic
issue
back
in
the
day,
one
had
their
own
bills
on
their
machines
and
then
one
person
flies
bug
and
someone
else
like.
Oh
that
doesn't
happen.
Mine
turns
out
it's
a
misconfiguration
of
one
person's
machine.
No,
it's
not
it's
actually
a
real
issue,
etc,
etc
and
so
quality.
B
You
know
it's
changed
that
space
has
changed
so
great
and
that's
why
it's
now
kind
of
called
quality
engineering
and
one
bed
that
I
think
is
continuing
to
grow
and
will
continue
to
grow.
Is
that
using
these
kind
of
configuration
and
provisioning
tools
to
build
out
environments
in
exactly
the
same
way
across
different
peoples
and
machines
are
on
a
central
place,
so
everyone
can
come
together
and
go
look.
This
is
a
real
bug.
B
Here
is
and
also
enables,
like
other
things,
such
as
more
realistic
testing
performance
testing
stuff
like
that,
so
in
the
quality
space,
we
always
need
a
tool
like
this,
so
we
built
it
ourselves
essentially
a
few
years
ago
and
then
there
was
an
effort
in
the
company
to
try
and
consolidate
and-
and
we
decided
to
go
this
tool,
which
was
effectively
called
the
performance
environment
builder,
which
isn't
as
catchy
and
we've
been
able
to
get
at
that
point
so
yeah,
I
think
that's
all
the
stuff
I
wanted
to
ramble
off
here.
B
We
should
have
a
build
going
already,
of
course
enough,
and
this
builds
in
our
ci
and
is
building
a
10k
reference
architecture,
which
is
closely
of
hybrids.
So
it's
deploying
several
services
on
helm,
such
as
a
web
service,
psychic
nginx
prometheus,
is
then
building
out.
Well,
it's
going
to
build
out
various
back-ends
gitly,
but
it's
only
getaway.
B
Actually
this
one
and
then
what
also
does
is
sets
up
some
of
the
aws
services,
so
rds
is
going
to
be
used
for
the
davis,
elastic
can
actually
use
for
reedus
and
then
get
will
come
in
and
tie
it
all
together
and
make
the
environment
let's
sing.
Essentially
that's
the
goal.
That's
probably
the
start
of
the
starting
of
the
rds
instance
takes
takes
takes
action
age.
Unfortunately,
this
will
shoot.
This
should
start
to
fill
in
very
soon,
but
there
we
go.
It
should
start
now.
So
that's
enough
of
my
rambling.
B
I
want
to
keep
it
short
because
distribution
already
probably
knows
a
little
bit.
Get
I'm
happy
to
answer
any
questions
or
discuss
any
other
points
specifically
so
feel
free
to
fire
away.
A
Yeah, I think one of the things... I dropped a quick link: we have a project called Reference Architecture Tester, or RAT, that Balu put together, and I was curious if maybe we could just, real quickly, give some context on, you know, the integration between those tools.
B
Yeah, right: it's a wrapper around GET. It's been a little while since we worked on that together; it was mostly with Nali as well, one of my colleagues. But yeah, they worked on that together, and they did a wrapper around GET just to make it work for the Omnibus pipelines.
A
Got you, so check that out. But yeah, my understanding is that GET, as it is today, essentially has to be attended; it's meant to be run in, like, an attended mode, and the stuff that RAT does was specifically so you could do it completely unattended, from start to finish. Is that still the case, or has GET moved forward to where it's... I guess what I'm getting at is: do we still need to continue to work on this tool, or will it at some point become...?
B
So GET has always been designed to be unattended. The pieces that aren't so attended... it's just the config, really: the config you need to generate, you need to have in the right place. Generating the config, essentially, yeah. The configuration, we wrote it once in Quality for our pipelines, and we have it in a different project, and our CI just pulls it in, essentially. So there are different ways to skin that, to tackle that piece, but yeah.
B
I need to refresh myself on that; I'll get back to you and see what it's doing specifically. What I was saying was just... yeah, I vaguely recall what RAT is doing on top of GET, but my insight there is still quite light. But yes, GET does work unattended: the screen you see right now is literally it working.
B
There are a few other pieces: you know, GET is designed to be run by people, by users, so there are a few pieces of a full automated pipeline... if the pipeline is building an environment from scratch each time, there will be a few manual steps to run. But again, that should be scripted in CI. So, like, you need to generate the bucket for the Terraform state, or, if it's just a temporary job, you need to do that.
C
You need to have your config, yeah. This is actually how some of the work with tenant control is going: the other bit they're doing is they've wrapped GET, and some additional components, into a set of tooling that allows them to automate that in an infrastructure-as-code fashion, to boot. So we're kind of stacking one more Lego on top, and it makes it very controllable for what we're doing in this project.
B
And that's right and proper, because GET is a base tool: it's designed to give you an environment. But, like you say, there are those pieces you're going to work with, like the Terraform state: you need to have access to that, and that therefore means you need to have a GCP or an AWS account to actually be able to create that bucket, and that's obviously a little bit of a sensitive layer. And for Horse that makes sense, because they're going to have their own users and their own stuff.
B
So
their
wrapper
is
just
mainly
to
automate
that
little
piece
and
as
well
as
their
stuff
that
are
very
specific
to
horse
and
wouldn't
be
appropriate
and
get
it's
kind
of
a
waterfall
effect
really
so
get
on
the
bus
users
get
uses
on
the
bus
and
the
horse
uses
gets,
which
then
therefore
uses
on
the
bus
and
helm.
This
is
kind
of
that.
Waterfall
effect
has
to
get
to
a
more
specific
project.
B
I'm
just
reading
right
now,
just
to
remind
myself
what
it's
doing.
It's
a
very
light
wrapper
yeah.
It's
only
got
one
one
shelf
script
in
it,
which
I
think
is
just
doing
what
we
discussed.
So
it's
getting
the
config
and
getting
the
device
pieces,
an
ssh
key,
for
example,
to
put
into
the
vms.
So
you
can
say
that
kind
of
stuff.
It's
a
very
light
light
touch.
So
I
wouldn't
say:
there's
any
need
to
to
worry
about
that.
B
I could talk about the start piece. What do you mean by "start"? Like, literally how you would get started with GET? Yeah, sure. So let's see what we've got to hand; at least I'll make life easier there. I'm actually writing up a quick start guide right now, which could be useful to show.
B
So, as I said at the top, GET is just basically Terraform and Ansible scripts. We're not doing anything special; we're not trying to change those tools, because they are well known and well used in the industry, and that means there's much more familiarity with them among potential users. So we're not trying to change anything there. So the start piece, really, is...
B
There's also another video, but the contents here kind of give you the baseline. So basically you set up Terraform first, usually, because that's unique to the provisioning of the machines. You have a few config files that work with the Terraform modules: you'd set up authentication, you'd create a bucket for your state, you'd set up a config file to point Terraform at that state, you'd then initialize Terraform, and then you'd have a config file which just sets up the module for GET.
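As a rough sketch of those state steps, a Terraform backend file along these lines points everyone at the shared state (the bucket name, key, and region are illustrative assumptions, not GET's exact layout; the bucket itself would be created once beforehand, e.g. with `aws s3 mb`):

```hcl
# Hypothetical backend config: store this environment's Terraform state
# in a shared bucket so every run (CI or desktop) sees the same state.
terraform {
  backend "s3" {
    bucket = "my-get-terraform-state"   # pre-created shared state bucket
    key    = "my-10k-env"               # one key per environment
    region = "us-east-1"
  }
}
```

You would then run `terraform init` so Terraform picks up the backend.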
B
So
that
looks
like
this,
and
it's
just
it's
meant
to
be
a
very
simple
file
which
just
gives
it
all
the
information
needs.
It
just
needs
to
know.
What
should
I
call
the
machines
or
what
prefix
should
it
give
to
the
machines
in
any
other
infrastructure?
What
ssh
should
I
use
and
what
machines
am
I
actually
building
out?
So
in
this
case
it's
a
simple
one:
boss,
environment,
it's
going
off
and
building
various
vms
for
the
different
gitlab
components,
and
that's
essentially
the
main
file.
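A sketch of the kind of "main file" being described might look like this (the module source path and variable names are illustrative assumptions, not GET's exact schema):

```hcl
# Hypothetical GET environment module config: a prefix for naming the
# infrastructure, an SSH key, and which GitLab component VMs to build.
module "gitlab_ref_arch" {
  source = "../../modules/gitlab_ref_arch_aws"   # illustrative path

  prefix         = "my-10k-env"                  # prefix for VMs and other infrastructure
  ssh_public_key = file("~/.ssh/get_key.pub")    # key pushed onto the machines

  # Omnibus environment: VM counts per GitLab component
  gitlab_rails_node_count = 3
  gitaly_node_count       = 3
  haproxy_node_count      = 1
}
```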
B
There's one extra property here about how to set the IP for the HAProxy instance in this setup. There are other examples we could show, like Cloud Native, which would just be slightly different: there you'd actually go in and say "I want to set up a Kubernetes cluster with these node pools" and everything else. And then you just run Terraform, and it just goes off and configures everything for you. Then the second half is doing essentially the same with Ansible.
B
You set up your config files for Ansible, which is a dynamic inventory, which just looks like this; this one is for AWS. That's just to tell Ansible: here's where you go in AWS, here are the VMs you should be looking for (or the other infrastructure, not just the VMs), and here's how you'd find them.
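A dynamic inventory of that shape would typically use Ansible's `aws_ec2` inventory plugin, filtering instances down to this environment and grouping them by component (the tag names here are assumptions about how the VMs were labelled):

```yaml
# Hypothetical aws_ec2 dynamic inventory: tell Ansible where to look in AWS
# and how to find this environment's machines.
plugin: aws_ec2
regions:
  - us-east-1
filters:
  tag:gitlab_node_prefix: my-10k-env   # only this environment's infrastructure
keyed_groups:
  - key: tags.gitlab_node_type         # group hosts by component, e.g. gitaly
    separator: ""
```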
B
That gives Ansible all it needs to go: "okay, here are my VMs, I'm going to go off." And then you just have a standard kind of configuration file for some bits that you can put in, just to give GET and Ansible the right data to set up GitLab: so mostly passwords you'll provide there, and some feature bits. And then you just run Ansible; like now, I'm just running the main playbook there. Does that answer your question, or were you aiming for something a little bit different?
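The "standard configuration file" of passwords and settings, plus the playbook run, might look roughly like this (the variable names and playbook path are illustrative, not GET's exact ones):

```yaml
# Hypothetical Ansible vars file: the data GET needs to configure GitLab.
all:
  vars:
    external_url: "http://my-10k-env.example.com"
    gitlab_root_password: "<from a secret store>"   # don't commit real secrets
    postgres_password: "<from a secret store>"
```

With the config in place, the run itself would be something like `ansible-playbook -i environments/my-10k-env/inventory all.yml`.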
B
Those are under final review; they should be going in very soon, as well as some of the config examples. Where are those... I can show you quickly. So this will be going in soon as well: just a bunch of examples, starting with 10k as a starting point, and we'll be expanding as we go along. So for any of the services that are our more complicated things, we'll let you have all the config files you need, and you just place them in the appropriate folders in GET.
B
So for Terraform, yeah, if you go to this environment you'd see a slightly different file compared to the last one. You'd still see some familiar stuff: you've got Consul and Gitaly being deployed in here, and you've got these settings which call out the node pool sizing for Kubernetes. And then at the end, actually, is where you see it set up RDS and ElastiCache, for Postgres and Redis respectively.
B
These examples are going in soon as well. Again, the work on the current release of GET is just trying to make that all easier, so people can get a better grasp of: where do I actually start? How do I actually get going? And once you're over that hump, it's actually very easy after that: you just run terraform apply and the ansible-playbook run, and you just let it run, essentially, unless you need to make any changes.
B
Well, I'll answer the first piece there. So Terraform and Ansible are just standard tools: we don't really prescribe where you run them from; they can be run from anywhere. You can run them in pipelines also; you just need to have everything ready to go, so you need a state bucket and Terraform state ready, and your config ready. Outside of that, we run it daily in our pipelines, but you can also run it locally. You can literally run, I guess, the same environment from your desktop, as long as you've got exactly the same files. Ansible, there's no issue there. Terraform needs to share the state, so you need to make sure you've got access to the exact same state as everyone else. But that's why we were talking about setting up a bucket in AWS: all users, if you're working on a team, for example, would point to the exact same state, so if they try to run Terraform it won't clash or do anything else along those lines. I wasn't...
D
More in terms of... because in the CI environment there are multiple things you can do to make your life easier. Like, is there any reference implementation for CI? You were saying that you're producing the quick start guide, I'm assuming for running it from the desktop; but for GitLab CI, for example: what would your typical setup look like? Which variables would you set for the project, or how would you partition the project in CI?
D
So you can have multiple environments and things like that. And how would you use, for example, Terraform: we can take the Terraform state files and pipe them into GitLab as well; we can save the state not just in a bucket, but in GitLab itself. So any of those points, that's where I was kind of heading with that question. Do you have any documentation, any pointers, on how one would go about setting that up?
B
It's an interesting question. We don't have any documentation, per se, specifically on that scenario, because it's quite a specific scenario. Most customers would only have one environment, right? So they just put the config files in one place, in their checkout of GET, and that does it.
B
For Quality, we have many environments, and that's what's on the list on the screen right now. What we do is just have a simple shell script, just to say: here's the environment, copy its config files into the GET folder in CI, and then run GET. That's it. We try to make it as simple as possible, and we don't prescribe, because everyone's situation is a little bit different, and it depends on how they want to handle secrets and passwords.
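A minimal sketch of that kind of wrapper script (the directory layout and names are assumptions for illustration, not GET's actual structure):

```shell
# Hypothetical CI helper: given an environment name, copy that environment's
# config files into the folders GET reads them from, before running
# Terraform and Ansible.
place_config() {
  env_name="$1"
  src="configs/${env_name}"
  tf_dst="gitlab-environment-toolkit/terraform/environments/${env_name}"
  ansible_dst="gitlab-environment-toolkit/ansible/environments/${env_name}"
  mkdir -p "${tf_dst}" "${ansible_dst}"
  cp "${src}"/*.tf  "${tf_dst}"/       # Terraform module config
  cp "${src}"/*.yml "${ansible_dst}"/  # Ansible inventory and vars
}
```

A pipeline job would then call something like `place_config "$ENVIRONMENT"`, with `ENVIRONMENT` set as a CI variable, before invoking terraform apply and the playbook.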
D
Useful. Just to explain a little bit more of our case: for example, we run three different instances of OpenShift, and, you know, having multiple environments running at the same time is not uncommon for us. So having them controlled... and in our case right now we've moved in the direction of utilizing as much of GitLab's tooling as possible.
D
That's what I was looking for: like, whether you guys ever moved that far in implementing your stuff, so we can at least glance into what kind of challenges you faced and how you implemented this. I understand that you don't want to be prescriptive, but at the same time, having some reference implementation to kind of go off of would be useful.
B
Sure, I mean, well, this project right now is how we do it. As I say, we just went very simple and used shell scripts to make that work, and just various pipelines, with environment variables controlling which environment they run against. We run several, certainly over five environments every week, with our tooling. But yeah, we've not actually explored the Environments piece of GitLab: when we last checked, that was more targeted towards review apps and stuff like that.
B
But it's something we'd look at again if there's anything we can do to kind of hook that up. We just use CI, essentially, to run our environments, like I say, with a simple shell script just to populate the right files in the right places. But yeah, I'm happy to discuss that one more; it sounds interesting.
A
All right, that sound can only mean one thing. Grant, I wanted to give you a shout out: thank you for taking the time to present this. I know you're in a completely different time zone than many of the team members here in the States, so it's very much...
B
Appreciated. Like I say, we've got the project, we've got the channel; any questions, any more deep-dive stuff, feel free to reach out. You can do it async or any other way; I'm happy to discuss it more.
A
Sounds good. I've got a feeling Dimitrov might be checking in with you here. Absolutely, yeah; I mean, this is so interesting, I can see it in his eyes.
A
I'm going to stop the recording; I want to ask a question.