From YouTube: 2021-04-05 Multi-Large Working Group Weekly
A
Okay — good morning, good afternoon, good evening, everyone. Today is April 5th, 2021. This is the Multi-Large Working Group weekly sync meeting, so let's get started with the agenda. The first item — Jason, do you want to verbalize your update?
B
We were able to analyze what was going on and take a look at the patterns that were involved. What we found was that there were no resources requested and, as a result, we were basically getting shunted out of being able to respond. The console was taking a little too long to respond because it couldn't get the CPU time to actually answer those requests. We addressed that by ensuring that we request at least the average of what it was using.
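The fix described here — requesting at least the pod's average observed usage — corresponds to a Kubernetes resource request. A minimal sketch of such a manifest fragment (the values are hypothetical, not taken from the meeting):

```yaml
# Hypothetical container spec fragment: request roughly the observed
# average usage so the scheduler guarantees that much CPU and memory.
# Without a request, the pod can be starved under node pressure.
resources:
  requests:
    cpu: "500m"      # ~ average observed CPU usage
    memory: "512Mi"  # ~ average observed memory usage
  limits:
    cpu: "1"
    memory: "1Gi"
```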
A
Thank you for the update. It looks like there's pretty good progress there, but it's still not fully resolved yet. Is that correct?
B
Specifically, we are concerned about the number of times where there's "no route to host", and the rare time — and I say rare because we barely saw it before; we saw it once all weekend — where we try to connect and then get "connection refused". To that end, we need to figure out whether those are scaling events within the cluster, pod scaling events, or pods restarting.
B
There are many things that could go into that, and we have to figure out all of the IP addresses that are involved, which services they are bound to, and what the scaling events correlate with. Some of these are expected over time, given things happen, but we want to remove as much variance as possible.
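One way to start the correlation work described here is to map the IPs from the error logs back to pods and services and pull recent scaling/restart events. A rough sketch with kubectl (the IP and grep patterns are placeholders; the team's actual tooling isn't stated in the meeting):

```shell
# Map an IP seen in the error logs back to a pod (pod IPs appear with -o wide).
kubectl get pods -A -o wide | grep 10.0.12.34

# List services and the endpoints (pod IPs) they are currently bound to.
kubectl get endpoints -A

# Recent cluster events: pod restarts, scale-ups/downs, node activity.
kubectl get events -A --sort-by=.lastTimestamp | grep -Ei 'scal|kill|start'
```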
A
Thank you. So let's continue the research, and hopefully we nail down the last pieces of this problem. I also just want to highlight that this is the only outstanding issue for development at this point from this working group. And just to make sure that we are on the same page: if there are issues for development, please label them with this working group's label so they appear on our board.
A
If not, moving on to my next one. Actually, "what's happening right now" is blank, but I want to remind everyone that we should always have something for "what's happening next", I think. Naturally, of course, we continue to work on the issue just mentioned in the last update, but for other things, a majority of the remaining tasks are moving the web nodes and then moving on to deeply and radius.
A
So we need to articulate issues and make sure that we work on those topics. And I think — Amy, is Amy here today?
A
No? Well, I'll sync up with Amy offline, but we should have something for "what's shipping next" before the next sync. So I will work with Amy offline — and maybe, Andrew, we can work together with Amy — to see what makes sense to ship before the next sync. Sounds good.
C
Hey, hey Chan. A good one is, I think, that we've done all the work to basically turn off NFS on GitLab.com. This is, I think, item number one in the long-running one-on-one with Sid. We were originally going to do this at the end of March, but it slipped with all the operational issues and challenges.
A
Thank you. Any other comments or items?
A
Okay, no blockers then. Let's move on to the discussion. The first one is still from me. I just see one issue on our infrastructure board — so, about workflow: there is just one issue for moving the web nodes, and it's labeled as workflow charge. So I'm just wondering if it makes sense that we start articulating the other issues for moving the web nodes. This is probably a question for Amy, but she's not here; I will also follow up offline with her.
A
My next one is also still from me. Because the majority of the remaining tasks are infrastructure, I was asked — and I will follow up with Amy again — to see if it makes sense to promote her as the DRI for this working group.
E
Josh — yeah, I'm happy to respond. I wrote my response there, but I'm not sure if there are any other opinions. My take is, like — go.
F
Ahead, yeah — awesome. No, sorry, feel free to finish your comment, but yeah, I can kind of give a quick comment on it. So GET's purpose initially was to build the reference architectures, and we're hoping to add more hooks to allow for more customizations. I guess, in that view, GitLab.com would be like the final boss, really, because .com is a massive, quite heavily customized install that serves GitLab at a scale that no one else really has to.
F
The answer is: I don't know, I guess, today, but we can start looking into it more. I think there probably needs to be quite a bit of work done for GET to support .com and all of its intricacies.
D
Let's have a look at that. We don't want to mess up GET with unneeded complexity. On the other hand, G, who is also using GET, is going to get to scale, and I think there's a list of customizations. Some of them will need to get added to GET — no pun intended — and for some of them, probably, we should stop doing something strange on GitLab.com: we should run the same thing as our customers run, and we should start dogfooding. But I'd love to see a list.
D
So after you're done with that, don't wait for the working group — feel free to post an issue, and also post in the CEO channel, because I'd be interested to look at whether the complexity increase in GET is worth it to do this. And I think we should be dogfooding, so my default would be yes, but I would love to see the data.
F
Yeah, that seems fair enough — I couldn't agree more. The idea is to keep GET, you know, maintainable and lean, but also be able to make it scale up to these larger installs. So we'll look into that for you.
B
What about multi-regional and zonal clustering, to make sure that we're spreading the load across applications, to give some of the resiliency benefits that we have within Kubernetes without relying on Geo DR itself?
F
Yeah, it's a fair shout. Funnily enough, I'm looking into AZ stuff right now for other reasons, so adding that to GET's Terraform shouldn't be too hard. I say that, but I actually haven't done it yet, so take that with a big pinch of salt — but yeah, that should be okay.
B
I may be in error in terms of what the current infrastructure is actually doing in terms of regions.
B
That's one of those things where I can't strictly answer, and I do apologize. The concept that I've seen come across on occasion — multi-regional clusters, or the same application deployed in multiple regions and reaching back to the same distributed state — is that you can have resiliency in the event that, say, us-east goes out: you can still technically answer all requests on the primary site from us-west. Now, that being said, that's part of what our goal with Geo actually is, so it may not be a .com thing.
F
So GET has the ability now — what's left to do is mainly the docs. GET has the building blocks there to do it on GCP; with some manual tweaks it can probably run on other clouds, but on GCP today the support is built in. It deploys our recommended cloud-native hybrid architectures, so Rails and Sidekiq are run in the charts — namely the web servers and Sidekiq — as well as a few other supporting services like NGINX, and everything else runs on Omnibus.
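For reference, this kind of hybrid install is driven through the GitLab Helm chart; a rough sketch of the shape (the values file contents are left out, since the exact keys the team uses aren't stated here):

```shell
# Add the GitLab chart repo and deploy the stateless tier into Kubernetes.
helm repo add gitlab https://charts.gitlab.io/
helm repo update

# hybrid-values.yaml would disable the in-cluster stateful services
# (PostgreSQL, Redis, Gitaly) and point at external Omnibus-managed ones.
helm upgrade --install gitlab gitlab/gitlab -f hybrid-values.yaml
```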
F
That's where we are today. Maybe in the future we can make more product improvements so we can run more in Kubernetes, but generally the rule of thumb is that anything stateful we keep out of Kubernetes at large scale, for various reasons. Understood.
E
To fill in the background here: we had a discussion earlier today on using the hybrid architecture as the recommended sort of direction for our customers, as opposed to everything in Kubernetes.
A
Okay, that's a good update. Any more questions on this topic?
A
Thank you, and the next one is also yours.
D
This is super confusing in my book. So you have an offer here — "Try GitLab Ultimate" — that's fine, we should have that. "Get your free trial" — that makes sense. Then this text, which I would think relates to the offer to try GitLab Ultimate, should tell me what GitLab Ultimate is and what the free trial is. It doesn't — it hints that you should read the rest of the page. So it should be below here, not below here.
E
Yeah, I think it would be the Distribution product manager. We can have a discussion with Marketing on whether Distribution or Marketing owns that page, but I think the, you know, Distribution PM — and Dylan, welcome to the team — should be doing a product review. Dylan, what do you think? Yeah, I wasn't sure if this was a page owned by Marketing either, but I should be reviewing it anyways. Yep.
D
Yeah — in any case, I'd appreciate it if you do a review every month or so and make sure that it's still up to date. Just do a sanity check, like: hey, is Cloudron still updating their stuff, and do we still recommend the GitLab virtual appliance — things like that.
F
We have a GET epic up, and it shows you the progress we have toward that. It should only be a month or so, probably, until we get there. We were going to do it earlier, but with the G project coming in and new priorities coming in — the hybrid and zero-downtime work — we're working on those first, and then we want to get the docs into a good place before we open the door, so to speak.
F
Although it's in beta, it's more like a Google beta: it's working, we're running it every day, and people are more than encouraged to run through it. But yeah, we just want to get those last bits of docs in, as well as the extra functionality — so yeah, watch this space.
D
Thanks for that. And we don't do Google betas — we take stuff out of beta when it's good enough. I think the consensus is that it's good enough and we're recommending it to people, so I would just take it out of beta, even though there are still improvements to be made. But that's my recommendation; in the end, I'm not the DRI on this.
A
Cool, okay. Those are the discussion items, and the two follow-up items are both for me: follow up on a few items with Amy, and also follow up on the NFS status before the next sync. So that's it — any questions?