From YouTube: 2020-10-19 Multi-Large Working Group
B
So I'm happy to announce that we are at 10% of the cloud native job logs rollout in the production environment. We plan to go for 25% tomorrow, and I hope that it's going to progress nicely over the following weeks. So yeah.
C
Yes, so I have a pretty extensive update on Pages. For three weeks now we have been running docs.gitlab.com without any problems. We tried a percentage rollout up to 10% of the data that we have, because we are right now serving from the CI artifacts, and we noticed problems which got fixed.
C
We are actually now back to the percentage rollout. We're planning that tomorrow we're going to increase the percentage rollout to 25 percent, which in real terms should mean probably around 10 to 15 unique domains being served at any given moment, which gives us a realistic view of the performance.
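On gitlab.com this kind of gradual rollout is driven by feature flags. As a rough illustration of how a percentage rollout can map to a stable set of domains, here is a minimal Go sketch; the function name and bucketing scheme are assumptions for illustration, not the actual implementation:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// inRollout is a hypothetical helper, not GitLab's implementation: it
// hashes an actor (here, a Pages domain) into one of 100 buckets and
// enables the feature when the bucket falls below the rollout percentage.
// Because the hash is deterministic, a domain enabled at 10% stays
// enabled when the rollout grows to 25%.
func inRollout(domain string, percentage uint32) bool {
	h := fnv.New32a()
	h.Write([]byte(domain))
	return h.Sum32()%100 < percentage
}

func main() {
	for _, d := range []string{"docs.gitlab.com", "example.gitlab.io"} {
		fmt.Printf("%s in 25%% rollout: %v\n", d, inRollout(d, 25))
	}
}
```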
C
It seems so far, from our testing, that it scales linearly with the amount of requests. You can read more details about the performance, the added latency and the different processing costs, but it's still a moving target, because we keep optimizing one aspect after another while we work on the other stuff.
C
That covers what was done. Gary, your item? See if you had a question. Actually, Jason? Yeah, thanks. It's so awesome that this is already out, and I think there was
D
a lot of diligence done here up front, and this quick rollout is a testament to this direction being the right way. I think it's awesome that it's only 25 milliseconds. Can you maybe elaborate a little bit on what that is? I understood this to be: hey, we know where the files in the zip are, and we just go to object storage and we say, hey, start at this location in the object storage, and we download that and we send it to the user. Is that it?
C
So, given that each user request translates one-to-one to a request to the object storage, the time to first byte is about 25 milliseconds, basically. And since we are fetching exactly the piece that we care about, we also do some caching for other pieces that we may otherwise recalculate in some cases.
C
Basically, for the majority of cases, each user request translates to exactly one request to the object storage to fetch the relevant part, and inline we decompress that and serve it to the user in the form that the user is expecting. So this is how it works.
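As a minimal sketch of that serving model (not the actual GitLab Pages code; the function, its parameters, and the assumption that the entry's data offset is already known from cached metadata are all illustrative), the whole hot path is one ranged GET plus inline DEFLATE decompression:

```go
package pages

import (
	"compress/flate"
	"fmt"
	"io"
	"net/http"
)

// serveZipEntry issues a single ranged GET against object storage for one
// zip entry's compressed bytes and streams the decompressed result to the
// client. offset must point at the entry's compressed data.
func serveZipEntry(w http.ResponseWriter, archiveURL string, offset, compressedSize int64) error {
	req, err := http.NewRequest(http.MethodGet, archiveURL, nil)
	if err != nil {
		return err
	}
	// Ask object storage for exactly the bytes of this entry.
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+compressedSize-1))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusPartialContent {
		return fmt.Errorf("expected 206 Partial Content, got %d", resp.StatusCode)
	}

	// Decompress inline and stream straight to the user.
	fr := flate.NewReader(resp.Body)
	defer fr.Close()
	_, err = io.Copy(w, fr)
	return err
}
```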
C
Of course, that's if you have a hot cache. If you start with a new archive, the cost right now is three requests to open the archive, but we're going to shrink that to two requests in one of the future iterations.
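Why a cold open costs multiple requests: a zip's entry listing lives in a central directory at the end of the archive, so the reader must first locate it, then fetch it. A simplified Go sketch, with no zip64 handling, a fixed tail size, and made-up helper names; the exact three-to-two request breakdown in the real implementation is our assumption:

```go
package pages

import (
	"encoding/binary"
	"fmt"
	"io"
	"net/http"
)

// fetchRange issues one ranged GET; each call is one billable object
// storage request, which is what the two-vs-three request count refers to.
func fetchRange(url, rng string) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", rng)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

// openRemoteZip performs a cold archive open: request 1 reads the tail to
// find the end-of-central-directory record, request 2 reads the central
// directory (the entry metadata) that it points to.
func openRemoteZip(url string) ([]byte, error) {
	tail, err := fetchRange(url, "bytes=-1024") // request 1: EOCD is at the end
	if err != nil {
		return nil, err
	}
	// EOCD signature 0x06054b50; directory size at +12, offset at +16.
	for i := len(tail) - 22; i >= 0; i-- {
		if binary.LittleEndian.Uint32(tail[i:]) == 0x06054b50 {
			size := binary.LittleEndian.Uint32(tail[i+12:])
			off := binary.LittleEndian.Uint32(tail[i+16:])
			// Request 2: the central directory itself.
			return fetchRange(url, fmt.Sprintf("bytes=%d-%d", off, uint64(off)+uint64(size)-1))
		}
	}
	return nil, fmt.Errorf("end of central directory not found")
}
```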
C
So opening this archive and reading the metadata is something infrequent, because we are caching this structure in memory. And now, if you ask about the cost of caching this in memory, it's right now 260 bytes per single entry of the zip, but if we at some point figure out that this is too large, we can shrink that to 50-60 bytes in the best scenario. The edge case that we kind of see here is docs.gitlab.com.
C
So from our testing, we didn't really find another case like docs.gitlab.com, but there is a safeguard that we're going to be building: setting an upper limit on the files, to one hundred thousand, which seems way beyond even what we currently use for the docs, to have more predictable performance. So this is a little of the backstory about the different performance aspects of that.
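A hypothetical sketch of the cached index and the entry-count safeguard described above; the type names and field layout are assumptions based only on the figures mentioned (roughly 260 bytes per entry today, a 100,000-entry cap planned):

```go
package pages

import "fmt"

// maxArchiveEntries mirrors the planned safeguard: archives with more
// entries than this are rejected for predictable performance. The name
// is made up; the value comes from the figure in the discussion.
const maxArchiveEntries = 100_000

// zipEntry is an illustrative in-memory record per zip entry; a path
// string plus offsets lands in the ~260-bytes-per-entry ballpark, and
// interning paths or packing fields is how it could shrink toward 50-60.
type zipEntry struct {
	path             string
	offset           int64
	compressedSize   int64
	uncompressedSize int64
}

type archiveIndex struct {
	entries map[string]zipEntry // path -> entry, cached after the first open
}

func buildIndex(entries []zipEntry) (*archiveIndex, error) {
	if len(entries) > maxArchiveEntries {
		return nil, fmt.Errorf("archive has %d entries, limit is %d", len(entries), maxArchiveEntries)
	}
	idx := &archiveIndex{entries: make(map[string]zipEntry, len(entries))}
	for _, e := range entries {
		idx.entries[e.path] = e
	}
	return idx, nil
}
```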
D
Well, this is all very cool. I just have to ask: hey, you said we're getting it from Google Cloud. Do we have Cloudflare in front, and if we do, why was there an outage this weekend because there was a DDoS? Is that still something we need to fix manually?
C
So what we are doing is completely independent of a CDN in front of GitLab Pages, so it's definitely possible to have that. Yeah, I think we should.
C
There is this challenge with TLS SNI to serve custom domains, but from what I've talked about with people at the company here, we actually have some ways to approach that using Cloudflare Workers, if we would like to.
B
But due to both the DDoS this weekend and one actually today, we're looking at re-prioritizing some work that helps us get Cloudflare in front of Pages domains and some other architectural nuances.
B
Yeah, and there was a question from Casey.
F
Yeah, I was a little ahead, so I was going to ask: how far of a separation can we confirm that we actually have right now in greenfield installations when Pages is enabled? I'm looking to move forward with the work to enable Pages compatibility with the Helm charts. I know that we're not at 100% at this point and there's some future work that needs to be done, but I'm just trying to get a strong idea of exactly where we are.
C
So, quickly, on enabling cloud native to have this configuration: I think this is the right moment to start having that, because I'm not sure how the future of the rollout is going to go.
C
We are really very close to finishing all the engineering work on GitLab Pages and GitLab Rails for the new installations, so if you had a purely cloud native new installation, it's very likely that we may finish in this milestone, so now could really be the right moment to get the cloud native configuration.
C
It really depends on the rollout of all the feature flags that we have now, how quickly we get feedback, and what the testing is going to look like. But as far as the engineering aspects, we are very close to finishing and moving on to data migration, as far as the baseline implementation goes.
C
Now, access control works completely separately.
C
Feel free, we can discuss that. Yeah, I can show you, but I don't really think that we have any blockers for GitLab Pages being a separate container as soon as we start using object storage. Okay, that's great.
E
Josh? Yeah, actually a quick question on the proxy model, effectively whether there's a cost differential here. I'm just thinking back to the fairly significant cost benefits we were able to achieve with the direct download of artifacts, and since we're involved in both, I would be curious on your thoughts.
C
So the only cost that we have internally for accessing object storage is the amount of requests. There is no egress traffic cost between GitLab Pages and the object storage bucket itself. If you would have that in a separate availability zone or something like that, there would of course be a cost.
C
But this is non-existent today. So the cost, I think, is 0.004 dollars per 10,000 requests, as of the last time I checked. Currently, a single user request translates to something close to 1.2 requests to object storage. Of course, by putting a CDN in front we would reduce this number by a significant amount; however, we will likely have to be prepared for that and send proper Cache-Control headers.
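To make the figures concrete, here is a back-of-the-envelope calculation using only the numbers quoted above ($0.004 per 10,000 requests, ~1.2 object storage requests per user request); the request volume is a made-up example, not a real gitlab.com number:

```go
package main

import "fmt"

func main() {
	// Illustrative cost check using the figures from the discussion.
	const costPer10k = 0.004 // dollars per 10,000 object storage requests
	const fanout = 1.2       // object storage requests per user request

	userRequests := 100_000_000.0 // hypothetical monthly volume
	objectRequests := userRequests * fanout
	cost := objectRequests / 10_000 * costPer10k
	fmt.Printf("%.0f user requests -> %.0f object storage requests -> $%.2f\n",
		userRequests, objectRequests, cost)
	// Output: 100000000 user requests -> 120000000 object storage requests -> $48.00
	// A CDN with proper Cache-Control headers would cut the fanout further.
}
```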
C
So as for the cost, it's hard to compare that apples to apples, because with NFS, with the spinning or solid-state drives, and compute, and servers, and data storage, I'm not sure if it's more pricey than object storage or not. Comparing the requests, it's hard to really compare that, but the only cost implied with the object storage is the data storage and the amount of requests that we are firing.
C
We're likely going to have a very good estimate as soon as we start storing data in a separate bucket; maybe we'll have some cost estimates then, and things like that. But if you are interested in the amount of requests that we fire today, from what we run today, we have metrics for that. So we can probably even roughly try to estimate that number today, looking at the current traffic pattern that we have.
E
Yeah, we can do just some napkin math here. I think, you know, if there's no NAT gateway, which I think was a significant driver of cost in the artifact model from the runner's side, it's probably just going to be, to your point, the egress. Well, I think we do get dinged for egress, because it's multi-region buckets, unless we've decided to go with a single-region bucket, but I think we can take that further.
A
All right, so moving on. Sid, you had asked for documenting deployment technologies. The Distribution team has been working through that and they're down to an MR that actually outlines all of those. The MR is currently under discussion, so I expect that it will be merged soon.
E
Yeah, so we're working through a couple of different existing tools right now. PEB is looking quite promising out of all the automation tools in GitLab. They all have both Terraform and Ansible, and Ansible seems to be the consensus on the tool that is best for managing the configuration, and it's the highest-value component that is being built, actually. The Terraform itself, in discussions, is not that hard to replicate; it's the Ansible doing all the configuration management that is actually the really important piece, the heart, and which is harder to replicate.
E
Okay, PEB is the Performance Environment Builder. It has support for all reference architectures on GCP today, has support for upgrades, does not do zero-downtime upgrades; the orchestrator has some better...
E
I'm trying, sorry, I'll talk a little slower as well. PEB is the Performance Environment Builder. It's the Quality tool that Grant has been building to easily deploy reference architectures. We'll talk about this more in the retrospective here, but effectively it supports, today, all reference architectures on GCP. It has support for upgrades, does not do zero-downtime, but again, it still leverages Terraform and Ansible, with the Terraform being relatively simple and a lot of the complex logic in Ansible, and that's a pattern that is replicated across all the performance tooling.
D
So then, maybe the problem is more... The problem I have is more with, like: we had Customer Success make something, we had Quality make something that was another Terraform thing, and then the solution was to make a fourth standard. Are we getting that resolved? Because I've not heard about anything, and I just keep seeing GitLab Orchestrator, which for me means the fourth standard and not what our customers are using, and I think every week going by that we don't resolve that is a bit wasteful.
E
Yeah, I can clear that up a little bit. We have paused Orchestrator, so there's no further work going into Orchestrator at this point in time, so we are not continuing down parallel paths. We can talk more about that in the retrospective here, but now that 1365 is done, we are looking to basically align on which tool to go forward with.
E
I think it probably makes sense to have PEB, but we're looking at the overall matrix of support and features. Since they're so common in tooling and automation with Terraform and Ansible, we could cobble things together, but we will standardize down to fewer, and Professional Services also has buy-in to standardize on either Orchestrator or PEB.
B
Regarding which technologies to use, just to distinguish: it would help if we set up a separate session to discuss this, because all the tools are using the same tool stack, Chef, Terraform and Ansible. So we may need... we would like to give you an overview of how all the tools are using this.
D
I missed part of that, I was switching networks. I don't want to bother people in this group anymore with this, so maybe plan 25 minutes to talk me through the latest and greatest here. I'm super concerned we still are not settling on a single one.
A
All right, Christopher and I already discussed the Sidekiq and NFS items, so in the interest of time I'll move on to the what's next. So, as Kamil mentioned, most of the engineering work for NFS is done. We still have a ton of migration work to do on gitlab.com, but I think this group can start refocusing on the next steps around charts, and one of the first things we need to do, and thanks John:
A
there is already an issue to essentially map out what the reference architectures look like in cloud native. So we need to get that done before we can actually proceed full-on, because I know there is some work that can be done on Helm, but without these cloud native reference architectures we're actually kind of shooting in the dark. So I'll track that issue and I'll work with folks to make that a reality.
A
Okay, and then Kamil, I think you have some points on the migration of Pages. I don't know that we need to go into detail with that; we have three minutes left.
C
Not really. I think that I will need help with the rollout for gitlab.com, I mean provisioning buckets. So if you could look at point E, and there is also the cloud native point.