From YouTube: 2023-09-25 Analytics Section Meeting
A
Hello, everyone, and welcome to the September 25th analytics section meeting. Very light agenda, but some good topics to discuss, and I want to get everyone's thoughts on this. So James and I were discussing application limits and how we want to define them, and what that involves. Do we have an issue link for that, by chance? I'll...
A
No worries. The general context is that we want to figure out how we can put some application limits in place, and part of that is defining them. But even once we define things like, you know, requests per second and things like that, how do we actually want to define those limits?
A
We need to understand what's actually possible within our stack. I don't think we really have anything in place, and I don't think it's necessarily the responsibility of the collector to be aware of which events it can and can't accept. But ideally, when we're taking in events from different application IDs, we have the ability to shut them off based on the usage data that we're starting to collect.
A
You know, Max has been working on that, and he actually just posted a new release of the configurator to be able to query on that. So, theoretically, if we have a feedback loop of being able to see how many events are being received and how many we're actually processing and things like that, we want to be able to say: hey, prospective beta user, you are approaching the data storage limit or the events collected limit. And how can we actually control that?
A
So we're actually open to ideas here on what's possible. I've done some research into using something like GCP Cloud Armor to be able to say, hey, based on certain headers coming in: if we were to, for example, send an application ID through a header, then we can start to block requests that way. But we need to kind of close the loop there, in terms of, okay, we see how much people are using.
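For concreteness, a minimal sketch of that close-the-loop idea. The header name, policy name, limit, and usage numbers are all assumptions; in practice the counts would come from the collector usage metrics, and the enforcement would be a Cloud Armor rule, generated here as a gcloud command string.

```python
# Hypothetical sketch: flag application IDs over a placeholder daily limit
# and emit the Cloud Armor rule that would block them.

DAILY_EVENT_LIMIT = 1_000_000  # placeholder threshold, not a decided value

usage = {
    "app-123": 2_400_000,  # hypothetical application IDs and daily counts,
    "app-456": 80_000,     # in practice fed by the collector usage metrics
}

def over_limit(counts: dict[str, int], limit: int) -> list[str]:
    """Return application IDs whose daily event count exceeds the limit."""
    return [app_id for app_id, events in counts.items() if events > limit]

def cloud_armor_deny_rule(app_id: str, priority: int) -> str:
    """Build a gcloud command adding a Cloud Armor rule that denies
    requests carrying this application ID in a header."""
    expr = f"request.headers['x-application-id'] == '{app_id}'"
    return (
        f"gcloud compute security-policies rules create {priority} "
        f"--security-policy=collector-policy "
        f'--expression="{expr}" --action=deny-403'
    )

for i, app_id in enumerate(over_limit(usage, DAILY_EVENT_LIMIT)):
    print(cloud_armor_deny_rule(app_id, priority=1000 + i))
```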
A
We may have some users that are using too much, especially in the beta phase, where we're trying to test things out. Relatedly, gitlab.com is firing a lot of events now, and I'm seeing a lot of CrashLoopBackOff also happening in the cluster right now.
A
So it's important that we put these application limits in place so that we have a little bit more control, especially as we are launching with shared cluster environments, so that users that are using more than others don't disrupt everyone else. So yeah, Max, you raised your hand.
C
Yeah, I think this could be really difficult to solve; I'm not saying we shouldn't. In previous roles I worked on applications that had no traffic 99% of the time, and then, in relation to, I don't know, an advert or a call-out from a YouTuber or something like that, they would suddenly get what would appear to be an abusive number of events. And if they lost that data, that would be...
C
You know, incredibly damaging, because it's very, very valuable data for them, for the one percent of the time where they're actually dealing with any sort of meaningful traffic. I don't know how we handle that, and it obviously does need to be handled.
C
I like the idea of saying, you know, you're using this much most of the time, so an unexpected spike would be considered abusive. But we also don't want to lose our customers' data, and I don't know what the answer is, but it's certainly worth considering.
A
Yeah, I think part of it is just understanding what's possible. And, James, correct me if I'm wrong, but I think that's a fair point. Like, one thing we wanted to do is test surging, and there's no way of saying, hey, you have a requests-per-second limit, when, you know, the holidays roll around or some marketing event happens where you have a certain amount of traffic. We don't want to miss that; we want to be able to capture all of it.
A
So I don't know how feasible it is once we're in general availability, but I think a lot of the focus, at least in the short term, has been around just making sure that we have a stable, or semi-stable, environment for when we're in beta. And in beta we'll theoretically be opening this up to anyone who wants to opt in using the experiments toggle, but at the same time it's a balance of what's possible.
A
How much work is actually involved to enforce these limits, and does it then make sense to even implement them? Because, to your point, we don't want to lose any of that data just because we set these policies, set these thresholds, even with a 10% buffer or whatever, and someone exceeds them based on some sale or marketing event, and, yeah, we've lost almost our entire value to customers just because we wanted to save on cost.
B
Yeah. As we get to more external users, and users who are using it more for customers, customers beyond our initial user persona of your platform team or your tools team, most of your users are internal, so it should be fairly steady state. Then we can start to implement things with the field team around, hey, we are expecting a bump because we're expecting additional traffic, whatever it might be, and there are going to be funky outliers. I think we should focus on the initial use case of, hey...
B
Let's just not let somebody accidentally DDoS us and deal with the consequences of it, especially through beta, where data loss is a little bit expected and built in and we can learn from that. But getting something in place to prevent that on the front end, and then getting some sane limits put in place on how much we're going to store, is kind of what we're looking for: the outcome that we want out of this issue, this discussion, and then what we can implement throughout the beta.
D
Yeah, I was only going to say, I've had experience with analytics before, and the two things that we normally limited were to do with individual users: putting a cap on how many events we would track per user per second, say five events per second or something like that, and then how many events per day or per set period of time.
D
We provided that per user because, realistically, a normal user isn't going to be spamming refresh constantly over a short period of time, and that was how we reduced or put limits on things. But we didn't limit how many events overall we would send or collect for the website.
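As a rough illustration of that per-user cap, a sketch only, using the five-events-per-second figure from the discussion and a token bucket so short bursts still get through:

```python
import time
from collections import defaultdict

# Token-bucket sketch of the per-user cap: roughly five events per user
# per second (the example figure from the discussion), with short bursts
# tolerated and sustained spamming dropped. Not a decided policy.

RATE = 5.0      # tokens (events) refilled per second, per user
CAPACITY = 5.0  # maximum burst size

class UserRateLimiter:
    def __init__(self) -> None:
        self.tokens = defaultdict(lambda: CAPACITY)  # user_id -> tokens left
        self.last_seen: dict[str, float] = {}

    def allow(self, user_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen.get(user_id, now)
        self.last_seen[user_id] = now
        # Refill tokens for the time elapsed since this user's last event.
        self.tokens[user_id] = min(CAPACITY, self.tokens[user_id] + elapsed * RATE)
        if self.tokens[user_id] >= 1.0:
            self.tokens[user_id] -= 1.0
            return True
        return False  # dropped (or queued/sampled, depending on policy)

limiter = UserRateLimiter()
accepted = sum(limiter.allow("user-1") for _ in range(20))
print(f"accepted {accepted} of 20 back-to-back events")
```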
A
Yeah, I think that's where we can start to figure things out. What I'm coming to realize is that our first iteration is, one, just having visibility into that amount. I think it's going to be important to have that insight into the users and how many events they're sending, and then being able to figure out...
A
Okay, do we have the means by which we can actually exclude specific users? In the same way that, you know, we have API limits, and in the beginnings of that we were at least able to identify who was hitting our API the most, to be able to say, hey, this is probably out of the ordinary. I think we want to be lenient to start, but then start to figure out, like, okay...
A
You know, these users are sending tens of thousands of events compared to hundreds of events a day, and then being able to exclude them. Maybe that's a good first or second iteration, and then we can kind of move on from there.
E
Yeah, I think, from my perspective, for a first iteration: if our main concern is that we don't bring down the whole system, or that one user, like James said, doesn't accidentally DDoS us, then I don't know how much we actually need to compare how much a specific customer has used us before against how much they're using us now. Instead, we could just put in an overall limit based on our infrastructure and maybe what we want to pay for it.
E
As long as the infrastructure is kind of somewhat elastic, we just have an overall limit of, okay, for now we don't expect a single customer to exceed X amount of events per second, and we just put a hard limit on that. That's also something we could communicate: okay, this is not made for your Black Friday special super sale yet. So we just kind of put that hard limit in place.
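A sketch of how simple that hard cap could be: fixed one-second windows per customer, with the threshold itself a placeholder to be derived from the infrastructure and cost analysis.

```python
import time
from collections import defaultdict

# Fixed-window sketch of the hard per-customer cap. The threshold is a
# placeholder, to be derived from the infrastructure and cost analysis.

X_EVENTS_PER_SECOND = 1000  # assumed value, not decided

counts: dict[tuple[str, int], int] = defaultdict(int)

def allow(customer_id: str) -> bool:
    """Count events per customer in one-second windows; reject on overflow.
    A production version would also evict old windows to bound memory."""
    window = int(time.time())
    counts[(customer_id, window)] += 1
    return counts[(customer_id, window)] <= X_EVENTS_PER_SECOND

print(all(allow("customer-1") for _ in range(1000)))  # True: under the cap
print(allow("customer-1"))  # likely False: event 1001 in the same window
```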
E
It seems to me like that would maybe be a lot easier to implement than if you need to consider how something changed over time, and it should still protect us from the case where one customer just surges so much that they impact the experience of other customers. Because, from what I understand, that's, I mean, the main...
E
The main problems we could have are either one customer just DDoSing us and creating a bad experience for everyone else, or, if we have completely elastic infrastructure, one customer just creating too much cost in a beta environment where they don't have to pay yet, so there's nothing stopping them: they could just use it because it's cheap, or free, and kind of misuse us that way.
B
Yeah, Basti, I think you nailed it. The first use case is that DDoS one, and as long as we can start to understand how many events we're storing and where the cost is, we can then build out cost models around what data limits look like, from how long we're going to store something to how much of something we're going to store, as we get into more pricing and packaging.
B
So that's really what I wanted to drive with the discussion of this issue: how do we, one, prevent the DDoS case, and, two, capture the data we need to understand pricing and packaging. So, great discussion. I wanted to then kind of segue into how we can at this stage inform what we can do for the former. What do we have, then? You mentioned something on the Google side already.
B
Are there other things that we can do within Snowplow or some of the other stack to implement a larger or more broad case that applies to everyone, whether it's how many events per second per user or whatever the right nomenclature is, during our beta for product analytics, that ensures we get the reliability and stability we're looking for?
A
Yeah. So the first key thing that comes to mind is the visibility, because we don't have it: we need to understand the cost, but we also need to understand how much data we're getting, to actually have any idea of what a sensible limit would be. Otherwise it's all just theoretical at this point. And then also understanding what's possible.
A
So there's Cloud Armor, and there's also Cloudflare; I know a lot of our services are behind Cloudflare, and, you know, they have DDoS protection as well.
A
So it's trying to understand where we can leverage this type of protection first, and then eventually integrating that with our visibility into the data and what limits we want to place. After which, well, this is recorded, but okay, anyway: the collectors are raw endpoints, right? They're just directly pointing at our load balancers, and so if that means that Cloudflare has to manage those endpoints so that we get those DDoS protections in place, then that's what we need to understand. So I'm trying to investigate the infrastructure side of things, what's possible there. After that we can see... sorry, it's getting confusing.
A
Yeah, so I would say two issues to start. The first is to do the analysis of current usage and the costs related to it, and the second is what's possible in terms of actually enforcing these limits if we have values to put in place. That could be the Snowplow stack or just the infrastructure, because it makes sense from what Basti is saying.
A
It's, like, you know, what limits do we want to place in terms of just overall cost, and how do we allow these shared clusters to scale? So, ideally, it would be in front of even the clusters, or at least at the cluster level. I don't know that it necessarily means, and it probably shouldn't be, specific to Snowplow or any type of service in the stack; it'll likely be a layer in front of that. But anyway, those are the first two issues I'd start with.
D
One approach we've, I've, taken in the past is to run up a duplicate production instance and then run something like Taurus...
D
The load-testing tool. You run a couple of scenarios and just ramp it up until it breaks, and then you scale up the copy, the test cluster, at different limits, to see what sort of budget ranges you'd hit based upon how much data you're inputting. With our scenarios, that would be the collector processing events, and then also, from the product analytics perspective, Cube querying for events, both sessions and normal page-view-type events.
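In practice Taurus drives this from a declarative scenario file; purely to illustrate the ramp-until-it-breaks idea, here is a single-threaded Python sketch against a hypothetical collector endpoint (a real test would run many concurrent workers to reach meaningful rates):

```python
import time
import requests

# Sketch of a ramp-until-it-breaks load test. The endpoint and payload
# are placeholders, not the real collector contract.

COLLECTOR_URL = "https://collector.example.com/events"  # hypothetical
RAMP_STEPS = [10, 50, 100, 500]  # target requests per second at each step
STEP_SECONDS = 30

for rps in RAMP_STEPS:
    sent = errors = 0
    deadline = time.time() + STEP_SECONDS
    while time.time() < deadline:
        start = time.time()
        try:
            r = requests.post(COLLECTOR_URL, json={"event": "page_view"}, timeout=5)
            errors += r.status_code >= 500
        except requests.RequestException:
            errors += 1
        sent += 1
        # Sleep off the rest of this request's time slot to hold the rate.
        time.sleep(max(0.0, 1.0 / rps - (time.time() - start)))
    print(f"{rps} rps target: {sent} sent, {errors / max(sent, 1):.1%} errors")
    if errors / max(sent, 1) > 0.05:  # stop once the cluster starts failing
        break
```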
A
Yeah, I think we'd do the metric setup first, and I forgot to mention that logging and monitoring are going to be an important part of that, since we have to measure which parts of the stack are healthy or not. I've got a point of contact to push that forward, so hopefully I can get someone to investigate it. But yeah.
A
Cool, so we have some action items there. I'll create the issues and then see if we can kind of piece that out and get some DRIs for it. But are there any other thoughts as far as application limits are concerned?
E
Hey, so some of you might remember that we had a discussion a few months ago, I didn't actually check when it was, around how we track individual users and how we calculate the user count. So if you look at a dashboard or something and it tells you about unique users, how do we calculate that?
E
Especially if users do not, for example, accept a cookie banner, or they do not actually log in. So we got to an agreement there, which was that when a user doesn't even opt into a cookie banner...
E
We're going to consider every single event that they're sending as coming from a new user, because we don't have any additional information; we cannot realistically track anything about them. When they opt into the cookie banner...
E
Then we get a cookie-based user ID that we're going to consider their unique user identifier, and then, on top of that, when they actually log in, we would consider that the unique identifier. The problem with that is just that all this information we already have, in theory, in the database, but so far it would have taken quite complicated queries to figure this information out.
E
And unique users is a very important metric for us to display, so it's important to be able to query it fast and easily, to showcase it in dashboards and so on. Especially, tying into the topic beforehand, if you can scale to millions of events, it still needs to be fast. So I just wanted to quickly show how we are now able to do that. There's an MR...
E
That's linked there; it's in progress right now, but it should be merged any day, either tomorrow or the day after, and it's what I'm now showing is based on. Let me share my screen quickly.
E
So, can you see the screen? Yep, all right. So this is what we normally have for an event, which is important to what I'm talking about.
E
So this database table is based on how the configurator worked beforehand and how the events tables were set up beforehand. So, in theory, for all these events I've sent previously, we have an event ID, which is particular to the single event.
E
For some of them we have a domain user ID, and for some of them we have a user ID; it depends. If you look at the example we have: if I send an event like this without accepting our cookie banner, like our example cookie banner here, so if someone configured the SDK to show a cookie banner first and it wasn't accepted, then what you would see is that you only get the event ID. It's an event like this.
E
We don't have a domain user ID, and we don't have a user ID. If I accept the cookie banner but don't identify the user in any other way, then I get an event like this, where I just get a domain user ID and the user ID field itself is still empty. And then, if I call this identify with id123 and then track an event, I get a user ID in the field.
E
So the good thing about this is that we have all the information already. Just a second; I think I also need to drop the migrations table, one sec.
E
A live demo always conveys more, especially if it doesn't work, and then people learn even more. No, it's not broken; I just need to redo this quickly, because the migration already ran before on a different table.
E
All right, now it's successfully migrated the database. So the good thing now is that we should have the user ID field filled everywhere. So if I go back to my previous query...
E
What we have now, and I wanted to share this because I think it's important to build dashboards on and to use going forward, is that the user ID field is filled for every one of those events, and you can see that you also have a user ID type. The type for those events where a specific user ID was sent is identify, and there's a corresponding type for the events where we only had a domain user ID and no specific user ID beforehand.
E
That matters when we talk about unique users, and then we can also still think about how we want to handle those anonymous users. In theory we could, for example, provide a filter or something to the user, to say, okay, let's discard anonymous users. Because, for example, I think in the metrics dictionary and a few other example projects that we currently have implemented, we don't have a cookie banner, so every single page view is by a new user.
E
So your unique user count is quite inflated. But that's something that we have under control now, and we can still think about how we want to handle it. Yep, I think that's it.
A
This is great. So we've made changes on the SDK side then, you said. So I guess, what all needs to happen to roll this out? Because we have table migrations for existing data sets, and then, for new data, that change needs to be reflected in the SDKs, and then I assume...
A
We would also have schema changes, so our vector or our materialized-view queries would have to change in production as well, right? Or, I guess, and I know this is very easy to get lost in the details, but what are the high-level steps to bring those changes into production? Or do we need to figure that out still?
E
So, if I'm not mistaken, we only need to update the configurator, because now everything is based on the configurator; we already got this information beforehand. And the configurator has migration capabilities now, so it can migrate databases and tables, and the way we built the migration, it will actually also migrate all existing tables and databases. So it will go through and take all the Snowplow events tables...
E
It removes the old views and creates a new view that fills in the, what do you call them, the new columns. It creates the new column first, and it also migrates the existing data, using the data that's already there to fill in these columns. Because, in the end, what we have now is just the normalized data; everything's already there, and we just put it into a more convenient spot to be able to use it, I think.
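A hedged sketch of what that per-table migration amounts to: add the column, then backfill it from data that is already present. The table name pattern, label values, and connection details are assumptions; the real logic lives in the configurator MR, and the view recreation is omitted here.

```python
from clickhouse_driver import Client  # assumes the clickhouse-driver package

client = Client(host="localhost")  # hypothetical connection details

# Find every Snowplow events table (the name pattern is an assumption).
for (table,) in client.execute("SHOW TABLES LIKE 'snowplow_events%'"):
    # Create the new column first...
    client.execute(
        f"ALTER TABLE {table} ADD COLUMN IF NOT EXISTS user_id_type String"
    )
    # ...then backfill it from the data that is already there, using the
    # same fallback the new view applies at read time.
    client.execute(
        f"""ALTER TABLE {table} UPDATE
                user_id_type = multiIf(user_id != '', 'identify',
                                       domain_user_id != '', 'cookie',
                                       'anonymous')
            WHERE user_id_type = ''"""
    )
```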
E
And we're presenting exactly the same values. That's why I tried to show the before and after of the same database table: we already sent all this information before. We sent the domain user ID, we sent the event IDs, and we sent a user ID if a user got identified. So this information is already there; this is how the SDKs already worked.
E
This way, it's up to the individual place where we implement the SDK to make sure that the cookie banner is accepted, that whenever it's accepted we set the right information there, and the same with this identify call. For example, on gitlab.com we don't use this .identify call yet, because we also need to make sure that we have the proper anonymization in place for gitlab.com, for our specific case. But otherwise this should just work with all the data that we already have.
E
Let's do the new one, yeah, I think so. So the new one fills in this user ID type, yeah, the last thing down here, and that's the conveniently named multiIf from ClickHouse.
E
This is based on the user ID, and up here somewhere the user ID itself also got changed: if the user ID is not empty, keep the user ID; if it is empty and the domain user ID is not empty, use the domain user ID; and if both of those are empty, then use the event ID.
E
So this is all happening while the event is taken from the initial queue from Snowplow and put into the specific events database for this specific project: the user ID is extracted correctly, and the user ID type is also set based on this information.
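Restating that fallback as a small Python sketch; the equivalent ClickHouse expression appears in the comment, and the type labels other than identify are assumptions, since only that one was named explicitly in the walkthrough.

```python
# Roughly the ClickHouse expression in the new view:
#   multiIf(user_id != '', user_id,
#           domain_user_id != '', domain_user_id,
#           event_id)

def resolve_user(event: dict) -> tuple[str, str]:
    """Pick the best available identifier and label where it came from."""
    if event.get("user_id"):
        return event["user_id"], "identify"       # .identify() was called
    if event.get("domain_user_id"):
        return event["domain_user_id"], "cookie"  # assumed label: banner accepted
    return event["event_id"], "anonymous"         # assumed label: new user per event

print(resolve_user({"event_id": "e1"}))
print(resolve_user({"event_id": "e2", "domain_user_id": "d1"}))
print(resolve_user({"event_id": "e3", "domain_user_id": "d1", "user_id": "id123"}))
```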
D
Yeah, sure. Just from a product analytics perspective, this is great. Thank you so much for getting this done, all of the instrumentation team.
D
So even though this isn't fully deployed, the MR is merged, so we can start using the configurator locally if we want to, pull that image down, and start trying all of that out, either on master or on the next release. And the second point, the one thing that we'll need to change as well: a lot of the Cube schemas apparently query for domain user ID, so we will need to update those to make sure they're consistently using user ID throughout. But that's a very small change.
A
Cool, then we have a minute left; is there anything else that anyone would like to cover? All right, well, good to see everyone. Have a good rest of your Monday, and, if we don't see each other later on, have a good rest of your week.