From YouTube: Compliance weekly - 2023-09-20
A
Welcome to the weekly compliance meeting, 20th of September. I think we left off at this issue last time. This one is mine: the feature flag work for external audit events, for instance-level events, is now done.
A
B
So this was about creating the model and the table for setting an audit event destination for Amazon S3. That has been deployed to production, and the MR for actually using those is in review right now.
B
The next one was related to an incident where we had infinite rollbacks for the approvals worker. We fixed it, the fix is also deployed to production, and the feature flag is enabled globally now. The next one above that is for backfilling the existing compliance standards adherence table; that migration has also been completed. So from a backend perspective almost everything is completed. We just have a few improvement items related to optimizing the database queries.
C
B
Yeah, cool. The only thing is we still don't have any pagination on the UI, so users may not be able to see all the projects there if, for example, the group has a lot of projects. I even reproduced the issue on the GitLab group that we test on: I created a new project under it and then visited the compliance page, and I couldn't see it, because it was not listed there. We don't have pagination, so yeah.
D
Just a note on the pagination: it's just waiting for maintainer approval. I might reassign it tomorrow, because it's been almost two days. Just to note. Thanks.
D
When I was trying to add a streaming audit destination in the admin panel, I was getting a "feature flag not found" error, so I'll see if I can reproduce it and then write an issue for it. I just noticed it today, when I was working on audit events.
B
Yeah, so this is the same thing. The adherence report backend MVC is already completed, so if we start rolling out the feature flag that should be fine. The only concern I have is about the existing database query, but we can still roll out the feature flag, and if we see any issues we can stop there.
D
Yeah, and for the UI side I think it's just the filtering that is the remaining work. Everything else is in maintainer review or merged.
D
Oh yeah, for the next one, group export, we can work on the on-demand scan. That's in the works, but I think it's targeted for the next milestone.
A
Yes, so this is related to ClickHouse and how we are going to migrate data from Postgres to it.
A
So here we need to think about a strategy for how the whole process will occur. This is currently in the planning stage and I'm thinking through different strategies, but one strategy we might use is to build CSV reports, then just push those CSVs to ClickHouse and do some kind of bulk insertion.
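As a rough illustration of that strategy, here is a minimal Python sketch: build a CSV body and the `INSERT ... FORMAT CSV` query that ClickHouse's HTTP interface accepts. The table name `audit_events`, the column layout, and the endpoint are placeholder assumptions, not the actual GitLab implementation.

```python
import csv
import io

def build_csv_payload(rows):
    """Serialize rows (tuples) into a CSV body for a bulk insert."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def build_insert_query(table):
    """ClickHouse's HTTP interface accepts INSERT ... FORMAT CSV,
    with the CSV data sent as the request body."""
    return f"INSERT INTO {table} FORMAT CSV"

# Hypothetical usage (not executed here): POST to ClickHouse's HTTP port.
#   requests.post("http://clickhouse:8123/",
#                 params={"query": build_insert_query("audit_events")},
#                 data=build_csv_payload(rows))
rows = [(1, 42, "project_created"), (2, 42, "member_added")]
payload = build_csv_payload(rows)
query = build_insert_query("audit_events")
```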
A
D
For the retry field on external status checks, the MR already got maintainer review. There was one more note from the maintainer before giving the final approval and merging, but I have left a comment and I hope it gets merged within the next few hours.
A
Yeah, and the next one is with me. For this branch protection dropdown there were two settings we needed to build, one in the namespace and another in the application. For the namespace part, the MRs are already in review. I realized that there was no issue regarding the application setting in the original epic, so I will be creating a new issue for the application one and start working on it, but the target release for the backend part would remain 16.5.
B
Yeah, so for this one the model and the table have been deployed, and the MR for using them is in review. I also created a demo and shared it with the group yesterday. I have to start working on creating GraphQL APIs for this, so that the frontend can use them and create the form for creating the AWS S3 destinations.
A
Okay, the next one is with me. The list and create APIs for these have been deployed to production, and I'll be rolling out the update and show APIs as well, so most likely they will get deployed in this milestone.
B
C
Yes, we defined the MVC in 16.4, and this milestone we've got the first MR, adding the feature flag and the new tab, already up and in maintainer review at the moment. So hopefully we can keep going through a couple of those issues this milestone until we can get early feedback.
A
The next two are with me. I am working on the migrations and models MR for this; as soon as I'm done with them, I'll start creating the MRs for the other ones. We'll try to finish up the group-level API MRs in this milestone or the next one. The update API MR is under maintainer review and will most likely get deployed in a few days. So this is almost done in terms of backend, but we need some changes from the frontend.
C
B
A
Yes, I just wanted to give a quick walkthrough of how the ClickHouse project has been going, so let me just share my screen.
A
Is it visible? Yeah, so I just wanted to give some updates around it. The first one is the main epic for the migration of audit events to ClickHouse. It has a bunch of issues inside it, so I can just walk through all of them one by one, starting with the schema design.
So we have already designed a ClickHouse schema for audit events. The way we did that was: we took the current Postgres schema and first came up with a sample schema for ClickHouse, then ran a bunch of queries over it. It needed some indexing at the start, so we came up with the indexing, ran the queries again, and tested with some 10 million rows; the sample queries ran under 0.027 seconds. But one thing to note here is that this data was totally random, unlike GitLab.com, where a single group can hold the majority of the audit events and many other small groups might have only very few.
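Since the benchmark data was uniformly random, a more production-like benchmark could skew the generated group ids so one group owns most events. A hedged sketch of such a generator; the 90/10 split, group id 42, and the 1000-group pool are made-up numbers for illustration only:

```python
import random

def skewed_group_ids(n, big_group=42, n_groups=1000, big_share=0.9, seed=0):
    """Generate n group ids where `big_group` receives roughly
    `big_share` of all audit events, mimicking gitlab.com's skew
    rather than a uniform distribution."""
    rng = random.Random(seed)
    others = list(range(1, n_groups + 1))
    ids = []
    for _ in range(n):
        if rng.random() < big_share:
            ids.append(big_group)          # the dominant group
        else:
            ids.append(rng.choice(others)) # long tail of small groups
    return ids

sample = skewed_group_ids(10_000)
share = sample.count(42) / len(sample)  # close to 0.9
```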
A
So this one thing was missing here, which we came to know in the later steps, but as of now this is the schema we are proposing.
A
The second issue was regarding how we are going to retrieve data from ClickHouse. The ClickHouse working group developed a gem around ClickHouse which helps you query ClickHouse over HTTP. But what we needed here was, first, to analyze how audit events actually need the data, how it builds the data, and how it shows the data over the UI. We have two main controllers where we fetch the data, one being inside group and the other being inside project.
A
We have a bunch of scopes which help us build a dynamic query based on what a user is actually requesting. These are a bunch of queries which can be fired on our Postgres database. So what we mainly needed here was a dynamic way to generate a query from user input.
A
So from here we came up with the issue that, specifically for the audit events use case, we'll need a query builder. Other use cases in GitLab might not need this query builder, because they are mainly using ClickHouse for analytics purposes and will just be writing some raw SQL in our codebase. But in the case of audit events, you can see that if we start writing raw SQL it will be a huge mess around the audit events code.
A
So we built this query builder, and, similar to Active Record, you can just query your table using where, order, and limit/offset, just like we do with Active Record. This is built, merged, and done, and you can try it out locally.
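The idea can be sketched in a few lines of Python. This is an illustration of the chainable where/order/limit/offset pattern described, not the actual Ruby query builder; the class and method names are invented for the sketch:

```python
class QueryBuilder:
    """Tiny chainable builder: where / order / limit / offset,
    accumulating conditions and emitting parameterized SQL."""

    def __init__(self, table):
        self.table = table
        self.conditions = []   # (column, value) pairs
        self.orderings = []
        self._limit = None
        self._offset = None

    def where(self, **kwargs):
        self.conditions.extend(kwargs.items())
        return self

    def order(self, column, direction="ASC"):
        self.orderings.append(f"{column} {direction}")
        return self

    def limit(self, n):
        self._limit = n
        return self

    def offset(self, n):
        self._offset = n
        return self

    def to_sql(self):
        """Build the SQL with $N placeholders; values go out separately."""
        sql = f"SELECT * FROM {self.table}"
        params = []
        if self.conditions:
            clauses = []
            for i, (col, val) in enumerate(self.conditions, start=1):
                clauses.append(f"{col} = ${i}")
                params.append(val)
            sql += " WHERE " + " AND ".join(clauses)
        if self.orderings:
            sql += " ORDER BY " + ", ".join(self.orderings)
        if self._limit is not None:
            sql += f" LIMIT {self._limit}"
        if self._offset is not None:
            sql += f" OFFSET {self._offset}"
        return sql, params

q = (QueryBuilder("audit_events")
     .where(group_id=42)
     .order("created_at", "DESC")
     .limit(20)
     .offset(40))
sql, params = q.to_sql()
# sql    -> "SELECT * FROM audit_events WHERE group_id = $1 ORDER BY created_at DESC LIMIT 20 OFFSET 40"
# params -> [42]
```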
The second part was around a query builder enhancement: currently we also log the SQL being fired on Postgres, but we sanitize the user input, that is, the values which are user data.
A
So, for example, if I'm querying on a column with the value 12, the redacted SQL generated will be "column = $1". A similar enhancement was done in the query builder: if you are firing queries using it, it will do this redaction and will also log the queries. This is also done for the raw SQL which is fired directly using the ClickHouse gem. The next part is to create a model so that we can actually start using the query builder.
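The redaction step on its own can be sketched like this; the function name and the `author_id` column are hypothetical, chosen only to mirror the "column = $1" example above:

```python
def redact(conditions):
    """Given (column, value) pairs, produce a redacted clause safe for
    logging: literal values are replaced with $N placeholders and
    returned separately, so user data never lands in the logs."""
    clauses, values = [], []
    for i, (column, value) in enumerate(conditions, start=1):
        clauses.append(f"{column} = ${i}")
        values.append(value)
    return " AND ".join(clauses), values

logged_sql, values = redact([("author_id", 12)])
# logged_sql -> "author_id = $1"   (what goes into the logs)
# values     -> [12]               (the user data, kept out of the logs)
```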
A
So this issue is currently open, not closed, but the MR for it is ready. What I'm trying to do here is: I wrote a simple model for audit events, with all the different scopes which were present in our Active Record audit event file.
So here, similarly, we can chain the scopes and also write the filtering logic. The last part, which I'm currently working on, is the load testing one.
A
So here, what we are trying to do is: we decided to use a tool called JMeter.
B
A
You create a test plan, and inside the test plan you can add a bunch of HTTP requests. It also lets you add some scripts, plus some logic around how you are going to run those HTTP requests, and define the number of users and the number of threads that will run. After running, you can get a report where response time and throughput are noted down.
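The JMeter concepts described (N users/threads each firing requests, then a report of response time and throughput) can be sketched in plain Python. The request function here is a stand-in that just sleeps; in a real run it would be an HTTP call to the endpoint under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for an HTTP request; replace with a real call in an
    actual load test."""
    time.sleep(0.01)

def load_test(request_fn, users=5, requests_per_user=10):
    """Run `users` threads, each issuing `requests_per_user` requests,
    and report average response time and overall throughput."""
    timings = []  # list.append is thread-safe under the GIL

    def user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()
            timings.append(time.perf_counter() - start)

    wall_start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(user)
    wall = time.perf_counter() - wall_start

    total = users * requests_per_user
    return {
        "requests": total,
        "avg_response_s": sum(timings) / len(timings),
        "throughput_rps": total / wall,
    }

report = load_test(fake_request)
```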
So this is the current status for ClickHouse. The next step includes the staging load testing; for that I was wondering whether it would be possible for us to get a production replica on ClickHouse staging, so that we can get near-production metrics, but that is still in the planning stage. The next steps would also include writing the architecture design workflow for this and planning a data migration strategy from Postgres to ClickHouse. These are all in the planning state right now, aiming for 16.5.
A
Possibly, yeah. One thing with ClickHouse, which I was reading around, is that it's not recommended to frequently modify your schema once you have defined it, and also not to modify your actual rows, I mean your actual data. So that's one thing we might need to take care of initially: having a well-defined schema.
A
Yeah, so the one way I read about was to just create a new table with the new schema, dump the old table into the new one, then drop the previous table and rename the new table to the previous table's name.
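That table-swap approach can be written down as a sequence of statements. A sketch that just generates them; the table names and the sample DDL are placeholders, and the exact rename semantics should be checked against the ClickHouse documentation:

```python
def table_swap_statements(table, new_ddl):
    """Generate the statement sequence for swapping in a new schema:
    create the new table, copy the data, drop the old table, and
    rename the new one to take its place. Assumes the old data can be
    selected straight into the new column layout."""
    tmp = f"{table}_new"
    return [
        new_ddl.format(table=tmp),                   # create the new-schema table
        f"INSERT INTO {tmp} SELECT * FROM {table}",  # dump old data into it
        f"DROP TABLE {table}",                       # drop the previous table
        f"RENAME TABLE {tmp} TO {table}",            # new table takes the old name
    ]

# Illustrative DDL only, not the proposed audit events schema:
ddl = ("CREATE TABLE {table} (id UInt64, group_id UInt64, created_at DateTime) "
       "ENGINE = MergeTree ORDER BY (group_id, created_at)")
steps = table_swap_statements("audit_events", ddl)
```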
C
Oh yeah, just a couple of FYIs. If you missed it, there is a new label to add to your issues, called "verified by author", thank you to Phil for suggesting that, a great suggestion. It just holds the verification process for things the author wants to verify themselves, like feature flags or migrations, background migrations, things like that. And the other one was just that it's the end of 16.4.
C
So please get your thoughts and any ideas or updates that you have into the retro issue.
A
Okay, thanks for joining the session. I'll be stopping the recording now.