From YouTube: Artifact Sync 2022-02-07
A
Okay, all right. My name's Jackie Porter, and I'm here with Gina Doyle to talk a little bit about artifacts. So first we have this issue that we've been iterating on, which is where the UI should explain if no JUnit report artifacts are available to populate the unit test report. It looks like when this issue was created a year ago, we had a different API available, which is this build report result, and now we have this new API, which is more performant and uses less.
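For context, the JUnit report artifacts being discussed are declared per job in `.gitlab-ci.yml`; a minimal sketch, assuming standard GitLab CI syntax (the job name and report path here are made up):

```yaml
rspec:
  stage: test
  script:
    - bundle exec rspec --format RspecJunitFormatter --out rspec.xml
  artifacts:
    # when: always uploads the report even if the job fails,
    # so the unit test report can still be populated
    when: always
    reports:
      junit: rspec.xml
```

If no job in the pipeline uploads a `reports:junit` artifact, or the artifacts have expired, the unit test report has nothing to populate, which is the empty state discussed here.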
B
So if I have this old pipeline that I triggered a year ago, and I go into the tests, I see a summary that looks like it's active, because that's what all of our other ones look like. But then, if I go into this specific test job (I guess we're calling it that, but we're also calling it a test case), I don't have anything here, but I don't know if that's because they're expired.
A
Yeah, I don't think the graphic here is the graphic that Evan shared; I think it's a general empty-state graphic. I don't know if he is actually saying that we need to create a test report in this case. Oh, okay, he's explaining that this is the usual pattern of an empty state inside of our platform.
A
So I think that's what he was saying, and then I think his suggestion after that was: well, in the current thing that we're looking at, in that jobs example, yeah, we do. Do we need to reframe what people are looking at here, or do we need to have them see that there's actually no test report, instead of test cases? Because when you go back to the previous view, yeah, this is the test job view, but when they click on that job name, test JUnit report, they're looking to see the report details, right? So I think that was Evan's point, and then he was just making the clarification that technically it isn't an empty state, because this is what an empty state looks like elsewhere in our product.
A
It depends on how the user has it configured if they're self-managed; if they're on GitLab.com, I think we do have automatic deletion after like 30 days. So for this one right here, triggered one year ago, the artifacts would be gone. But, right, I think about how people are going to configure this on self-managed: some will always keep their artifacts and other people won't.
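The expiration behavior being discussed is set per job with `expire_in`; a minimal sketch, assuming standard GitLab CI syntax:

```yaml
build:
  script: make
  artifacts:
    paths:
      - binaries/
    # Overrides the instance default (30 days on GitLab.com);
    # self-managed instances can configure their own default
    # or keep artifacts indefinitely.
    expire_in: 1 week
```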
A
So
we
can't,
I
don't
know
if
we
can
actually
like
set
up
a
rule
that
says
retrieval,
artifacts
or
not,
might
delete
them,
and
some
people
might
not,
but
I
think
evan
has
this
last
suggestion
in
his
latest
comment
a
day
ago
that
talks
about
adding
specific
texts
around
test
reports
require
job
artifacts,
but
all
artifacts
are
spired
or
expired,
learn
more,
which
I
think
is
a
great
suggestion,
and
then
we
just
need
to
have
the
front
end
validate
that
when
they
add
that
text
that
this
will
only
show
when
things
are
expired
rather
than
not
configured
because
yes,
okay,
then
the
second
one
would
be
more
relevant.
A
Okay, and then this next thing that we wanted to talk about was your general findings on... it's loading for me right now, so give me a second... okay, on the insights from research. So I think some things that we found that were really interesting, that might be great for us to prioritize, are what you created first, which was traceability. You already created that epic, and I've started hydrating that epic with additional issues from our backlog, so we have a whole bunch of scope that fits into that. But I'll let you speak and give your overview on what you think is most relevant, and then I can give some commentary on what I think is important for us to pursue this year.
B
I think it might be good to maybe not even make changes, or propose changes to the product, and instead just propose doing more research on some specific areas, because I still think these span so many different groups. Like, I think the compliance thing could be really interesting to solve, but I don't know if we can go right into solving that, because we really only had that information from one study.
A
Yeah, I think there's also this market push around compliance and security. That means we may have to evaluate this in a couple of different problem-validation cycles, because one is going to be how we want to support something like security workflows with our artifacts, and the other one is how we want to help people do enforcement of certain artifact behavior treatments. Those are really different audiences, really different personas, but they do appeal to the greater market need for how we build security inside of applications.
A
Around audit reports for artifact actions, like downloading artifacts, running bulk deletes on artifacts, who's accessed an artifact, audit logs, yeah. Different artifact behaviors might help with traceability and also meet the needs for compliance.
B
This also brings me back to the fact that we have a feature flag: the view of all of your artifacts is under a feature flag right now. And I think something like that would honestly be useful for a lot of the things that came up in this research, just being able to see all the artifacts at once.
A
Yeah, so let me go ahead and add that blocking relationship. I think you're right that there are opportunities in just the visibility portion that can help us accomplish some more things with compliance, which is great, yeah.
B
One other thing that came up that I think was really interesting was the input and output of artifacts within a pipeline. Usually artifacts are carried over between jobs, and they wanted to know which of your jobs are actually outputting an artifact and which one was taking it in, basically. I thought that was a really interesting problem to solve, definitely hard, but I think visualizing that in some way could make it easier.
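The producer/consumer relationship described here is expressed in `.gitlab-ci.yml` with `artifacts` on the producing job and `needs` (or `dependencies`) on the consuming job; a minimal sketch with hypothetical job names:

```yaml
build:
  stage: build
  script: make dist
  artifacts:
    paths:
      - dist/          # this job outputs an artifact

deploy:
  stage: deploy
  needs: ["build"]     # this job takes the artifact in
  script: ./release.sh dist/
```

The pipeline view doesn't currently distinguish these two roles visually, which is the gap being discussed.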
A
Let me think; there's this parent-child pipelines and artifacts epic. That has been the biggest challenge for a lot of our users: how are pipelines passing artifacts? The reason it's a problem is that sometimes you can't see when something has been triggered downstream, especially in a dynamically generated child pipeline, and as a result there's no visual representation of that information. So, going back to your point about how we help people visualize these dependencies: that's what this epic is about.
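The dynamically generated child pipelines mentioned here are typically wired up like this; a minimal sketch, assuming standard GitLab CI syntax (the generator script is hypothetical):

```yaml
# The parent job generates a pipeline definition and uploads it as an artifact
generate-child:
  stage: build
  script: ./generate-pipeline.sh > child.yml
  artifacts:
    paths:
      - child.yml

# The trigger job runs the generated definition as a child pipeline
run-child:
  stage: test
  trigger:
    include:
      - artifact: child.yml
        job: generate-child
```

Artifacts produced inside the child pipeline aren't surfaced in the parent's views, which is part of the visibility problem described.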
A
Okay, I didn't realize that. Okay, cool. And you shouldn't, because it's currently nested across pipeline authoring and pipeline execution; there's no reason for you to know this. The only reason we play a part in it is because of things like artifacts:expose_as not working with child pipelines, and child pipeline artifacts for MR reports, which is particularly our area. But some of these other issues aren't even build-artifacts related; some of them are literally pipeline-authoring related.
A
If we think about it from a pipeline insights angle, are we looking to help debug, or look at failures, or do flaky test detection? Or are we helping people author pipelines faster? If it's authoring pipelines faster, then that might be more in the pipeline authoring use case, and we would just defer to the DRI on that front.
A
Yeah
debugging
and
triaging
failures
is
like
the
number
one
thing
that
come
out
that
comes
out
of
artifacts,
which
is
why
it's,
which
is
why
it
was
moved
to
testing
when
we
think
about
how
people
are
interacting
with
artifacts.
It's
because
they're
there
as
a
result
of
something
going
wrong
with
their
jobs
and
that's
usually
detected
through
testing,
usually
detected
through
logging
and
some
sort
of
failure.
Indication
so
completely
agree
with
that.
B
That was a small portion of what I saw come out of the research. But there were things about being able to maintain a low storage capacity, like automating deletion of artifacts, which I thought we already allowed. But maybe we only allow it at a certain level, because they specifically were asking for applying rules at the instance level, and maybe being able to manage the storage better than we're allowing today. This is completely connected with artifacts.
A
It's
a
good
call
out
so,
as
I
start
to
think
about
artifacts
and
our
storage,
our
push
for
storage
mechanisms,
we
are
leading
our
users
there,
because
they're
going
to
get
told
that
they
have
a
limit
that
they
didn't
have
before.
So
it's
definitely.
It
definitely
makes
sense
that
this
didn't
come
up
in
research,
because
it's
not
a
problem
that
they
yet
have.
A
Okay, so they'll start to have it once we start to enforce storage limits on gitlab.com, for example. Now, if you're talking to self-managed users, this is definitely a problem for them, but the audience is typically going to be your GitLab administrator or your DevOps engineer, who is responsible for the billing and costs around storage and compute. So if you're only talking to a test engineer, or only talking to a developer, it's really unlikely that they're going to care about storage, because they're not the persona or the role responsible for maintaining those costs.
A
So we will be, with all the work that we've been doing around bulk delete and supporting usability of artifact management, which is also a dogfooding initiative. The admins we're talking to are really our delivery team, our infrastructure team, our database team, and all those mechanisms are informed by our internal use cases, and then they'll apply to the greater use cases. But in the end, self-managed users get to configure however they want: retention policies, automatic deletion.
B
There's a lot; I keep having ideas break off in my head. But there's one other bigger thing that I wanted to talk about, which was when artifacts came up in container registry and packages.
A
That's a really great question. So they were definitely referring to binaries, like any sort of software binary that is built and delivered, which is like all the images that are stored and delivered inside of a container or package registry. So you're absolutely right that that is really tailored toward what people consider a software binary, and less about how we currently define job and pipeline outputs.
A
Or
well,
it's
it's
a
nomenclature
that
we
have
to
be
aware
of,
because
if
we
think
about
it,
we're
being
more
specific
with
the
flavor
of
artifact
yeah,
so
we're
talking
about
job
and
then
pipeline
artifacts,
which
then
can
eventually
become
that
software
package
that
gets
delivered
right
so
like
it's
not.
I
don't
think
they
have
to
be
the
things
that
we
build
for
pipeline
and
job
artifacts
aren't
going
to
exclude
package
artifacts
like
you're,
still
going
to
need
to
maintain
and
analyze
those
and
think
about
them.
A
But if we take a step back and look at what's happening with the artifacts in the package work stream, they already have all of their policies for cleaning up packages, because they're stored in a different part of the database and the application, so they're treated differently. They have their own rules. People know that those are a different flavor of artifact, and we don't really document them as an artifact; that's just what people use as part of the software language.
A
Okay, and I think in our build artifacts page, if it's helpful, we can always cross-link the package direction and kind of explain why packages are different than artifacts. We have something like "for more information about storing containers or packages, see the Package stage direction page," so we do reference, like, "hey, go here if you're looking for this kind of artifact." But we could be more explicit, along the lines of "if you're thinking about binaries that are produced after the build process for software delivery, you are likely thinking of Package, and here's the direction," if you think that would be helpful, being more explicit there.
B
That's a good question. I think I thought that they overlapped with each other, but that doesn't mean that others do. Also, sorry, the machines are starting up again next door. So I don't know; it seems like maybe it was just a me thing, maybe not users.
B
Yeah, it could be. Maybe that's an area that I need to just look into more, because I didn't.
B
Yes, I agree. I am looking more into all these different comments for the different categories, and I really think that starting with like an MVC of an event log or an audit log, like you were saying, would help us partially solve, maybe not fully solve, a lot of these. But I don't think it needs, like...
B
I
think
it
needs
to
be
things
that
are
actions
that
are
taken
through
the
ui
but
also
stuff,
that's
generated
on
the
back
end,
and
I
think
that
that
could
also
help
us
build
like
a
a
structure
for
being
able
to
then
start
solving
the
pipeline,
like
how
they're
included
in
different
pipelines,
if
that
makes
any
sense.
So
not
just
like
somebody
deleted
this
artifact,
or
this
was
generated
on
this
date,
but
being
able
to
say
this
came
from
this
pipeline
and
maybe
even
connecting
it
back
from
a
commit.
A
Yeah, so if we think about the different series of events, we would have user-generated events and then system-generated events, so having that out of GitLab for artifact behaviors I think would be helpful. So I'll create an epic under our artifact traceability epic for event/audit logs for artifacts, and then we'll have two sub-epics, one on user-generated and one on system-generated logging.
A
Link them, yeah. If I was to distill what I think are the top three priorities for us to pursue with artifacts, I would say it's debugging capabilities from artifacts, event logging with artifacts, and storage management. So I think those are the top three priorities that we have to solve, and the debugging portion is that workflow of pipelines: examining failures in pipelines, analyzing coverage. It's all of these when it comes to debugging, because how do we think about what's happening in our pipelines and our tests related to artifacts?
B
If I was not aware that our tests were generated from artifacts, then I wouldn't know that, right? We don't make it obvious, and that goes for any other, I guess, analysis portion. All right, that makes sense. Okay.
A
Your research was super helpful. I'll get started on those epics, and you'll probably notice that I'm pinging you on a bunch of other epics too, because I'm doing a mass reorganization of all of our categories, thinking about what things exist today and creating new epics. So I'll get that going for event logging, and then, when we have our Think Big with the team, we'll also get their thoughts on what are ways to quickly surface traceability insights inside of the UI, because I think if there are quick wins that we can show there, where we're just exposing things to users, that will really help our...