From YouTube: Kubernetes SIG Testing - 2021-01-12
A
So hello, everybody, this is the Kubernetes SIG Testing bi-weekly meeting. Today is Tuesday, January 12, 2021. Happy new year. This meeting is being publicly recorded and will be posted to YouTube later, where we can all watch ourselves adhere to the Kubernetes code of conduct, which basically means don't be a jerk. If you feel like you have a problem with somebody's conduct in this meeting, you are welcome to contact me at spiffxp in all the places, or conduct@kubernetes.io. On to today's meeting agenda.
B
Excuse me, it's not currently being recorded. Do you wish to record it?
B
Okay, that's interesting, because the Zoom record flag isn't on.
A
I'm not sure if recording to the cloud looks different than it does for other clients or not, but I can see the little recording option. Okay, sorry to interrupt. Yeah, it's all good! So actually, that kind of reminds me, before we jump into the agenda, I feel like there are some new names and faces here.
A
C
Absolutely, I can start. So this is our second meeting, we attended one before. I work in Matt's team, Matt Foley.
C
I report to Matt. We're trying to figure out a good way to do our internal integration testing, and also just learn a bit more about the work, and figure out a nice way, if we can, to have a good framework or testing program, or just do a lot of integration testing internally. We're just here to learn more, figure out how we can collaborate better, and come up with a good plan, and I'll let Matt talk after this.
D
Oh, I'm Mimi, I'm with Anka and Matt, and I'm on the test automation side, working on tools [inaudible], and we're with Apple.
B
Yeah, we introduced ourselves a couple of weeks ago. We're with Apple, and very interested in being able to test our internal builds of Kubernetes in integration tests, with the stack of other components. Got you.
A
...names. All right, cool, thanks everyone. So I'm gonna hand it off to Grant to talk about building metric dashboards for the community's CI. Great, everyone, thank you. I made you co-host, you want to share your screen? Yeah.
E
Okay, go for it. Cool, thank you! I hope everyone had a good new year. I'll make this pretty quick. I linked, in our meeting notes, a CL that I had up, along with the issue that I had created a while ago.
E
...how many runs we're doing in CI, and then things that maybe the community cares a little bit more about in CI signals, like these repeated failures where we have jobs that have been failing consistently for over a thousand days in a row. Just highlighting some cleanup that we might be able to do, and highlighting some of the major flakes we have. For now, my idea was to... I'm not sure if anybody's gone around the metrics part of our sig-testing repo, but we host just JSON that represents all the metrics, and I was planning to just put this as a flat file in our GCS bucket, next to those metrics, and hopefully the whole community can view these.
E
The only thing I'd like from the community is really some input on what metrics we'd like to see. I'm not sure if anybody's looked at what metrics we have, but we have information such as flakes, commit consistency, build consistency.
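For reference, those metrics are published as flat JSON files in a public GCS bucket, so reading one needs nothing but HTTPS. A minimal sketch, assuming the bucket is k8s-metrics and that a flakes-latest.json file exists there; both names are assumptions about the current setup:

```python
# Minimal sketch: fetch one of the published SIG Testing metrics over HTTPS.
# The bucket name (k8s-metrics) and file name (flakes-latest.json) are
# assumptions about where the JSON flat files land; adjust as needed.
import json
import urllib.request

URL = "https://storage.googleapis.com/k8s-metrics/flakes-latest.json"

with urllib.request.urlopen(URL) as resp:
    metric = json.load(resp)

# Print the top-level shape so you can explore from there.
print(type(metric).__name__, len(metric))
```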
E
I know Rob's not in this meeting today, it looks like; he might have some of the best input. And the other thing is, I guess I might be able to take this offline, but I'm not able to make this public to everyone from my personal account, so I might need to use one of our CNCF accounts.
E
The data is public, but for some reason internal doesn't allow me to share this from my Google account. But yeah, just kind of a PSA: if anybody has any input on what they'd like to see or what might be helpful, I'd love to know, and start collecting some ideas for what the dashboard might look like and what might be most helpful for the community.
A
So where does the data for all that come from?
E
All this data is from data sets in our BigQuery table. So for those of you who are not familiar with BigQuery, we have a service called Kettle that kind of scrapes all of our prow job results and uploads build data to our BigQuery database, and we have tables for all the builds, weekly builds, and daily builds. Data Studio has access to all of that data, and can kind of cross-link it to other data and filter based on other data, like date spans. And we could say, you know, for a specific build, I want to see certain job stats for the history of this specific job. So there's quite a lot we can do, and I think I should be able to make it available for anyone to edit. I'm not sure how to do it within the org, so we don't have just everyone on the internet able to change our dashboards, but maybe we can create an editors list or a file or something, and I can keep it up to date, or we can maybe automate it through a Data Studio API.
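As a rough illustration of the kind of query Data Studio runs under the hood, here is a minimal sketch against the Kettle-populated dataset. The table and column names (k8s-gubernator.build.all, job, passed, started) are assumptions about the schema; check the dataset before relying on them.

```python
# Sketch: per-job run counts and pass rates over the last week, computed
# from the Kettle-scraped build data in BigQuery. Assumes the table
# k8s-gubernator.build.all with `job` (STRING), `passed` (BOOL), and
# `started` (TIMESTAMP; if it is epoch seconds, compare with UNIX_SECONDS).
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # bills query compute to your own GCP project

QUERY = """
SELECT
  job,
  COUNT(*) AS runs,
  COUNTIF(passed) / COUNT(*) AS pass_rate
FROM `k8s-gubernator.build.all`
WHERE started > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY job
ORDER BY runs DESC
LIMIT 20
"""

for row in client.query(QUERY).result():
    print(f"{row.job}: {row.runs} runs, {row.pass_rate:.1%} pass rate")
```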
A
D
I'm actually very interested in this project, because we're thinking about incorporating some kind of monitoring as a test in our final integration tests. And I will follow the link. Is the link that you sent the one that I can follow to get more details on the implementation, how we can incorporate it into our test environment?
E
Sure, I put the link to just a template CL, but I'll also put the link to the issue I created and add some more details on how I'm doing this.
D
A
F
A
Yeah, we do need to do that at some point, but just so it's clear, migrating where the data lives doesn't particularly change things. What I'm trying to say is, the data today is publicly accessible. Like, literally anybody can query that data, the same way literally anybody could go query...
A
...the GitHub Archive BigQuery data set; it is publicly readable. However, the way BigQuery works is that you need someplace to charge the compute time that you use to run the query.
A
You only have to do that after you have read over, I forget if it's a terabyte or a petabyte, worth of data. It is very possible for you to use a free tier Google Cloud account to query that data set without being charged any money or anything. But I do agree that we will eventually want to find a way to create either a shared service account or a group or something, so that we could run queries that are larger than the free tier limit, and have trusted members of the community not worry about being charged on their personal Google Cloud accounts, or what have you.
A
I guess I'm trying to say, like, if people want to experiment and run queries against this data set today, they can, at no cost up to a point. But I want to make people feel more comfortable and more empowered to do so, and the issue you linked is kind of like one step in this process, but totally something we need to do.
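One practical way to stay inside that free tier is a dry run, which reports how many bytes a query would scan without executing it or billing anything. A sketch, reusing the assumed table name from the earlier example:

```python
# Sketch: estimate a query's scan size before running it, since BigQuery's
# free tier is metered on bytes processed. Reuses the assumed
# k8s-gubernator.build.all table from the earlier example.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
job = client.query(
    "SELECT job, COUNT(*) AS runs FROM `k8s-gubernator.build.all` GROUP BY job",
    job_config=job_config,
)

# Dry runs return immediately; nothing is executed or charged.
print(f"would scan ~{job.total_bytes_processed / 1e9:.2f} GB")
```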
A
I guess my only comment for Grant would be that I've been around the project long enough to have seen, like, we used to have this thing called Velodrome, which exposed many of those same graphs in the same way, and I still feel like we didn't have people actually act based on that data. Like, a thousand days is a really long time for a job to be continuously failing, and it seems like it's right up in front of your face and it's very red, but people seem to ignore it. And I think that's one of those things that falls in the bucket of, well, it's the community's responsibility as a whole. So if it's everybody's responsibility, it's nobody's responsibility.
A
I think one way that we've gotten better about driving action on failing jobs or failing tests is when we start looking at them at the TestGrid dashboard level. So if you are a sig, like sig-network, sig-network's gonna be responsible for all the dashboards that are within their TestGrid grouping. And then, similarly, we have a whole team of people called CI Signal who really care about all the jobs that are on the release-blocking dashboard. And so I feel like something that's lacking at the moment is that kind of grouping information that would allow us to take the really awesome metrics that we can get out of BigQuery that we can't get out of TestGrid, but group them in the same way that we do in TestGrid, so that maybe a more focused group of people can work on more actionable stuff, I guess.
E
That's a good idea. I mean, there's nothing that should preclude us from adding owner data to BigQuery and having per-group Data Studio dashboards, so that might be pretty nice, to have teams be able to edit and build their own dashboards.
A
Yeah, I don't know, I guess that's my only comment for now. It looks really great, like, I'm really happy to see that come back.
A
The way that I was using metrics data back in the day was to compute all of the metrics that we use to define whether or not a job should be release-blocking. So like, what's its failure rate, what's its flake rate, what's its 99th percentile duration. And then we were getting really close to being able to automatically say whether or not a job qualifies to be release-blocking, and then you could use something similar to say whether or not a job qualifies to be merge-blocking, based on its health.
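A small sketch of those three health signals, computed from a job's run history. The input shape here is hypothetical (one record per run, oldest to newest, with a passed flag and elapsed seconds), and counting a pass-fail-pass sandwich as a flake is just one rough proxy, not the project's official definition:

```python
# Sketch: failure rate, flake rate, and p99 duration for one job, from a
# hypothetical list of run records ordered oldest to newest.
import math

def job_health(runs: list) -> dict:
    n = len(runs)
    failures = sum(1 for r in runs if not r["passed"])
    # Rough flake proxy: a failure sandwiched between two passes.
    flakes = sum(
        1
        for i in range(1, n - 1)
        if not runs[i]["passed"] and runs[i - 1]["passed"] and runs[i + 1]["passed"]
    )
    durations = sorted(r["elapsed"] for r in runs)
    p99 = durations[min(n - 1, math.ceil(0.99 * n) - 1)]
    return {
        "failure_rate": failures / n,
        "flake_rate": flakes / n,
        "p99_duration_s": p99,
    }

runs = [{"passed": True, "elapsed": 3400}, {"passed": False, "elapsed": 5200},
        {"passed": True, "elapsed": 3500}]
print(job_health(runs))
```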
A
So I would love to see us get back to that place. Okay, any other questions, comments, concerns for Grant?
A
Okay, moving on. Antonio, I guess, since you sort of grouped all the things together, I'm gonna hand over to you, but you're welcome to hand off to the individual people about the individual image promotion issues.
G
So the thing is, I've tried to promote in the k8s.io repo, but I cannot get the hash, because the hash is not being published, because the job is failing. And this is what Claudiu was explaining: there is some failure with the s390x architecture.
H
Yeah, there's some flakiness. Most of it, from what I saw, is on that architecture. I'm still looking at that. It's kind of hard to find the exact issue and try to fix it, because whenever I try to pinpoint the issue itself, it always starts working. At the moment, I think a couple of retries will just work eventually. Do we need the image right now? I think we can just do that for now.
H
I think we can ping the Kubernetes on-call infra people to rerun the job for us, but eventually we'll have to fix that flakiness at some point. I'm currently looking at it, but for now we can retry the job.
H
I think we've previously done this for the Windows Server Core cache image, which I was the owner of, and I was added there and I could easily just retry the job.
A
Yeah, we could. I feel like I would like to get release engineering as a team capable of retrying all those jobs; getting some of those folks in sounds useful. And then, Claudiu, I feel like you and Antonio and a couple other folks, who've been sort of pushing the most on making this transition to using images that are pushed to staging and community-owned in e2e tests, should also have approval, and there's a way to specify all of this.
A
If it's just flakiness, yeah, I guess retries will help. I don't have enough context to understand what specifically is flaking here, but we can work on that.
H
Currently looking at that, specifically trying to see if that's the source of our flakiness, and if it is, I'm gonna send a pull request to the test-infra images/gcb-docker-gcloud image, or however it's named. It should be just a one-line-change pull request for that.
A
And just to help me understand, when you're talking about flakiness, is this for a specific image, or is this for all of the image building jobs?
H
And another thing that I've noticed is that most flakes occur whenever it's the apk command being run.
H
And another curious thing that kind of makes it hard to notice is the fact that buildx kind of deletes its own logs in the console, at the very least, which is not very helpful.
H
So basically, it's gonna run the commands. For example, you're gonna see that it's pulling the apk registries, it's updating them, and so on and so forth. Then it errors out and just deletes all the log lines, and you see nothing, which is really unfortunate.
A
H
...known fix, but yeah, I'm actively looking at it.
H
A
H
And if there's anything else, I'm going to ping the SIG Testing people.
H
Other than that, I'm currently testing out a fix for the echo server image; I'm gonna send a pull request for that soon.
A
All right, that sounds good. Oh, go ahead, Antonio.
G
Yeah, related to the image promotion: besides the issues building, the other problem that we have, and I don't know if this is SIG Testing or the working group for infra, is that the job that promotes the images is failing, because it complains about one SHA that has moved, or something like that. So you can see in... no, sorry, that's not the pull request. The pull request is this one I described.
G
A
So the container image promoter is not something I am intimately familiar with. SIG Release has been working to take ownership of that, specifically the release engineering subproject, or the release engineering team.
A
So I would check in with Stephen Augustus, and he can probably redirect you to the most knowledgeable person to troubleshoot it. Okay, it could be like some k8s-infra credential or permission related stuff, but it's my intent that the release engineering folks kind of drive that troubleshooting process.
G
H
A
Okay, and then I noticed, I'm guessing it's Stephen Heywood, had an issue about trying to bump agnhost to the most recent version, and Claudiu, you commented on his PR. Is this something you think you can help him troubleshoot?
H
We have no idea where he found that image, because the image builder job failed, so there shouldn't be any 2.36 image. I tried to pull it; there's no image like that.
J
Yeah, it was a bit of a new year blooper. I found some of the stuff from the GCP list of all the various test images. I was in the process of looking at another particular conformance job that we're working on at the moment, so I was misinterpreting some stuff from the Slack messages, over the Christmas break and new year period, about what I thought was actually available.
J
So I didn't notice the fact that the Windows images weren't on the list, so yeah, it was never going to work, my bad. But a manual push of the image would be extremely helpful.
H
J
I'll see if I can just track down the URL that showed the list of the various images that were already built. It didn't show any of the Windows ones built, but it was still showing the s390x ones, and I realized that the process is just not working at the moment. So yeah, it's not really a problem at the moment, other than, hopefully, just getting a manually pushed image would be very helpful.
A
So I'd rather have CI build and push images going forward, instead of a human manually pushing it. So if this is a matter of needing to retry a job, or trigger a new instance of a job, we can go that route. But I guess I want to take kind of a step back for a second, because I feel like we've been talking about a bunch of specific instances of image building and image promotion, and I'm feeling a lot of uncertainty, or lack of clarity, in where we are with regard to building and promotion of images that are used in Kubernetes end-to-end tests. So, about a year ago...
A
...do it by running commands from their laptops or their developer machines, and so people had to manually bug Googlers. And then I feel like where we pushed, over the course of the last year, was so that anytime anybody made a change to an image, it would get automatically built by a prow job.
A
There was a prow job for every single image, and that prow job would be responsible for building the image on Google Cloud Build and then pushing that image to a staging repository that was not google.com-owned. And then the work we had not completed last year was actually ensuring that we've pushed every single test image over to the CNCF-owned repositories, and that we're actually using all the CNCF-owned images instead of the google.com-owned images.
A
I may have totally mischaracterized things, and I also feel like I'm talking a lot. Did any of that make sense?
A
Okay, cool, cool. So here's what I'm suggesting. I feel like Stephen, Antonio, Claudiu, the three of you, like, you're doing awesome work. You are pushing us forward in making sure that e2e test images are actually being built and pushed and promoted the correct way. I think we're still working on that workflow; keep doing what you're doing. But I feel like we opened an issue last year that was about, like, making sure this is done for all the e2e test images.
A
So I will go find that issue and make sure that that's something we discuss at the next SIG Testing meeting. Like, as SIG Testing, I want to make sure we commit to: all of the images that are used in e2e testing should live in community-owned infrastructure, because I don't think they do right now. I think y'all are individually working on that for specific images, but not all the images, and I would love to get the community's help in doing that.
A
For all the images, I think it's going to be a matter of just, like, finding the checklist and going through image by image and making sure they all work, yeah.
A
So I will make sure that at my next SIG Testing meeting that issue is teed up as a help-wanted issue, and anybody who wants to help with that work can help out. But at the moment, I feel like Antonio, Stephen, and Claudiu are the most knowledgeable individuals, if people want to contact them to figure out how they can help with all this.
H
Yeah, I have a request just for the Docker Hub images. I've also written it as the last point in the Google doc. Yes.
G
J
G
A
Right, I agree. I feel like we can get this done via an umbrella issue. If I have to go write a KEP I will, but I'd really rather just make sure we've got the plan somewhere and then we all run through it. So I will work on that. Claudiu, yes, you did have a PR; I thought that my co-chair had some problems with it.
A
It seems like maybe not. I'm gonna check in with them, and I will get back to you if there are still any concerns, because it seems like, yeah, if we could just push that forward, that would be great.
A
Images: okay, thank you all for going through all that in such great detail. We don't have anything else on our agenda, so I'm happy to release you all and give you 20 minutes of your lives back. I did want to say one thing real quick, which is: I mentioned the KEP word. There are some KEPs that SIG Testing needs to author and will be pushing for this release.
A
I know that at least one of them is related to forward progress on some of the CI policy issues that we worked on last year.
A
So we had stated that all of the release-blocking and all of the release-informing jobs need to run on community-owned infrastructure; they can't run on google.com infrastructure. And we have basically done all of that. Definitely a shout out to Arno, who's here, for helping with a lot of the Bazel build and Bazel test related jobs.
A
The really tricky, finicky thing now is that there's a lot of code in the wild that refers to CI artifacts that live in a bucket called kubernetes-release-dev, and that's a google.com-owned bucket. We need to get the projects and the community and the ecosystem as a whole to pull CI artifacts from a community-owned bucket called k8s-release-dev, and this is a preview of something we're going to have to go through for release artifacts as well.
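From the consumer side, the switch is mostly a bucket-name change. A sketch of resolving the newest CI build from the community-owned bucket, assuming it keeps the same ci/latest.txt version-marker layout as the old google.com-owned one:

```python
# Sketch: resolve the latest CI version marker from the community-owned
# bucket over plain HTTPS. The ci/latest.txt path mirrors the old
# kubernetes-release-dev layout; treat it as an assumption and verify.
import urllib.request

BUCKET = "k8s-release-dev"  # community-owned, replaces kubernetes-release-dev
MARKER = f"https://storage.googleapis.com/{BUCKET}/ci/latest.txt"

with urllib.request.urlopen(MARKER) as resp:
    version = resp.read().decode().strip()

# Artifacts for that build live under ci/<version>/ in the same bucket.
print(f"latest CI build: {version}")
print(f"e.g. https://storage.googleapis.com/{BUCKET}/ci/{version}/bin/linux/amd64/kubelet")
```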
A
So it's involved enough. I started using the code search at cs.k8s.io to look at all of the places this stuff was referenced just within the project itself, and there's a lot of little stuff that's going to need to be taken care of. So I plan on putting together a KEP for that, and I would really appreciate folks' review on all this.
A
They are not here at the moment, but one of the things this affects is projects that use kubeadm, which most of the Cluster API projects use. Those end up using images that are built by CI, which also live in a google-owned bucket, and we need to transition kubeadm as a whole to download from another bucket. Anyway, it's far-enough-reaching that I'm gonna make sure we get out a KEP and we communicate about this deprecation with the community, so that everybody is informed.
A
K
Is it, I'm not sure, is it the bucket or the domain that's being deprecated?
A
In this case it is the bucket. Okay.
A
To provide some further context: in the most ideal world possible, I would just transfer ownership of this bucket from Google to the CNCF. For technical reasons, you cannot transfer buckets between projects like that, let alone between projects that are in different Google Cloud organizations.
A
There are URLs that have the bucket name baked in to them. There are not many paths by which people use some, like, redirector URL to get CI artifacts or CI images; most people reference the bucket name directly.
A
Okay, I really appreciate everybody's time. I look forward to seeing you all again in two weeks, and happy Tuesday. Take care, all right. Thank you.