Hi everyone, I'm Hayley, and we're going to go through the request flow for blob pushes to the container registry today. We're probably not going to get into cross-repository blob mounting unless we have time at the end, because it's not the usual case.
So if you saw the first part of the video, you'll have more context. I don't intend to re-explain the structures and objects within the registry at the same level of detail, just to move this along, because it is a bit more complicated.
So this is the push request flow. In general, we're going to check, same as with every request: do we support the v2 API? We do. Then the client is going to HEAD each of the blobs first, to check whether the blob is already there, and this is a repository-scoped request.
So if the blob is already there, the client knows it doesn't have to push it, and a push is potentially expensive, depending on how big the image is. The other thing that can happen is that the blob is not there: either it's not known to the repository, or it's not on the backend, or some combination of the two, and the registry will send a 404: we didn't find it.
So this is where cross-repository blob mounting happens. We'll come back to it if we have time, but let's pretend it doesn't exist. What we'll typically do is start an upload: we POST to the endpoint /v2/<name>/blobs/uploads/, and you're going to get back an ID from this request. That's the ID of the upload.
You can delete that upload, you can say "I'm done with the upload, it's not going to go through for whatever reason," and you can also upload parts.
To finish an upload you use the UUID and the digest. That can either be the end of a multipart upload, or it can be the entire upload if the blob is relatively small; that happens at the client's discretion.
It isn't something we can communicate back, like "if your blob is that big, don't even try it, do a multipart upload." The clients decide whether to try a multipart upload or not. Then at the end the client will again do a HEAD request for the blob, and it should very likely be there if everything went correctly, especially if there were no errors.
Beyond that point, after all that's done, we upload the manifest: just a PUT to /v2/<name>/manifests/<reference>, where the reference is either a digest or a tag. There's probably a whole hour of video within that last step, believe it or not.
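The whole flow above can be sketched as an ordered list of requests. This is a sketch of the Registry HTTP API V2 endpoints just described; the repository name, upload ID, digest, and tag are placeholder values.

```go
package main

import "fmt"

// pushSteps lists the request sequence described above for pushing an image:
// version check, HEAD the blob, start an upload, append data, complete with
// a digest, confirm, and finally PUT the manifest.
func pushSteps(repo, digest, uploadID, reference string) []string {
	return []string{
		// 1. Does the registry support the v2 API?
		"GET /v2/",
		// 2. Is the blob already there? (repository-scoped)
		fmt.Sprintf("HEAD /v2/%s/blobs/%s", repo, digest),
		// 3. Not found (404)? Start an upload; the response carries the upload ID.
		fmt.Sprintf("POST /v2/%s/blobs/uploads/", repo),
		// 4. Append a chunk (multipart); small blobs may skip straight to PUT.
		fmt.Sprintf("PATCH /v2/%s/blobs/uploads/%s", repo, uploadID),
		// 5. Complete the upload, supplying the digest for verification.
		fmt.Sprintf("PUT /v2/%s/blobs/uploads/%s?digest=%s", repo, uploadID, digest),
		// 6. Confirm the blob now exists.
		fmt.Sprintf("HEAD /v2/%s/blobs/%s", repo, digest),
		// 7. Finally, upload the manifest by tag or digest.
		fmt.Sprintf("PUT /v2/%s/manifests/%s", repo, reference),
	}
}

func main() {
	for _, s := range pushSteps("myproject/app", "sha256:abc123", "uuid-1", "latest") {
		fmt.Println(s)
	}
}
```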
So let's go and look at it. We went over HEAD already in the previous video; it's just the same, there's nothing special. So let's look at POST.
Right: each blob, everything that's stored in the registry that's actually data (so not a tag, for example; everything but a tag) has a SHA hash.
Yes. You can have a tag that references a manifest, which is a blob, or you can even have a tag that references a manifest that references a manifest that references the blob, so there's an indirect relationship between tags and blobs. But typically, if you're getting a blob, you're getting it by its digest.
All right, so this is blob mount options, which is just for cross-repository blob mounting, the thing we're going to pretend doesn't exist for right now.
So it's the linked blob store from before, and we called this blobs Create function here. Let's see what that does: we'll go to the linked blob store and look at Create. It returns a blob writer, which is something we have not talked about yet, and it's very exciting, that blob writer.
This is the cross-repository blob mounting thing again, so we'll just pretend we're not doing that. We generate a UUID, which is the ID for the upload, and we generate a startedat timestamp. That's used to clean up stale upload data, so there's a function for that.
Sometimes clients just don't delete their blob uploads; if something goes wrong, it can leave stale data behind. So this startedat gives the cleanup process something to work against, so that it's not deleting in-progress uploads.
That path is repositories/<name>/_uploads/<id>: under it you have data, which is the blob, you have the startedat, and you have hashstates. The data and the startedat are the important things, more or less.
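The upload-directory layout just described can be written out as paths. This is a small sketch of that layout; the names come straight from the walkthrough (data, startedat, hashstates under _uploads).

```go
package main

import (
	"fmt"
	"path"
)

// uploadPaths builds the upload-directory layout described above: everything
// for an in-flight upload lives under repositories/<name>/_uploads/<uuid>/.
func uploadPaths(repo, uploadID string) map[string]string {
	base := path.Join("repositories", repo, "_uploads", uploadID)
	return map[string]string{
		"data":       path.Join(base, "data"),       // the blob bytes written so far
		"startedat":  path.Join(base, "startedat"),  // timestamp used for stale-upload cleanup
		"hashstates": path.Join(base, "hashstates"), // serialized hash state for resumable digests
	}
}

func main() {
	for name, p := range uploadPaths("myproject/app", "uuid-1") {
		fmt.Println(name, "=>", p)
	}
}
```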
So here we write the startedat file, and then we return this lbs newBlobUpload. What is that? It's just a constructor for the distribution blob writer. What we do is get a file writer from the storage driver, and that's just a writer.
What is that file writer? This is the driver's file writer, and it handles things like multipart uploads; it handles chunked reads and writes, and we have all the configuration passed through here.
So we have this blob upload handler; the upload is created, great. We write the response, we set headers, we return the status, and that just sets things up for the further requests.
A
B
A
That
we
do
build
it
is,
you
know,
let's
look
at
it
for
upload
response,
so
not
too
exciting,
not
too
excited
right.
So
we
built
it
here.
So
we
can
get
the
upload.
You
know
upload
size.
But we do need it, just to write these upload states for now.
Yeah, so one of the tricky things, and I'm going to go over this now because it will get you, is that this looks simpler than it is. You saw this before, right?
There we go. This blob upload Create writes data, it does stuff; it will do a lot more than you think it's doing. It's not just a constructor, and I think it's kind of hard to get into what it does without going into cross-repository blob mounting. But this is not just a constructor function, so be aware. So, if you don't have any further questions, we'll go back to the diagram.
The diagram, yeah. Thank you for that; it works on my machine. Okay, I can see it.
So we get this, like "if the upload is nil," but we haven't done anything yet. Every request is different and special, so we need to look at how we do that.
So, oh, that bugs me so much. Basically we just find it, the start-upload response, and this is the same thing we saw previously: repository name, upload ID, what's the offset, when did you start the upload, the upload URL, all these things.
Yeah, all these things you'll just get, same as at the beginning, same as at the end of initiating the upload. So let's look at delete.
Oh, we're going to find that out, and it bugs me, but yeah. So if we do find an upload...
Yeah, the registry is a big project; sometimes these things happen, but this does in fact work. So we're going to look at patching the blob data. As you can see, if the upload on the blob upload handler is nil, it will just return this error, blob upload unknown: we couldn't find anything by that UUID. Then content type, some whitespace that doesn't need to be here, and we just copy the full payload. So this is just an append operation.
We need one, because we're going to check that digest later, to make sure all the bytes came through okay and that you, you being the client, actually sent what you expected to send. So it requires the digest.
A
We
copy
full
path
payload
again-
and
this
is
because
this
this
is
how
you
would
do
a
single,
a
single
blob
upload.
So
if
you
have
a
very
small
blob,
you
will
just
do
this.
It
will
just
send
everything
all
over
once,
but
they
have
to
be.
I
don't
know
the
exact
cutoff
for
docker
client,
but
they
have
to
be
pretty
small
before
they'll
actually
get
single
single
operation
uploads
and
then
we
commit.
A lot of work is done by this blob writer, so we really need to look at that, because all of these handlers are just coordinating the work of the blob writer. So we're going to look at that, and it's a bit harder to...
...than other things, in a way; it's kind of hard to go through particular methods one by one like we've done for the linked blob store. So, yeah, it has read and write methods, what you'd expect; these are not that exciting: Read, Write, ReadFrom, Close. They're just implementing the interface.
This is just very basic, low-level file stuff: reads, offsets, byte arrays, you name it, that kind of thing. And you can see that it holds state: the startedat, the file writer, the driver...
...all these kinds of configuration. Commit is the last step: you've done all the writing, either through your patches or just one single upload, and commit is where a lot of the work gets done.
So we're closing the writer, we're getting the final blob size, and we'll validate the blob; we'll spend a good portion of time going over that in a second. Then we do this moveBlob, and that's a very simple move from the upload directory to the canonical path. So let's look at that. We get the blob data path passed back, and that's going to be...
This is the blob data path spec on the blob side of the registry: it's v2/blobs/sha256, then the first two hex characters of the digest, then the full hex digest, then the data. So under v2/blobs/sha256 you're going to see aa, ab, and so on until you get to ff. That's there for the object storage drivers; they tend to like more prefixes, and this gives you a fair number of prefixes.
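The path layout just described can be expressed as a small function. This is a sketch of that layout as stated above (algorithm, two-character prefix, full hex digest, then data), not the registry's actual path-builder code.

```go
package main

import (
	"fmt"
	"strings"
)

// blobDataPath builds the canonical blob location described above:
// v2/blobs/<algorithm>/<first two hex chars>/<full hex digest>/data.
// The two-character prefix fans blobs out across 256 prefixes (00..ff),
// which object storage drivers like.
func blobDataPath(digest string) (string, error) {
	parts := strings.SplitN(digest, ":", 2) // e.g. "sha256:e3b0..."
	if len(parts) != 2 || len(parts[1]) < 2 {
		return "", fmt.Errorf("invalid digest %q", digest)
	}
	alg, hex := parts[0], parts[1]
	return fmt.Sprintf("v2/blobs/%s/%s/%s/data", alg, hex[:2], hex), nil
}

func main() {
	p, _ := blobDataPath("sha256:abcd1234")
	fmt.Println(p)
}
```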
I think with super large deployments we could really use two more hex digits to play with, but we don't have that. Then comes the actual hex digest, and then the data, which is normally a tar file; but you will also see plain JSON here, from the manifests and from the configuration blobs.
That's so much nicer. So we stat the blob path, pretty standard, and this is done with the storage driver. This is not the linked blob store; this is the storage driver directly, given the path, and we want to see that it doesn't exist.
A
If
the
path
already
exists,
we
don't
we
exit
now,
and
so
we
don't.
We
don't
overwrite
things
because
writing
is
you
know.
Writing
is
more
expensive
than
not
writing,
but
the
the
the
the
tricky
part
with
this
is,
if
that,
if
that
blob
data
goes
wonky,
this
process
won't
refresh
it.
The registry would have to delete it, because clients can't delete blobs in common storage. The only things the client can do are untag images, delete tags, delete manifests, and delete particular layers from the repository; the registry may never remove the blobs if it doesn't do any lifecycle management.
So what would probably happen here is that it'd just be a ticket, and there'd be some data repair that happens manually, because even if we do clean up images, there's a delay.
If it's offline garbage collection, is that once a week? Once a month? It's not quick enough to just let it resolve on its own if there's really a problem here. But we haven't encountered that since I've been here, so apparently it's pretty rare, which is fantastic.
Yeah, so now we're going to stat a different path, the blob writer path, and this is the upload directory.
A
So
if
we
don't
find
this,
we
need
to
check
and
see
if
it
is
a
zero
length
block,
that's
possible,
there's
a
canonical
digest
for
that,
which
is
this
that's
what
an
empty
blob
digest
looks
like.
A
I
don't
expect
you
to
memorize
that,
but
you
know
that's
like
that's
the
joy
of
content,
addressing
storage.
So
if
there's
an
empty
blob,
we'll
just
write
empty,
empty
space.
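That canonical empty-blob digest is simply the SHA-256 of zero bytes, which you can compute yourself:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// emptyDigest computes the canonical digest of the zero-length blob by
// hashing nothing at all. This is the well-known constant the handler
// compares against when the upload turns out to be empty.
func emptyDigest() string {
	sum := sha256.Sum256(nil)
	return fmt.Sprintf("sha256:%x", sum)
}

func main() {
	fmt.Println(emptyDigest())
	// sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
}
```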
Could you get a problem during the move? You could have a client disconnect, right, but it would need the storage driver to do the move, fail, fail silently, and just return nil.
A
So
it's
possible,
but
that's
like
a
much
lower
level
error,
yep,
yep
yeah.
So
what
we
did
there
was
we
moved
to
the
blob
into
stable
storage
right
into
the
well.
There
was
no
sample
storage
before,
but
we
moved
it
into
the
you
know
where
it's
supposed
to
live
long
term.
That's
this
content,
addressable
location!
Yeah, so we call removeResources again. If the upload is deliberately cancelled, or it's successful, it will try to clean up after itself, which is great. The upload purger... we'll have to do some work on that in the future; it has some quirks, shall we say. And then we get this blob access controller set descriptor call, which is for when we're using the blob descriptor cache.
So that's about it, but we haven't gone through the validate blob step, and that's an involved one. Are there any questions so far?
A
So
you
know,
like
we
said
earlier:
we
everything
that
makes
it
to
content.
Addressable
storage,
we
assume,
is
all
checked
out
everything's
good.
So
this
is
a
pretty
important
step,
all
right,
so
basic
stuff,
hey.
If
there's
no
digest
what
are
we
about?
You
know
what
are
we
validating
against?
So
all
this
is
basically
is
that
we
are
hashing
the
content
with
the
with
the
algorithm
and
seeing
if
it
matches
the
descriptor
that
the
client
passes
up
at
the
end.
Size: yeah, we're just getting the stat, which is just the file information. It's possible to get a zero-length manifest or a zero-length blob, so that's why that check is there; we're just not going to explode on that error. We set the size, and then, if it is bigger than zero...
So if the descriptor size is not the actual size: we stat that file and get the real size. Where does the size come from?
If the descriptor size is greater than zero, they gave us a size, and we'll say: hey, this should be the size that we found on disk. Or maybe it's zero or negative, in which case we don't assume the client cares about the size.
Yep, this is not the easiest part: resumable digests. If the algorithm, that is, the algorithm of the blob writer, matches...
...we'll set this verified flag to true. If it's a different algorithm, we're going to have to download and rehash the uploaded content.
Also, if we're not using resumable digests, we'll have to hash the full content. Or there's an error from... what else is there? Oh yeah.
Let's see: if the written size and the algorithm are the same and it's verified, okay. It's like: if we were around for the entire lifecycle of this upload, we can take the shortcut; but if we can't check based on file size, what we have to do is read the file completely from the backend.
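The resumable-digest shortcut works because hash state can be saved and restored between handlers, which is what the hashstates files hold. A sketch of the idea using Go's serializable SHA-256 state; this demonstrates the mechanism, not the registry's own code.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding"
	"fmt"
)

// resumedDigest shows resumable digests in miniature: serialize the hash
// state mid-upload, restore it in a "different handler", keep hashing, and
// confirm the result equals hashing the whole payload in one pass.
func resumedDigest() bool {
	h1 := sha256.New()
	h1.Write([]byte("first chunk "))

	// Persist the hash state mid-upload (the "hashstates" idea).
	state, err := h1.(encoding.BinaryMarshaler).MarshalBinary()
	if err != nil {
		return false
	}

	// A later handler restores the state and continues writing.
	h2 := sha256.New()
	if err := h2.(encoding.BinaryUnmarshaler).UnmarshalBinary(state); err != nil {
		return false
	}
	h2.Write([]byte("second chunk"))

	// Equivalent to rehashing the entire payload from the backend.
	full := sha256.Sum256([]byte("first chunk second chunk"))
	return bytes.Equal(h2.Sum(nil), full[:])
}

func main() { fmt.Println(resumedDigest()) }
```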
A
And
this
is
going
to
be
from
the
upload
path,
so
read
the
complete
file,
we
copy
it
to
the
verifier
and
see
if
it's
verified
and
that's
just
gonna
yep.
So
it's
a
lot
this.
This
is
like
this
is
really
hard
to
get
to
really
grasp
in,
like
this
quick
survey
like
this,
but
it's
possible
in
a
blob
upload
that
we
have
to
download
all
the
data
from
the
blob
back
end
and
and
rehash
it.
So
that's
a
possibility.
A
Well
that's
how
that
works,
and
then
we
check
and
see
if
we
set
this
verified
flag
at
any
point.
If
we
didn't,
we
just
say:
hey
it
didn't
work
out.
Canonical
diagnosis
does
not
match
provided
digest
and
we
turn
an
error.
Just
like
hey,
it's
not
a
good,
not
a
good
digest
doesn't
work
for
us
and
then
we
just
you
know,
set
some
set
some
metadata
and
go
through
so
yeah.
That's,
unfortunately,
that's
just
really
not
the
easiest
to
understand,
and
I
don't
want
to
like.
A
So
you
can
feel
free
to
dig
in
that,
but
there's
a
you
know:
there's
a
library
that
verifies
it,
so
we
do
have
a
bit
of
an
extra
time.
So
let's
try
to
go
over
across
repository
blob,
mounting.
So keep in mind: it's about client knowledge, that's the important thing, and it's decided with local data on the client. People ask about that, and it's very confusing, and it's very scary if you see "mounted from super-secret repository" into a customer repository, but it's fine.
A
So
here
we
have
like
the
from
repo
which
and
the
mount
digest
so
hey.
If
these
are
both
set,
we
should
create
a
bloodbound
option,
which
is
we
parse
the
digest.
We
make
sure
the
repository
is
a
good
good
repository
that
is
properly
formatted.
They
didn't
just
put
like
some
random
characters
that
we
don't
it's
not
good.
Never
could
never
be
a
repository.
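The mount request itself is just the upload POST with two extra query parameters, mount and from, per the Registry HTTP API V2. A sketch of building that URL; the repository names and digest are placeholders.

```go
package main

import (
	"fmt"
	"net/url"
)

// mountURL builds the cross-repository blob mount request: a POST to the
// target repository's upload endpoint with mount and from query parameters.
// If the registry can link the blob, it answers 201 Created and no bytes
// move; otherwise it falls back to a normal upload session.
func mountURL(targetRepo, digest, fromRepo string) string {
	q := url.Values{}
	q.Set("mount", digest)
	q.Set("from", fromRepo)
	return fmt.Sprintf("/v2/%s/blobs/uploads/?%s", targetRepo, q.Encode())
}

func main() {
	fmt.Println("POST", mountURL("team/app", "sha256:abc", "base/images"))
}
```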
A
And
we
get
the
we
figured
make
sure
the
digest
is
the
the
right
digest
for
that
repository
so
and
then
we
have
with
mount
from
canonical.
So
that
is
just
sets
up
this
little.
Let's see. All right: we pass that option to Create, and then, if the error is errBlobMounted, which is essentially a sentinel error, not really a real error (ignore the database stuff), then we go write the blob headers, just like we do everywhere else. So that's the end of it.
It was determined that we should mount, so we'll do lbs mount. You can provide it with a stat already, and we'll do this for the database, and that just skips this step. But if we don't, we get the repo from the source repository, we get its linked blob store, and we stat it to make sure it's there, that we have access, that everything's good.
So we give the stat size, we give the media type, which for blobs like this is always going to be application/octet-stream, and we get the digest. This is a little gross, but we return the digest, and we also return the result of linking the blob, and that kind of does all the work: it writes that layer link file that we saw in the past. Just to illustrate that again: that's all it does.
So this will actually return an error; we're returning errBlobMounted, and that's the sentinel error. We write the headers, we go about our business, and that's about the size of it. Do you have any questions, anything I should go back over and treat more carefully?
Yeah, and I'll upload this probably tomorrow, so you can always rewatch it, probably at double speed.
And you can always review it. I think the intricacies of this took me a long time to grasp, just because a lot happens here. There's a lot of...
Most of the time it is, I believe. I think one of the ways it skips validation is like, "hey, we were around for the entire upload of this blob," and it's like: probably not, right? It probably wasn't the same handler around for the entire 20-gigabyte blob somebody decided to upload. And if you look at our monitoring, at the memory usage, you'll see it spike up and down, and that's what that is.
Yeah, yes, you could. So there are...
We use... let me share my screen again.
Okay, so this is going to be handlers, the API integration test.
These
are
the
most
comprehensive
tests
we
have.
It
is
almost
7000
lines
of
tests
and
helpers
which
we're
moving
out
to
a
special
helpers
file,
but.
What do we do? We run without the database, with the database plus filesystem mirroring, and with something we can't do right now for reasons. If we do run with the database, we'll do it with migration enabled and filesystem mirroring disabled, and we'll do migration enabled and...
...without. Where is it? Right here, oh yeah: we do it with mirror FS enabled and disabled, and there are other things we can do here; it's complicated.
We run them in random order, but these are the tests that say: no matter how we've configured this registry... because, like I said in the previous talk, we basically have two registries overlaid on top of each other; really, we have three registries overlaid on top of each other, because we have just the filesystem metadata...
...we have just the database metadata, and we have a migration mode where we're slowly moving data over. You'll be around for all of that, so you'll get really familiar with it. These tests sort of say: no matter what, this has to happen; these have to be identical.
And there are a lot of these. I hesitate to get too deep into them because they're pretty high-level, in a way: you'll see things like "seed a manifest and push it by tag," and that's all done with a helper we have for it that just does all the stuff, because we have to do it so much in writing these tests.
We'll just make the HTTP request against the environment that's running in the test, and we'll check the response: we want to see these kinds of error codes, all that kind of stuff. But they're super high-level.
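The test style just described, make a request against a running environment and assert on status plus error code, can be sketched in miniature. The server here is a stub standing in for the registry, and the endpoint and error code are illustrative; the body shape follows the V2 API errors format.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// fetchUnknownManifest stands up a stub HTTP environment, requests a
// manifest that "isn't there," and returns the status code plus the error
// code from the body, which is what the integration tests assert on.
func fetchUnknownManifest() (int, string) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNotFound)
		json.NewEncoder(w).Encode(map[string]any{
			"errors": []map[string]string{{"code": "MANIFEST_UNKNOWN"}},
		})
	}))
	defer srv.Close()

	resp, err := http.Get(srv.URL + "/v2/unknown/repo/manifests/sha256:deadbeef")
	if err != nil {
		return 0, err.Error()
	}
	defer resp.Body.Close()

	var body struct {
		Errors []struct {
			Code string `json:"code"`
		} `json:"errors"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil || len(body.Errors) == 0 {
		return resp.StatusCode, ""
	}
	return resp.StatusCode, body.Errors[0].Code
}

func main() { fmt.Println(fetchUnknownManifest()) }
```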
They're very broad, but they don't break very often when they're not supposed to, so they're not very fragile, and they've caught numerous things over the years, so I'm very proud of these. But as far as using these tests to understand what the handler is doing...
...not so much. You can get your expectations from them: if you get a manifest by digest and it's not associated with the repository, you should get a not-found status with error code manifest unknown, and so you can kind of figure out how that would happen, I guess.
These are very important. This is extremely important: we really expanded a lot on these, because they're so broad and they're just "when I do this, I want this back," and we're rewriting the metadata subsystem completely for the database; it's completely different from the filesystem metadata.