From YouTube: Implementations Sync: 2021-03-04
Description
Meeting notes: https://bit.ly/38pal2Z
A: Nice, status updates. I will start, since I'm already talking. I'm working on getting the ggcr updates into imgutil and lifecycle. I've got an outstanding PR to bump imgutil now that ggcr has already been bumped, and once that gets merged I plan on putting up a PR to cherry-pick those commits into the release branch; that's already outstanding. So then we can hopefully cut a patch of lifecycle.
B: I put up a PR yesterday about the opt-in layer caching; everyone is more than welcome to review it. I still need to run some manual tests.
C: I was working on the imgutil change that's in service of the Windows cache image fix, mostly trying to add some acceptance tests, because there are a couple of edge cases I wasn't sure about and I realized I wanted tests for them. I'm trying not to enable all the tests, just some specific ones. And as an agenda item, I have a couple of questions for you all, to see how you feel.
C: I know Emily's not here, but I think you all can answer a bunch of them for me.
D: So the first thing I've started doing is trying to reproduce some of these things, just by doing a bunch of fake image fetches in parallel. But it's really hard to replicate these failures. They seem to happen once every so often, which I guess is pretty annoying in your CI pipelines, but it makes it pretty difficult to just spin something up that'll replicate this. So given that, and we can talk about this later in the meeting, I'm just wondering how reasonable it is to write some code that'll cause retries, even though we can't actually create failures very easily. All right, so.
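The parallel-fetch reproduction attempt D describes could be sketched like this; a minimal, hypothetical harness, not the project's actual setup, and the URL, count, and function name are placeholders:

```go
// A tiny harness for the kind of reproduction attempt described above:
// hammer an endpoint with parallel fetches and count failures, so an
// intermittent network flake shows up as a nonzero count.
package main

import (
	"fmt"
	"net/http"
	"sync"
)

// fetchAll issues n GET requests against url in parallel and returns the
// errors encountered.
func fetchAll(url string, n int) []error {
	var (
		mu   sync.Mutex
		wg   sync.WaitGroup
		errs []error
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := http.Get(url)
			if err != nil {
				mu.Lock()
				errs = append(errs, err)
				mu.Unlock()
				return
			}
			resp.Body.Close()
		}()
	}
	wg.Wait()
	return errs
}

func main() {
	// registry.example.com is a placeholder, not a real registry.
	errs := fetchAll("https://registry.example.com/v2/", 50)
	fmt.Printf("%d of 50 fetches failed\n", len(errs))
}
```

As the discussion notes, a harness like this mostly demonstrates the problem: failures that happen "once every so often" rarely surface on demand.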
A: All right, the next standing item is release planning. I don't think we have any updates beyond what I mentioned earlier: trying to get a patch out. I'm not in a huge rush for it, but I am trying to at least push that along, so I'll continue doing that.
A: All right, I already looked at the discussion and RFCs; I clicked into those a moment ago and they're both empty. So I think, Micah, you're up first with imgutil, with the default platform changes.
C: Awesome, thanks, and thank you all for the reviews of the imgutil changes from earlier in the week, or last week in some cases. I tried to make a bunch of changes based on your suggestions, which were all solid.
C: Some of the behavior I intend to keep as it currently is, and some of that is kind of counter to some of the suggestions Emily had. I can put the PR in here, although I don't think too much of it is all that controversial.
C: But I was wondering, from all of your perspectives: this was all really in service of fixing a lifecycle bug, where lifecycle was creating cache images in the wrong format, creating Linux images for a Windows app. Technically that's been fixed for a little bit, and the interface changes in imgutil just make the calls in lifecycle a little bit cleaner, so I'm kind of wondering what's, like...
C: What we'll start doing after the lifecycle changes are in is we'll set a different OS on all the image configs that we write, and we'll have that Windows shim layer in there. So most of the layer contents will actually be identical too, but the image SHAs will change.
C: So my thought was: if y'all had different opinions, that's totally fine. I might actually try to get the imgutil change, or at least the piece of that imgutil change, accepted, bump the imgutil version in lifecycle, and use that for the release just for the bug fix. Then I'll continue adding some acceptance tests for this; some of these are the quirkier acceptance tests around the analyzer.
A: I don't have a strong... I think as long as the acceptance tests pass for everything that we already have, I don't feel too worried about whatever change you're making here, especially because it's really just in service of cache images on Windows, which is not so heavily used right now. So I feel okay with whatever you want to do.
C: Okay, great. Yeah, I'll proceed with that and make sure I put it up for discussion in the PR and explain it there. As long as that feels like a reasonable direction to go, I'll do that. Thanks for the thoughts, and thanks again for the reviews.
A: Cool, let's see. Yeah, you're up next, Tom, with the multiple toml.DecodeFile calls.
B: Yeah, so yesterday, after I put up the PR, Jesse gave me some comments and we had a ping-pong discussion, and I made some changes last night, but I wanted to talk about some of the things we discussed. First of all, before I made those changes, I called toml.DecodeFile a few times in a row in my code, which is pretty ugly, so I totally agree with that. But putting that aside, I was wondering how bad it really is. I mean, is it really...?
B: First, do we really care about performance in this case, and how much does it really influence our performance? This relates both to toml.DecodeFile and to checking the API version of our buildpack. Again, I totally agree it was pretty ugly to do it over and over again, but I'm just wondering, performance-wise.
A: Yeah, I don't know. I have not measured it, so I have no idea what the TOML parsing looks like; I have no idea what kind of overhead we might have on that and whether it's worth it, like you said. I know performance is a thing for us; historically, that's why the creator exists, right? It's mostly for performance.
B: Okay, so the solution for this problem of decoding the file over and over was to decode the file only once. The problem is... actually, maybe I'll share my code. I mean, I put up a solution, but I'm not sure it's the best one, so if anyone has ideas on something else I can do, I would love to know.
B: The thing in this issue is that we are moving three flags from the top level of the TOML file into a types table. So for buildpack API less than 0.6 they will be at the top level; for buildpack API 0.6 onward they will be inside the types table. So we would like to decode the file, but we also want to either warn or error if the buildpack author put the flags in the wrong place.
B: This is the types table, so I'm decoding the file into this struct, and then I'm checking whether it's in the wrong format: for buildpack API less than 0.6, whether the flags are inside the types table, and for buildpack APIs starting from 0.6, whether they're in the top level, sorry. Is there a better way to do this, like, without creating...?
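For the decode-once approach B describes, the check after decoding might look something like this sketch. The function and error wording are illustrative, not lifecycle's actual code, and the TOML parsing itself (toml.DecodeFile into a struct capturing both locations) is elided:

```go
// Sketch: after decoding the file once, check that the flags live where the
// declared buildpack API expects them, and warn or error otherwise.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// apiLessThan06 reports whether a "major.minor" API string sorts before 0.6.
func apiLessThan06(api string) bool {
	parts := strings.SplitN(api, ".", 2)
	if len(parts) != 2 {
		return false
	}
	major, _ := strconv.Atoi(parts[0])
	minor, _ := strconv.Atoi(parts[1])
	return major == 0 && minor < 6
}

// checkPlacement returns an error when the flags appear in a location the
// declared API does not support: top level for >= 0.6, [types] for < 0.6.
func checkPlacement(api string, atTopLevel, inTypesTable bool) error {
	switch {
	case apiLessThan06(api) && inTypesTable:
		return fmt.Errorf("buildpack API %s expects the flags at the top level, not in [types]", api)
	case !apiLessThan06(api) && atTopLevel:
		return fmt.Errorf("buildpack API %s expects the flags in the [types] table, not at the top level", api)
	}
	return nil
}

func main() {
	fmt.Println(checkPlacement("0.5", false, true)) // misplaced for the old API
	fmt.Println(checkPlacement("0.6", false, true)) // nil: correct for the new API
}
```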
E: I think... doesn't this still kind of bring up the topic of performance, right?
E: Here, I mean, wouldn't it be an else?
E: As we potentially go through different formats at future points in time, right? Yeah, if you had a one-to-one mapping, saying "I have this struct that translates to this buildpack API," and kept them that way, then that's very easy to reason about. But if you have one massive thing that might mutate, you might even get conflicts. So I don't think that would even work long term, right?
E: I mean, there are also programming strategies around this, right? Like chain of responsibility: something where you could say, if it's this API version, use this decoder. You could basically create a list of that sort of thing, so you don't have to have if/else statements. But those are optimizations you can make that you can't make if you have this one massive struct.
A: Okay, I agree too. Yeah, I think multiple structs would be clearer, and I think we could leverage that. Right now you're kind of going from a mega-struct to a struct that we pass out and use everywhere else; I think we could do multiple structs that all adhere to an interface, so that you don't have to do this mapping from one to the file struct. But that's another thing that we would have to think about and thread through, because of course interfaces are different than structs. I do think we should trend towards getting rid of structs being sort of the interface we code to in most places, but I don't know, that's a long-term thing. I like multiple structs, yeah, I think it's better.
E: And maybe, to further elaborate on Jesse's thinking: maybe you don't need an interface, right? You could just have that normalized struct that you pass throughout the code, and the thing responsible for decoding a very specific version would be the version struct, and then the version function that does the decoding could itself translate it to that normalized version, right?
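That idea, one struct per buildpack API version that each translates into a single normalized struct, might be sketched as follows. All type and field names here are hypothetical, and the TOML decoding step into each version struct is elided:

```go
// Sketch of per-version decode structs translating to one normalized shape.
package main

import "fmt"

// descriptor is the single normalized shape the rest of the code consumes.
type descriptor struct {
	IsBuildpack bool
	IsStack     bool
}

// v05File mirrors buildpack API < 0.6: flags at the top level of the file.
type v05File struct {
	Buildpack bool
	Stack     bool
}

// normalize translates the old layout into the shared descriptor.
func (f v05File) normalize() descriptor {
	return descriptor{IsBuildpack: f.Buildpack, IsStack: f.Stack}
}

// v06File mirrors buildpack API >= 0.6: flags inside a [types] table.
type v06File struct {
	Types struct {
		Buildpack bool
		Stack     bool
	}
}

// normalize translates the new layout into the same shared descriptor.
func (f v06File) normalize() descriptor {
	return descriptor{IsBuildpack: f.Types.Buildpack, IsStack: f.Types.Stack}
}

func main() {
	old := v05File{Buildpack: true}
	var new06 v06File
	new06.Types.Stack = true
	fmt.Println(old.normalize(), new06.normalize())
}
```

Selecting which version struct to decode into, by API version, is then a small dispatch step rather than one mega-struct plus placement checks.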
E: And, you know, once we go through that exercise, we can bring it back up here and show people what we came up with.
D: Yeah, that is me, so I think I linked just the issue that this comes up on. Yeah, I guess we can read over this, but the real question is how much YOLO are we willing to accept, like, before...?
D
That
feels
a
little
weird
right
and
it
does
set
like
kind
of
a
bad
precedent
that,
when
other
people
are
like
hitting
kind
of
failures
in
their
large
ci
pipelines
to
use
pac
that
we're
just
like.
Okay,
we're
gonna
help
you
by
adding
additional
functionality
to
pack
that
we
don't
really
know
fixes
an
issue
right.
E: And so what it lets you do is proxy your whole system through this application; you can then intercept requests on it, and you could essentially drop requests or, you know, change their result or response.
E: I don't know if this is more in line with what would help this situation. It's not something that obviously helps with the automation piece, but it probably helps during the development phase. And I would think that for the implementation-testing part, if you find the right seam to write unit tests in, that should satisfy, you know, the confidence level necessary for the implementation. But for reproducibility, I'm hoping something like Charles would enable you to reproduce it at least consistently.
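Charles and Fiddler do this interactively; the same idea can be sketched in a few lines of Go as a failure-injecting reverse proxy, which turns a rare registry flake into something reproducible on demand. The target address and drop rate below are arbitrary placeholders:

```go
// A minimal failure-injecting reverse proxy: some fraction of requests get
// an injected 500 instead of being forwarded to the real target.
package main

import (
	"fmt"
	"log"
	"math/rand"
	"net/http"
	"net/http/httptest"
	"net/http/httputil"
	"net/url"
)

// shouldDrop decides, given a uniform [0,1) sample, whether to inject a failure.
func shouldDrop(sample, dropRate float64) bool {
	return sample < dropRate
}

// flakyProxy forwards requests to target but fails a dropRate fraction of them.
func flakyProxy(target *url.URL, dropRate float64) http.Handler {
	rp := httputil.NewSingleHostReverseProxy(target)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if shouldDrop(rand.Float64(), dropRate) {
			http.Error(w, "injected failure", http.StatusInternalServerError)
			return
		}
		rp.ServeHTTP(w, r)
	})
}

func main() {
	target, err := url.Parse("http://127.0.0.1:5000") // e.g. a local registry
	if err != nil {
		log.Fatal(err)
	}
	// dropRate 1.0 makes every request fail, just to demonstrate.
	srv := httptest.NewServer(flakyProxy(target, 1.0))
	defer srv.Close()
	resp, err := http.Get(srv.URL + "/v2/")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	fmt.Println("injected status:", resp.StatusCode) // prints 500
}
```

In practice you would point the client's registry traffic at the proxy's address and lower the drop rate to something like 0.05 to exercise retry paths.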
A: There's also Fiddler, which is a little... I don't know if it's more user-friendly. I haven't used Charles in a while, but they're very similar. If you use Windows, Fiddler is sort of the de facto one there, if you had something that needed to happen on Windows, but it's cross-platform now too, I think.
E: And if it passes a hundred times, you know, the flake is gone. It's like you have to get a reproduction rate before the code fix, right? What if, without any changes, it passes a hundred times? Fixed.
E: My recollection is that there's a certain statement that denotes whether an error is something that can be retried, and it's only a subset of status error codes. Basically, this particular error, I believe it's like a general network error, does not fall into this temporary, retriable logic.
C: It gives you kind of a loop where you can maybe switch on some logic that you get in there. I'll send you over the link to that, but it does make it kind of easy to get in there and mess with the connections.
A: Yeah, I don't know how I feel about retrying. Like, from a pack perspective, I want pack to retry more often, because when we create builders and stuff like that, even Amazon's ECR is incredibly unreliable at random points, especially when they're doing rate limiting and request counting and stuff, and that doesn't work well when you have, you know, 50 buildpacks in a builder and one of them fails.
E: Is this issue lifecycle-specific, or...?
D: Yeah, it is, and there's not a huge amount of information in the debug output she gave us, but it looks like what she's doing is building something and pushing it to a remote registry without any image mirrors, so she's streaming down the entire run image, which is massive in this case, for every single one of her builds, and that's really the point of failure, right?
A: And streaming run images... I haven't used that feature. What does that look like? Do you just have another registry that's closer to where you're building, or something that already has the images pulled? Is that kind of the idea, to speed that up?
D: Yeah, so I guess it's that when you have an image built on top of another, kind of like a FROM statement in a Dockerfile, if the registry already has that underlying image, it doesn't need to get all of those bits again, right? It already knows what the layers are, what the SHA is, but...
E: Yes, I think it was... co-location was one thing, and then cross-repo blob mounting would be another optimization there, right? So if you're trying to push to ggcr, or sorry, GCR, and GCR already has most of the blobs, it should already allow... or it should not have you upload them, but...
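The cross-repo blob mount E mentions is part of the registry distribution API: a push can tell the registry "this blob already exists under another repository here," so the bytes are never re-uploaded. A sketch of building that upload-initiation request (the registry and repository names are placeholders):

```go
// Sketch: construct the URL that asks a registry to mount an existing blob
// from another repository instead of accepting a fresh upload. A POST to
// this URL returns 201 Created when the mount succeeds, letting the client
// skip uploading the blob entirely.
package main

import (
	"fmt"
	"net/url"
)

// mountURL builds the blob-upload-initiation URL with mount/from parameters.
func mountURL(registry, targetRepo, sourceRepo, digest string) string {
	q := url.Values{}
	q.Set("mount", digest)
	q.Set("from", sourceRepo)
	return fmt.Sprintf("https://%s/v2/%s/blobs/uploads/?%s",
		registry, targetRepo, q.Encode())
}

func main() {
	fmt.Println(mountURL("gcr.io", "my-project/app", "my-project/run-image",
		"sha256:deadbeef"))
}
```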
E: I guess I do have a slight announcement: today will be our first office hours, coming up. So, a potentially different format, right, where we could have more discussions versus just going through RFCs and stuff like that. So, you know, I welcome everybody, if you have any questions; very similar, but more, you know, generic, or...