From YouTube: Implementations Sync: 2021-01-21
Meeting notes: https://bit.ly/38pal2Z
A
I don't have too much to share from my side. I've been reviewing some PRs and taking a look at this buildpack package refactor, which I put on the agenda.
B
Nice. Yeah, I've been working on sort of swapping analyzer and detector in the giraffe PR, and sort of simultaneously getting the spec stuff going for allowing another order, and the resolution around that. So hopefully getting those things wrapped up soon-ish, depending.
C
I'm waiting for my PR that adds manifest size to report.toml to hopefully be merged soon, so if anyone can also review it so we can merge it, that would be great. And I didn't have a lot of time to work on the other issue that I picked up last week regarding the default process, but I hope to continue working on it today.
A
Cool. If there are no more updates, we can move on to release planning. I feel like probably not a lot has changed from last week, but I can share my screen so that we can look at the milestone. I pulled this up in our standup... oops, not that.
A
Here's the milestone: 14 open and zero closed, which was worrying me a little. But how's everyone else feeling?
C
I'm not sure. I mean, we talked about it last week or the week before. I'm not sure what's the right way to look at our release: whether to put everything that we want in it and then take out the things we're not going to finish on time, or to just put in a little bit every time.
C
I feel that the first approach is making us feel bad, like now: we have 14 open, zero closed. But I don't have any preference.
A
Personally... so I put the thing that I've been working on on the agenda. I kind of wanted to talk about that, both in the sense of "does what I'm doing make sense?", but also "does it make sense that I'm working on this now?", because there are reasons why it could be a good idea and reasons why maybe it's better to be focusing on the actual features of the next APIs.
A
That was, I think, maybe part of the reason why I was feeling a little worried: the current thing that I'm looking at feels like a big task that has a fair amount of uncertainty in it.
D
Yeah, sounds like we'll have to kick some things out. I do worry, though, because one of the reasons you want to get the refactor in is so that stack packs have a better place to work from, right, so we're not combining refactoring with stack packs.
D
For that reason, I think that's the one issue where there's a little bit of weirdness. I think you could still ship just the platform, right? So if we say the platform's job is to assume buildpacks with API less than 0.6 want web to be the default, and newer ones will specify their default.
A
Okay, I'd just kind of been looking at this, and with the exception of the Windows stuff, which, I'll be honest, I don't feel 100% like I have a good grasp on, but with the exception of those two issues, everything else that's part of the APIs feels like a relatively manageable, small chunk of work. Actually, the biggest stuff in this milestone is all the other stuff we threw in there.
D
I guess it will depend on whether the lifecycle should validate stack IDs. I think this is sort of the big thing. The reordering of the phases falls into this. It's not labeled with an API yet, because it kind of came in here before the RFC did, but it could get labeled that way and then be a big, big chunk of work, right?
B
I don't think so, but it's something that we're moving towards for stack packs, and then we've also got the... yeah, anyway. It just seems like the work that I have isn't represented in this milestone yet. I don't know if it's just a reporting thing or if, quotation marks, I'm "working on the wrong thing". I don't know.
A
So I don't know which one we want to do first, but I kind of feel like getting our code compiling on our workstations and on main would be ideal. Maybe I'll just pull that up very quickly. This pins to an older version of x/sys, which we've already merged into imgutil, and...
A
Oh yeah, no, I mean, I think we had kind of aligned that that was the goal, but then we were still having second thoughts, I guess. And I think, personally, I don't know what the alternatives are, other than waiting for ggcr to bump their version of docker, which they were kind of looking at.
D
Oh well, maybe it won't work as well, then. Yeah, let's just do what it takes to make everything compile, right, whether that's a replace or a downgrade. I don't really care.
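For reference, the `replace` option mentioned here looks roughly like this in a `go.mod` file. This is only a sketch of the mechanism being discussed; the version string below is a placeholder, not a real pin.

```go
// go.mod (sketch) — force every consumer of golang.org/x/sys in this
// build to resolve to one specific older version, regardless of what
// transitive dependencies require. Version is a placeholder.
replace golang.org/x/sys => golang.org/x/sys v0.0.0-OLDER-PSEUDOVERSION
```

A `replace` only affects the main module's own build, which is one reason it can feel safer than downgrading the `require` line directly.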
D
If there's a replace, my question would be: what are we worried about? It sounds like it's something to do with Windows. Can we, you know, do the validation so we don't have to worry about it anymore? Like, let's get something into main that works, and because this release isn't close to done, we'll have time to reevaluate how confident we are shipping it later.
A
Well, thank you for the clear direction. I feel like that at least unblocks us in that area, and we can work on getting the release shipped soon, assuming there are no other open questions.
A
Okay, so I'm going to move back to our needs-discussion items, but I'm also very mindful that this is the second subteam sync that Dan has joined trying to discuss his bug, and I do want to leave time for that, so I'm gonna do that first. Yeah, Dan, you want to take it away?
E
Yeah, so I guess in summary: this is a bug that we get when we're pushing images that have large layers up to Docker Hub. The reason it happens is that it takes quite a long time to actually do gzip compression, especially when we do it on demand, because we're using all these readers, so the time it takes to compress a byte and then send it as a chunk in our request...
E
...is pretty long, so we just give Docker Hub all these tiny little chunks that are like 256 bytes long, and it really doesn't like that very much. So there are some pretty easy fixes all around, like where do we buffer stuff before putting it in one of these requests, and some of them are much easier, like this one, where we just add an extra flag. But then they have larger consequences for how the lifecycle works and how much memory it'll be using when we're actually exporting layers.
E
Yeah, so we get a 504 back when we try to publish these large layers, because we basically gzip-compress everything right before we need to put it into a request, and we do that because ggcr makes just one massive request per layer. It's broken up into these little chunks just by the HTTP protocol, and because it takes a long time for gzip compression to actually give the request these bytes, you just get all these extremely small pieces of a request, and Docker Hub is not super happy about that.
E
Yeah, so actually crane works okay, just because in all of the tests that we do, or all of the tests that you can do with crane, everything is already gzip-compressed when you have these images, so you don't run into this problem at all. It's just like: okay, awesome, this is gzip-compressed, all I have to do is stick it in a request and send it off.
E
I don't have to wait for these chunks to get generated. But yeah, I guess the kind of question I have is: this third option would definitely save us time at other points in the lifecycle run, right, because we actually go over this tarball at least two times no matter what, but it does mean that we just end up storing a tarball that's not compressed super well, because we use pretty low compression settings, so it would just eat up a bunch of extra memory.
D
Yeah, if it speeds things up, I'm open to it. I guess my question would be: how much more memory does it use, right?
E
Yeah, yeah, so it could be quite a bit, and the other thing is, I think ggcr just does all the layers at one time. It's just like: go forth, we're gonna do everything at once. Could we just change that a little bit? It would make this way less intensive. But I don't know, there are a couple of different levers we could pull.
E
Yeah, that's fair. I think it's probably hard to just come up with a decision point right now, but I think that's fine. We can probably move on; there's five minutes left, and if there's not gonna be instant resolution, then, as long as I can get an extra pair of eyes on this... thanks.
A
Well, I guess let's just circle back on the other issue that needs discussion.
A
I added this following an inquiry that came through Slack. Someone was noticing that the lifecycle was emitting error code 401, and it was showing up as 145. So there's, I guess, been a request to compress the ranges that we're using for error codes so that they fit within a limit that works with bash.
D
But I didn't realize there was a limit in bash.
A
Okay, that's it for needs discussion. There are a couple of others that we didn't get to. I see someone has added "change from 30 to 60", which, given that we consistently run out of time, might be something we explore, but...
B
Yeah, what PRs do folks want reviews on right now? Like, I know there's a lot floating around, and someone mentioned earlier they were waiting on reviews or whatever. I'd love to know which ones we're interested in getting eyes on. I can devote the next half hour to looking at some of these.
A
So, going through the ones that are open: I think, yeah, L's PR. I was about to approve it, but if someone wants to give it a second set of eyes, that would be awesome. There's...
A
I think this one's in pretty good shape. It's like, you know, a sanity check. It looks like this... this... oh.
B
Yeah, that one needs to be closed: "prepare analyzer to be run". I can tell Joe to close that one, because I basically just started over when we decided to do it that way.
D
It's interesting, right, because we don't say anywhere in the spec that we don't change the... the working dir. It's...
D
Like, I think the next version of the spec will require that this is set in the working dir. I think that's true, but I don't think that means we can't change it all the time. I don't think we have to do an API check here; I think it's fine if we just set the working dir.
B
Yeah, it's kind of blocked by the spec PR that needs to occur. I think the RFC is just now getting enough approvals that it will probably be in FCP next week, so I don't think we're ready to completely move on this, but it's close enough that I think we could.
A
And this buildpack code package: I have to say that I feel a little uncertain, because it seems like there are different directions we could go with this one. I shared this with the CNB contrib, you know, a VMware team, in one of our earlier meetings and got a lot of good ideas, but I'm kind of at a place where I'm hoping to circulate it with people who have knowledge of the lifecycle code, at a high level like that.
B
Yeah, I recently read some Go articles on how to structure this stuff, or how the Go team structures some of this. I'm sure you've seen some of it too. So yeah, I'll take a look as well and maybe see if I can pull anything out of my head that I remember from reading all that stuff.
A
Cool. Do you want to, I guess, adjourn, but try to find, like, a scheduled hour next week?