From YouTube: Kubernetes Release Engineering 20200526
A: Hello, hello, fellow release engineers. Today is May 26th, and this is an edition of the SIG Release release engineering subproject meeting. This is a meeting that is recorded and available on the internet, so please be mindful of what you do and say, please be sure to adhere to the Kubernetes code of conduct, and in general just be really awesome people. So we've got a few things on the agenda, and I'm realizing from the channel that Marky is actually going to be missing the meeting.
B: We had a meeting since the last sync, last Monday. We were planning to meet again on Friday, and until then the plan is that we try it out a little bit live — locally, on Cloud Run, and such — and see what's going to happen. But besides that, we don't really have any other updates for now.

A: Got it, appreciate it.
A: All right, so next up is the Golang walkthrough. We've been, kind of in the background, working on improving some of the process around the way that we do Go updates, and it has changed frequently enough — or it's changed enough — and it's also something that is not quite known by a lot of people.
A: So I wanted to take the opportunity, especially since we have Marky and Veronica working on the Go 1.13.11 update, to walk through some of the changes that have been made recently. So if you want to check that out, you can — I'm popping the link in the chat and the notes right now.
A
That
is
about
an
hour
long
recording,
nothing
too
crazy,
but
it'll
walk
you
through
the
cube
cross,
image,
building
and
promotion.
What
the
P
R
looks
like
when
you
bump
cube
cross
in
kubernetes
and
then
handling
the
build
dependencies
that
yamo
file
and
grouping
at
a
screw,
Burnett
ace.
So
that's
good
information
overall
to
know
as
a
release
engineer
so
take
a
chance
to
look
at
that
when
you
get
a
chance
part.
Two
of
that
is
happening
today
later
today.
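(For context, the "build dependencies YAML" referred to here is build/dependencies.yaml in kubernetes/kubernetes, which verification scripts check against the pinned versions. A minimal sketch of the shape of an entry — the paths and match pattern are illustrative, not the exact file contents:

    dependencies:
      - name: "golang: upstream version"
        version: 1.13.11
        refPaths:
          - path: build/build-image/cross/VERSION
            match: v?\d+\.\d+\.\d+

A Go bump then touches both the version field and every file listed under refPaths.)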
A: If you are on the release managers list, or you're on the release team list, you should have received an invite to that. If you have not received an invite, and you're part of one of those teams and would like to attend, please let me know and I'll forward the calendar information to you. Any questions on that?
A: One of the things I noticed is that we need to do a repo-infra bump. Repo-infra contains a lot of the utilities around Bazel that we use in various repos, one of those repos being kubernetes/kubernetes. When you do a Go update, it is often necessary to do an update of repo-infra, so you can pull in new Bazel rules and Go rules.
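(A repo-infra bump in kubernetes/kubernetes amounts to updating the pin in the Bazel WORKSPACE. A hypothetical sketch — the external-repo name, version, and checksum here are placeholders, not a real pin:

    http_archive(
        name = "io_k8s_repo_infra",
        strip_prefix = "repo-infra-0.0.6",
        sha256 = "<checksum of the release tarball>",
        urls = ["https://github.com/kubernetes/repo-infra/archive/v0.0.6.tar.gz"],
    )
)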
A
So
I
have
a
set
of
cherry
picks,
so
the
the
repo
infra
bump
that
we
need
for
for
the
master
branch
has
already
merged,
and
that
is
available
in
the
giant
in
the
meeting.
Notes.
Excuse
me
and
if
you
check
that
out,
you'll
see
a
link
to
a
link
to
the
various
cherry
picks
that
have
been
happening
so
Jerry
picks
across
the
118
117
116
branch.
A: I'm running into some Bazel problems — surprise, surprise — getting them updated on the other release branches, but I should be able to work through that this week. Once that's done — we're already able to bump to 1.13.11 on master, and that will be happening soon — and then, after those cherry picks for the repo-infra bump merge to the release branches, we'll be able to cherry pick the Go 1.13.11 updates over to those release branches as well.
A: Okay, excellent. Next up is gh2gcs. All right, so gh2gcs is a tool that I decided to write over the weekend. To provide you some background — and I was talking about this a little bit with some other people — we have some tools that we depend on within kubernetes/kubernetes that are not quite Kubernetes, right? They are maybe, I guess, Kubernetes-adjacent, and a few of those tools are utilities like the CNI plugins.
A: The CNI plugins are leveraged by the kubelet when instantiating clusters, so you'll see that the kubernetes-cni package for the apt and yum repos is something that we publish as a result of every release. Now, that's not entirely a true statement: there are some idiosyncrasies in the way that our current packages are configured that prevent us from doing exactly that, and the idiosyncrasies are based on the way the dependencies were configured for previous packages.
A: ...like in e2e tests, or doing things like local testing for Kubernetes. So the question becomes: who maintains these buckets, right? Previously those buckets were Google-owned buckets, which means that to get those buckets updated, you have to find the set of Googlers responsible for updating them — to maybe create and push images, or create and push new artifacts to those — and it's a process that has honestly been a gap for multiple cycles. So this issue was opened — you can see it was July 31st, 2018. Not that long ago.
A
So
a
few
things
happening
here,
actually
creating
the
GCS
bucket,
updating
the
the
fetch,
URLs
and
kaykai
to
use
the
new
buckets,
actually
updating
the
the
plugins
to
0-5
for
CNI,
at
least
and
then
making
sure
that
the
both
the
release
managers,
the
maintainer
of
the
CNI
plugins,
actually
have
access
to
write
to
those
buckets.
So
we're
kind
of
here
in
this
in
this
updating
release,
documentation,
we've
also
got
some
we've.
Also,
we
are
tracking
to
how
to
deprecate
the
kubernetes
CNI
debin
RPM
packages.
A: So I have a recent PR that's up here, and there are some CI fixes that need to be done for this PR, essentially. What we're doing is a bit of cleanup on the control files — the package definitions for Debian — and the RPM specs on the RPM side, to bundle the CNI plugins with the kubelet instead of having them as an individual package. Now, the reason that we decided to go this route is so that the dependency is essentially kind of self-contained, right?
A: The kubelet is the only thing that requires it. So if we package it with the kubelet, then we no longer have to say that it depends on a certain version of it, which means that we can safely reintroduce it into our package stream without having to worry about the older versions of Kubernetes that have that dependency.
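(To illustrate the packaging change being described, a hypothetical sketch of the Debian control diff — not the actual file:

    Package: kubelet
    -Depends: kubernetes-cni (>= 0.8.6), iptables, ...
    +Depends: iptables, ...
    +# The CNI plugin binaries now ship inside the kubelet package itself,
    +# installed under /opt/cni/bin, instead of as a separate dependency.
)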
A: Okay, so I am hoping to get this merged this week, and then from there — going back to the whole point that there are a set of Googlers that are responsible for, or able to, upload to these buckets: now that we've created a new bucket, at least for CNI, we're able to basically delegate that responsibility to the release managers, as well as to the CNI plugin maintainers.
A
All
right
and
that's
just
kind
of
the
output
of
what
that
looks
like
and
I
think
that
it's
probably
possible
that
this
is
happening
a
little
differently.
Every
time
so
I
said
to
myself
what
if
we
could
write
a
tool
that
was
fairly
simple
to
use.
That
would
be
able
to
do
this
right
and
that
we
could
eventually
use
to
wire
into
cio
whether
it
be
prowl
and
GCD
so
automatically
essentially
rip
the
releases
from
the
github
releases,
github,
release
assets
and
then
upload
them
to
upload
them
to
some
GCS
bucket
right.
A
So
that's
what
I
wrote
over
the
weekend
and
if
we
want
to
check
it
out,
we
can
do
that
here,
so
I'm
going
to
make
that
a
little
bigger.
Can
you
see
that
text?
Okay,
someone
just
give
me
a
yes
yeah
cool
awesome
right
so
clearing
my
screen.
I
am
in
the
release,
repo
I'm
gonna
check
out
master
and
do
a
little
update.
A: It requests a GitHub org, a GitHub repo, a GCS bucket that you're going to be uploading to, a release directory — which is the directory that you want to land those releases in within the GCS bucket — and then some release tags that you want to target, right? So, essentially, it's going to create a release config struct.
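(Putting those inputs together, an invocation would look roughly like the following. The flag names and bucket are written from memory and may not match the tool's current help output exactly:

    gh2gcs --org containernetworking \
           --repo plugins \
           --bucket k8s-artifacts-cni \
           --release-dir release \
           --tags v0.8.5,v0.8.6
)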
A: So, in this output, there are some tips, like: hey, I'm downloading from this repo at these tags. It then started kicking off the download with 0.6.0, it's writing these to a temp directory, and it's showing you the asset IDs for each of those files, as well as the download URLs — if you wanted to use that information to debug or compare things later, you can do that. Now, since we're in debug mode, it's showing you that it's using the following options for GCS.
A
So
it's
going
to
run
concurrently,
recursively
and
not
clobber
the
existing
files,
and
then
it
also
shows
you
the
command
out.
But
if
you
wanted
to
do
that
yourself
with
the
files
that
exist
right,
so
it's
letting
me
know
that
I
have
done
this
before
I've
potentially
done
this
before,
and
those
assets
already
exist
on
the
remote,
and
this
is
kind
of
the
URL
to
get
them
or
the
GCS
URL
to
get
them
so
over
to
0-7.
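(Those options map onto a plain gsutil copy. Roughly the equivalent command, with an illustrative local path and bucket — -m is gsutil's parallelism flag, -n is no-clobber, -r is recursive:

    gsutil -m cp -n -r /tmp/gh2gcs/v0.6.0 gs://k8s-artifacts-cni/release/
)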
A
It
wasn't
able
to
get
the
local
source
directory,
which
means
the
directory
does
not
exist.
This
is
kind
of
like
an
OS
stat
and
it
tells
you
that
here
doesn't
exist.
It's
going
to
skip
this
GCS
upload
and
then
moving
on
to
zero
eight
six.
It's
also
a
release
that
existed
on
that
bucket.
So
it's
going
to
skip
those
items,
and
this
is
kind
of
output
from
this
gsutil
copy
command
right.
So
if
we
go
back
to
that,
PR
I
have
a
link
here.
A: ...that gets you to the bucket — and this could be, you know, gsutil ls and then whatever the bucket name is — but you can see that we've got all these things uploaded. And, you know, under 0.6.0 all the CNI plugins are here, and you can see that they were uploaded yesterday — well, last modified yesterday, but also uploaded yesterday.
A: So, again, the idea is that we would take this tool — it's early days, it's just merged, but I think it does what I expected it to, or at least exactly what that bash script was doing before, and a little more. The next piece of this that I would like to do is supporting downloading releases via a YAML config. The idea is that we've got this config struct, which would contain a set of release configs, and the release...
A: ...right, so, kind of specifying the org, repo, and releases that you want to update — and then it's basically a set of those, right? So if I were to run this without the tags — and I'm not gonna actually let that run — you can see that it's actually going to go into the repo and pull all of the releases, right? So, 0.8.6 through 0.6.0, and it's going to do that download operation. This would also result in the skip-existing-items behavior. So, yeah.
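(A hypothetical sketch of what that YAML config could look like — the field names are assumptions for illustration, not the tool's actual schema:

    releases:
      - org: containernetworking
        repo: plugins
        tags:
          - v0.8.5
          - v0.8.6
      - org: kubernetes-sigs
        repo: cri-tools
        tags: []   # no tags listed: pull every release
)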
A: To answer Sasha's question — is it possible to update the files in the bucket, or is a retention policy not allowing for that use case? — the short answer is: I don't know yet. I used the bucket that I know I have access to. So there are some additional things that we want to check regarding retention policies, to ensure that this works all the time. And thank you for doing the review.
A: All right — so, Sasha, actually, a question for you. I know that there's one issue that I opened for cri-tools about the debs and RPMs, and I swear that I remember reading a conversation somewhere about creating the GCS bucket for cri-tools as well, but I have no idea where it is. Do you want to talk a little bit more about that?
C: Yeah — we had, last week I think, an issue where my cri-tools update all got reverted, because I changed the download URL to go to GitHub, and there was this API rate limiting that some tests hit. And yeah, the idea would now be — because I don't have access to that bucket — to use a release engineering bucket and pull it directly from there.
A: Got it. And that will hopefully happen this cycle. I need to kind of turn my attention back to kubepkg, to update the specs and stuff like that, specifically on the RPM side. I think most of the deb stuff is okay, but the RPM side — since I'm on Debian — has not had the same vetting. So yeah: if you can, actually open a request in k8s.io and tag me on it, for a CRI bucket.
A: All right, so the next thing up is distroless and removing the Debian base images. This was a previous conversation topic — and a current one, an active one, a continuing one — and we actually have Dims on the call today. So yeah, let's talk about that a little bit. Dims, can you give an update from your side, and I can fill in whatever else is going on?
D: So the distroless work has been stuck — had been stuck — for quite some time, and the reason was there was a problem with klog in scalability jobs. So we were trying to figure out an alternative way to unblock that work, and we were able to find something that works. Essentially, the problem turned out to be that we needed a shell — bash, specifically — in many of our images, because we were redirecting stdout and stderr. Sorry — stderr and stdout. So the answer was to have a Go-based...
D: ...a Go-based binary, which would redirect the streams instead of using bash. That is what we ended up calling go-runner. It's a new image; it has a single binary, called go-runner, which redirects the output and error streams. So we turned around and used this go-runner for making updates to the API server and the scheduler. That was the first thing that got updated, and it unblocked those two images to be distroless — which means we no longer have to deal with things, for these images, coming in from random stuff that ended up in the Debian base image.
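(A sketch of how a go-runner-style wrapper replaces the shell redirect — the flag names here are from memory of the go-runner image and may differ slightly:

    # Before: needs a shell in the image
    /bin/sh -c '/usr/local/bin/kube-apiserver ... >>/var/log/kube-apiserver.log 2>&1'

    # After: a single static Go binary does the redirection
    /go-runner --log-file=/var/log/kube-apiserver.log --also-stdout \
        -- /usr/local/bin/kube-apiserver ...
)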
D: And then the next problem was the controller manager. The problem with the controller manager was that there were some scenarios where we needed to shell out, and one of those was flex volumes. As you know, with flex volumes you can have, like, a script that is dropped into a specific directory, and the controller manager would run the flex volume script for initialization and the attach/detach functions. So we were talking to SIG...
D: ...SIG Storage, and SIG Storage had already sent out information saying: okay, who needs this facility, now that we have CSI and whatnot? So it turned out not to be too much of a problem, because what we are saying now in SIG Storage is: we are not deprecating the feature — flex volumes, the feature, is going to be there — you just need your own image with bash in it if you want this functionality. So that helped us strip out one more thing. And then the next thing was etcd.
D: etcd was more of an issue than the rest of the images, because with etcd there is a startup script in the image which will turn around and check whether any versions have changed and whether the etcd volume on disk needs to be updated, and it does a series of calls to etcdctl and the etcd binary itself to update the information in the etcd data directory. So it still needed bash. So what we ended up doing for this was: instead of having the entire debian-base...
D: ...we would just use bash-static — a statically linked bash. This significantly reduces, you know — so, basically, between go-runner on the rest of the things and bash-static on the etcd image, we were able to get things going and unblocked right now. The only thing that we haven't been able to poke at so far is kube-proxy, which by definition uses iptables and IPVS.
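(A hypothetical sketch of the bash-static approach for the etcd image — the base images, package names, and paths are illustrative, not the actual Dockerfile:

    FROM debian:buster AS build
    RUN apt-get update && apt-get install -y bash-static

    FROM gcr.io/distroless/static
    # bash-static is a single, statically linked binary, so it can be
    # dropped into a distroless image without pulling in libc or the
    # rest of a Debian userland.
    COPY --from=build /bin/bash-static /bin/bash
    COPY etcd etcdctl migrate-if-needed.sh /usr/local/bin/
)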
A: Thanks, Dims. So that is — you know, that's an exciting update on a few different levels. It increases our security posture across the base images that we produce during releases. It also allows us to — I think, underneath all of those updates, a really important one is that those images are now on Kubernetes community infra, right?
A
So
that
means
that
that
means
that
you
you,
I
dim
all
the
people
who
have
access
to
to
cut
those
images
now
have
the
ability
to
do
that
and
push
them
and
promote
them
without
having
to
without
having
to
find
someone
who
can
do
that
within
Google
right.
So
that's
a
really
really
big
step
for
us.
That's
us
taking
control
more
control
of
the
release
process
overall,
so
really
excited
to
see
this
work
and
I
really
appreciate
the
all
the
help
and
work
that
you've
done,
dims
to
kick
that
stuff
back
off,
I've
taken.
A: So the base exceptions list is basically a list of exceptions to the base images — right, so, places where we cannot rebase onto distroless for some reason. That list includes where we're getting the image from, sort of what the image is about, the base that it's using, the reason for the exception, as well as the owners to go to for it. So, given that that work has kicked off — or been reinvigorated — this cycle, we're gonna be working on updating the KEP. Always important to update KEPs, right?
A: Okay, so I realize there was also a question in the channel earlier today regarding the base images: "Hey team, I just observed some irregularities in Kubernetes 1.18.3 — debian-iptables seems to be updated using an updated tag of debian-base, but the Makefile for debian-base still has the older tag" — and there were some follow-up questions. One: will debian-iptables use the pre-built debian-base image after pulling it from the registry, and not the one which is currently built?
A: The configs are meant to be used — or rather, we do image building directly from master, so master will always have the most up-to-date configs for the base images. What we do is: we create new base images from the master branch, we promote those, we leverage those on master, and then we start cherry-picking them back to the active release...
A: ...branches. So I intentionally did not update the versions — the dependencies YAML, the Makefile, and various places in the Dockerfile — because one of the big things that changes between the current master branch and the previous branches is that we're moving to k8s-infra for building, pushing, and promoting these images, right? So some things within the Makefile are intentionally different.
A: What I don't want to do is enable image building for previous branches in a way that results in someone building an image off of a previous branch and it somehow getting into some image stream that someone cares about. So I don't know what the right answer is — I'm curious what you think the best way to handle this is; maybe, Dims, you have some opinions. I was considering removing those images altogether — or rather, removing the configs, the Makefiles, and whatnot for those images — within the previous branches.
D: Yeah, I think we'd have to determine where else it's being used. I think if we remove them from previous branches, it does signal that it's only available on master. I think some of this also needs to be accompanied by documentation: a lot of the work that we have done, we've done because we know enough about each of those individual pieces to tie them together, but there is not currently documentation on how to do any of this. So that's something that we need to fix.
A: Yeah, that's fair. Going back to my point: I have no idea what the right way to do this is. I think it's new territory for some of us, because we haven't really cut over infrastructure that many times in the past, so I think we're...
A: ...you know, we're learning as we go, and I think we're getting closer to the point where we understand all the systems in place to actually get these things done. So, again, I'm super excited about and appreciative of all the work that you all have been doing to push this along. I want to say we're close — we're so close. We've mostly wrangled image building, and we're getting to the point of wrangling a lot of the other artifacts, on the debs and RPMs...
A: Jamie had reached out to me, and I forgot to respond, regarding the CVE stuff — the CVE stuff for images — so I'm gonna kick that conversation back off in k8s-infra. Dims, I don't know if you have comments on that; I know that there was some work to actively scan images going into the new community GCP projects.
D: I saw that there is an option to scan them; I just don't think any of us have looked into what it means — who will it send, you know, nag emails to, and stuff like that — so we need to figure out the whole thing. What I would say is we should open up an issue in kubernetes/k8s.io, follow up there, and get eyes from, say, Tim Hockin, Aaron, and Bart — folks like that. Yeah.
A: So one of the things I was thinking about is: how does that work for images that are already there, right? Are we going to do backfill scans? What's the policy for this — like, you know, how many images do we care about, which versions, yada yada yada? And it's gonna cost — also, it's not a free service.
A: So, Dims, once we start up some of that — once there's active work to be done — I'll ping you. A few people have pinged me as well on the CVE images and on the CVE scanning, and obviously, because they're CVE-related, not all of these conversations have been public — or, most of these conversations have not been public — but we're getting closer to the point where we can start considering this. So, just giving you an update. Awesome.
C: ...an all Go-based implementation. So yeah, we have basically multiple functions in release.sh which have a higher complexity than I initially thought, but the idea is now to move it partially over to our Go-based implementation and actually add tests, and I'm moving forward with that. And when I'm done with the — what was it called — the get-release-version function, then we can get rid of find_green_build, for example, and we could also migrate one part of anago, or the Go-based anago version, on top of that.
C: I also created another issue where I did some research on the initial work from Hannes, because right now we cut patch releases via gcbmgr, which is pretty cool, but we also have a parallel implementation — a Go port of the existing bash release scripts — in pkg/patch internally, and we used parts of that source code for the krel patch-announce subcommand.
C: Now, I think we have to decide somehow whether we want to move forward with krel gcbmgr for building patch releases — which I would prefer — and then we would do a little cleanup of the internal patch Go stuff, and we could also remove some other bash scripts, like the relnotes script, which right now is just used by krel patch-announce.
A: Awesome, awesome. So yeah, I was looking over some of that stuff and I didn't get a chance to do a full review just yet, but for patch-announce: feel free to make that go away. We don't currently use it. I think we can take what we learned from it and reshape it to be maybe a subcommand, or a flag, for the krel announce tool, right? So it can be krel announce, you know, type patch, or something like that, right?
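(In other words, the proposal on the table — sketched hypothetically, since this interface doesn't exist yet — is something like:

    krel announce --type=patch --tag=v1.18.4

rather than a separate patch-announce subcommand.)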
A
That
would
that
would
allow
us
to
send
patch
notifications
as
well.
The
from
the
SendGrid
side.
I
have
a
SendGrid
account.
That's
we
can
use
for
kubernetes
and
I
think
that
within
their
free
tier,
it's
it's
way
more.
It
allows
you
to
send
way
more
mail
than
we
will
be
for
for
this
tool,
so
I
kind
of
created
that
account
and
put
it
off
to
the
side,
but
I'll
bring
it
back
up
and
see
how
we
can
start
wiring
some
of
the
stuff
up
to
send
test
emails.
A: I think that, for the announce piece, we're going to leave a post-merge review on that announce PR. The one thing that we need to be careful about is where we send mail, right? So there should be a mock mode that can only send mail to yourself, kind of thing — I think that was the one suggestion I was planning on making. As for the — what is it, the git job cache, or whatever the thing is called? Yeah, the git job cache: that is probably one of the more important things that we could do. The job cache, like you were mentioning, kind of tumbles into a bunch of other shell-based tools that we're relying on for the current release process. So I think once that falls into place, more of the anago re-implementation in Go can happen, and faster.
A: So thank you for that work. The job cache stuff within the shell-based tools is probably one of the more complex pieces of bash I've seen, like, ever. So thank you for starting to tackle that. I think that once we get over that hurdle, we're gonna be a lot closer to the Go-based anago implementation.
A: The problem, I think, that we have today regarding announce is that our release process doesn't run, you know, end to end, right? There are things that we have to do in between, and then there is also a post-release process of actually generating and uploading the debs and RPMs for that current release version.
A: So that's kind of the reason that I don't want announce turned on for our current process: because what we'd be announcing is essentially that we haven't finished the release, right? We should only be using announce to announce that we have completed the release, and right now we cannot complete a release until...
A: ...until we can support debs and RPMs being pushed into our own apt and yum repos, or we have a mechanism to ensure that the Googlers who are building the debs and RPMs today can do that quickly after the release is cut. So yeah, that was kind of the reason for not enabling it right now. Okay — a few things to close out: Rob, you have a quick CI signal update slash question. Go for it.
E: Just a quick update: Dan has set up a sheet and divvied out the jobs across the shadows on the team, which just helps to focus all of us on a specific set of jobs, and on trying to upskill on and monitor those jobs, and so on and so forth. It's early days with that spreadsheet, and it may not be the way it's done forever, but looking at it, it kind of gives us a roster, and it gives us, like I said, jobs to focus on.
E
One
of
the
things
that
I
was
looking
at
in
test
grade
was,
that
is,
M
offers
a
little
REST
API
to
returns
some
SVG,
which
gives
a
little
button
to
say
whether
or
not
a
test
is
passing
failing
or
whether
or
not
it's
flaky
and
and
so
I
try
to
pull
that
and
scalable
vector
graphic
into
Google
sheets
and
but
that
was
problematic
and
so
I'm
trying
to
do
so.
I've
loved
a
little
bug.
E: ...you know, as a consumer of Google Sheets, saying: can we insert SVG as an image? Because that feature doesn't exist. And then, on the TestGrid side, I logged an issue to say: can we modify the SVG code to return proper XML that Sheets will be more likely to consume — so that it's namespaced, has an XML tag on the outer element, and so on — in terms of reaching out to somebody on TestGrid.
A: That's the current state of the master-blocking and master-informing jobs, right? So maybe you can look at some of that implementation. That was initially done by Carlos — or it was adapted, I'm forgetting now — but Carlos is one of the people who has been updating this to support, kind of, print options. And Sasha, I think you also worked on this, along with Hannes. Yeah, that's cool. So yeah, it would be cool to sync up.
A
Yeah,
but
like
kind
of
kind
of
pushing
the
teams
together
right
finding
finding
ways
that
we
use
these
different
tools,
making
sure
that
they're
useful
to
everyone
right
so
yeah
check
that
out.
It
would
be
cool
to
see
a
simple
go
implementation
of
this
one,
although
it's
probably
not
necessary,
but
I,
could
imagine
a
go
tool
that
worked
on
CI
signal.
Doing
a
lot
more
for
us
than
just
than
just
to
doing
snapshots.
Yeah
yeah.
A
For
sure
so
last
thing
on
the
gonna
do
one
last
thing:
I
know
we
have
a
minute
left.
If
any
release
managers
have
bandwidth
to
review
this
PR,
it
is
pretty
important
for
us,
I'm
gonna
pop
it
in
the
chats,
because
I
can't
find
the
meeting
notes
at
the
moment,
but
essentially
that
is
work
from
linus
to
allow
us
to
essentially
edit
manifest
within
Kate's
at
I/o.
So
this
is
a
manifest
for
image
promotion.
So
the
idea
that
I
had
in
my
head
was
essentially,
if
we're
doing,
staging
and
and
promotion
of
release
images.
A: ...we need a way to ensure that we can easily grab the new staging images — since it's a lot of them; it's kind of sharded across all of the various images, times however many architectures, right? So this tool should create kind of a new manifest, available for promotion, that we can then add to k8s.io, or update in a k8s.io PR.
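(For context on what these promotion manifests look like: roughly, each entry maps an image name to digests and the tags to promote. The image name, digest, and tag below are placeholders, not real values:

    - name: go-runner
      dmap:
        "sha256:<digest-of-the-staged-image>": ["v0.1.0"]
)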
A: So, if you have time, take a look at that — that is kubernetes-sigs/k8s-container-image-promoter, pull 210. And that is all we've got time for. So thank you, everyone — thank you for hanging out with us every week; really appreciate it. If you're on release management or the release team — the 1.19 release team — we have that Go walkthrough happening later today, so check that out. Catch you at the next one. Later! Awesome, see you.