From YouTube: SIG Release Meeting for 20221025
B: So, well, right now I'm also just helping out with the release.
B: Hey, my name is Ellisville and I am one of the tech leads with [unintelligible] projects, and I work at [unintelligible] — and yeah.

B: With Amazon.
B: With SIG Release — I'm looking to get some information about how we can [unintelligible] a couple more version releases.
B: Hey, hi, I'm Arnold, I'm the [unintelligible], also with [unintelligible] — so I help get releases out, and I have also been involved in the project for a long time now. So welcome, everyone. I'm trying to project my screen so we can share the documents. Sorry about all this.
E: Okay, my name is Carlos Santana. I started with SIG Release in 1.24 and 1.25 on release notes, and I'm with Amazon — I just joined AWS. One of the things that I want to do is bring more AWS people into the SIGs, and SIG Release I think is a cool place to start.
D: Hey, I'm Jason DeTiberus. I work at Cisco in the open source program office. I'm transitioning from roles in, like, Cluster Lifecycle more towards helping out the project in a more core way through SIG ContribEx and also SIG Release.
B: Might be a little touch and go. There are a few places that have some outdoor options, if that's what people are really leaning towards. Otherwise we can go to, like, the alleyway called The Belt. It has a few places; it was close to where the contributor event was yesterday, but it's the alley outside — there's a brewery with kind of picnic-table seating.
D: Fairly close — that would work; they have food trucks, things like that.
B: A little further out, Detroit Shipping Company has kind of similar things as well.
B: The other one's a little further out — it would require Lyft rides for people, but we can try to coordinate that if people want to share.
B: Okay, I'll add a couple of places into the sig-release Slack channel and we can discuss in a thread there. I'll start it right now. All right, thanks.
B: So for alpha, when we were adding artifact signing, we added it as a first step implemented as part of our pre-existing release infrastructure, which means that we sign our container images directly when pushing them to the container registry. And — I mean, it wasn't directly part of this step — but we also have a container signing implementation in kpromo, which runs on image promotion.
B
Today,
I
feel
good
at
this
thing,
I,
don't
think
so
so
wait
for
the
better
graduation.
We
are
now
thinking
about
completing
this
exciting
as
well
by
also
I,
don't
really
know
which
things
and
we
will
not
implement
it
as
follows
our
career
stage
or
release
pipeline.
You
will
you
can
hear
more
about
that
tomorrow,
how
it
looks
in
detail,
but
the
main
goal
is
now
that
we
provide
the
new
step
and
move
the
signing
logic
into
the
dedicated
Google
development
job
to
isolate.
B: So there are a couple of things that we are not doing properly when it comes to signing. For example, we are signing our images inside of the release process, and, well — that is not the final signature that we add to the public images. When we publish the Kubernetes images, they go out with two signatures: one as part of the release process, and then, once we promote them for final public consumption,
B: they get a second signature from the image promoter, which is supposed to be the community's — the organization's — signature. So the first one is the one that we are doing inside of the release process, and the problem with that one is that, if you think about it, the build process itself that we execute to compile Kubernetes is really hard to control: we're pulling it from the repository. So anybody that can sneak a commit in can potentially get access to the signing key.
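The risk being described — anything that reaches the build job could reach the key — is the motivation for the dedicated signing step. A minimal sketch of that separation follows; HMAC stands in for the real cosign signing, and all names are invented for the illustration:

```python
import hashlib
import hmac

def build_step(artifacts):
    """Build job: produces artifacts and their digests.
    It runs repository code, so it must never see the signing key."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in artifacts.items()}

def signing_step(digests, signing_key):
    """Dedicated, isolated job: the only place key material lives.
    It signs the digest manifest, not the build tree (HMAC stands
    in for the real cosign signing here)."""
    return {
        name: hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
        for name, digest in digests.items()
    }

artifacts = {"kubectl": b"fake-binary-1", "kubelet": b"fake-binary-2"}
digests = build_step(artifacts)                      # stage 1: build
signatures = signing_step(digests, b"isolated-key")  # stage 2: isolated signing
print(sorted(signatures))  # ['kubectl', 'kubelet']
```

The point of the split is that `build_step` runs untrusted repository code but only ever emits digests, while the key material exists only inside the isolated `signing_step` job.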
B: The plan now is to move that. There is an issue, with some diagrams, that I opened some months ago explaining this in a little more detail, and the idea is:
B: we are going to move that into a dedicated — as Sascha said — single step in the pipeline. So that's one, and then we're also doing the same thing with the provenance attestations: we're building those inside of the release process, and we want to move the provenance out as well. The tooling to sign and create the provenance metadata is already done.
B: We just imported the repository of the new tool — and, oh yeah, this is the one. It literally happened just last week that we finalized the move of the repository to the Kubernetes organization. So now that we have it there, we can start using the new attester to generate the provenance of the builds. The other part that is still missing is adding the signing of files — the release files and also the images — to the new image promoter
B: and the new `krel sign` command that we want to build, to sign everything outside of the build process. So, well, that is kind of the way we want to do it. There are a few people assigned to some of the issues in the signing tasks, so I think this is a good time to maybe recheck and see who's interested in working on them, and maybe just check back on availability to see —
B: if, months back, someone assigned an issue to themselves, maybe on rechecking they can say: okay, I can or I cannot work on this. Because we really need to be present on these issues, since we're [pushing on them] together. So I think it's a —
B: The idea is, we want to rework the way we are generating the attestations and structuring them. For example, I would like to see the SBOMs attached to the images themselves, but in order to do that, you need to attach them to the images when we build them and also when we promote them, and that will have a direct impact on the image promotion process.
B: And thirdly, as we'll see in the next topics, the image promoter needs some help — whether it's a feature, there are a couple of things there. I have the code in a branch on my computer, but I've been holding it, because if we merge it now, it's going to just make things much slower. So yeah, that's basically the state of it.
G: The question is, what will the user experience be
B: — when they, you know, want to install kubelet or kubectl, especially with respect to the signature? Okay, that's the next topic; I think it was on the agenda — the packages.
B: Okay, so yeah — that is a whole other discussion, and a really pressing one, because we are about to lose a lot of support from the people that currently sign the artifacts. So we need to get that moving. But before we switch to that, do we have a list of existing issues that we want to review for the signing piece that they mentioned?
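On the user-experience question raised above: verifying a signed image would look roughly like the `cosign verify` invocation sketched below. The certificate identity and issuer values here are placeholders, not the project's real signing identity, and the command is only constructed, never executed:

```python
# Sketch of the end-user verification UX: build (but do not run)
# an illustrative `cosign verify` command. The identity and issuer
# values below are placeholders for this example only.
image = "registry.k8s.io/kube-apiserver:v1.26.0"
cmd = [
    "cosign", "verify", image,
    "--certificate-identity", "releng-signer@example.iam.gserviceaccount.com",
    "--certificate-oidc-issuer", "https://accounts.google.com",
]
print(" ".join(cmd))
```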
B: So basically, what we need is — can someone comment on the state of the signing library? I think we've worked on the files.
G: I guess after that, the person who worked on adding tests for it modified the file signing a lot. Because when we sign files we essentially get a binary signature back — unlike for images, where the signature is uploaded to the registry by cosign itself — and there were
G: changes over there. I think, as a library piece, it is ready to be used. What we need to decide on is — for the first issue that is remaining, we have to decide, when we run the new `krel sign` step, what exactly we are going to do in it.
G: Like, okay, how do we do things — because promotion happens twice, or building happens twice — and when do you sign? What do you sign? That is something we should determine, probably write down in the issue itself, and then start working on the implementation. So before we start the implementation, maybe we can spend a few minutes on
G: what exactly we sign. One of the ideas that came up was that we can go through the provenance itself, if it is there — or the provenance artifacts list — run through it, see what we generate for the end users, then sign each of them and store the signature in place. Another way would be bucket-based:
G: look at the files which are published to the end users, sign them in place, and store the signature data there. So these were the two ideas, and I was supposed to work on it, but then I got busy with some event stuff, so I did not get traction on this in the last month, which was September, yeah.
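The second idea — enumerate the published files and store a signature next to each one, in place — can be sketched as follows. Plain SHA-256 checksum files stand in for real cosign blob signatures, and the directory layout is invented for the example:

```python
import hashlib
import tempfile
from pathlib import Path

def sign_in_place(root):
    """Walk every published file under `root` and store a 'signature'
    (a checksum file here, standing in for a cosign blob signature)
    right next to it."""
    written = []
    for path in sorted(root.rglob("*")):
        if path.is_file() and path.suffix != ".sha256":
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            sig = path.with_name(path.name + ".sha256")
            sig.write_text(f"{digest}  {path.name}\n")
            written.append(sig.name)
    return written

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "kubectl").write_bytes(b"fake-binary")
    (root / "kubelet").write_bytes(b"fake-binary-too")
    sigs = sign_in_place(root)
print(sigs)  # ['kubectl.sha256', 'kubelet.sha256']
```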
B: Yeah, so there was some initial discussion on whether or not we should build the signing bits into krel, and the determination — Sascha, correct me if I'm wrong — is that we need to implement it ourselves, because what we're able to do is basically take everything that we staged, pull it down from the bucket, sign everything, and re-sync the bucket back up. So that's why we need it, and that's why we couldn't do it with [stock cosign], and this way we have more control ourselves over some pieces of that process.
B: Technically speaking, the system is interesting, because the file signing will look completely different from the container image side, since we don't have something like an object promotion for binary artifacts, yeah. And this makes it special, because our use cases for signing files and container images with cosign are also kind of special: cosign does not have any other consumers who take a huge list of images and a huge list of binaries and sign them all in a rush.
B: Cosign is not built for that, performance-wise, and that's the biggest thing we have to deal with. Right now we don't sign during the promotion, so it's fine, but at some later point we also want to have regular binary artifact promotion in place, and this would mean that we also do it with kpromo — I mean, for both files and images.
B: And I think that's it for this topic. I have been evaluating, over the past weeks, the direction that we can probably change how cosign works and make it fit more parallel use cases, and this would imply, for example, reusing connections and tokens, and caching — things that would make it faster. We are not there yet, yeah.
B: We actually hit some issues in cosign — we've been having two issues with the way we interact with sigstore. The first one was where cosign initialized a TUF cache of the root keys and had a race condition where it got locked up by threads trying to write to it. So we upstreamed that change and it got fixed. And the next one that we have is that we got hit by a rate limit.
B: Recently. We have a number of folks here who can speak to that — so I think that's fixed right now, yeah. So that was it; that's the state. So you assigned the initial file-signing issues to yourself, okay. So we can move on — are there any questions about this part?
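For the rate-limit issue mentioned, the usual client-side mitigation is retrying with exponential backoff and jitter. This is a generic sketch, not code from cosign or the promoter:

```python
import random
import time

class RateLimited(Exception):
    """Stand-in for an HTTP 429 from a registry."""

def with_backoff(call, retries=5, base=0.01):
    """Retry `call` on RateLimited, sleeping base * 2**attempt
    (plus jitter) between attempts; re-raise after the last try."""
    for attempt in range(retries):
        try:
            return call()
        except RateLimited:
            if attempt == retries - 1:
                raise
            time.sleep(base * (2 ** attempt) + random.uniform(0, base))

# Simulate an endpoint that throttles the first two calls.
calls = {"n": 0}
def flaky_push():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimited("HTTP 429")
    return "pushed"

result = with_backoff(flaky_push)
print(result)  # pushed
```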
D: Yeah, so we were looking at: how can we do the Debian and RPM packages? Because, unlike the binaries and the container images, we can't leverage something like cosign to just sign those packages and have them work on, you know, the various distributions that consume them. Right now the only option is to use long-lived GPG keys that are stored on the host and live for the life of the repository.
D: So we had a couple of alternatives to look at for how we replicate this. We could have built parallel infrastructure, similar to what Google has, and tried to get the signing process running in Cloud Run, but we are already stretched thin trying to maintain infrastructure in the project anyway. So we decided to look at viable third-party systems that would potentially be able to host those, and the POC was specifically around looking at the SUSE Open Build Service (OBS), and there are a couple of benefits to that. One
D: is that we never interact with the actual GPG key that signs the package repository — it's controlled by the OBS system, and we would simply leverage the API to kick off the builds that we need — and they would also do the hosting for the packages as well. And we had some discussions because we don't build debs and RPMs in the normal way. Normally when you're building debs and RPMs you're building from source, so the file storage is not a real big deal:
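"Leverage the API to kick off builds" could look roughly like the request below. This is only a sketch: the project and package names are invented, authentication is omitted, and the endpoint shape is an assumption to check against the OBS API documentation; the request is constructed but never sent:

```python
from urllib import parse, request

OBS_API = "https://api.opensuse.org"

def rebuild_request(project, package):
    """Construct (but do not send) a POST asking OBS to rebuild one
    package. The endpoint shape is an assumption to verify against
    the OBS API docs; authentication is omitted entirely."""
    url = f"{OBS_API}/build/{parse.quote(project)}?" + parse.urlencode(
        {"cmd": "rebuild", "package": package}
    )
    return request.Request(url, method="POST")

req = rebuild_request("isv:kubernetes:core", "kubectl")
print(req.get_method(), req.full_url)
```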
D: you're talking about just text files, and the binary artifacts that are stored at the end are highly compressed. For us it's different, because we pre-build binaries and we want to use the binaries that are built through the release process. So we talked with the team about whether it would even be feasible to start, because if we were going to throw a whole bunch of data at them and they weren't going to be able or willing to host that data, then there was no need to proceed.
D: Thankfully, the SUSE folks have been very generous with their time. We've met with them a few times now and they helped talk us through the process, ways that we can try to integrate, that sort of thing. The other nice thing about OBS is: if we decided we wanted to self-host it later, we can basically run OBS on our own infrastructure and go that route as well.
D: So Marco has done an incredible job of actually trying to POC this out, and I see there's a link in the channel to the comment on how to kick the tires on, basically, the initial POC version of this.
D: If we hit any blockers, then we always have the option of standing up infrastructure similar to Google's and automating that via Cloud Run — but then we also have to do GPG key management for a community-run project that people cycle in and out of, and it's not like you can just replace the GPG key,
D: because that requires users to make changes on their own. So one of the hopes we have is that, if this POC goes well, we'll only need to do a single migration — users will have to accept it that first time — and then we can put a proxy or some type of redirector up on our infrastructure, so that it points to the OBS packages behind it. That way,
D: if we need to move to a self-hosted OBS architecture, or we want to go to a fully self-hosted custom architecture, we'll have that option in the future without disrupting users. And correct me if I'm wrong, Ray — I think we have the option —
D: we'd have the option, when migrating to our own infrastructure, to actually take the signing keys that are being used in the public OBS now and install those on our system, so that we wouldn't have to require users to change anything if we made that move. So it's a bit of a roundabout way, and we don't know how this is going to look yet — and that's why there's not a full KEP design around this specific migration process, other than the high-level KEP — because we want to try it out first.
B: Okay, does anybody have any questions about that? I am a little bit concerned about the state of the timeline, because from what I've been hearing from Google, we need to make this happen, like — yeah, so —
B: like, in the interim — but he expressed that it seems like it really should be a very big priority for us to move as quickly as possible to eliminate that. So maybe that's where we should be moving.
D: Yeah, I just wanted to add on top of that: we've got to think about the impact on users here as well. Even if we had the POC in a state that we could migrate to and start publishing, it would probably be a few releases before we could stop publishing to the Google infrastructure, because it is a URL change, and it is a GPG key change, and that's going to take some time to get a good rollout with the messaging that we need, so that users aren't caught off guard by it.
D: The other thing is, when I say users here, I'm not just talking about users who are reading the documentation and going step by step. This would also affect things like Cluster API and the Image Builder they use to build images, and other types of Kubernetes installers that are consuming the debs and RPMs today as well.
B: This might be a silly question, but can we potentially run both in parallel: have the official community release and then a side area with a different GPG key, and at some point we could make the switch — or potentially, in an emergency situation, it's like, oh crap, we don't have Google support anymore, we could then say, okay, here's the URL, here's the keys, here's what you need to do — kind of covering both bases. Are there any issues with running both streams in parallel?
B: Okay, so, well — that question is a little bit more in-depth. I have one thing that could buy us some time, which is exactly what Sascha mentioned: if we start building the debs and RPMs ourselves, then it's less of a burden on the people at Google who are helping us. So the alignment has to be to take as much time off of them as we can, so that they —
B: they can schedule it more easily, and also to factor in performance reviews and everything, because right now the people that help us are basically involved on their own time — company time or not, it's going to be on their own schedule. So we could move the build inside as well: we do the release, get the debs and RPMs ready, and just ask them to download, sign, and publish.
D: We're kind of dancing around it — this would be a very good change, but perhaps, from a community perspective: if there's a [minor Kubernetes release or a security cut], those are points where we can say, hey, we're looking at one or two changes, right, you're really going to need to change.
D: I think we need to take into account the state of the world right now, and that not everybody has free access to the internet — they're going through proxies and things like that — and by publishing the GPG keys and signing the packages, they can verify that they're not being intercepted in flight and they're not getting something else.
D: I wish I could say that wasn't the state of the world right now, but I do think that's something we need to take into consideration, especially as we're talking about businesses that are deploying this in production environments. As much as we say, don't necessarily use what we publish in production environments, you should take precautions on your own — there are still people that are doing it today.
B: I think we already spoke about this a little during the evaluation, because we do not sign the [packages] by using cosign [unintelligible].
D: So I would argue against that, just from my experience: a lot of the clients that we worked with, before they came to us, were absolutely using the upstream packages directly in production. And also while I was at Packet — now Equinix Metal — all of our customers were deploying the upstream vanilla Kubernetes artifacts in their environments. Okay, let's see —

B: Right, then we should look at, like —
B: So, for example, if we take the build of the packages inside our own release [pipeline], isn't that getting us, like, en route to just handling this ourselves instead of outsourcing it to OBS? I don't know. It feels to me that once we have those built, we would only need to sign them and form the repository in a bucket of ours somewhere, and it would be done. But, well, maybe I'm wrong — I never used to run one.
B: I agree in the abstract. I just think that the less we build, the more we're able to focus on particular high-value artifact types and the security of those — and for me, with RPMs and debs versus containers, I would focus on the containers. But it's tricky to figure out the right deprecation policy, or something like that; that is a big change. We always struggle for human resources, so keeping every potential type of artifact, for every type of architecture, in multiple different forms — binaries, containers, archives — that's just a lot of work.
B: Yeah, I think that's a great point. I think we could also revisit the discussion about what we want to support, because we started working on all the supported platforms about three years ago, and we really have to evaluate which platforms are necessary. I mean, we dropped some out of the makefiles from time to time [unintelligible].
C: So one other thing I was saying was: if we do something, we should do the same for all the artifacts — rather than saying, you're a first-class citizen, so we are going to sign this one but not the other one. We should try to avoid that. The other thing is, you know, if you don't get enough [resources], you might not be able to support this part, right?
B: One option is that we sort of outsource the build of the packages to, like, Debian and Ubuntu and the distributions — is that an option we've ruled out? And then the other option, maybe also ruled out, is engaging other vendors. That is to say: currently we trust Google, and we're talking about the community packages on OBS being trusted, but maybe other vendors could produce the builds as well. Anyway —
D: I just wanted to raise those as possibilities, yeah. So let me start with the first one, which was outsourcing it to the — sorry, quick feedback — outsourcing it to the distribution vendors. The problem there is that a lot of the distribution vendors have really tight policies, for each of their releases, on what they can update. So they wouldn't do, you know, a minor update within, say —
D: once [Fedora] 37 is cut, whatever version they ship, they would only ship patch releases to it from there on, without some type of exception. So it would limit us in two ways: one, there would only be one version available at any time, and you wouldn't be able to install an older version if you wanted to; and then you would also have to deal with their policy — they would have to release, say, [Fedora] 38 before you could pick up the next Kubernetes release, and things like that.
D: So that's a limiting factor there, plus they still want us to support the specs and the sources and things like that anyway. And that was one of the things that was nice about OBS: you could build packages for all three of those in one system, instead of having to work with — because Fedora already has — Red Hat already has a system that allows you to build your own custom packages anyway; you can add them —
D: — releases, but then we're interacting with multiple external systems to try to support that, and having to coordinate with multiple vendors to get these out as well, which adds to the burden of the release team. So I think multiple factors caused us to rule that out from the very beginning, as far —
B: So it seems like the immediate need is to shift that responsibility. When do we — yeah, what's happening for this release? I don't know if —
B: I have to check, because I don't have the scope — I don't have an idea of how much we need in order to be able to [unintelligible].
B: And then for 1.28 we could move to a beta state, where we switch the default but leave the Google packages available, and then probably in 1.29 completely remove the older packages. Okay, so that would be OBS. If we wanted to just take the Rapture scripts that they have today — is that something we would want to do for 1.26, and shift that responsibility now? I see a thumbs-up and a head nod from folks.
G: I think it's worth it to try — the building and signing scripts are now separated, I mean. As a matter of fact, maybe we can sit and do a read-through of both the scripts. Ben is suggesting: can we run the build scripts ourselves? They would still do the signing, but then it would reduce the time —
G: — to ask for permissions, because then they just need to sign the packages that were built and pushed to some bucket — we can decide on which bucket; we probably don't have one right now specifically for this.
G: It becomes easier for them to seek permission for the keys and the signing bits, so they have less time-bound access, and they can just revert back to us easily. Right now, according to my estimates, it takes two hours for them from the time we signal that a release is available to be published to them coming back to us and saying, hey, the debs are published, go ahead with the announcement.
G: So Google will not do a certain part of the process — we do it in our own CI, and then they can just handle the signing bits.
B: Because we'd separate the steps: the first step would be to handle the build of the packages and publish that to [unintelligible], so that the Google folks would only need the last, smaller step — basically, to publish it. And then, by [their] security specifications, the token is basically about 12 hours, as it was before.
B: Okay, so that would be the timing. And you're at a point with the OBS POC — we discussed it with Marco in a recent meeting — where we've built some infrastructure that we would have had to implement in any case, which is great. So we could also move that step up now and start outlining how it could go together with krel, just calling it, for example, and then later on move that implementation as things change.
D: Too, OBS is strictly a POC right now, until we can prove that it'll actually meet our needs, so I don't want to make it seem like we're being presumptive here. It's really a matter of kicking the tires, seeing if it would actually meet our needs — and if it does, then we can proceed with it, or, alternatively, you know, build out the parallel infrastructure.
D: With the other implementation, so, just recycled code for [unintelligible], for example. Yeah, I think [unintelligible]. So we won't have —
B: — any issues. It also pushes to the same location, right? Not to mention that you can run it on any developer machine, which is always [appreciated]. So for the publishing part, right now it is pushing to — the [destination] location is a Google [bucket].
G: Sorry, I still have to look at the script, but what I hear is: [the CI] builds the packages, and, like —
G: — then, like, you do a GCS [upload]. Right now we do that anyway for our other bits: we do `make release` or `make quick-release` and then just do a GCS rsync. Maybe we can do something like that again — just something that came off the top of my head.
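The `make release` plus GCS rsync flow mentioned here would, in its simplest form, shell out to `gsutil rsync`. A dry-run sketch that only builds the command — the bucket name is invented for the illustration:

```python
import shlex

def rsync_cmd(src_dir, bucket_path):
    """Build (but do not run) a parallel `gsutil rsync` invocation
    mirroring a local package directory into a GCS bucket path.
    The bucket name used below is invented for this sketch."""
    return ["gsutil", "-m", "rsync", "-r", src_dir, bucket_path]

cmd = rsync_cmd("_output/debs", "gs://example-k8s-packages/debs")
print(shlex.join(cmd))
```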
G: We have to investigate the engineering effort that is needed to make it work right now for both the Google build admins [and us], because the release is near — almost, like, one and a half months out. Suppose this work doesn't go through: they still need to cut the release, and the patch release work also can't be hampered, because we'll have patch releases this month as well, and the script —
B: — to the package-building POC. Yeah, I mean, that's the ideation, but I would really suggest that everybody who is interested take a look. This will take a little bit of space, so let's settle it not [now but] — [unintelligible] — two weeks from now, and we can go ahead with that.
B: Well, yeah — so, as many of our [community members know] —
B: — here and elsewhere, we are currently undergoing a change in the infrastructure, and the deal there is that we are now publishing the images that we promote — we are promoting them to twenty-ish, I think, [repositories across] three regions, instead of the old GCR repositories. That has had a real impact on the image promotion process: the process jumped from 30–40 minutes, or an hour, to about six hours. I don't know how [noticeable] that is for people.
B: So there are other conversations going on about what we should do. There are some ideas on the user side, there are some ideas on the release engineering side, and there are some optimizations that we can do [unintelligible] to make it happen today. The problem is that whenever we run an image promotion, we send a ton of requests.
B: Some registries are faster than others, and there's also the signing process, which means we need to upload information to, let's say, [the transparency log], and that also has an impact. So when we started promoting to the new regions, the time it takes to do all that multiplied.
B: So there are some [tests] I have been doing in some [tuning] sessions, and I would like to [measure] these regions, yeah — and some of it, I suspect, is a gut feeling, mostly, that we have about that.
B: And the idea is now that, for example, [digests] and also signatures should not change, so we could serialize this cache into some structure and reuse it. But, on the other hand, I think this makes it — not exactly unsuitable, but it makes it extremely easy to take a JSON [file] and change something, right, and it gets reused blindly, which just doesn't make any sense at all. But in general I think we can really optimize —
B: — for example, how cosign signing looks, because we need a huge number of connections to write to the registries: we need to create artifacts and, for example, create a blob upload. I mean, everyone knows pushing images to registries just takes time, and if you do it for a thousand images, then it takes a thousand times as long. You can reuse connections, though — I created an issue for go-containerregistry where I outlined it.
B: It's too slow for my taste, but I know how to fix it, and I got a [change] in where I could reuse the connection, and that's now part of the existing [tooling]. So we have to see if it really speeds things up, and if it does, we can probably lean on that. We could also split up the jobs — by, yes, sorting the images or something like this — and split them up into [batches].
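Splitting the promotion into parallel jobs over a shared, reused connection — as just described — can be sketched with a worker pool. Everything here is illustrative: the "session" string stands in for a reused registry connection, and the function names are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def promote_batch(session, batch):
    # One worker promotes its slice of images over a shared, reused
    # "connection" (just a string here; a real client would hold an
    # HTTP session and an auth token).
    return [f"{session}:{img}" for img in batch]

def promote_all(images, workers=4):
    # Shard the image list so each worker gets its own batch.
    batches = [images[i::workers] for i in range(workers)]
    session = "conn"  # stand-in for one reused registry connection
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda b: promote_batch(session, b), batches)
    return sorted(x for batch in results for x in batch)

images = [f"img-{i:03d}" for i in range(10)]
promoted = promote_all(images)
print(len(promoted))  # 10
```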
B: [unintelligible] over the entire GCP infrastructure — because of that, the image promoter is doing many API calls. Every time you do a promotion, the promoter will try to copy to each of these twenty repos, and that's why you need to watch it. What is, like, the probability — yeah, I'd say we're [hitting rate limits], because it's not visible in the console, but the logging [suggests] we're being [throttled] when we put in a request.
B: We are not reaching 90 percent of the quota, so it's more like a [bug]. I'm trying to basically raise that inside Google with somebody, because it's like we don't have support for that. So the mitigation for this is more like: we tune the number of concurrent [connections] when we do the promotion.
B: But the [clock] is [running] — okay, I was going to do research on that, yes.
B: So because of [time zones], we might end up with people [working] at 3am, so I think we need to avoid that. On the timing: we are also tied to [pinging] a Google admin at some point, so we should not take too long to be responsive, because they also have business hours. So we have to make sure we coordinate it within one day. There's also — which is where we started — the technical side of that.
B
So the errors that you can see in the image, the red line, are errors because we're hitting it so much, yeah. It's also... we need to.
B
...that we want to use to do a promotion. We also need to look at that in the future. But the thing with the resource usage is that the promoters can spend most of their time waiting, for traffic or for something to settle; that's what's slow. So in theory, raising the concurrency may help the time, because more runs in parallel than the way it is now. But the problem is that we may then see the rate-limit errors.
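The trade-off described here, where more parallelism cuts wall-clock time but risks tripping the registry's rate limits, is usually handled with a bounded worker pool. Below is a minimal Go sketch using a channel as a counting semaphore; the limit of 4, the image names, and the no-op copy function are all made up for illustration and are not the kpromo implementation.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// copyImages copies each image with at most `limit` copies in flight,
// so the registry API never sees more than `limit` concurrent calls.
func copyImages(images []string, limit int, copyOne func(string)) {
	sem := make(chan struct{}, limit) // counting semaphore
	var wg sync.WaitGroup
	for _, img := range images {
		wg.Add(1)
		sem <- struct{}{} // block while `limit` copies are already running
		go func(img string) {
			defer wg.Done()
			defer func() { <-sem }()
			copyOne(img)
		}(img)
	}
	wg.Wait()
}

func main() {
	var inFlight, peak int64
	images := make([]string, 50)
	for i := range images {
		images[i] = fmt.Sprintf("registry.example/img-%d", i) // hypothetical names
	}
	// Record the peak number of concurrent "copies" we ever observe.
	copyImages(images, 4, func(string) {
		n := atomic.AddInt64(&inFlight, 1)
		for {
			p := atomic.LoadInt64(&peak)
			if n <= p || atomic.CompareAndSwapInt64(&peak, p, n) {
				break
			}
		}
		atomic.AddInt64(&inFlight, -1)
	})
	fmt.Println("peak concurrency:", atomic.LoadInt64(&peak))
	if atomic.LoadInt64(&peak) > 4 {
		panic("semaphore failed to bound concurrency")
	}
}
```

Raising the semaphore size trades throughput against the chance of 429s, which is exactly the tuning knob discussed above.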
B
The thing is this: did all of this happen at once, or did it start happening gradually?
E
B
G
B
A
G
Make sure that everything is working well. And I'm concerned about the team as well as the community: the team should not push things out at the end of the cycle either.
B
Yeah, we kind of squashed these two, to the beginning of December this time, to have more time for the release team, especially after the release. I mean, we outlined the schedule before, so I'm now wondering: was there anything we could have done better, considering it's around the holidays, right? Yeah, and I don't see anything right now; maybe I'm missing something.
G
B
I feel like, for most people working on the project, it should not be a surprise at all, because...
A
B
Yes, we do. So historically the last release of the year has landed the week after KubeCon, but...
D
A
B
With 1.25... the 1.24 release was delayed, and that also shortened 1.25 by about a week, and 1.26 as well. But the schedule for 1.26 was outlined before and it has been communicated. Now, folks are always going to be surprised whenever KubeCon week comes around, but just remind folks: we used to do four release cycles a year.
G
Or so, and now we do three, but people will still be surprised.
G
A few years back we used to do four releases per year, so that's why... I think people should be comfortable with a smaller number of features, and even people who consume Kubernetes, I guess, like, everyone wants that? Yes, the point is still sustainability, but...
C
G
English, or somebody, showing up in our meetings and asking, like: hey, can we please have an extension?
B
A way to basically try to capture the traffic going to the new endpoint: we want to backport the change introducing registry.k8s.io to the older versions, like from 1.24 down to 1.23, because what happens is that 1.22 ends up end-of-life at the end of this week, I think, unless we do the last patch release from 1.22. So I don't know how we want to do this, but the idea is to backport that change.
B
A change needs explicit communication: this is, you know, "this patch release introduces a change; be careful and check your infrastructure policy before you go through it", again trying to do something. So I think that's my first point, because I have a lot of questions about this now, before we move to the next one. So yeah, we were commenting on it yesterday. I don't know, I've been giving it some thought, and I don't feel quite comfortable backporting this change.
B
So there are... in my view, I mean, we had this conversation; that train has already left. But one of the ideas was not to bring it back, because it's kind of risky. I mean, I would like to get a full understanding of all of the points that we would be changing, but at some point I feel we can't keep the promise of not backporting.
B
The thing that we have to consider as a project is: if we don't get users using the new registry as soon as possible, we aren't looking at reducing the spend on the GCP account until that happens. And users are very slow to upgrade, so the earliest release where we can make this a required change is when we start to actually pick up users. We see a lot of traffic from very old Kubernetes versions.
B
That was like the next... the next thing. Okay, so...
B
Well, I think it's really up to the community to make that decision, because at one point it was a different conversation: oh, maybe in the future there's going to be a security fix again. Now we take the opportunity, and that's really important. So the request is: do we want to get it into 1.22?
B
There's nowhere else relevant in the repo; maybe the base image that we're pulling from when you build, but there's not that much traffic from that. So mostly, I think, it's cluster lifecycle and the node images.
B
...to keep up with this. But I want the SIGs to know that this is something we're looking at, expecting the registry to be well funded and everything else. You know, not getting any headroom in the budget next year means that we can't stand up other infrastructure; the infrastructure fund isn't what it used to be. In fact, any new infrastructure is going to have to be somewhere else until we can actually get that happening. We have these Amazon resources, but we aren't really using them yet.
B
The point is more like... I'm trying to tie in the next point; it's related. One long shot we had to try to fix this was to get the GCR team to redirect the existing registry to the new endpoint.
B
We're getting a lot of reports from users that don't understand that this is Kubernetes versus Google; they just see GCR breaking them, and those operators, you know, have a hard time with that. If it's a Google user, they have to revert it; they don't really have another option. That's just standard operating procedure: you get a bug filed against you, there's an outage.
B
...sufficient for users, so I think that's where we're going to land with that approach. So far, you know, there's something like 25 times more traffic hitting the old registry than the current one, and it's because users are using older versions and aren't upgrading. So one thing that we realized is that while we switched the endpoint to be the default in 1.25, in most places in the project we're still publishing to GCR. So there are users that will continue to pull those images from GCR just because of the default.
B
So what we'd like to do is something similar to the vanity domain flip: move the config for the registries, a copy of it, over to the registry.k8s.io directory, and start promoting that separately; stop promoting to GCR, so that only the existing tags are there, at least for 1.25 forward.
B
C
A
...project-wide. If we don't do that, then we just start doing it for, I think, 1.25 forward, since we've already changed the default. In doing that, we currently rely on pulling from...
B
G
B
Yeah, to me that sounds like a more reasonable deprecation. But if you do not move the registry, then you don't get any breakage. And the question about patch releases is a good one, because: are you going to just stop publishing the tags in the usual channel?
B
We may want to use resources from Microsoft or something, or a CDN provider, and the same users that would somehow be broken by this change are going to be broken by that change. And we have to do something to keep things online: we're exceeding our free credits on GCP, and for the other infrastructure we don't even have transparency into what resources we have, so we could well exceed those. I think that at some point we have to break this pattern.
B
So I don't know if we definitely do the patch releases, but I think we have to seriously consider it, because it's one of the best opportunities we have to actually get the traffic shifted in a timely way. If we continue to wait on shifting this, it could be years before we actually have significant usage on, like, 1.26.
B
And in the meantime, even if we get resources from other providers, we're pretty stuck. The only room we would have is for someone to start paying the bill for GCP, and I'm in conversations about this, but it's hard to get Google to commit to even more resources here, especially on short notice, and other providers, I mean, they don't want to pay for GCP.
B
It's a little bit different because it's a Google product; people don't understand the distinction. But I think, as an open source project, since it's, you know, no one company's brand here, I think we have to be able to say: you're depending on our donated resources for the images; if you need to be very picky about the traffic you permit, then if this breaks you, that's your problem.
B
Had it been the default everywhere from the beginning, I think it would be much easier to stop publishing tags for those, but we could still consider changing the default. That also gives users a path to say: okay, this default broke me, I'll just flip it back and continue using the old registry. But we can push as many users as possible, and for most users this shouldn't be a breaking change: it's an OCI-compliant registry, it's public read, it just works. It's only...
B
...if you are filtering what traffic your nodes are permitted to access, yeah. And some of the conversations in Valencia, like, pointed out that this is happening for some companies, so... yeah, I mean, I know that they exist, but I also know these are otherwise enterprise users. They probably shouldn't be depending on community infrastructure, and they really should consider mirroring if their availability is sensitive to it. But just as well, the registry could go down; we don't have a staffed on-call for this. This is best effort, right?
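The mirroring suggested here can be done at the container-runtime level. For example, containerd's `hosts.toml` mechanism lets a cluster send registry.k8s.io pulls to an internal mirror while keeping the upstream as the canonical server. A sketch, assuming a hypothetical `mirror.example.internal` host:

```toml
# /etc/containerd/certs.d/registry.k8s.io/hosts.toml
# Canonical upstream; used when the mirror cannot serve a request.
server = "https://registry.k8s.io"

# Hypothetical in-house mirror, tried first for pulls.
[host."https://mirror.example.internal"]
  capabilities = ["pull", "resolve"]
```

With something like this in place, an outage of the community registry degrades to a fallback rather than a hard failure for clusters that depend on availability.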
B
When there's an outage, it's, like, you know, somebody trying to fix something in the middle of the night, because also we don't want to give many people, you know, direct access to the serving endpoint. Until people are actually using it and we serve those images, we have to trust the registry implementation itself.
B
Which actions are we going to take on this? I would love to see us backport the default to registry.k8s.io, with, you know, big flaring release notes, so that we continue to provide images on GCR for the older releases, so there's an escape hatch; and for 1.25 forward, where we no longer advertise GCR, we stop publishing tags there. The hardest push would be 1.25.
B
But stop... I think you'd need a release note for 1.25 that everybody should be aware of, most certainly a big release note. But I think, you know, we have to go talk to Cluster Lifecycle and those folks if we're talking about a backport. I know that some of the folks were already interested, in fact, for pause, because of the pending issue with the container runtimes. Anyhow, there are some on board, but I can talk to everyone.
B
I don't know that yet, and I haven't had... I don't think I've had a chance to talk with Cluster Lifecycle yet; I think we've only talked about this here. We need to talk about the benefits, but if they can agree, then we'll come back to SIG Release and say: okay, they're on board. I expect that this needs to be well communicated in the release.
B
So it does seem like one of the first takeaways, or action items here, is communication: regardless of what we do, it sounds like there's a storm brewing, and it sounds like people are slow, laggards, to upgrade. I think we all know the reality we're in, so what we can do is start with some of the communication while we're figuring out whether we're backporting.
B
I actually want to add to the time-urgency thing. This is actually something that we were pushing last year, not me but the project, and we didn't make progress; we were over budget back then. I've been pushing since last January and we've made some progress, but there's so much latency before it actually takes effect.
B
What we're looking at right now is, I mean, the project is going to be out of GCP credits, like, at the beginning of November. I'm working with my employer to try to get some emergency credits to get us through the end of the year, but, you know, my feeling is they're not going to be inclined to keep doing that on a recurring basis.
B
So next year's budget on the GCP resources may be tight if we can't start to actually take advantage of this, and this is by far the largest cost: hosting these images is one of the few things that's on the community budget, and it's taking over two-thirds of it. And there are other discussions, we have a lot more, but, like, the binary downloads right now are actually billed directly to Google on internal infrastructure, and now that we're having these conversations about credits, more people are learning just how much stuff is running.
B
It actually costs more than what's been donated to the community so far. So yeah, if we can get this traffic shifted, I think that's the most stable thing to do. And, you know, I'm not speaking officially; this isn't some Google position or anything. I'm just a person in the project who happens to work there and cares about the project itself, and from my point of view, we have got to get the traffic costs more sustainable, spread around, sooner.
B
It's really unfortunate that we didn't get the endpoint change in much, much further back. And going forward, I think, depending on who is providing resources to us that we can take advantage of, we are going to have to, you know, start shifting traffic for downloads and different things. Out of all of our costs, letting end users download unlimited container images and binaries is massively the largest one.