From YouTube: CNB Weekly Working Group: 2022-01-13
Description: No description was provided for this meeting.
A: And start with platform.
B: [inaudible]
D: That's true, but you know, I don't really play a role in releasing, necessarily, so I'm sort of out of the loop there as well. Maybe it's worth pinging David.
A: Okay, but no release in the works right now, is that right? Or that's our understanding.
C: [inaudible]
D: [inaudible]
A: Okay. Implementation release planning, anything there?
E: [inaudible]
A: Cool, no updates from distribution. Does the bat team have any release updates?
C: [inaudible]
A: All right, yeah, on to the regular agenda: SBOM migration, with Matthew McNew.
F: Yeah, so I've discussed this with a couple of you, and I mentioned it yesterday, but I'm working on drafting an RFC that proposes we add the ability for buildpacks to provide both the previous legacy BOM and the new SBOM format in the buildpack.
[inaudible crosstalk]
F: What we'd like to see, or would like to provide, is a migration path which would enable us to update the buildpacks to produce the new SBOM as well as continue to produce the legacy BOM, so that we could provide that to cluster environments that are still relying on the legacy BOM.
F: Then we can begin to transition users to the new SBOM, and later we could release a new version, which is a hard breaking change, that stops the old legacy BOM from being created at all, so we can transition people to an environment that isn't even creating that label BOM. Because the label BOM certainly has problems: it can't get too long without creating that Kubernetes issue.
F: Because of the way the spec outlines it, it seems we would need to introduce a new buildpack API that explicitly allows both BOM formats, including the previous BOM fields, to be written.
G: I wonder, because this is mostly a problem for old platform APIs: let's say we had a buildpack API where folks could specify both BOM formats, right? But then one of those formats would be this compat, old-fashioned format. We don't even have to specify it the exact same way we did in the old buildpack API, because by definition you're rolling forward to a new buildpack API here.
G: So instead of the existing extensions, we could have some sort of compat extension. But instead of giving that optionality in a new platform API, would it make sense to take that compat output and stick it in the old label only if you're on the old platform API, and never expose it in new platform APIs? It would be a way for new buildpacks to fulfill the old contract, but it wouldn't be adding labels or exposing the old contract in new platform APIs.
C: [inaudible]
G: You would always allow the buildpack to export in both formats, but you just ignore the compat format unless you're on this old buildpack API... I mean, old platform API. If you're on the old platform API you sort of ignore the new formats and just grab all the compat stuff and put it in the label.
G: It would require a new buildpack API that allows both to be specified. On old platform APIs you'd only see old stuff, and on new platform APIs you'd only see new stuff; you'd never see both, and it'd be the platform API that toggles it. Buildpacks would have to provide both in order for that to work in both situations, but we'd never expose both to platforms.
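The split G describes, where buildpacks always write both formats and the platform API version decides which one is surfaced, can be sketched roughly as follows. The version cutoff, function names, and dictionary keys here are illustrative, not taken from the actual spec:

```python
# Sketch: buildpacks emit both BOM formats; the platform API version the
# build runs under decides which one the platform ever sees.

LEGACY_CUTOFF = (0, 7)  # illustrative cutoff, not the real spec version


def parse_api(version: str) -> tuple:
    """Turn a platform API version like '0.6' into a comparable tuple."""
    major, minor = version.split(".")
    return (int(major), int(minor))


def surface_bom(platform_api: str, boms: dict) -> dict:
    """Return only the BOM entries this platform API version should see.

    `boms` maps format names ('legacy', 'sbom') to payloads; the buildpack
    is assumed to have written both.
    """
    if parse_api(platform_api) < LEGACY_CUTOFF:
        # Old platform: surface the compat/legacy BOM label, hide the new one.
        return {"legacy": boms["legacy"]}
    # New platform: surface only the new SBOM; the legacy label is never written.
    return {"sbom": boms["sbom"]}
```

The point of the sketch is that the buildpack contract never changes; only the platform-facing selection does.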
C: [The spec] explicitly disallows that specific behavior in the platform, like if the platform API is 0.8.
G: [inaudible]
H: Yeah, that's what I was gonna say: this is all really about the old platform API, but you want them to use new buildpacks. You don't want to publish a version of buildpacks that are still on 0.6 or 0.7 or whatever; I forget which version it was. Is that kind of the goal here? Because you could publish other buildpacks for older platforms, so that when they're ready to upgrade, the newer platform would use the newer buildpacks, or whatever, right?
G: Right now it's impossible to have a buildpack that works for both platform APIs. I feel like the problem you get into with upgrades is that it's never okay to have to upgrade the platform and the buildpacks at the same time, and that's the situation we've created here if you want a BOM. You should be able to upgrade the buildpacks without upgrading the platform, or the platform without upgrading the buildpacks, because otherwise you're in a rough situation for most people.
F: That's explicitly what I was hoping: in platform API 0.9, or some future one, we would have an explicit cutoff. Either the legacy BOM format does not get into the label at all, or the platform has to explicitly opt into it. That way you have a cutoff point you can transition users to, after which that format is not available on the platform.
[inaudible crosstalk]
G: I think the way we transition to totally killing the old version is to actually start deprecating old APIs. So I feel like we shouldn't start killing the old version until we deprecate platform 0.6, where it specifies you will get a label that has these things in it. We have to actually start cleaning up from the back before we can kill it on the front end.
C: I mean, these are all problems of our own making. A platform implementing 0.8 will have to change zero things if the lifecycle allows this; they would just have to update the lifecycle image and it'll work. There's nothing in the platform API that changes in terms of the flags it passes or the things it expects.
C: None of that will change; that's all handled by the lifecycle. So we're going through all of this spec bumping and everything, but it amounts to removing two or three lines in the lifecycle. It seems like a problem of our own making: we're doing all of this stuff just because the specs are frozen in time. I also pointed out there have been times in the past where the spec was incorrect, but it's still frozen in time, so we can't go back and correct it.
[inaudible crosstalk]
G: I feel like I'm usually the stickler for doing things the really formal way, but I think the reason I'm the stickler is so we can avoid problems. If what we're trying to do is untie ourselves from this particular problem so that we can make progress, then I'm more willing to.
C: Because this thing happened again with platform 0.7, where one of the parameters that was required by the platform was missing. The spec is incorrect, and we can't go back and change it, so if someone's trying to implement platform 0.7 they'll probably implement an incorrect version, because we committed it and it's frozen in time. I really don't like that specs are forever frozen in time.
H: I'm always down for less ceremony; I'm never a stickler for that. So I'm okay.
D: I'm plus one on ceremony, but I feel like... I think what Sam's suggesting is changing the process to actually have patch versions of the spec, which I think Ben was the one who was pretty against. I forget why we decided we didn't want patches.
C: I mean, it's up to us whether we want to change the process or not, but for this specific problem we need to modify two lines in the spec and two lines in the lifecycle. From the platform's perspective nothing would have changed, except for some restrictions imposed by the lifecycle. The inputs don't change, and the outputs the platform interacts with don't change; it's the application image that changes.
G: I'm a little unclear on what a patch version of the spec would even look like. I kind of agree with the motivation there, but right now we don't have spec versions, we just have API versions, and there's no such thing as a patch version of an API, right? It's either a different API or it's not. You can't bug-fix a document, you know? Have you changed what...
[inaudible crosstalk]
C: But clearly, speaking from an API perspective, the platform API is not changing: the inputs are the same, and the outputs the platform is expected to interact with are the same. So by the definition of an API version we're not introducing anything new; it's a bug fix. I'm saying it's a spec patch fix rather than an API patch fix, where we just change a few wordings in the spec and update the lifecycle. The platform doesn't have to know anything and doesn't have to change anything.
C: Which we've done in a lot of places; lifecycle implementation detail is specified in either the buildpack API or the platform API in the spec right now, which we shouldn't do. But we also do this thing where we do the implementation first and then write the spec based on the implementation, which also seems counterintuitive.
G: Really, part of the issue is that sometimes the spec is written with the goal that you could replace a lifecycle phase with your own implementation, right? If you want to replace all of the lifecycle, you wouldn't need these implementation details. But if you want to replace one phase of the lifecycle, then we need to spec all these details.
G: I feel like that's the difference, but I wonder if that's a silly goal at this point anyway, because we're speccing out the creator. Is there even a use case for replacing one phase and not the whole thing, given that you'd then need to be implementing the creator, which is the whole thing anyway?
H: Yeah, but can I ask a question real quick? If we patched this and just released a new version of the lifecycle... one of our goals with this change was to stop producing the label that gets too long, right? So if you're on 0.8, and you went to 0.8 because you didn't want those labels on your images, now they'll be on your image again if you have buildpacks in use that emit the old style. Is that right?
[inaudible crosstalk]
A: Hey, I want to do a time check: we're half past. So can we maybe wrap this up? It sounds like we need an RFC; maybe that's sort of the next step.
[inaudible crosstalk]
F: I was just curious if I could get a little guidance on what RFCs we should write. It sounds like we want one proposing an individual lifecycle release, if we were to go down this route: a lifecycle patch release. And then perhaps also an RFC for the future platform API that disables the legacy format.
D: [inaudible]
A: Okay, I'm going to move this along then.
D: [inaudible]
C: I wanted to talk about partial support for the support-Dockerfiles RFC. As far as I understand, this RFC is mostly accepted, since we are already working on a spec PR for it and we have POCs, but we are waiting on the POC and the implementation details themselves to update the RFC with more information.
C: What I was hoping we could do is have the lifecycle implementation actually orchestrate the argument passing, and the orchestration of this Dockerfile extension API could be implemented along with supporting just a FROM instruction in the output Dockerfiles.
C: The reasoning behind that is that the FROM instruction can be implemented fairly trivially in the lifecycle as it is right now. The lifecycle phases accept a run-image flag in the export phase to figure out which base image to use for the application image, and this would just involve setting that flag dynamically instead of providing it up front. We wouldn't have to deal with the complexity of actually building the Dockerfile with a build tool or anything.
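The FROM-only case C describes could be orchestrated roughly as below: an extension emits a Dockerfile containing nothing but a FROM line, and the orchestrator reads it to decide which run image to hand to the exporter. The function name and file contents are illustrative, not from the actual lifecycle:

```python
# Sketch: extract the run image from a FROM-only Dockerfile, rejecting
# anything that would require a real Dockerfile builder to execute.

def run_image_from_dockerfile(text: str, default: str) -> str:
    """Return the image named by a single FROM instruction, or the default."""
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped or stripped.startswith("#"):
            continue  # skip blanks and comments
        if stripped.upper().startswith("FROM "):
            return stripped.split()[1]
        # Any other instruction needs a full Dockerfile executor; reject it.
        raise ValueError(f"unsupported instruction: {stripped}")
    return default

dockerfile = "# selected by the python extension\nFROM ubuntu:22.04\n"
selected = run_image_from_dockerfile(dockerfile, default="cnb/run:base")
# `selected` would then be set dynamically as the exporter's run-image flag.
```

In this sketch the exporter invocation itself is untouched; the orchestrator just computes the flag value dynamically instead of taking it from static configuration.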
C: So it breaks the implementation of this specific RFC down into the orchestration part plus a trivial Dockerfile case that the lifecycle can execute, away from the complexity of actually implementing a full-fledged Dockerfile executor or builder. The run-image-specific parts of it were useful enough that they already had a different RFC, but the reason that RFC was put in draft stage was that we figured out we could provide this functionality through the Dockerfiles RFC.
C: [inaudible]
H: Just the run image, right? Sorry, are you specifically wanting to do just the run-image part? Because I know there are build- and run-image Dockerfiles, but you're proposing just a run-image implementation first.
H: [inaudible]
A: Yeah, I'm definitely a fan of anything we can ship incrementally on this, given the size and scope of support Dockerfiles. I do kind of wonder if using the FROM directive is the right interface for the specific thing you're talking about, but I feel like it's not problematic to add a different interface later, and you would still want to support what you're describing anyway, because it would be a valid Dockerfile.
E: Are you proposing that... so this Dockerfile would be written by an extension, right? Would the other capabilities of extensions, like the ability to specify SBOM information, also be there in what you're proposing?
C: So I'm not asking for dynamic, smart generation, because we don't need it; these are pre-canned images that we're choosing from. It's just a list, and we're choosing one out of it; we're not creating anything new. Ideally, it's a scenario like this: you have N different builders that have the same build image but N different run images. There's no way to specify that configuration right now, and this would allow you to do that.
C: If the Python buildpack is selected from the same builder, you can have a normal Ubuntu-based image. So instead of maintaining two builders, one of which is a scratch builder that just contains the same Go buildpack and the same build image but a different scratch-based run image, you now have one single builder that can choose from any of the different run images based on the buildpacks that were selected.
C: The orchestration logic can be reused when we get to full support for Dockerfiles, the whole extensions mechanism passing arguments and everything. It's just that the complexity of running kaniko and figuring out how to dynamically build images goes away. You just select one of the final images, pass it to the run-image flag, and use that as the base image for the application.
[inaudible crosstalk]
B: That interface is a little bit clunky too, if it was exposed to the end user, right? But it's not; it's really the builder implementer that has to care about that, or the extension author, whatever.
C: Yeah, the other thing is that, since it is an extension, it can read the app directory, so you can still provide that through some other interface. The user can say "my project.toml contains this run image, use that instead of whatever the builder shipped with", and if the extension that shipped with the builder allows for that format, it could provide that functionality to the user as well. That's implementation detail outside the scope of the project.
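The user-facing override C gestures at might look something like the fragment below. The table and key names are purely hypothetical; the actual project descriptor schema may expose this differently, if at all:

```toml
# Hypothetical project.toml fragment: an extension shipped with the builder
# could read this and pick the requested run image instead of the builder's
# default, provided the image is one of the builder's pre-canned choices.
[build]
run-image = "ubuntu:22.04"
```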
C: It's just that this whole feature set seems fairly large and covers a lot of things, so I think it would be nice to first test out whether this whole extension API needs some rework as we invest more into Dockerfiles, whether Dockerfiles as a concept work well enough, and so on. That way we're not shipping too many changes at once, and it still provides some valuable features to our users.
H: Yeah, I really like the idea of implementing it. Run-image selection has been an ask for a long time, so that gets sort of resolved. And then also, I do think we'll run into some complexities here; we'll need to make sure that rebase knows about this, that all that stuff makes sense, because we don't want folks rebasing and going from scratch images to the builder's run image and things like that by accident. But yeah.
C: [inaudible]
A: So, are there any updates required to the RFC for what you're talking about, or is it really just a change to how we approach the project, and maybe set milestones and things like that?
C: I don't know, because the RFC is technically not merged in yet, so I'm guessing some part of it has to be merged in before we release the spec, the lifecycle changes and everything. So either we break the RFC into two parts, or we merge the RFC as a whole and then do the implementation in two parts.
A: We probably need to do a check-in on that whole thing, because we've got a bunch of pieces just kind of sitting out there, and I know work is still being done in the POC, but we probably need to tie it all back together.
B: [inaudible]
A: I feel like it's overkill to try and separate one of the RFCs into two. I feel like the thing that's blocking the RFC is that we haven't really buckled down, and I think we can do the spec PR based on what we know from the POC. I could be wrong about this; maybe the POC needs to get further along before we get to that point, but I feel like we should be able to get this RFC through.
[inaudible crosstalk]
A: ...need to happen, sorry, yeah. I guess I was gonna say that I think that's true. I'm not sure how mature the spec PR needs to be in order for us to merge the RFC. I think my complaint about the RFC was that it had no spec changes at all, it wasn't mentioned at all, but I'm not sure finalizing the spec PR is what's required.
A: So splitting it up sounds like a good next step. I'd be happy to take a stab at that, because I've got that PR open, and maybe I can work with Sam on figuring out what's needed just for that first piece of it.
A: Okay, so I think the next step is: Sam, write down your thoughts somewhere, and then let's maybe work together on the spec changes.
A: Cool, do you mind if we move on to the next one?
A: Cool, all right: Javier, project descriptor.
[inaudible crosstalk]
C: We can also prevent that, right? The same way containerd does: just limit the label size to 2 KB, so even a buildpack can't put on arbitrarily large labels. And it's the same beyond just the buildpack metadata label; there are several others, and all of those can be arbitrarily large, and they're all put on as labels, I believe.
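A per-label limit like the one C mentions could be enforced with a check along these lines. This is a sketch, not containerd's actual validation code, and the 2 KB figure is the one quoted in the discussion:

```python
# Sketch: reject any image label whose key plus value exceeds a fixed
# per-label budget, instead of letting BOM labels grow without bound.

MAX_LABEL_BYTES = 2 * 1024  # the 2 KB figure from the discussion


def oversized_labels(labels: dict) -> list:
    """Return the keys of labels that blow past the per-label budget."""
    too_big = []
    for key, value in labels.items():
        # Count key and value together, as UTF-8 bytes.
        size = len(key.encode("utf-8")) + len(value.encode("utf-8"))
        if size > MAX_LABEL_BYTES:
            too_big.append(key)
    return too_big
```

As C notes in the next turn, a per-label check like this still says nothing about the total size of all metadata combined, which is where the kubelet-side limit comes in.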
D: [inaudible]
C: I mean, currently the way containerd checks it is at the individual-label level, and with the kubelet the whole metadata payload caps out at 16 megabytes. So containerd tries to address that by just doing per-label checks and hoping for the best.
[inaudible crosstalk]
B: I think base64 would increase it, yep. Yeah, I think base64 gives you at least a third more than the content that you give it.
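B's one-third figure checks out: base64 maps every 3 input bytes to 4 output characters, so the encoded form is roughly 33% larger, plus up to two padding characters. A quick demonstration:

```python
import base64

payload = b"x" * 3000                  # 3,000 bytes of label content
encoded = base64.b64encode(payload)    # 4 output chars per 3 input bytes

print(len(payload), len(encoded))      # 3000 4000
print(len(encoded) / len(payload))     # 1.333...: a third bigger
```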
G: [inaudible]
B: Just compress it to binary and stick it in the label. That'll look better on the terminal.
C: The other bad thing is that labels inherit through images. If you took a buildpack app image and did a FROM from it, the subsequent images would also get all of that metadata. If you put it in a manifest annotation that doesn't happen; annotations are meant for single images. So that's also useful, but it all comes down to the daemon.
G: [inaudible]
B: I mean, they're killing themselves now, right, with Docker Desktop. So at least there's...
D: [inaudible]
C: I was just going to say this is also how other non-Docker container tools do things: they publish in an OCI format, and they have a --load that loads it into the daemon, but the output is either an OCI tarball or a push to a registry, and that way you get the image in your local daemon. You may not be able to do things like what we do today with efficiently restoring caches and reusing layers; that wouldn't happen the same way it does today.
[inaudible crosstalk]
C: If you stored it in a local format, you could also have a load, like pack build --load, which puts it somewhere and loads it into the daemon, but the rebuild and everything, the caching and so on, happens on volumes or some disk store rather than in the daemon. You still end up with the new image in the daemon at the end of it.
B: I think so. I'm trying to think of... what was it, Bamboo? Did they rename it? The Bitbucket version, where basically you have multiple workers.
[inaudible crosstalk]
A: Okay, I'm not sure if there's a takeaway from this, but I want to use the last couple of minutes to address the project descriptor.
A: So I don't know if you think there's something we should put on the next agenda for large labels, or some action item we should take away. I guess add that to the doc.
A: For the project descriptor: yeah, I'd encourage you to take a look at Javier's... what's it, HackMD? Is that what it's called? Yeah, and that's what you linked in the doc. My impression of this is that Javier and I are pretty close to getting aligned.
B: [inaudible]
A: Yeah, definitely; the capex side is important, I think. One of the concerns that I want to talk about, Javier, is the defaults concept and how it adds verbosity to something that I feel shouldn't be very verbose, you know what I mean? But otherwise I feel like we're aligned on the principle of it.
B: Yeah, to me that's somewhat bikesheddy, right, and I'm okay if everybody agrees that it's unnecessary. Like I mentioned in our thread, I have an opinion: I really like it, and I think it means a lot to the end user, but...
A: All right, cool. Well, we're at time, so have a good rest of the day, everybody.