From YouTube: Foundational Infrastructure Working Group [Feb 10, 2022]
A: Okay, welcome everybody. Let's go over the board. I think there was no progress on this, but let's check.

A: You reached out last week.

A: Yeah, so there has been some progress made on the political side of this, but nothing to share yet. There has been some movement going on because of the cloud.gov involvement.

A: We had a discussion about what to do with the registry removal, and especially how to announce these things.

A: This will break v1 API compatibility, so we will bump the version number and at least mention it in the release notes. There are also ideas of maybe doing a deprecation warning first and those types of things, but since the v2 API has been out for such a long time, it doesn't make sense, right. We have no indication that this will affect anyone.

A: Yeah, I mean, that was our main concern also: nobody is using this, and it would be expensive to add that, right, because that would mean we would have a deprecation warning now, and then we would have to wait some time during which we would have to keep this pull request open and rebase it and everything. That's just too much work. So we will just continue with this work and merge it once it's approved.

A: Yeah, this PR came in this week. I already briefly looked at it; the initial implementation was just a for loop. I mentioned that it's good to use the retry strategy. It was refactored already, and now it's ready for review.
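A minimal sketch of the kind of refactor discussed here, replacing a bare for loop with a retry helper; the helper name, attempt count, and backoff are illustrative assumptions, not the actual PR:

```ruby
# Retry a block up to max_attempts times, sleeping between attempts
# (simple linear backoff), and re-raise the error once attempts run out.
def with_retries(max_attempts: 3, base_delay: 0.0)
  attempts = 0
  begin
    attempts += 1
    yield
  rescue StandardError
    raise if attempts >= max_attempts
    sleep(base_delay * attempts) # back off a little more each attempt
    retry
  end
end

# Example: an operation that fails twice, then succeeds on the third attempt.
calls = 0
result = with_retries(max_attempts: 3) do
  calls += 1
  raise "transient AWS error" if calls < 3
  "ok"
end
# result is "ok" after 3 calls
```

The point of the strategy over a plain loop is that backoff, the attempt budget, and the final re-raise live in one place instead of being re-implemented at every call site.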
C: Yeah, we've had very strange AWS behaviors, and annoying ones, so hopefully this could fix the problem.

C: That's it, yeah.

A: So, can you look at that, Ramon? Yeah.

A: Yeah, okay, so that's just a comment from Long. Okay, yeah. I did this. I found it annoying, but we keep getting all these gems that are really machine-specific, I mean. Sometimes we have diffs because paths changed; it's like hardcoded to someone's machine and stuff. So this should definitely not be in here. Hey, it's a catch.

A: Yeah, once you do bundle install, it installs it, but this directory is machine-specific, so it should not be committed. I removed all the stuff that was there in one commit and then ignored the directory. We were already ignoring it, but only for darwin and a really specific version, and that version apparently increments and stuff, so, yeah, I just made it.

C: Yeah, I thought we had made a star only for the last digit, but if it makes.
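The ignore rule discussed here could look something like the following; the exact paths are assumptions, since bundler places machine-local settings and compiled, platform-specific gems under versioned directories:

```gitignore
# Ignore bundler's machine-specific directories wholesale,
# rather than pinning a platform/version like .../x86_64-darwin-21/3.0.0
.bundle/
vendor/bundle/
```

Ignoring the whole directory avoids having to bump the pattern every time the platform triple or Ruby version increments.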
A: That's basically the team. We have a few old things that are still stuck, and we have the Ruby 3.1 stuff. Yeah, that's it. Let's go over the issues and see if there's something there. It has been updated. So, the last time we went over this was the third.

A: Yes, this was reported; I assigned it to Connie.

A: Okay, we have an Azure Director, so I think it should be pretty easy for us to test.

C: They do this; SAP is running on Ali Cloud, Azure, AWS and GCP, yeah, but I mean.

A: I know how to get, we have access to that, but it should be pretty easy to verify this one. So I will keep this assigned to Gunny. You can look into it.
B: Yeah, open for pull request. I don't know.

C: Yeah, so the description is wrong when saying "default SSH keypair", because you would never put any private key here; you just need the public key.

C: Yeah, the agent should be fine. Well, if he can type monit summary, then it means he could bosh ssh. So the agent is there, but some pre-start scripts may have failed, and the pre-start scripts happen before setting up monit, before gathering the monit configurations.

C: No, no, no, no, no, the monit files are gathered.
A: Yeah, I mean, I don't know how to continue here. Leave it like this: the Director is missing, oh yeah, that one.

C: Yeah, I need to check, but I will have some time soon to refresh my environments and bump everything, etc. I've delayed that way too much, but this is going to happen.

C: And I guess we could take a step back and think of the semantics, what it means for an instance to be stopped, and I've commented in the issue from fedex about that. The basic problem is that usually, when you bosh deploy, you get positive or negative feedback about jobs starting or not, and with a stopped instance you cannot get that feedback. But we could possibly assume that the feedback is successful and delay any errors from starting the jobs for later.

C: I'm raising another problem we've had when debugging.

C: Well, it was that one deployment had a VM that was stopped. The deployment was updated, removing some unnecessary BOSH release, and in bosh releases and bosh deployments the deployment appears as not using this BOSH release anymore. But when we try to delete the release, there is still a relation in the database, because the instance that was stopped is actually still related to the BOSH release in the database.
C: And for SAP, yeah, it's clearer now where they are with that command.

D: Yeah, yeah, so for us it was a working scenario which now just stopped working, and I mean we could consider providing such a flag to get this resolved quickly, so that we can upgrade again. I think, Benjamin, you bring up a broader discussion, I guess, right.

C: Yeah, I just feel that we are trying to duct-tape a small issue, whereas we have a bigger problem with the semantics of what we are dealing with overall. The correct solution would be to be clear about what it means for BOSH and how BOSH should behave with stopped instances and ignored instances, possibly in the direction of not erroring when running an errand on a stopped instance.

C: But there are inconsistencies right now, because the stopped instance is not updated when one is deploying, and yet you try to execute something on it which may be stale. So, yeah, we currently do have inconsistencies, so I suggest we should solve those inconsistencies first, before implementing a quick fix.
A: It would at least be good, I think, if the other issue that you mentioned were reported separately, because otherwise it would bloat this thread, I guess. Yes.

D: That might be the goal, but our intention, of course, is at some point to be able to upgrade again, because I don't expect big changes on that Service Fabric algorithm side of things.

D: Yeah, I mean, if it's clear how the flag would look, then we have something we can work on. But, I mean, Benjamin brought up some concerns which might be related to the general stop behavior, which I cannot judge, to be honest. But that's maybe something you should also make up your mind on, on your side, at some point in time, whether they are valid or not.

D: Yeah, well, we could also discuss whether we revert the original change and bring it back once this is clarified, so that we can upgrade. It's also an option, I guess, depending on the time frames.
A: Yeah, I think the reason for doing this is because it cost us a lot, I mean, we have instances where people are doing this unintentionally.

A: If it relies on things, the errands should ideally be fully self-contained, right; they shouldn't have other dependencies, because at this point you have something else orchestrating, interacting with BOSH, and because of that it is safe to run that errand, because you know the other stuff is in that state. But ideally you would be able to just run the errand. That's the expected model from BOSH, right.

A: You should be able to just run an errand, and then the errand should take care of stopping all the things that need to be stopped when running, right. So this feels like a code smell in a release, having to rely on a bosh stop; it's something you cannot guarantee. If you do it properly, the errand should error if the service is not running, and if you're checking that anyway, then you could also just stop it.
D: They basically said: we cannot, it doesn't run anymore, you upgraded something, at least revert it. And I'm trying to understand the use case, but I can see your point. Yes, it might be an approach, I'm not sure. Yeah, so they would need to go through monit on the VM via the errand and stop things. Would BOSH accept that, or would it interfere from the outside?

D: Right, yeah, yeah, I mean, I can reach out to re-discuss it with them and come back with it.

D: Okay, but that basically means that, from your perspective, this additional flag is off the table, or could it still be an accepted workaround?

D: Yeah, okay. Could someone with experience on some releases summarize in this item how they would do it adequately? Then we have a clear message that I can bring to them, how they would need to change things.

A: Thanks. Okay, if there's nothing else, then I think we can end it here.

D: Yeah, yeah, we still have this PR open for the bosh-agent retry stuff; the team will provide more details about the scenario. I think that's one thing that's open, but yeah, you will see then.

A: Yeah, I got all the details already, I think from Benjamin or Ramon. Okay, so, and it was refactored, so it's currently being reviewed.